D - Four Jewels Editorial by evima
We sort the gems in descending order of \(A_i\). We then add gems \(1, 2, \cdots, N\) in this order, recomputing the optimal solution after each addition. Here, the following claim holds:
- ☆ We can obtain the optimal solution without undoing the gem-cutting operation
Proof of ☆:
We can formulate and discuss using flow. Consider the following graph:
- For each \(1 \leq x \leq N, 0 \leq y \leq K\), there exists vertex \((x,y)\)
- For each \(1 \leq x \leq N-1\), there exists edge \((x,0) \to (x+1,0)\) with capacity \(\infty\) and cost \(0\)
- For each \(1 \leq x \leq N-1, 1 \leq y \leq K\), there exists edge \((x+1,y) \to (x,y)\) with capacity \(\infty\) and cost \(W_y\)
- For each \(1 \leq x \leq N, 1 \leq y \leq K\), there exists edge \((x,y) \to (x,y-1)\) with capacity \(\infty\) and cost \(0\)
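The graph above can be sketched as follows. This is an illustrative Python fragment (not the editorial's reference code): it only builds the edge list described by the three bullets, in a form one could feed to any min-cost-flow solver. Vertex \((x,y)\) is indexed as `(x-1)*(K+1)+y`, and `W[y]` holds the cost \(W_y\) (index `0` is an unused placeholder).

```python
def build_edges(N, K, W):
    """Return a list of (u, v, cap, cost) edges; cap=None stands for infinity."""
    def vid(x, y):
        # vertex (x, y) with 1 <= x <= N, 0 <= y <= K
        return (x - 1) * (K + 1) + y

    edges = []
    # (x,0) -> (x+1,0), capacity inf, cost 0
    for x in range(1, N):
        edges.append((vid(x, 0), vid(x + 1, 0), None, 0))
    # (x+1,y) -> (x,y), capacity inf, cost W_y
    for x in range(1, N):
        for y in range(1, K + 1):
            edges.append((vid(x + 1, y), vid(x, y), None, W[y]))
    # (x,y) -> (x,y-1), capacity inf, cost 0
    for x in range(1, N + 1):
        for y in range(1, K + 1):
            edges.append((vid(x, y), vid(x, y - 1), None, 0))
    return edges

edges = build_edges(4, 2, [0, 5, 7])
# total edge count: (N-1) + (N-1)*K + N*K
assert len(edges) == 3 + 6 + 8
```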
Adding gem \(i\) corresponds to sending \(1\) unit of flow along the shortest path from vertex \((B_i, K+1-A_i)\) to vertex \((N+1-i, 0)\). Suppose the shortest path pushes flow back along a negative-cost residual edge. We show that by appropriately rearranging paths, we can obtain a shortest path that uses no negative-cost pushback.
Suppose the pushback occurs on edge \((x,y) \to (x+1,y)\). For pushback to be possible, there must originally have been flow passing through here toward \(y=0\). Take an arbitrary flow path through this edge and call it \(P\); call the shortest path under consideration \(Q\). Focus on the first point where \(P\) and \(Q\) intersect, and case-split on whether \(Q\) enters from \(P\)'s upper-left side or lower-right side.
Details are omitted, but after case-splitting, we can prove as follows:
Upper-left: Focus on the last point where \(P\) and \(Q\) intersect. Since moving directly from the first intersection to the last intersection along \(P\) loses nothing, we can rearrange \(Q\) into such a path.
Lower-right: After \(P\) and \(Q\) intersect, there must be a movement \((z,w) \to (z,w+1)\) somewhere; consider the first such movement. This is also a pushback, so it must originally have been part of some flow path; call this path \(R\). Following \(R\) directly from the first intersection of \(Q\) and \(R\) (which occurs before \((z,w)\)) to \((z,w)\) loses nothing.
Note that for a rigorous proof, we must ensure that the rearrangement process eventually terminates; this follows by always choosing, among the shortest paths, one of minimum total Manhattan length.
End of ☆ proof
Consider the method of adding gems one by one and updating the optimal solution each time.
We call position \(x\) a star if it satisfies the following:
- The number of gems with size \(\geq x\) is exactly \(N+1-x\).
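As a concrete illustration of this definition, here is a small Python sketch (the function name is mine, not the editorial's): it lists all star positions for a given multiset of current gem sizes.

```python
def stars(sizes, N):
    """Return all positions x (1..N) where exactly N+1-x gems have size >= x."""
    return [x for x in range(1, N + 1)
            if sum(1 for s in sizes if s >= x) == N + 1 - x]

# With sizes 3, 2, 1 and N = 3, every position is a star.
assert stars([3, 2, 1], 3) == [1, 2, 3]
# With sizes 3, 3, 1, position 3 is not a star (two gems have size >= 3).
assert stars([3, 3, 1], 3) == [1, 2]
```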
Suppose we newly add a gem of size \(B_i\). For any star \(x\) satisfying \(x \leq B_i\), some gem needs to change from size \(\geq x\) to size \(< x\).
Consider the change in optimal solution when adding one gem, and let the gems whose sizes changed be \(i_1, \cdots, i_t\). For a gem that changed from size \(a \to b\) and another that changed from \(b \to c\), we can assume the former’s value is strictly greater than the latter’s (we can take such a solution).
Consider a group of gems that undergo changes like size \(a \to b \to c \to d\). Each group has size at most \(K\). The size of the last gem in the group is (size of some existing star)\(-1\), and it becomes a new star. Otherwise, we would be cutting the gem size unnecessarily, violating optimality.
Once a position becomes a star, it remains a star thereafter. From the above, the total number of times gem sizes change throughout all operations is \(O(NK)\).
Therefore, if we can efficiently enumerate the gems whose sizes should change, we’re done.
Consider a DP to find the optimal change when adding one gem. Let \(dp[i][j] =\) “minimum cost when processing positions \(\geq i\), and currently cutting a type \(j\) gem that was originally of size \(\geq i+1\) down to size \(i\)”, and process \(i = N, \cdots, 1\). (We also need \(j = -1\) corresponding to the state where no gem is currently being cut)
If we formulate this with min-plus matrices, we can put it on a segment tree. Note that in practice, we need to hold data other than the minimum value to reconstruct the solution, but the approach doesn’t change much.
Each node holds \(O(K^2)\) data, and merging nodes takes \(O(K^3)\). Therefore, the overall complexity is \(O(NK^4 \log N)\).
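The merge operation underlying this segment tree is the (min, +) matrix product. The following is a generic Python sketch of that product (not the editorial's reference implementation, which would also carry reconstruction data); its associativity is what allows a segment tree to maintain the product of its leaves under point updates.

```python
INF = float('inf')

def min_plus(A, B):
    """Tropical product: C[i][j] = min over k of (A[i][k] + B[k][j])."""
    n, m, p = len(A), len(B), len(B[0])
    return [[min(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[0, 3], [INF, 0]]
B = [[0, 1], [INF, 0]]
C = [[0, 2], [5, 0]]
# associativity: merging left-to-right or right-to-left gives the same node
assert min_plus(min_plus(A, B), C) == min_plus(A, min_plus(B, C))
```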