
## G - Baseball Editorial by en_translator

Let $$S\coloneqq \sum_{i=1}^N P_{i}$$.

### Graph representation and formalization into DP (Dynamic Programming)

We first reformulate the problem in graph-theoretic terms. Consider a graph with $$N+2$$ vertices numbered $$0,1,\dots$$, and $$N+1$$, with the following directed edges:

• Edge $$(0,j)\,(1 \leq j \leq N)$$ with weight $$\sum_{y=1}^{j-1} P_y \times (j-y)$$.
• Edge $$(i,j)\,(1 \leq i < j \leq N)$$ with weight $$\sum_{y=i}^{j-1} P_y \times \min(y-i,j-y)$$.
• Edge $$(i,N+1)\,(1 \leq i \leq N)$$ with weight $$\sum_{y=i}^N P_y \times (y-i)$$.

When Takahashi chooses $$x_1,x_2,\dots$$, and $$x_K$$, Aoki’s expected score multiplied by $$S$$ equals the weight of the length-$$(K+1)$$ path $$0 \to x_1 \to \dots \to x_K \to N+1$$. Thus, minimizing Aoki’s expected score boils down to finding the shortest path from vertex $$0$$ to vertex $$N+1$$ that uses exactly $$K+1$$ edges.

Since the graph is a DAG (Directed Acyclic Graph), the problem can be solved with DP. Let $$\mathrm{dp}[k][j]$$ be the weight of the shortest path from vertex $$0$$ to vertex $$j$$ using exactly $$k$$ edges; then $$\mathrm{dp}[K+1][N+1]$$ is the sought answer. The transition is as follows:

$\mathrm{dp}[k][j] = \min_{i<j} \{\mathrm{dp}[k-1][i] + c(i,j)\}, \qquad (\ast)$

where $$c(i,j)$$ is the weight of the edge $$(i,j)$$. $$c(i,j)$$ can be found in $$O(1)$$ time per query by precalculating the cumulative sums of $$P_y$$ and $$P_y \times y$$.
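As a concrete illustration (a sketch, not the official implementation; `build_cost` is a hypothetical helper name), the $$O(1)$$-per-query evaluation of $$c(i,j)$$ from the two cumulative sums can look like this:

```python
def build_cost(P):
    """P holds P_1..P_N (stored 0-indexed); returns c(i, j)."""
    N = len(P)
    # pre_p[k]  = sum of P_y        over y in 1..k
    # pre_py[k] = sum of P_y * y    over y in 1..k
    pre_p = [0] * (N + 1)
    pre_py = [0] * (N + 1)
    for y in range(1, N + 1):
        pre_p[y] = pre_p[y - 1] + P[y - 1]
        pre_py[y] = pre_py[y - 1] + P[y - 1] * y

    def range_p(a, b):   # sum_{y=a}^{b} P_y (0 if the range is empty)
        return pre_p[b] - pre_p[a - 1] if a <= b else 0

    def range_py(a, b):  # sum_{y=a}^{b} P_y * y
        return pre_py[b] - pre_py[a - 1] if a <= b else 0

    def c(i, j):
        if i == 0:       # sum_{y=1}^{j-1} P_y * (j - y)
            return j * range_p(1, j - 1) - range_py(1, j - 1)
        if j == N + 1:   # sum_{y=i}^{N} P_y * (y - i)
            return range_py(i, N) - i * range_p(i, N)
        # sum_{y=i}^{j-1} P_y * min(y-i, j-y); min picks y-i while
        # y <= (i+j)/2, so split the range at m = floor((i+j)/2).
        m = (i + j) // 2
        left = range_py(i, m) - i * range_p(i, m)
        right = j * range_p(m + 1, j - 1) - range_py(m + 1, j - 1)
        return left + right

    return c
```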

However, naively implementing this DP costs $$O(N^2 K)$$ time.
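To make the recurrence $$(\ast)$$ concrete, here is a minimal sketch (the helper name `solve_naive` is my own; for brevity $$c(i,j)$$ is recomputed from its definition each time, so this runs in $$O(N^3 K)$$ rather than the $$O(N^2 K)$$ obtained with precomputed cumulative sums):

```python
def solve_naive(N, K, P):
    """P holds P_1..P_N (stored 0-indexed); returns dp[K+1][N+1]."""
    def c(i, j):  # edge weight, straight from the definitions
        if i == 0:
            return sum(P[y - 1] * (j - y) for y in range(1, j))
        if j == N + 1:
            return sum(P[y - 1] * (y - i) for y in range(i, N + 1))
        return sum(P[y - 1] * min(y - i, j - y) for y in range(i, j))

    INF = float("inf")
    # dp[k][j] = weight of the shortest 0 -> j path using exactly k edges
    dp = [[INF] * (N + 2) for _ in range(K + 2)]
    dp[0][0] = 0
    for k in range(1, K + 2):
        for j in range(1, N + 2):
            for i in range(j):
                if i == 0 and j == N + 1:
                    continue  # there is no edge (0, N+1)
                if dp[k - 1][i] < INF:
                    dp[k][j] = min(dp[k][j], dp[k - 1][i] + c(i, j))
    return dp[K + 1][N + 1]  # S times the minimal expected score
```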

### Mongeness of the cost, and optimization using Alien DP

The DP represented by equation $$(\ast)$$ can be optimized using the trick called Alien DP when the transition cost $$c(i,j)$$ has a property called Mongeness.

Cost $$c(i,j)$$ is said to be Monge if and only if:

$\forall i,j,k,l,\quad 0 \leq i < j < k < l \leq N + 1 \Rightarrow c(i,l)+c(j,k) \geq c(i,k) + c(j,l).$

One can intuitively understand that the cost function for our problem is indeed Monge from the following figure. $$c(x,y)$$ roughly corresponds to the area of the triangular region spanned by $$x$$ and $$y$$. We have $$c(i,l)+c(j,k) = (A+B+C+D) + C = A+B+2C+D$$ and $$c(i,k)+c(j,l) = (B+C)+(C+D) = B+2C+D$$; since $$A \geq 0$$, the inequality of the Mongeness criterion is satisfied.
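As a sanity check (my addition, not part of the editorial), the Monge inequality can also be verified numerically on small instances by brute force; the pair $$(0, N+1)$$, for which no edge is defined, is skipped:

```python
def is_monge(N, P):
    """Brute-force check of the Monge inequality for this problem's cost."""
    def c(i, j):  # edge weight, straight from the definitions
        if i == 0:
            return sum(P[y - 1] * (j - y) for y in range(1, j))
        if j == N + 1:
            return sum(P[y - 1] * (y - i) for y in range(i, N + 1))
        return sum(P[y - 1] * min(y - i, j - y) for y in range(i, j))

    for i in range(N + 2):
        for j in range(i + 1, N + 2):
            for k in range(j + 1, N + 2):
                for l in range(k + 1, N + 2):
                    if i == 0 and l == N + 1:
                        continue  # (0, N+1) is not an edge
                    if c(i, l) + c(j, k) < c(i, k) + c(j, l):
                        return False
    return True
```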

Next, we will explain Alien DP. We leave the justification of the algorithm to the following articles and here just state how it works.

First, define a function $$f(\lambda)$$ as follows:

• Let $$f(\lambda) \coloneqq d - \lambda (K+1)$$, where $$d$$ is the length of the shortest path from vertex $$0$$ to vertex $$N+1$$ on the graph in which $$\lambda$$ is uniformly added to the cost of every edge, ignoring the constraint on the number of edges used.

The sought answer is $$\max_{\lambda\in\mathbb{Z}} f(\lambda)$$. Here, $$\lambda$$ is a penalty imposed every time an edge is used; we can regard this algorithm as searching for the penalty $$\lambda$$ under which exactly $$(K+1)$$ edges are used.

One can show that $$f(\lambda)$$ is concave with respect to $$\lambda$$, and that the optimal $$\lambda$$ lies within the range $$0\leq \lambda \leq 3 \max_{i,j} c(i,j) \leq 3NS$$. Thus, this maximization problem can be solved by evaluating $$f(\lambda)$$ $$O(\log (NS))$$ times, using binary search over the slope, ternary search, or golden-section search. These methods are compared in the following article:

All that is left is to evaluate $$f(\lambda)$$ quickly for a given $$\lambda$$. Again, this can be done by DP, with the following transition:

$\mathrm{dp}[j]=\displaystyle \min_{i < j} \{\mathrm{dp}[i] + c(i,j) + \lambda\}. \qquad (\ast\ast)$

Now we have one fewer dimension of DP, but naively implementing it still costs $$O(N^2 \log (NS))$$, exceeding the time limit. However, it is known that the DP represented by equation $$(\ast\ast)$$ can be computed fast if the cost $$c(i,j)$$ is Monge.
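Before turning to the fast evaluations of $$(\ast\ast)$$, here is a sketch of the whole Alien DP (the name `alien_dp` is my own, and the inner DP is the naive $$O(N^2)$$ one, so this illustrates correctness rather than the final complexity). Since $$f$$ is concave, the maximum is found by binary searching for the first $$\lambda$$ whose discrete slope $$f(\lambda+1)-f(\lambda)$$ turns negative:

```python
def alien_dp(N, K, P):
    """Returns S times the minimal expected score via Alien DP."""
    INF = float("inf")

    def c(i, j):  # edge weight, straight from the definitions
        if i == 0:
            return sum(P[y - 1] * (j - y) for y in range(1, j))
        if j == N + 1:
            return sum(P[y - 1] * (y - i) for y in range(i, N + 1))
        return sum(P[y - 1] * min(y - i, j - y) for y in range(i, j))

    def f(lam):
        # Unconstrained shortest path 0 -> N+1 with lam added per edge.
        dp = [0] + [INF] * (N + 1)
        for j in range(1, N + 2):
            for i in range(j):
                if i == 0 and j == N + 1:
                    continue  # there is no edge (0, N+1)
                if dp[i] < INF:
                    dp[j] = min(dp[j], dp[i] + c(i, j) + lam)
        return dp[N + 1] - lam * (K + 1)

    # Binary search over the slope of the concave function f.
    lo, hi = 0, 3 * N * sum(P)
    while lo < hi:
        mid = (lo + hi) // 2
        if f(mid + 1) - f(mid) >= 0:
            lo = mid + 1  # still on the non-decreasing side
        else:
            hi = mid
    return f(lo)
```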

### Shortest path problem on Monge-edge-weighted graph

We will introduce several ways to execute the DP of equation $$(\ast\ast)$$ fast. Solutions 1 to 3 are explained in detail in the following material.

#### Solution 1. Divide-and-conquer + monotone minima, $$O(N(\log N)^2)$$ time

Define an $$(N+2)\times (N+2)$$ matrix $$A$$ by $$A_{ij}=\mathrm{dp}[j] + c(j,i) + \lambda$$. Then, the DP $$(\ast\ast)$$ boils down to finding the minimum value of each column. If $$c(i,j)$$ is Monge, the matrix $$A$$ has a property called total monotonicity. Monotone minima is an algorithm that finds the minimum value of every column of a totally monotone matrix in $$O(N\log N)$$ total time.

Monotone minima requires random access to the elements of $$A$$. Nevertheless, in our setup the value $$A_{ij}$$ cannot be obtained until $$\mathrm{dp}[j]$$ is determined, which in turn requires the minimum value of the $$j$$-th column of $$A$$. As the order of element access is thus restricted, monotone minima cannot be applied directly here.

Here we apply the divide-and-conquer trick. Define $$\mathrm{solve}(l,r)$$ as the process of finding $$\mathrm{dp}[l],\dots,\mathrm{dp}[r-1]$$, specifically as follows:

1. Let $$m\coloneqq \lfloor (l+r)/2\rfloor$$, and execute $$\mathrm{solve}(l,m)$$ recursively.
2. Process the transitions from $$\mathrm{dp}[l],\dots,\mathrm{dp}[m-1]$$ to $$\mathrm{dp}[m],\dots,\mathrm{dp}[r-1]$$. All the required values of $$\mathrm{dp}$$ are already known here, so we can apply monotone minima.
3. Execute $$\mathrm{solve}(m,r)$$ recursively.

Thus we obtain an $$O(N(\log N)^2)$$ algorithm. Overall, the cost is $$O(N(\log N)^2 \log (NS))$$, which will be AC (accepted) with fast languages.
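The steps above can be sketched as follows (my illustration, with hypothetical names `dc_dp`, `solve`, and `monotone_minima`; for brevity the edge cost is recomputed naively in $$O(j-i)$$, whereas a real solution would use the $$O(1)$$ prefix-sum evaluation):

```python
def dc_dp(N, P, lam):
    """Evaluates the DP (**) for a fixed lambda; returns dp[N+1]."""
    INF = float("inf")

    def c(i, j):  # naive edge weight; use prefix sums in practice
        if i == 0:
            return sum(P[y - 1] * (j - y) for y in range(1, j))
        if j == N + 1:
            return sum(P[y - 1] * (y - i) for y in range(i, N + 1))
        return sum(P[y - 1] * min(y - i, j - y) for y in range(i, j))

    dp = [0] + [INF] * (N + 1)

    def cost(i, j):  # transition value i -> j (INF for the missing edge)
        if i == 0 and j == N + 1:
            return INF
        return dp[i] + c(i, j) + lam

    def monotone_minima(rows, cols):
        # Relax dp[j] for each j in cols over sources in rows, using the
        # fact that the (leftmost) argmin position is monotone in j.
        if not cols:
            return
        mid = len(cols) // 2
        j = cols[mid]
        best, k = INF, 0
        for idx, i in enumerate(rows):
            v = cost(i, j)
            if v < best:
                best, k = v, idx
        dp[j] = min(dp[j], best)
        monotone_minima(rows[: k + 1], cols[:mid])
        monotone_minima(rows[k:], cols[mid + 1 :])

    def solve(l, r):
        # Finalizes dp[l..r-1], assuming all transitions from sources
        # smaller than l have already been applied.
        if r - l <= 1:
            return
        m = (l + r) // 2
        solve(l, m)
        monotone_minima(list(range(l, m)), list(range(m, r)))
        solve(m, r)

    solve(0, N + 2)
    return dp[N + 1]
```

For example, with $$N=3$$ and all $$P_y = 1$$ this reproduces the unconstrained shortest-path lengths for each $$\lambda$$.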

#### Solution 2. Divide-and-conquer + SMAWK, $$O(N\log N)$$ time

The monotone minima in Solution 1 can be replaced with the SMAWK algorithm, shaving off one log factor. The SMAWK algorithm finds the minimum value of every column of a totally monotone matrix in $$O(N)$$ total time. However, it has a relatively large constant factor, making it practically indistinguishable from monotone minima under the constraints of this problem.

#### Solution 3. LARSCH, $$O(N)$$ time

One can even optimize Solution 2 to obtain an $$O(N)$$ algorithm called the LARSCH algorithm. The implementation of the LARSCH algorithm is rather complicated, but a simple $$O(N\log N)$$ algorithm is suggested in the following article by noshi91. While its complexity is worse, it ran faster than the $$O(N)$$ solution (both written by the writer) due to its small constant factor.

#### Solution 4. convex hull trick, $$O(N\log N)$$ time

For any two rows of a totally monotone matrix $$A$$, the ordering of the elements is monotone. That is, if you view the $$j$$-th row as a function of $$i$$, any two such functions cross at most once.

The minimum of a family of functions, any two of which cross at most once, can be found with a modified convex hull trick. The convex hull trick is originally an algorithm for finding the minimum of a set of affine functions, but the property that the functions cross each other at most once allows the same algorithm to be applied. By using binary search to find the intersection of two functions, $$(\ast\ast)$$ can be solved in a total of $$O(N \log N)$$ time.

One can also use a data structure called the Li Chao tree, which maintains the minimum of a set of affine functions, adapted in the same way to functions that pairwise intersect at most once. With a Li Chao tree, one again obtains an $$O(N \log N)$$ solution.

### Sample code

Among the writer’s implementations, the second one was the fastest.
