
D - Pure Straight Editorial by evima


◆ With fixed terms to use and fixed position of the subsequence

Let us first consider the minimum number of operations when the terms to use are fixed to \(A_{i_1}, \ldots, A_{i_K}\) (\(i_1 < \cdots < i_K\)), and the position for the contiguous subsequence \((1, 2, \ldots, K)\) is fixed to \(j_1, \ldots, j_K = n, n+1, \ldots, n+K-1\).

For increasing sequences of indices \(I = (i_1, \ldots, i_K)\) and \(J = (j_1, \ldots, j_K)\), let us define their distance as \(\text{dist}(I, J) = \sum_{k=1}^K |i_k - j_k|\). Also, let us define the inversion number of a sequence \((x_1, \ldots, x_K)\), \(\text{inv}(x_1, \ldots, x_K)\), as the number of pairs \((i, j)\) such that \(i < j\) and \(x_i > x_j\).

We can show that the minimum number of operations needed to arrange \(A_{i_1}, \ldots, A_{i_K}\) in ascending order at the positions \(j_1, \ldots, j_K = n, n+1, \ldots, n+K-1\) is \(\text{dist}(I, J) + \text{inv}(A_{i_1}, \ldots, A_{i_K})\).

Indeed, we need at least \(\text{dist}(I, J) + \text{inv}(A_{i_1}, \ldots, A_{i_K})\) operations because the following holds:

  • When swapping two adjacent elements that are both among the \(K\) terms to use: the inversion number decreases by at most \(1\), while \(\text{dist}(I, J)\) does not change.
  • When swapping two adjacent elements, at least one of which is not among the \(K\) terms to use: \(\text{dist}(I, J)\) decreases by at most \(1\), while the inversion number does not change.

On the other hand, we can achieve the objective in \(\text{dist}(I, J) + \text{inv}(A_{i_1}, \ldots, A_{i_K})\) operations: first move each \(A_{i_k}\) to the position \(j_k\) using \(\text{dist}(I, J)\) swaps, then sort the \(K\) now-contiguous terms by repeatedly swapping adjacent pairs among them, which takes \(\text{inv}(A_{i_1}, \ldots, A_{i_K})\) swaps.
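For reference, here is a minimal sketch in C++ that evaluates this cost for a fixed choice of indices \(I\) and a fixed starting position \(n\) (the helper name and 0-indexed positions are assumptions, not from the original editorial):

```cpp
#include <bits/stdc++.h>
using namespace std;

// Cost of placing the chosen terms A[I[0]], ..., A[I[K-1]] (I strictly increasing)
// in ascending order at positions n, n+1, ..., n+K-1:
// dist(I, J) + inv(A[I[0]], ..., A[I[K-1]]).
long long costForFixedChoice(const vector<int>& A, const vector<int>& I, int n) {
    int K = (int)I.size();
    long long dist = 0, inv = 0;
    for (int k = 0; k < K; ++k) {
        dist += abs((long long)I[k] - (n + k));  // |i_k - j_k| with j_k = n + k
        for (int l = k + 1; l < K; ++l)
            if (A[I[k]] > A[I[l]]) ++inv;        // inversions among the chosen terms
    }
    return dist + inv;
}
```

Brute-forcing over all choices of \(I\) and \(n\) with such a helper is far too slow for the actual constraints, but it can serve as a correctness check for small inputs.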


◆ Only fixing the terms to use

Next, let us consider how to compute the minimum number of operations when the terms to use are fixed but the position is not. We need to choose the position of the contiguous subsequence \((1, 2, \ldots, K)\), that is, \(j_1, \ldots, j_K = n, n+1, \ldots, n+K-1\), so as to minimize \(\sum |i_k - j_k|\) (the inversion number does not depend on this choice).

For \(m = \lceil K/2\rceil\), we can see the following.

  • If \(i_m < j_m\), decreasing every \(j_1, \ldots, j_K\) by \(1\) decreases \(|i_k - j_k|\) by \(1\) for \(1\leq k\leq m\) and increases \(|i_k - j_k|\) by at most \(1\) for \(m < k\leq K\).

  • If \(i_m > j_m\), increasing every \(j_1, \ldots, j_K\) by \(1\) increases \(|i_k - j_k|\) by at most \(1\) for \(1\leq k< m\) and decreases \(|i_k - j_k|\) by \(1\) for \(m \leq k\leq K\).

From these, and since \(m \geq K - m\) and \(K - m + 1 \geq m - 1\), such a shift never increases the total distance, so there is an optimal solution such that \(i_m = j_m\), where:

  • \(|i_k - j_k| = j_k - i_k = (j_m + k-m) - i_k\) if \(1 \leq k < m\), and
  • \(|i_k - j_k| = i_k - j_k = i_k - (j_m + k - m)\) if \(m< k \leq K\).

Thus, we can represent \(\text{dist}(I, J)\) as a linear function of \(i_1, \ldots, i_K\): \(\sum_k a_k i_k + b\) with \(a_k \in \{-1, 0, 1\}\) (specifically, \(a_k = -1\) for \(k < m\), \(a_k = 1\) for \(k > m\), and \(a_m\) is \(-1\) or \(0\) depending on the parity of \(K\)). Adding the inversion number of \((A_{i_1},\ldots, A_{i_K})\) yields the minimum number of operations when using \((A_{i_1},\ldots, A_{i_K})\).
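To make this concrete, a small sketch (again with a hypothetical helper name and 0-indexed positions) that evaluates the minimum of \(\text{dist}(I, J)\) over all positions by aligning \(j_m\) with \(i_m\) could look like:

```cpp
// Minimum of sum_k |i_k - j_k| over all starting positions of the block,
// obtained by aligning j_m with i_m, where m = ceil(K/2) (k is 1-indexed here).
long long minDistForFixedIndices(const vector<int>& I) {
    int K = (int)I.size();
    int m = (K + 1) / 2;                 // the median index (1-indexed)
    long long jm = I[m - 1];             // optimal choice: j_m = i_m
    long long dist = 0;
    for (int k = 1; k <= K; ++k) {
        long long jk = jm + (k - m);     // j_k = j_m + (k - m)
        dist += abs((long long)I[k - 1] - jk);
    }
    return dist;
}
```

Since \(I\) is strictly increasing, the resulting \(j_1, \ldots, j_K\) automatically stay within \([i_1, i_K]\), so no boundary handling is needed.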

◆ Computation with DP

We have found that the minimum number of operations when using \((A_{i_1},\ldots, A_{i_K})\), up to the additive constant \(b\), is the sum of:

  • a linear combination of \(i_k\): \(\sum_{k=1}^K a_ki_k\), and
  • the inversion number of \((A_{i_1}, \ldots, A_{i_K})\).

In particular, for \(0\leq K'\leq K\), we can define the “partial” cost when the first \(K'\) terms to use have been decided as \((A_{i_1}, \ldots, A_{i_{K'}})\) as the sum of:

  • a linear combination of \(i_k\): \(\sum_{k=1}^{K'} a_ki_k\), and
  • the inversion number of \((A_{i_1}, \ldots, A_{i_{K'}})\).

We can solve the problem with a DP where \(\text{dp}[S]\) is the minimum partial cost when the set of chosen values is \(S\) (\(|S| = K'\)). Scanning the positions \(i = 1, \ldots, N\) from left to right, choosing the element at position \(i\) with value \(A_i \notin S\) adds \(a_{K'+1}\, i\) plus the number of values in \(S\) greater than \(A_i\) (the newly created inversions). The time complexity is \(O(2^KN)\).
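Putting everything together, here is a minimal self-contained sketch of this DP in C++ (the I/O format and variable names are assumptions, not taken from the sample implementations; positions are 1-indexed as in the formulas above):

```cpp
#include <bits/stdc++.h>
using namespace std;

int main() {
    int N, K;
    cin >> N >> K;
    vector<int> A(N);
    for (auto& a : A) cin >> a;                    // values assumed to lie in [1, K]

    // Coefficients a_k and constant b with dist(I, J) = sum_k a_k * i_k + b
    // for the optimal position (j_m = i_m, m = ceil(K/2), k is 1-indexed).
    int m = (K + 1) / 2;
    vector<long long> coef(K + 1, 0);
    long long b = 0;
    for (int k = 1; k <= K; ++k) {
        if (k < m)      { coef[k] -= 1; coef[m] += 1; b += k - m; }
        else if (k > m) { coef[k] += 1; coef[m] -= 1; b -= k - m; }
    }

    const long long INF = LLONG_MAX / 4;
    vector<long long> dp(1 << K, INF);
    dp[0] = 0;

    // dp[S]: minimum partial cost when the set of already chosen values is S
    // (all of their positions lie strictly before the current position).
    for (int pos = 1; pos <= N; ++pos) {
        int bit = A[pos - 1] - 1;                  // value A[pos] as a bit index
        for (int S = (1 << K) - 1; S >= 0; --S) {
            if (dp[S] == INF || (S >> bit & 1)) continue;
            int k = __builtin_popcount(S) + 1;     // this value becomes the k-th chosen term
            long long add = coef[k] * pos          // contribution a_k * i_k
                          + __builtin_popcount(S >> (bit + 1));  // newly created inversions
            dp[S | 1 << bit] = min(dp[S | 1 << bit], dp[S] + add);
        }
    }

    cout << dp[(1 << K) - 1] + b << '\n';          // add back the constant b
}
```

For each position, all subsets not containing \(A_i\) are relaxed in \(O(1)\), giving the stated \(O(2^KN)\) time.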


◆ Sample Implementations

There are also solutions without DP, with varying complexity. The latter of the implementations below, for example, has a complexity of \(\Theta(N^2+2^KK^{1.5}N)\).
