Official

D - Between Two Arrays Editorial by en_translator


Let us first consider a naive DP (Dynamic Programming). Define a DP table \(dp_{i,j}\) by

  • \(dp_{i,j} := \) the number of length-\(i\) prefixes of such sequences with \(c_i = j\).

Then, focusing on the \(i\)-th element,

  • if \(a_i \leq j \leq b_i\), it is accepted as long as it is greater than or equal to the previous element;
  • otherwise, it is not accepted.

Thus, we have the following transitions.
(For convenience, we regard that the \(0\)-th element of C is \(0\), in order to reduce case divisions.)

\[dp_{i, j} = \begin{cases} 1 & i = 0 \wedge j = 0 \\ \sum_{0 \leq k \leq j} dp_{i-1,k} & i \geq 1 \wedge a_i \leq j \leq b_i \\ 0 & \mathrm{otherwise.} \\ \end{cases} \]

This expression can be computed in increasing order of \(i\). The answer is the sum over all length-\(N\) prefixes, that is,

\[dp_{N,0} + dp_{N,1} + \dots + dp_{N, M},\]

where \(M := \max( \max(A),\max(B))\).
Therefore, we now have an algorithm that finds the answer in polynomial time. The DP defined by the transitions above has

  • \(\mathrm{O}(NM)\) elements to be updated, and
  • \(\mathrm{O}(M)\) time to compute each element,

so the total time complexity is \(\mathrm{O}(NM^2)\). However, this leads to TLE (Time Limit Exceeded), so we somehow want to reduce the complexity of the DP.
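As a concrete illustration, the naive \(\mathrm{O}(NM^2)\) DP can be sketched as follows (the function and variable names are mine, not from the writer's solution; the sums over \(k\) are computed directly, which is exactly the bottleneck discussed above):

```python
MOD = 998244353

def count_sequences_naive(A, B):
    """Naive O(N * M^2) DP following the transitions above; too slow for large M."""
    N = len(A)
    M = max(max(A), max(B))
    # dp[j] = number of valid length-i prefixes whose last element equals j;
    # the 0-th element of C is regarded as 0, so initially only dp[0] = 1.
    dp = [0] * (M + 1)
    dp[0] = 1
    for i in range(N):
        nx = [0] * (M + 1)
        for j in range(A[i], B[i] + 1):
            # dp[i][j] = sum of dp[i-1][k] over 0 <= k <= j
            nx[j] = sum(dp[k] for k in range(j + 1)) % MOD
        dp = nx
    # answer: sum over all possible last elements
    return sum(dp) % MOD
```

For example, with \(A = (1, 1)\) and \(B = (2, 3)\), the valid non-decreasing sequences are \((1,1), (1,2), (1,3), (2,2), (2,3)\), so the count is \(5\).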

Now, notice that the transition has the form \(\sum_{0 \leq k \leq j} dp_{i-1,k}\), which suggests maintaining cumulative sums of the DP table. Let us define a table of cumulative sums of DP, \(R_{i,j}\), as

  • \(R_{i, j} := \sum_{0 \leq k \leq j} dp_{i,k}\).

Then the DP mentioned above is expressed as

\[R_{i, j} = \begin{cases} 1 & i = 0 \wedge j = 0 \\ R_{i, j - 1} + R_{i-1,j} & i \geq 1 \wedge a_i \leq j \leq b_i \\ R_{i,j-1} & \mathrm{otherwise,} \\ \end{cases} \]

so the time complexity required to compute each element is reduced to \(\mathrm{O}(1)\). By computing the DP with this expression, the problem is solved in \(\mathrm{O}(NM)\) total time.

Alternatively, using a Fenwick Tree or Segment Tree, we can obtain each cumulative sum in \(\mathrm{O}(\log M)\) time, so the original \(\mathrm{O}(NM^2)\) DP can be improved to \(\mathrm{O}(NM \log M)\) without changing the transitions.
If you are comfortable with data structures, this solution may be more intuitive. (However, the time complexity is worse than \(\mathrm{O}(NM)\), so be careful about the constant factor.)
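A rough sketch of this data-structure approach, assuming a plain Fenwick tree for prefix sums (this is my illustration, not the writer's code; names like `Fenwick` and `count_sequences_bit` are made up for the sketch):

```python
MOD = 998244353

class Fenwick:
    """Fenwick (binary indexed) tree: point add and prefix sum, values mod MOD."""
    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)

    def add(self, i, v):
        # 0-indexed point update
        i += 1
        while i <= self.n:
            self.t[i] = (self.t[i] + v) % MOD
            i += i & -i

    def prefix(self, i):
        # sum of positions 0..i (0-indexed, inclusive)
        i += 1
        s = 0
        while i > 0:
            s = (s + self.t[i]) % MOD
            i -= i & -i
        return s

def count_sequences_bit(A, B):
    """O(N * M log M): each dp[i+1][j] is a prefix-sum query on the previous row."""
    M = max(max(A), max(B))
    ft = Fenwick(M + 1)
    ft.add(0, 1)  # dp[0][0] = 1 (the 0-th element of C is regarded as 0)
    for a, b in zip(A, B):
        # query the old row before overwriting: dp[i+1][j] = R[i][j] for a <= j <= b
        row = [ft.prefix(j) for j in range(a, b + 1)]
        ft = Fenwick(M + 1)  # start a fresh tree for the new row
        for j, v in zip(range(a, b + 1), row):
            ft.add(j, v)
    return ft.prefix(M)  # sum over all possible last elements
```

Each transition becomes one tree query plus one point update, which is where the \(\log M\) factor comes from.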

Writer’s solutions in C++ and Python follow.

  • C++

#include <iostream>
#include <vector>

#include "atcoder/modint.hpp"
using namespace std;
using mint = atcoder::modint998244353;
#define rep(i, n) for (int i = 0; i < (n); i++)

int main() {
  int N;
  cin >> N;
  vector<int> A(N), B(N);
  for (auto& x : A) cin >> x;
  for (auto& x : B) cin >> x;
  int MAX = 3000;
  // dp[i][j] is overwritten in place with the cumulative sums R[i][j]
  vector dp(N + 1, vector(MAX + 1, mint{0}));
  dp[0][0] = 1;
  rep(i, N + 1) {
    // take prefix sums of row i: dp[i][j] becomes R[i][j]
    rep(j, MAX) dp[i][j + 1] += dp[i][j];
    if (i != N) {
      // dp[i+1][j] = R[i][j] for A[i] <= j <= B[i]
      for (int j = A[i]; j <= B[i]; j++) dp[i + 1][j] += dp[i][j];
    }
  }
  cout << dp[N][MAX].val() << "\n";
}
  • Python (submit with the PyPy3 interpreter)
mod = 998244353
N = int(input())
A = list(map(int, input().split()))
B = list(map(int, input().split()))
M = 3000
# dp[j] holds the cumulative sums R[i][j] of the DP table
dp = [1] * (M + 1)
for i in range(N):
  nx = [0] * (M + 1)
  for j in range(A[i], B[i] + 1):
    nx[j] = dp[j]  # dp[i+1][j] = R[i][j]
  dp = nx
  for j in range(M):  # take prefix sums to get R[i+1][j]
    dp[j + 1] += dp[j]
    dp[j + 1] %= mod
print(dp[M])
