H - Random Robots Editorial by liouzhou_101


It has been a long time since the contest ended, and still no editorial for this problem has appeared. During the contest I figured out that the Lindström–Gessel–Viennot lemma might be useful here, but I failed to solve the problem. Thanks to the absence of an official editorial, I had the chance to work it out by myself.

For those who are not familiar with the Lindström–Gessel–Viennot lemma, see here. For this problem it suffices to know the following: given a directed acyclic graph (DAG) \(G = (V, E)\) and two sets \(S = \{s_1, s_2, \dots, s_k\} \subseteq V\) and \(T = \{t_1,t_2,\dots,t_k\} \subseteq V\) of sources and sinks, respectively, with \(|S| = |T| = k\), if for all \(i \neq j\) every path from \(s_i\) to \(t_j\) and every path from \(s_j\) to \(t_i\) share at least one vertex (i.e., they are never disjoint), then the number of \(k\)-tuples of pairwise vertex-disjoint paths, the \(i\)-th going from \(s_i\) to \(t_i\), is

\[ \left| \begin{matrix} e(s_1, t_1) & e(s_1, t_2) & \dots & e(s_1, t_k) \\ e(s_2, t_1) & e(s_2, t_2) & \dots & e(s_2, t_k) \\ \vdots & \vdots & \ddots & \vdots \\ e(s_k, t_1) & e(s_k, t_2) & \dots & e(s_k, t_k) \end{matrix} \right|, \]

where \(e(s_i, t_j)\) denotes the number of paths from \(s_i\) to \(t_j\).
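
For example, in the smallest non-trivial case \(k = 2\), the lemma states that the number of pairs of vertex-disjoint paths \(s_1 \to t_1\) and \(s_2 \to t_2\) equals

\[ \left| \begin{matrix} e(s_1, t_1) & e(s_1, t_2) \\ e(s_2, t_1) & e(s_2, t_2) \end{matrix} \right| = e(s_1, t_1)\,e(s_2, t_2) - e(s_1, t_2)\,e(s_2, t_1), \]

that is, all pairs of paths minus the intersecting ones; the intersecting pairs are in bijection with pairs of paths \(s_1 \to t_2\), \(s_2 \to t_1\) by swapping the two tails after the first common vertex.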

Now let us rephrase the problem as follows: given \(k\) sources \(s_i = (0, x_i)\) for \(1 \leq i \leq k\), where \(x_1 < x_2 < \dots < x_k\), count the tuples of pairwise disjoint paths to \(k\) sinks \(t_i = (n, y_i)\), summed over all choices \(y_1 < y_2 < \dots < y_k\). Here, every vertex \((u, v)\) has exactly two outgoing edges, ending at \((u+1, v)\) and \((u+1, v+1)\), respectively.

According to the Lindström–Gessel–Viennot lemma, the answer is just

\[ \mathit{ans} = \sum_{0 \leq y_1 < y_2 < \dots < y_k \leq x_k+n} \left| \begin{matrix} e(x_1, y_1) & e(x_1, y_2) & \dots & e(x_1, y_k) \\ e(x_2, y_1) & e(x_2, y_2) & \dots & e(x_2, y_k) \\ \vdots & \vdots & \ddots & \vdots \\ e(x_k, y_1) & e(x_k, y_2) & \dots & e(x_k, y_k) \end{matrix} \right|, \]

where \(e(x_i, y_j) = \begin{cases} \binom{n}{y_j-x_i} & 0 \leq y_j-x_i \leq n \\ 0 & \text{otherwise} \end{cases}\).
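
Since \(e(x_i, y_j)\) is just a binomial coefficient, it can be evaluated in \(O(1)\) after precomputing factorials. Below is a minimal sketch of such a helper; the modulus 998244353 is an assumption of this sketch (the actual modulus comes from the original problem statement, which is not restated here).

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;  // assumed prime modulus

long long pow_mod(long long a, long long b) {
    long long r = 1;
    for (; b; b >>= 1, a = a * a % MOD)
        if (b & 1) r = r * a % MOD;
    return r;
}

// e(x, y) = C(n, y - x): the number of lattice paths from (0, x) to (n, y).
struct PathCounter {
    int n;
    vector<long long> fact, inv_fact;
    PathCounter(int n) : n(n), fact(n + 1, 1), inv_fact(n + 1, 1) {
        for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i % MOD;
        inv_fact[n] = pow_mod(fact[n], MOD - 2);            // Fermat inverse
        for (int i = n; i >= 1; --i) inv_fact[i - 1] = inv_fact[i] * i % MOD;
    }
    long long e(int x, int y) const {
        int m = y - x;
        if (m < 0 || m > n) return 0;                       // unreachable sink
        return fact[n] * inv_fact[m] % MOD * inv_fact[n - m] % MOD;
    }
};

int main() {
    PathCounter pc(4);              // n = 4 steps
    cout << pc.e(1, 3) << '\n';     // C(4, 2) = 6 paths from (0, 1) to (4, 3)
}
```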

In order to compute the answer, for a subset of indices \(S = \{s_1,s_2,\dots,s_{|S|}\} \subseteq [k] = \{1,2,\dots,k\}\) with \(s_1 < s_2 < \dots < s_{|S|}\), and an upper bound \(b\) on the \(y_i\)'s, we define

\[ f(S, b) = \sum_{0 \leq y_1 < y_2 < \dots < y_{|S|} \leq b} \left| \begin{matrix} e(x_{s_1}, y_1) & e(x_{s_1}, y_2) & \dots & e(x_{s_1}, y_{|S|}) \\ e(x_{s_2}, y_1) & e(x_{s_2}, y_2) & \dots & e(x_{s_2}, y_{|S|}) \\ \vdots & \vdots & \ddots & \vdots \\ e(x_{s_{|S|}}, y_1) & e(x_{s_{|S|}}, y_2) & \dots & e(x_{s_{|S|}}, y_{|S|}) \end{matrix} \right|. \]

In this way, the answer is \(f([k], x_k+n)\). Let us see from which states we can derive \(f(S, b)\), by looking at the largest sink \(y_{|S|}\). There are only two cases.

  1. \(y_{|S|} < b\). In this case, it reduces to \(f(S, b-1)\).
  2. \(y_{|S|} = b\). In this case, we need the sum of all determinants with the last sink fixed at \(y_{|S|} = b\). Expanding each determinant along its last column and then summing over the remaining \(y_1 < y_2 < \dots < y_{|S|-1} \leq b-1\) gives

\[ \begin{aligned} \sum_{0 \leq y_1 < y_2 < \dots < y_{|S|-1} < y_{|S|} = b} \left| \begin{matrix} e(x_{s_1}, y_1) & e(x_{s_1}, y_2) & \dots & e(x_{s_1}, y_{|S|}) \\ e(x_{s_2}, y_1) & e(x_{s_2}, y_2) & \dots & e(x_{s_2}, y_{|S|}) \\ \vdots & \vdots & \ddots & \vdots \\ e(x_{s_{|S|}}, y_1) & e(x_{s_{|S|}}, y_2) & \dots & e(x_{s_{|S|}}, y_{|S|}) \end{matrix} \right| & = \sum_{i=1}^{|S|} (-1)^{|S|+i} e(x_{s_i}, b) \sum_{0 \leq y_1 < y_2 < \dots < y_{|S|-1} \leq b-1} \left| \begin{matrix} e(x_{s_1}, y_1) & e(x_{s_1}, y_2) & \dots & e(x_{s_1}, y_{|S|-1}) \\ \vdots & \vdots & \ddots & \vdots \\ e(x_{s_{i-1}}, y_1) & e(x_{s_{i-1}}, y_2) & \dots & e(x_{s_{i-1}}, y_{|S|-1}) \\ e(x_{s_{i+1}}, y_1) & e(x_{s_{i+1}}, y_2) & \dots & e(x_{s_{i+1}}, y_{|S|-1}) \\ \vdots & \vdots & \ddots & \vdots \\ e(x_{s_{|S|}}, y_1) & e(x_{s_{|S|}}, y_2) & \dots & e(x_{s_{|S|}}, y_{|S|-1}) \end{matrix} \right| \\ & = \sum_{i=1}^{|S|} (-1)^{|S|+i} e(x_{s_i}, b) f(S\setminus\{s_i\}, b-1). \end{aligned} \]

Combining both cases, we obtain the DP recurrence

\[ f(S, b) = f(S, b-1) + \sum_{i=1}^{|S|} (-1)^{|S|+i} e(x_{s_i}, b) f(S\setminus\{s_i\}, b-1). \]

Moreover, since all \(y_i\) are non-negative, the boundary condition is \(f(S, -1) = [S = \emptyset] = \begin{cases} 1 & S = \emptyset \\ 0 & \text{otherwise} \end{cases}\).
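
As a quick sanity check, for a singleton \(S = \{i\}\) the recurrence (together with \(f(\emptyset, b) = 1\)) unrolls to

\[ f(\{i\}, b) = \sum_{y=0}^{b} e(x_i, y) = \sum_{y=0}^{b} \binom{n}{y - x_i}, \]

which is exactly the number of paths of a single robot ending at a coordinate of at most \(b\), as expected.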

It is easy to see that there are \(O(2^k(x_k+n))\) states in total, and each state takes \(O(k)\) time to compute. So the overall time complexity is \(O(k2^k(x_k+n))\). Example code is here.
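
In case the linked code is not accessible, here is a minimal self-contained sketch of the DP described above. It assumes the count is required modulo the prime 998244353 and an input of the form \(k\ n\) followed by \(x_1 < x_2 < \dots < x_k\); both assumptions should be adjusted to the actual problem statement.

```cpp
#include <bits/stdc++.h>
using namespace std;

const long long MOD = 998244353;  // assumed modulus

int main() {
    int k, n;
    cin >> k >> n;
    vector<int> x(k);
    for (auto &v : x) cin >> v;

    int maxY = x[k - 1] + n;  // largest reachable sink coordinate x_k + n

    // Factorials for e(x_i, y_j) = C(n, y_j - x_i).
    vector<long long> fact(n + 1, 1), inv_fact(n + 1, 1);
    for (int i = 1; i <= n; ++i) fact[i] = fact[i - 1] * i % MOD;
    long long a = fact[n], r = 1;
    for (long long p = MOD - 2; p; p >>= 1, a = a * a % MOD)
        if (p & 1) r = r * a % MOD;  // Fermat inverse of n!
    inv_fact[n] = r;
    for (int i = n; i >= 1; --i) inv_fact[i - 1] = inv_fact[i] * i % MOD;
    auto C = [&](int m) -> long long {  // C(n, m), zero outside [0, n]
        if (m < 0 || m > n) return 0;
        return fact[n] * inv_fact[m] % MOD * inv_fact[n - m] % MOD;
    };

    // f[mask] holds f(S, b) for the current bound b; boundary f(S, -1) = [S empty].
    vector<long long> f(1 << k, 0);
    f[0] = 1;
    for (int b = 0; b <= maxY; ++b) {
        // Masks are processed in decreasing order so that f[mask ^ bit] still
        // stores the value for b - 1 when f[mask] is being updated.
        for (int mask = (1 << k) - 1; mask >= 1; --mask) {
            int sz = __builtin_popcount(mask);
            long long delta = 0;
            int rank = 0;  // 1-indexed position of s_i among the set bits of mask
            for (int i = 0; i < k; ++i) {
                if (!(mask >> i & 1)) continue;
                ++rank;
                long long term = C(b - x[i]) * f[mask ^ (1 << i)] % MOD;
                if ((sz + rank) & 1) delta = (delta - term + MOD) % MOD;  // (-1)^{|S|+i}
                else delta = (delta + term) % MOD;
            }
            f[mask] = (f[mask] + delta) % MOD;
        }
    }
    cout << f[(1 << k) - 1] << '\n';
    return 0;
}
```

Iterating \(b\) in the outer loop and the masks in decreasing order lets a single array play the roles of both layers \(b-1\) and \(b\), since \(S \setminus \{s_i\}\) always corresponds to a strictly smaller mask.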

See my blog for a Chinese explanation.
