G - Counting Buildings Editorial by en_translator
Counting with Insertion DP
When it comes to combinatorics on permutations, insertion DP (Dynamic Programming) is often effective. The key idea of insertion DP is to enumerate the elements of a permutation in ascending or descending order, and consider how the sought value changes depending on the position where each element is inserted.
This time, enumerating the elements in descending order will work. Let \(\mathrm{dp}(i,k)\;(i=0,\dots,N-1,\,k=-i,\dots,i)\) be the number of permutations of \((N-i,N-i+1,\dots,N)\) with \(L-R=k\). We have \(\mathrm{dp}(0,0) = 1\). Supposing that we already know \(\mathrm{dp}(i,k)\) for all \(k\), let us consider how to find \(\mathrm{dp}(i+1,k)\).
There are \((i+2)\) positions at which \((N-i-1)\) can be inserted into a permutation of \((N-i,N-i+1,\dots,N)\). How do \(L\) and \(R\) change depending on the position where \((N-i-1)\) is inserted?
- When inserted at the left end: \(N-i-1\) becomes newly visible from the left. Since \(N-i-1\) is the current minimum element, it never blocks the numbers originally visible from the left. Also, \(N-i-1\) is never visible from the right. Thus, \(L\) increases by \(1\) and \(R\) is unchanged, so \(L-R\) increases by \(1\).
- When inserted at the right end: by symmetry with the left-end case, \(L-R\) decreases by \(1\).
- When inserted at any of the other \(i\) positions: \((N-i-1)\) is visible neither from the left nor from the right, so \(L-R\) remains the same.
By the observation above, the answer can be found by the following DP:
\[ \mathrm{dp}(i+1,k) = \mathrm{dp}(i,k-1) + \mathrm{dp}(i,k) \times i + \mathrm{dp}(i,k+1). \]
A naive implementation of the DP above yields an \(O(N^2)\) solution.
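The quadratic DP can be sketched as follows (a minimal sketch; the modulus \(998244353\) is an assumption, and \(k\) is stored with an offset so that array indices stay nonnegative):

```python
MOD = 998244353  # an assumed modulus; the original problem's modulus may differ

def count_naive(N: int, K: int) -> int:
    """Count permutations of (1, ..., N) with L - R = K by the O(N^2) insertion DP.

    k is stored with offset N - 1 so that indices into dp stay nonnegative.
    """
    OFF = N - 1
    dp = [0] * (2 * N - 1)
    dp[OFF] = 1  # dp(0, 0) = 1
    for i in range(N - 1):
        ndp = [0] * (2 * N - 1)
        for k in range(-i, i + 1):
            v = dp[k + OFF]
            if v == 0:
                continue
            ndp[k + 1 + OFF] = (ndp[k + 1 + OFF] + v) % MOD  # insert at the left end
            ndp[k - 1 + OFF] = (ndp[k - 1 + OFF] + v) % MOD  # insert at the right end
            ndp[k + OFF] = (ndp[k + OFF] + v * i) % MOD      # the other i positions
        dp = ndp
    return dp[K + OFF] if abs(K) <= N - 1 else 0
```

For example, among the \(3! = 6\) permutations of \((1,2,3)\), exactly two (namely \(132\) and \(231\)) satisfy \(L-R=0\).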
Optimization using formal power series
Let \(f_i(x)\) be a polynomial whose coefficient for \(x^k\) is \(\mathrm{dp}(i,k)\). We have \(f_0(x) = 1\). The transition of the DP corresponds to the product of polynomials as follows:
\[ f_{i+1}(x) = (x^{-1} + i + x) f_i(x). \]
The sought answer is the coefficient of \(x^K\) in \(f_{N-1}(x)\) (which we denote by \([x^K] f_{N-1}(x)\)). Since each factor \(x^{-1}+i+x\) equals \(x^{-1}(1+ix+x^2)\), it equals
\[ [x^K] f_{N-1}(x) = [x^K] \prod_{i=0}^{N-2}(x^{-1}+i+x)f_0(x) = [x^{K+N-1}] \prod_{i=0}^{N-2}(1+ix+x^2). \]
Thus, the problem can be solved if we can compute the product of the \((N-1)\) quadratic polynomials on the right-hand side.
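This formulation can be sketched directly: the answer is the coefficient of \(x^{K+N-1}\) in the product of the quadratics, since each factor \(x^{-1}+i+x\) contributes a shift of one exponent. Naive convolution stands in for an FFT-based routine here, and the modulus is an assumption:

```python
MOD = 998244353  # an assumed modulus

def poly_mul(a, b):
    """Naive convolution modulo MOD; an FFT-based routine would replace this."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % MOD
    return res

def count_by_product(N: int, K: int) -> int:
    """[x^K] f_{N-1}(x) = [x^{K+N-1}] of the product of the (N-1) quadratics."""
    prod = [1]
    for i in range(N - 1):
        prod = poly_mul(prod, [1, i % MOD, 1])  # 1 + i x + x^2
    idx = K + N - 1
    return prod[idx] if 0 <= idx < len(prod) else 0
```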
Two polynomials of degree at most \(N\) can be convolved in \(O(N \log N)\) time, but naively using that method to convolve the \((N-1)\) degree-\(2\) polynomials one by one costs \(O(N^2 \log N)\) time in total. In fact, the product of a sequence of polynomials \(p_1,\dots,p_M\) whose degrees sum to \(N\) can be computed in \(O(N (\log N)^2)\) total time. We introduce two ways to do so.
One is the so-called **merge technique**, in which the two polynomials of smallest degree are repeatedly chosen and convolved.
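The merge technique can be sketched with a min-heap keyed by degree (naive convolution stands in for FFT, and `poly_mul` and the modulus are illustrative assumptions):

```python
import heapq

MOD = 998244353  # an assumed modulus

def poly_mul(a, b):
    """Naive convolution modulo MOD; replace with FFT for the stated bound."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % MOD
    return res

def product_by_merging(polys):
    """Repeatedly pop and convolve the two polynomials of smallest degree."""
    heap = [(len(p), tie, p) for tie, p in enumerate(polys)]  # tie-breaker avoids comparing lists
    heapq.heapify(heap)
    tie = len(polys)
    while len(heap) > 1:
        _, _, a = heapq.heappop(heap)
        _, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (len(a) + len(b) - 1, tie, poly_mul(a, b)))
        tie += 1
    return heap[0][2]
```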
Complexity analysis
Let $d_i$ be the degree of the polynomial $p_i$. We evaluate how much $p_i$ contributes to the overall time complexity. Every time a polynomial containing $p_i$ (that is, a polynomial obtained by multiplying $p_i$ with other polynomials) is convolved, $p_i$ contributes $O(d_i \log d_i)$ to the time complexity. Thus, if we can show that polynomials containing $p_i$ are convolved at most $O(\log N)$ times, the overall complexity is $O(\sum_{i=1}^M d_i \log d_i \log N) = O(N(\log N)^2)$. Let $d$ be the degree of the polynomial containing $p_i$ at the moment it is chosen. Since the two polynomials of smallest degree are chosen, every polynomial left unchosen at this step has degree at least $d$, and so does every product formed from them later. Therefore, the next time a polynomial containing $p_i$ is convolved, its degree increases by at least $d$; in other words, its degree at least doubles every two convolutions. Hence, polynomials containing $p_i$ are chosen at most $O(\log N)$ times.

Another approach is the **divide-and-conquer algorithm**, in which we split the sequence of polynomials in two, compute the product of each half recursively, and finally multiply the two results. In other words, we compute the product $q_1$ of $(p_1,\dots,p_{\lfloor M/2 \rfloor})$ and the product $q_2$ of $(p_{\lfloor M/2 \rfloor+1},\dots,p_M)$ recursively (applying the same algorithm to each half), and finally compute the product $q_1q_2$.
Complexity analysis
Suppose that the length $M$ of the sequence of polynomials is a power of two. After dividing $k$ times, there are $2^k$ subsequences; suppose that their products have already been found. To obtain the products after dividing $(k-1)$ times, we convolve adjacent pairs of these products. Since the degrees of the $2^k$ polynomials sum to $N$, this level of convolutions costs $O(N \log N)$ time in total. Since division occurs at most $\log_2 M$ times, the overall time complexity is $O(N (\log N)^2)$.
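The divide-and-conquer product can be sketched as follows (again with naive convolution as a stand-in for FFT, and an assumed modulus):

```python
MOD = 998244353  # an assumed modulus

def poly_mul(a, b):
    """Naive convolution modulo MOD; replace with FFT for the stated bound."""
    res = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            res[i + j] = (res[i + j] + x * y) % MOD
    return res

def product_dc(polys, lo=0, hi=None):
    """Product of polys[lo:hi]: recurse on both halves, then convolve the results."""
    if hi is None:
        hi = len(polys)
    if hi - lo == 1:
        return polys[lo]
    mid = (lo + hi) // 2
    return poly_mul(product_dc(polys, lo, mid), product_dc(polys, mid, hi))
```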