D - Sum of Hash of Lexmin
Editorial by evima
First, let’s come up with a simple way to determine whether a permutation \(P\) is good or not.
If there exist two vertices \(u < v\), one an ancestor of the other, that appear as adjacent elements in the order \(v, u\), then obviously \(P\) is not a good permutation. Let’s call such a pair \((u, v)\) an “improvable pair”.
Conversely, if there is no improvable pair in \(P\), it can be shown that \(P\) is a good permutation.
Suppose, for contradiction, that from some \(P\) without any improvable pair we can obtain a lexicographically smaller permutation by performing the operation several times. Let \(Q\) be the lexicographically smallest permutation obtainable from \(P\), let \(i\) be the first position where \(P\) and \(Q\) differ, and let \(Q_i = P_j\). To bring \(P_j\) to position \(i\), it must be swapped with \(P_k\) for each \(i \leq k < j\).
Focus on \(k = j - 1\). Since \(P_{j - 1}\) and \(P_j\) can be swapped, and there is no improvable pair in \(P\), we know that \(P_{j - 1}\) is an ancestor of \(P_j\); in particular, \(P_{j - 1} < P_j\), since otherwise \((P_j, P_{j - 1})\) would be an improvable pair. It follows that for each \(i \leq k < j - 1\), \(P_k\) can also be swapped with \(P_{j - 1}\). So consider bringing \(P_{j - 1}\) to position \(i\) instead: this yields a permutation smaller than \(Q\), which contradicts the assumption that \(Q\) is lexicographically smallest. Thus, the claim is proved.
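Putting both directions together, \(P\) is good if and only if it contains no improvable pair. Here is a minimal sketch of this check (the input format — root \(1\), parent \(p_v\) given for each \(v \geq 2\), and a permutation of the vertices — is an assumption for illustration, not necessarily the statement’s exact format):

```cpp
#include <bits/stdc++.h>
using namespace std;

int N, timer_ = 0;
vector<vector<int>> ch;      // children lists
vector<int> tin, tout;       // DFS entry/exit times

void dfs(int v) {
    tin[v] = timer_++;
    for (int c : ch[v]) dfs(c);
    tout[v] = timer_++;
}

// Is u an ancestor of v (allowing u == v)?
bool isAncestor(int u, int v) {
    return tin[u] <= tin[v] && tout[v] <= tout[u];
}

bool isGood(const vector<int>& P) {
    for (size_t i = 0; i + 1 < P.size(); ++i) {
        int x = P[i], y = P[i + 1];
        // Adjacent (x, y) with x > y in ancestor-descendant relation
        // is exactly an improvable pair (v, u) with u < v.
        if (x > y && (isAncestor(x, y) || isAncestor(y, x))) return false;
    }
    return true;
}

int main() {
    cin >> N;
    ch.assign(N + 1, {});
    tin.assign(N + 1, 0);
    tout.assign(N + 1, 0);
    for (int v = 2; v <= N; ++v) {  // parent of each non-root vertex
        int p; cin >> p;
        ch[p].push_back(v);
    }
    dfs(1);
    vector<int> P(N);
    for (int& x : P) cin >> x;
    cout << (isGood(P) ? "Yes" : "No") << endl;
}
```

The ancestor test uses DFS entry/exit times, so the whole check runs in \(O(N)\).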
From the above observation, we find that we need to compute the sum of \(\operatorname{hash}(P)\) over all \(P\) that do not have any improvable pair.
We handle the condition of not having any improvable pair using the inclusion-exclusion principle.
To make the explanation simpler, let’s first solve the problem of counting the number of such \(P\), instead of computing the sum of \(\operatorname{hash}(P)\). This can be done with a tree DP: let \(dp[v][k] = \) the number of ways to specify \(k\) paths connected by improvable pairs in the subtree rooted at vertex \(v\). Note that we use the number of resulting paths \(k\), rather than the number of improvable pairs, as the key; this makes things easier later. When there are \(k\) paths, there are \(k!\) ways to arrange them at the end. Rather than deciding this arrangement at the end, we decide it during the DP: when merging subtrees containing \(a\) and \(b\) paths, there are \({a + b \choose a}\) ways to “mix” them.
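Here is a minimal sketch of the merge step only (the modulus, the names, and the table layout are assumptions; the inclusion-exclusion bookkeeping and the transitions that create or extend paths at a vertex are omitted):

```cpp
#include <bits/stdc++.h>
using namespace std;
using ll = long long;
const ll MOD = 998244353;  // assumed modulus; the statement fixes the actual one

// Binomial table: C[n][k] for all n, k <= the size passed to buildBinomials.
vector<vector<ll>> C;
void buildBinomials(int n) {
    C.assign(n + 1, vector<ll>(n + 1, 0));
    for (int i = 0; i <= n; ++i) {
        C[i][0] = 1;
        for (int j = 1; j <= i; ++j)
            C[i][j] = (C[i - 1][j - 1] + C[i - 1][j]) % MOD;
    }
}

// Merge two per-subtree tables indexed by the number of paths:
// interleaving a paths with b paths can be done in C(a+b, a) ways.
vector<ll> mergePaths(const vector<ll>& f, const vector<ll>& g) {
    vector<ll> h(f.size() + g.size() - 1, 0);
    for (size_t a = 0; a < f.size(); ++a)
        for (size_t b = 0; b < g.size(); ++b)
            h[a + b] = (h[a + b] + f[a] * g[b] % MOD * C[a + b][a]) % MOD;
    return h;
}

int main() {
    buildBinomials(10);
    // Two subtrees, each forming exactly one path: dp = [0, 1].
    vector<ll> f = {0, 1}, g = {0, 1};
    vector<ll> h = mergePaths(f, g);
    // h[2] == C(2, 1) == 2: the two paths can be interleaved in 2 orders.
    for (ll x : h) cout << x << ' ';
    cout << endl;
}
```

Note that both loops run only up to the sizes of the two tables, i.e., up to the sizes of the two subtrees being merged; this is exactly what the complexity analysis below relies on.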
For the sum of \(\operatorname{hash}(P)\), the basic idea is the same. The formula for \(\operatorname{hash}(P)\) focuses on each element and multiplies it by a weight of \(B\) raised to the number of elements before it. In the tree DP, the keys can therefore be the number of paths before the focused element, \(a\), and the number of paths after it, \(b\). Depending on whether the focused element is contained in the subtree, two series of DP tables appear.
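Written out, the weighting described above reads

\[ \operatorname{hash}(P) \;=\; \sum_{i = 1}^{N} P_i \, B^{\,i - 1}, \]

so each element contributes its value times \(B\) raised to the number of elements placed before it (up to the exact convention used in the problem statement).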
For each vertex \(v\), we need to maintain DP tables whose size is proportional to the square of the size of its subtree, and merging two tables takes time of fourth order in the subtree sizes. Even with a naive implementation of this DP, the computational complexity is \(O(N^4)\), not \(O(N^5)\), for a reason similar to the well-known technique that reduces what looks like an \(O(N^3)\) tree DP to \(O(N^2)\) (what is it called?).
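As a sketch of that standard argument: write \(s_u, s_v\) for the sizes of the two parts joined at each merge. Every pair of vertices is merged together exactly once, at their LCA, so

\[ \sum_{\text{merges}} s_u s_v \;=\; O(N^2), \qquad \sum_{\text{merges}} (s_u s_v)^2 \;\leq\; \Bigl(\max_{\text{merges}} s_u s_v\Bigr) \sum_{\text{merges}} s_u s_v \;=\; O(N^2) \cdot O(N^2) \;=\; O(N^4), \]

provided that, as in the merge sketch above, every loop is bounded by the sizes of the tables actually being merged.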