Official

Ex - Colorfulness Editorial by en_translator


Rephrasing the problem

First of all, directly finding \(F(1), F(2), \dots, F(M)\) is a little bit difficult, so let us define \(a\) as follows:

  • \(a_n :=\) (the number of \(P\) such that \(C(P) = n\))

Then this problem is decomposed into two parts:

  • Finding \(a_0, a_1, \dots, a_{N-1}\)
  • Finding \(F\) from \(a\)

First, let’s find \(a\).

Inclusion-Exclusion Principle

Explicitly computing \(a\) is still a bit difficult. Instead, let us define \(p\) and \(q\) as follows:

  • \(p_n :=\) (the number of \(P\) whose resulting sequence of balls has exactly \(n\) pairs of adjacent balls of the same color)
  • \(q_n :=\) (the number of ways to choose \(P\) together with \(n\) of its pairs of adjacent same-colored balls; each \(P\) with \(k \geq n\) such pairs is counted \(\binom{k}{n}\) times, so this is an "at least \(n\)" count with multiplicity)

Obviously, \(a_n = p_{N-1-n}\), so all we need is \(p\).
We have the following relation between \(p\) and \(q\):

\[q_{n} = \sum_{k=n}^{N-1} \binom{k}{n} p_k\]

By the inclusion-exclusion principle, we obtain an expression of \(p\):

\[p_n = \sum_{k=n}^{N-1} (-1)^{k-n}\binom{k}{n} q_k \]

We can find \(p\) from \(q\) with convolution in \(\mathrm{O}(N \log N)\) time, so it is sufficient to compute \(q\) fast.
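As a sketch of this inversion: rewriting \(\binom{k}{n} = \frac{k!}{n!\,(k-n)!}\) and reversing one factor turns the alternating sum into an ordinary convolution. The naive \(\mathrm{O}(N^2)\) product below stands in for an FFT-based one, and the modulus \(998244353\) is an assumption typical of such problems:

```python
MOD = 998244353  # assumed NTT-friendly modulus

def p_from_q(q):
    """Recover p_0..p_{N-1} from q_0..q_{N-1} via inclusion-exclusion."""
    N = len(q)
    fact = [1] * N
    for i in range(1, N):
        fact[i] = fact[i - 1] * i % MOD
    inv_fact = [pow(f, MOD - 2, MOD) for f in fact]
    # n! p_n = sum_{k >= n} (k! q_k) * (-1)^{k-n} / (k-n)!
    # Reversing the first factor turns this into an ordinary convolution.
    A = [q[k] * fact[k] % MOD for k in range(N)][::-1]
    B = [(-1) ** j * inv_fact[j] % MOD for j in range(N)]
    C = [0] * N  # naive truncated product; swap in an FFT for O(N log N)
    for a in range(N):
        for b in range(N - a):
            C[a + b] = (C[a + b] + A[a] * B[b]) % MOD
    # Undo the reversal and divide by n!:  p_n = C[N-1-n] / n!
    return [C[N - 1 - n] * inv_fact[n] % MOD for n in range(N)]
```

For example, \(p = (1,2,3,4)\) gives \(q = (10,20,15,4)\) under \(q_n = \sum_k \binom{k}{n} p_k\), and the function recovers \(p\) from that \(q\).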

Computing \(q\) with Exponential Generating Function

Let us find \(q_0, q_1, \dots, q_{N-1}\).

Suppose that there are \(m_c\) balls painted in color \(c\). There can be at most \(m_c-1\) pairs of adjacent balls of this color (each pair "glues" two consecutive balls of color \(c\) together). There are \(\binom{m_c-1}{k_c}\) ways of choosing the positions of \(k_c\) such pairs; we write \(a_{c,k} := \binom{m_c-1}{k}\). After gluing, there remain \(N - n\) blocks in total, of which \(m_c - k_c\) have color \(c\), so \(q_n\) can be calculated by:

\[q_n = \sum_{\substack{k_1, k_2, \dots, k_N \\ \sum_{c} k_c = n}} \left(\binom{N-n}{m_1-k_1, m_2-k_2, \dots, m_N-k_N}\prod_{c} a_{c,k_c}\right).\]

Using \(r_0, r_1, \dots, r_N\), defined by

\[r_0=0, \qquad r_i = q_{N-i} \quad (i \geq 1),\]

to simplify the expression: substituting \(k_c \mapsto m_c - k_c\) (so that the constraint becomes \(\sum_c k_c = n\) again) and writing \(b_{c,k} := a_{c,\,m_c-k} = \binom{m_c-1}{k-1}\) (so that \(b_{c,0} = 0\)), we have

\[r_n = \sum_{\substack{k_1, k_2, \dots, k_N \\ \sum_{c} k_c = n}} \left(\binom{n}{k_1, k_2, \dots, k_N}\prod_{c} b_{c,k_c}\right).\]

We can use this expression to compute \(r\) in \(\mathrm{O}(N^2)\) total time with DP (dynamic programming), which is itself a bit involved, but we can optimize it further. Let

\[g_c(x) = \sum_{k=0}^{m_c} \frac{b_{c,k}}{k!} x^k;\]

then \(r_n\) can be expressed with the \(n\)-th coefficient of \(\prod_c g_c(x)\) as

\[r_n = n! \times [x^n] \prod_{c} g_c(x).\]

Therefore, we can compute \(\prod_{c} g_c(x)\) with convolution and the merging technique (repeatedly multiplying the two factors of smallest degree) in \(\mathrm{O}(N \log^2 N)\) total time to enumerate \(r\), thus obtaining \(q\).
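A minimal sketch of this merging technique follows. Each row of `coef` is the numerator sequence of one factor \(g_c(x)\) (the coefficient of \(x^k\) in \(g_c\) is `coef[c][k]` divided by \(k!\)), and the two shortest factors are always multiplied first via a heap. The naive product and the modulus \(998244353\) are stand-in assumptions; an NTT-based product gives the stated bound.

```python
import heapq
from math import factorial

MOD = 998244353  # assumed modulus

def poly_mul(f, g):
    # Naive product; an NTT-based product yields the O(N log^2 N) total.
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] = (h[i + j] + fi * gj) % MOD
    return h

def r_via_egf(coef):
    """coef[c][k] is the k-th numerator of g_c; returns r_n = n! [x^n] prod g_c."""
    polys = [[ck * pow(factorial(k), MOD - 2, MOD) % MOD
              for k, ck in enumerate(row)] for row in coef]
    # Merging technique: always multiply the two shortest factors.
    heap = [(len(p), i, p) for i, p in enumerate(polys)]
    heapq.heapify(heap)
    tag = len(heap)
    while len(heap) > 1:
        _, _, f = heapq.heappop(heap)
        _, _, g = heapq.heappop(heap)
        heapq.heappush(heap, (len(f) + len(g) - 1, tag, poly_mul(f, g)))
        tag += 1
    return [factorial(n) * c % MOD for n, c in enumerate(heap[0][2])]
```

For instance, with the arbitrary numerator rows \((1,2)\) and \((1,3,5)\), the result agrees with the multinomial-sum formula above evaluated by brute force.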

The idea behind such a convolution is a concept called the exponential generating function. For a sequence \((a_n)\), the exponential generating function \(A(x)\) of \((a_n)\) is the function

\[A(x) = \sum_{n=0}^\infty \frac{a_n}{n!} x^n.\]

The key property of exponential generating functions is that the product of two exponential generating functions is the exponential generating function of the convolution of the two sequences weighted by binomial coefficients:

\[ \begin{aligned} &A(x) B(x) \\ &= \left(\sum_{n=0}^\infty \frac{a_n}{n!} x^n\right)\left(\sum_{m=0}^\infty \frac{b_m}{m!} x^m\right) \\ &= \sum_{l=0}^\infty \left( \sum_{n+m=l} \frac{a_n b_m}{n! m!} \right) x^l\\ &= \sum_{l=0}^\infty \left( \sum_{n+m=l} \binom{l}{n} a_n b_m \right) \frac{x^l}{l!}\\ \end{aligned} \]
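This identity can be checked numerically on small sequences; the sketch below computes both sides, using exact rational arithmetic via `fractions` so that no modulus is needed:

```python
from fractions import Fraction
from math import comb, factorial

def egf_product_coeffs(a, b):
    """l! [x^l] A(x)B(x), from the EGF coefficients a_n/n! and b_m/m!."""
    A = [Fraction(x, factorial(n)) for n, x in enumerate(a)]
    B = [Fraction(x, factorial(m)) for m, x in enumerate(b)]
    L = len(a) + len(b) - 1
    prod = [sum((A[n] * B[l - n]
                 for n in range(max(0, l - len(b) + 1), min(l, len(a) - 1) + 1)),
                Fraction(0)) for l in range(L)]
    return [int(c * factorial(l)) for l, c in enumerate(prod)]

def binomial_convolution(a, b):
    """sum_{n+m=l} C(l,n) a_n b_m, the right-hand side of the identity."""
    L = len(a) + len(b) - 1
    return [sum(comb(l, n) * a[n] * b[l - n]
                for n in range(max(0, l - len(b) + 1), min(l, len(a) - 1) + 1))
            for l in range(L)]
```

The two functions return the same list for any pair of input sequences.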

\(r\) above is obtained by extending this formula from binomial to multinomial coefficients.
Hence, \(q\) has been found, and so has \(a\).

The ordinary generating function of \(F(n)\)

We have \(F(n) = \sum_{k=0}^{N-1} k^n a_k.\)

Let us define the ordinary generating function \(f(x)\) of \(F(n)\):

\[f(x) = \sum_{n=0}^\infty F(n) x^n.\]

Then \(f(x)\) is expressed by \(a\) as follows:

\[ \begin{aligned} &f(x) \\ &=\sum_{n=0}^\infty F(n) x^n \\ &=\sum_{n=0}^\infty \left( \sum_{k=0}^{N-1} k^n a_k \right) x^n \\ &=\sum_{k=0}^{N-1} a_k \left( \sum_{n=0}^\infty (kx)^n\right) \\ &=\sum_{k=0}^{N-1} \left(\frac{a_k}{1-kx}\right). \end{aligned} \]

Thus, it is sufficient to find the \(1\)-st through \(M\)-th coefficients of

\[\sum_{k=0}^{N-1} \left(\frac{a_k}{1-kx}\right).\]

By assigning the values of \(a\) to the expression above and using convolution and the merging technique to compute the sum, we can obtain an expression of the form

\[f(x) = \frac{s(x)}{t(x)}\]

in a total of \(\mathrm{O}(N \log^2 N)\) time (where \(s(x)\) and \(t(x)\) are polynomials of degree at most \(N\) such that \(\deg(s) \lt \deg(t)\)).
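A sketch of this divide-and-conquer combination of the fractions \(a_k/(1-kx)\), using exact integer arithmetic for brevity (a real implementation would work modulo e.g. \(998244353\) with NTT products):

```python
def poly_mul(f, g):
    # Naive product; replace with an NTT for the O(N log^2 N) total.
    h = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            h[i + j] += fi * gj
    return h

def poly_add(f, g):
    if len(f) < len(g):
        f, g = g, f
    return [fi + (g[i] if i < len(g) else 0) for i, fi in enumerate(f)]

def sum_fractions(a):
    """Combine sum_k a_k / (1 - k x) into one pair (s, t) with f = s/t."""
    def rec(lo, hi):
        if hi - lo == 1:
            return [a[lo]], [1, -lo]   # a_lo / (1 - lo*x)
        mid = (lo + hi) // 2
        s1, t1 = rec(lo, mid)
        s2, t2 = rec(mid, hi)
        # s1/t1 + s2/t2 = (s1 t2 + s2 t1) / (t1 t2)
        return poly_add(poly_mul(s1, t2), poly_mul(s2, t1)), poly_mul(t1, t2)
    return rec(0, len(a))
```

For example, \(a = (2, 3)\) gives \(\frac{2}{1-0x} + \frac{3}{1-x} = \frac{5-2x}{1-x}\).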

The Inverse of Formal Power Series

Therefore, it is sufficient to find the terms of up to \(M\)-th order of the power series expansion of \(\frac{s(x)}{t(x)}\). Here, consider the following problem:

You are given \(t_0, t_1, \dots, t_M\), where \(t_i\) is the \(i\)-th order coefficient of \(t(x)\) and \(t_0=1\).
Find \(u_0, u_1, \dots, u_M\) such that:

  • \(u_0 = 1\);
  • \(\sum_{i=0}^n t_i u_{n-i} = 0\) for \(n \geq 1\).

Let \(u(x)\) be the generating function of such \(u_0, u_1, \dots, u_M\), then we can confirm that

\[t(x)u(x) \equiv 1 \pmod{x^{M+1}}\]

and

\[f(x) \equiv s(x) u(x) \pmod{x^{M+1}}\]

holds. Thus, once we obtain \(u(x)\), we can obtain the terms of up to \(M\)-th order of \(f(x)\) as a convolution of \(s(x)\) and \(u(x)\).

In general, for a formal power series \(t(x)\), if \(u(x)\) satisfies

\[t(x)u(x) \equiv 1 \pmod{x^{M+1}},\]

then \(u(x)\) is called the (multiplicative) inverse of \(t(x)\).

Let us find \(u(x)\). Since \(t_0 = 1\), the \(n\)-th order coefficient \(u_n\) can be expressed in terms of \(t\) and the lower-order coefficients as

\[u_n = -\sum_{i=1}^{n} t_i u_{n-i},\]

so it can be computed with divide-and-conquer FFT (Fast Fourier Transform) in a total of \(\mathrm{O}(M \log^2 M)\) time. This can be further optimized using doubling.
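As a sanity check, the recurrence \(u_n = -\sum_{i=1}^{n} t_i u_{n-i}\) admits a direct \(\mathrm{O}(M^2)\) implementation (modulus \(998244353\) assumed):

```python
MOD = 998244353  # assumed modulus

def inverse_naive(t, M):
    """Coefficients u_0..u_M of 1/t(x), assuming t[0] = 1."""
    u = [1]
    for n in range(1, M + 1):
        s = sum(t[i] * u[n - i] for i in range(1, min(n, len(t) - 1) + 1))
        u.append(-s % MOD)
    return u
```

For example, the inverse of \(1 + 2x\) is \(1 - 2x + 4x^2 - 8x^3 + \cdots\), and multiplying back gives \(1 \pmod{x^{M+1}}\).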
Let us define the truncated polynomials \(u_k := u(x) \bmod x^k\) (not to be confused with the coefficients \(u_n\)). Suppose we know \(u_k\). How can we find \(u_{2k}\) fast? Since

\[u_k - u_{2k} \equiv 0 \pmod{x^k},\]

we have

\[(u_k - u_{2k})^2 \equiv u_k^2 - 2 u_k u_{2k} + u_{2k}^2 \equiv 0 \pmod{x^{2k}}.\]

Multiplying both sides by \(t\) yields

\[t u_k^2 - 2 t u_k u_{2k} + t u_{2k}^2 \equiv 0 \pmod{x^{2k}}.\]

Using \(t u_{2k} \equiv 1 \pmod{x^{2k}}\), this is transformed into

\[u_{2k} \equiv 2 u_k - t u_k^2 \pmod{x^{2k}}.\]

Using this formula, we can double the length of \(u\) starting from \(u_1=1\) to obtain the coefficients of up to \(M\)-th order of \(u(x)\).
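A compact sketch of the doubling, again with a naive truncated product standing in for the NTT and modulus \(998244353\) assumed:

```python
MOD = 998244353  # assumed modulus

def poly_mul_trunc(f, g, cap):
    # Product truncated to cap coefficients; an NTT gives O(M log M) overall.
    h = [0] * min(cap, len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j < cap:
                h[i + j] = (h[i + j] + fi * gj) % MOD
    return h

def inverse(t, M):
    """Coefficients u_0..u_M of 1/t(x) via u_{2k} = 2 u_k - t u_k^2, t[0] = 1."""
    u = [1]
    k = 1
    while k <= M:
        k *= 2
        t_uk2 = poly_mul_trunc(t[:k], poly_mul_trunc(u, u, k), k)
        u = [(2 * (u[i] if i < len(u) else 0) - t_uk2[i]) % MOD
             for i in range(len(t_uk2))]
    return (u + [0] * (M + 1))[:M + 1]
```

Starting from \(u_1 = 1\), each pass through the loop doubles the number of correct coefficients.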

By the way, there is a generalization of this method called Newton's method on polynomials. (Substituting \(A(F) = \frac{1}{F} - t(x)\) into the following equation yields exactly the algorithm described above.)

Newton’s method on polynomials

We want to find \(F\) such that \(A(F) = 0\). (Suppose that we know \([x^0]F\).) Suppose we know \(\hat{F}(x) = F(x) \bmod {x^d}\). Since we have

\[A(F) = A(\hat{F}) + A'(\hat{F})(F - \hat{F}) + \mathrm{O}((F - \hat{F})^2),\]

by taking \(\bmod{x^{2d}}\) of both sides, we can ignore \(\mathrm{O}((F - \hat{F})^2)\), so we have

\[A(F) \equiv A(\hat{F}) + A'(\hat{F})(F - \hat{F}) \pmod{x^{2d}}.\]

Thus, we can obtain \(F \bmod x^{2d}\) from this expression.

Therefore, the problem has been solved in a total of \(\mathrm{O}(N \log^2 N + M \log M)\) time.
