Official

D - King Editorial by evima


The typical approach for this type of problem is the reflection principle (also known as the mirror method). However, it seems difficult to apply the reflection principle directly to this problem, so we need to be a bit clever.

In each operation, in addition to moving to one of the eight adjacent squares, we also allow the option to “not move”.

Let:

  • \(f(t)\) be the answer for the original problem when \(T=t\), and
  • \(g(t)\) be the number of ways to move from \((A,B)\) to \((C,D)\) in \(t\) operations when “not moving” is added as an option.

Then we have:

\[g(T) = \sum_{t=0}^T \binom{T}{t} f(t) \iff f(T) = \sum_{t=0}^T (-1)^{T-t} \binom{T}{t} g(t)\]

Therefore, if we can calculate \(g(0), g(1), \dots, g(T)\), we can compute the desired answer \(f(T)\).
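This binomial inversion can be sanity-checked with a short script. Below is a minimal sketch; the function names are illustrative, and \(998244353\) is just a typical competitive-programming modulus, not one stated above.

```python
from math import comb

MOD = 998244353  # a typical modulus; an assumption, not taken from the problem statement

def binomial_transform(f):
    """Forward direction: g[T] = sum_{t=0}^{T} C(T, t) * f[t]."""
    return [sum(comb(T, t) * f[t] for t in range(T + 1)) % MOD
            for T in range(len(f))]

def invert_binomial(g):
    """Inverse direction: f[T] = sum_{t=0}^{T} (-1)^(T-t) * C(T, t) * g[t]."""
    T = len(g) - 1
    return sum((-1) ** (T - t) * comb(T, t) * g[t] for t in range(T + 1)) % MOD
```

Round-tripping an arbitrary sequence confirms that the two formulas are inverses of each other.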

When we observe the operations with “not moving” added, we can reinterpret this as: first move \(h \in \{-1,0,1\}\) in the row direction, then move \(w \in \{-1,0,1\}\) in the column direction. This allows us to consider the two axes independently.
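The axis independence can be verified by brute force on small boards: the \(9\) choices per operation are exactly the \(3 \times 3\) combinations of an independent row move and column move, and the boundary constraints also act per axis. A sketch with illustrative helper names and 0-indexed coordinates:

```python
def king_walks(H, W, A, B, C, D, t):
    """Count t-step walks of a 'lazy king' (8 neighbours or stay) from
    (A, B) to (C, D) on an H x W board that never leave the board
    (0-indexed), by direct DP."""
    cur = [[0] * W for _ in range(H)]
    cur[A][B] = 1
    for _ in range(t):
        nxt = [[0] * W for _ in range(H)]
        for i in range(H):
            for j in range(W):
                if cur[i][j]:
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            if 0 <= i + di < H and 0 <= j + dj < W:
                                nxt[i + di][j + dj] += cur[i][j]
        cur = nxt
    return cur[C][D]

def line_walks(N, s, g, t):
    """Count t-step walks (left / right / stay) from cell s to cell g
    on a 1 x N board (0-indexed)."""
    cur = [0] * N
    cur[s] = 1
    for _ in range(t):
        nxt = [0] * N
        for j in range(N):
            for d in (-1, 0, 1):
                if 0 <= j + d < N:
                    nxt[j + d] += cur[j]
        cur = nxt
    return cur[g]
```

The factorization `king_walks(H, W, A, B, C, D, t) == line_walks(H, A, C, t) * line_walks(W, B, D, t)` then holds exactly.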

Based on the above analysis, we ultimately need to solve the following problem:

Reformulated Problem

Solve the following problem for \(t=0,1,\dots,T\):

  • There is a piece on the \(S\)-th column from the left on a \(1 \times N\) board. You repeat the \(3\)-choice operation of “move the piece left”, “move the piece right”, or “don’t move the piece” \(t\) times. The piece cannot leave the board. What is the number of ways for the piece to be on the \(G\)-th column from the left after \(t\) operations?
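As a baseline, the reformulated problem can be answered for all \(t=0,\dots,T\) at once by a straightforward \(O(NT)\) DP. This is far too slow for the actual constraints but useful for checking; the function name and the 1-indexed column convention are mine.

```python
def line_counts_all_t(N, S, G, T):
    """For every t = 0..T, count t-step walks (left / right / stay)
    from column S to column G on a 1 x N board (1-indexed columns)."""
    cur = [0] * N
    cur[S - 1] = 1
    out = [cur[G - 1]]          # t = 0
    for _ in range(T):
        nxt = [0] * N
        for j in range(N):
            for d in (-1, 0, 1):
                if 0 <= j + d < N:
                    nxt[j + d] += cur[j]
        cur = nxt
        out.append(cur[G - 1])  # answer for the current t
    return out
```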

The reformulated problem can be solved using the reflection principle and a special divide-and-conquer FFT. Example problems for the reflection principle include ABC309Ex, and examples for the special divide-and-conquer FFT include ABC281H and ABC345G. The algorithms are explained in detail in the editorials of these problems, so please refer to them as well.

Consider a cycle graph with \(2N+2\) vertices arranged in this order: \(1, 2, \dots, N-1, N, M_1, N', N-1', \dots, 2', 1', M_2\). (Here \(M_*\) represents mirrors.)

The answer we want is the number of ways to move from vertex \(S\) to vertex \(G\) in \(t\) operations, where each operation allows three choices: “move to the left adjacent vertex”, “move to the right adjacent vertex”, or “don’t move”. Here, we can prove that “the number of ways to move from vertex \(S\) to \(G\) via \(M_*\)” equals “the number of ways to move from vertex \(S\) to \(G'\) via \(M_*\)”. Therefore, we ultimately need to subtract:

  • “The number of ways to move from vertex \(S\) to vertex \(G'\) in \(t\) operations”

from:

  • “The number of ways to move from vertex \(S\) to vertex \(G\) in \(t\) operations.”
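This subtraction can be checked against a direct DP on small cases. In the sketch below (with 1-indexed columns, my own convention), column \(c\) sits at 0-indexed cycle vertex \(c-1\) and its mirror image \(c'\) at vertex \(2N+1-c\).

```python
def cycle_walks(L, s, g, t):
    """Count t-step walks (step +1, step -1, or stay) from vertex s to
    vertex g on a cycle with L vertices (no constraints)."""
    cur = [0] * L
    cur[s] = 1
    for _ in range(t):
        nxt = [0] * L
        for v in range(L):
            for d in (-1, 0, 1):
                nxt[(v + d) % L] += cur[v]
        cur = nxt
    return cur[g]

def line_walks(N, S, G, t):
    """Direct DP on the 1 x N board (1-indexed columns), for checking."""
    cur = [0] * N
    cur[S - 1] = 1
    for _ in range(t):
        nxt = [0] * N
        for j in range(N):
            for d in (-1, 0, 1):
                if 0 <= j + d < N:
                    nxt[j + d] += cur[j]
        cur = nxt
    return cur[G - 1]

def line_walks_via_reflection(N, S, G, t):
    """Bounded count as a difference of two unconstrained cycle counts:
    walks S -> G minus walks S -> G' on the cycle of 2N + 2 vertices."""
    L = 2 * N + 2
    return cycle_walks(L, S - 1, G - 1, t) - cycle_walks(L, S - 1, 2 * N + 1 - G, t)
```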

Reformulating the above problem with generating functions, for an appropriate constant \(M\) and each \(t=0,1,\dots,T\), we need to compute:

\[[x^0] x^M (x+1+x^{-1})^t \bmod {x^{2N+2} - 1}\]

(The \(\bmod {x^{2N+2}-1}\) in the above expression is intended as an operator that reduces every exponent \(e\) of \(x\) modulo \(2N+2\), keeping it in the range \(0 \leq e < 2N+2\).)
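The coefficient extraction can be carried out naively as follows. This is only a reference implementation, \(O(Lt)\) per value of \(t\); also, for counting walks from vertex \(S\) to vertex \(G\), one natural choice appears to be \(M \equiv S-G \pmod{2N+2}\) — an assumption about the “appropriate constant” left unspecified above.

```python
def coeff0_of_power(L, M, t):
    """[x^0] of x^M * (x + 1 + x^{-1})^t with every exponent reduced
    modulo L, i.e. arithmetic in Z[x] / (x^L - 1)."""
    coef = [0] * L          # coef[e] = coefficient of x^e
    coef[M % L] = 1         # start from x^M
    for _ in range(t):      # multiply by (x + 1 + x^{-1}) once per step
        nxt = [0] * L
        for e in range(L):
            for d in (-1, 0, 1):
                nxt[(e + d) % L] += coef[e]
        coef = nxt
    return coef[0]
```

When \(L > 2t\) no wraparound occurs and the result is simply the number of step sequences in \(\{-1,0,1\}^t\) whose sum is \(-M\).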

This problem can be computed using the special divide-and-conquer FFT. We explain briefly. Suppose we are interested in the answers for the range \(t=L,L+1,\dots,R-1\), and we have the polynomial \(x^M (x+1+x^{-1})^L\). To obtain all of these answers, we multiply \(x^M (x+1+x^{-1})^L\) by \((x+1+x^{-1})\) at most \(R-L-1\) more times. Since each multiplication moves coefficient mass by at most one exponent, we can discard every coefficient whose exponent of \(x\) is at a distance of \(R-L\) or more from \(x^0\). This leads to the following algorithm, written in Python-like pseudocode:

ans = [0]*(T+1)

# Function to compute ans[L:R] ([L, R) is a half-open interval)
def f(L, R, prod):
    Shrink prod to the range from x^{-(R-L-1)} to x^{R-L-1}
    if L + 1 == R:
        ans[L] = prod[0]
        return
    mid = (L + R) // 2
    f(L, mid, prod)
    f(mid, R, prod * (x+1+x^{-1})^{mid-L})

f(0, T+1, x^M)

The above algorithm runs in \(O(T \log^2 T)\) time (it is the shrink step that keeps the complexity this low). However, computing the (x+1+x^{-1})^{mid-L} part separately is troublesome, so let us compute it as part of the f(L,R,prod) call itself. That is, let f(L,R,prod) also return (x+1+x^{-1})^{R-L}. This is easy to add:

ans = [0]*(T+1)

# Function that computes ans[L:R] and returns (x+1+x^{-1})^{R-L}
def f(L, R, prod):
    Shrink prod to the range from x^{-(R-L-1)} to x^{R-L-1}
    if L + 1 == R:
        ans[L] = prod[0]
        return x+1+x^{-1}
    mid = (L + R) // 2
    resL = f(L, mid, prod)
    resR = f(mid, R, prod * resL)
    return resL * resR

f(0, T+1, x^M)

This completes the solution. The above algorithm can be computed in \(O(T \log^2 T)\) time, and the only required library is convolution. (Note that in the shrink operation, you need to appropriately take \(\bmod x^{2N+2}-1\) as needed.)
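Putting the pieces together, here is a self-contained sketch of the divide-and-conquer in Python. Naive cyclic multiplication stands in for FFT-based convolution (so this sketch does not achieve \(O(T \log^2 T)\)), `direct_answers` is a reference implementation for checking, and all names are mine.

```python
def dc_answers(N, M0, T):
    """ans[t] = [x^0] x^{M0} (x + 1 + x^{-1})^t in Z[x]/(x^{2N+2} - 1)
    for t = 0..T, by the divide-and-conquer with shrinking.  Replace
    cyclic_mul by an FFT convolution for the intended complexity."""
    Lc = 2 * N + 2

    def cyclic_mul(a, b):
        # product in Z[x]/(x^Lc - 1), naive O(Lc^2)
        res = [0] * Lc
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    res[(i + j) % Lc] += ai * bj
        return res

    def shrink(a, width):
        # keep only coefficients whose exponent is within cycle distance
        # width - 1 of x^0 (distance of residue e is min(e, Lc - e))
        return [ai if min(e, Lc - e) < width else 0 for e, ai in enumerate(a)]

    ans = [0] * (T + 1)

    def f(lo, hi, prod):
        prod = shrink(prod, hi - lo)
        if lo + 1 == hi:
            ans[lo] = prod[0]
            step = [0] * Lc                  # return (x + 1 + x^{-1})^1
            for e in (-1, 0, 1):
                step[e % Lc] += 1
            return step
        mid = (lo + hi) // 2
        resL = f(lo, mid, prod)              # (x + 1 + x^{-1})^(mid - lo)
        resR = f(mid, hi, cyclic_mul(prod, resL))
        return cyclic_mul(resL, resR)        # (x + 1 + x^{-1})^(hi - lo)

    start = [0] * Lc
    start[M0 % Lc] = 1                       # x^{M0}
    f(0, T + 1, start)
    return ans

def direct_answers(N, M0, T):
    """Reference: the same values by repeated multiplication."""
    Lc = 2 * N + 2
    coef = [0] * Lc
    coef[M0 % Lc] = 1
    out = [coef[0]]
    for _ in range(T):
        nxt = [0] * Lc
        for e in range(Lc):
            for d in (-1, 0, 1):
                nxt[(e + d) % Lc] += coef[e]
        coef = nxt
        out.append(coef[0])
    return out
```

In a real submission the coefficients would also be kept modulo the problem's prime and `cyclic_mul` replaced by a convolution library call.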

Therefore, this problem can be solved in \(O(T \log^2 T)\) time or similar, which is sufficiently fast. Although we omit the explanation, there also exists a solution with complexity around \(\tilde{O}((H+W+T)^{1.5})\), which should also pass thanks to its small constant factor. (As an aside, the constraints were made quite large to eliminate \(O((H+W)T)\) solutions, which run very fast in practice despite the worse complexity.)
