
E - 畑の水やり計画 / Field Watering Plan Editorial by admin


Overview

We reduce the problem to a 2-state DP where the only state we track is “whether method A was chosen on the previous day.” Since \(N\) can be as large as \(10^{18}\), we speed up the DP with matrix exponentiation over the max-plus semiring, obtaining the maximum value in \(O(\log N)\).

Analysis

Key Insight (Two States Are Sufficient)

Whether each day’s growth is halved depends only on “whether A was done on the previous day.” In other words, we don’t need to remember the entire history — the following 2 states are sufficient:

  • State \(0\): The previous day was not A (previous day was B, or before day 1)
  • State \(1\): The previous day was A

Given these states, simply choosing A or B for today determines the next state and the score.

Why the Naive Approach Fails

Running the DP naively for \(N\) days takes \(O(N)\), but the constraint is \(N \le 10^{18}\), so this is far too slow (\(10^{18}\) updates are impossible).

How to Solve It

Since the transition is “the same form repeated \(N\) times,” we represent the DP transition as a matrix and compute \(N\) iterations at once using binary exponentiation (repeated squaring). However, since this DP takes the “maximum” rather than the “sum,” instead of the usual matrix product we use the matrix product over the max-plus (tropical) semiring, where:

  • Multiplication: \(+\)
  • Addition: \(\max\)
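In concrete terms, the max-plus “matrix product” replaces the usual sum-of-products with a max-of-sums. A minimal sketch for \(2\times2\) matrices (the entries here are illustrative):

```python
def maxplus_mul(A, B):
    # C[i][k] = max_j (A[i][j] + B[j][k]):
    # "+" plays the role of multiplication, "max" plays the role of addition
    return [[max(A[i][j] + B[j][k] for j in range(2)) for k in range(2)]
            for i in range(2)]

A = [[3, 5], [1, 2]]
print(maxplus_mul(A, A))  # [[6, 8], [4, 6]]
```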

Algorithm

1. Definition of the 2-State DP

Let \(dp_t[s]\) be “the maximum total growth when in state \(s\) after \(t\) days have passed” (\(s \in \{0,1\}\)).

The initial state (before day 1) is “the previous day was not A,” so \(dp_0 = [0, -\infty]\).

2. Transition for One Day (Matrix Form)

The score when choosing A or B today is as follows:

  • Previous day was not A (state 0):
    • Choose B today: score \(b\), next state 0
    • Choose A today: score \(a\), next state 1
  • Previous day was A (state 1):
    • Choose B today: score \(\lfloor b/2 \rfloor\), next state 0
    • Choose A today: score \(\lfloor a/2 \rfloor\), next state 1

We organize this into a \(2\times2\) matrix \(M\) as “transition score from state \(i\) to state \(k\)”:

\[ M = \begin{pmatrix} b & a \\ \lfloor b/2 \rfloor & \lfloor a/2 \rfloor \end{pmatrix} \]

(rows represent the “current state,” columns represent the “next state”)

The DP update in max-plus is:

\[ dp_{t+1}[k] = \max_{i \in \{0,1\}} \left( dp_t[i] + M[i][k] \right) \]

which is a “vector × matrix (max-plus)” operation.
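Running this update day by day gives a direct \(O(N)\) implementation, useful as a reference before speeding it up (the values \(a = 5\), \(b = 3\) are illustrative):

```python
def run_dp(N, a, b):
    NEG = float("-inf")
    M = [[b, a], [b // 2, a // 2]]  # M[i][k]: score moving from state i to state k
    dp = [0, NEG]                   # before day 1: previous day was not A
    for _ in range(N):
        dp = [max(dp[i] + M[i][k] for i in range(2)) for k in range(2)]
    return dp

print(run_dp(3, 5, 3))  # [9, 11] -> answer is max of the two, i.e. 11
```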

3. Combining \(N\) Days (Matrix Exponentiation)

Since we repeat the same transition \(N\) times:

\[ dp_N = dp_0 \otimes M^N \]

(\(\otimes\) denotes the max-plus product)

\(M^N\) can be computed in \(O(\log N)\) using repeated squaring.

Finally, since the final state can be either one, the answer is \(\max(dp_N[0], dp_N[1])\).

(Concrete example) When \(N=3\), we compute \(dp_3 = dp_0 \otimes M^3\) and take the maximum, which gives the optimal value without trying all \(2^3\) possibilities.
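This \(N=3\) case can be checked against brute force over all \(2^3\) choice sequences. A self-contained sketch (again with illustrative values \(a = 5\), \(b = 3\); maxplus_mul is a helper defined here, not part of the final solution):

```python
from itertools import product

a, b = 5, 3  # illustrative values
M = [[b, a], [b // 2, a // 2]]

def maxplus_mul(A, B):
    return [[max(A[i][j] + B[j][k] for j in range(2)) for k in range(2)]
            for i in range(2)]

# dp_3 = dp_0 (x) M^3; since dp_0 = [0, -inf], this is just the max over row 0 of M^3
M3 = maxplus_mul(maxplus_mul(M, M), M)
via_matrix = max(M3[0])

def score(choices):  # simulate a sequence of daily choices directly
    total, prev_a = 0, False
    for c in choices:
        gain = a if c == "A" else b
        total += gain // 2 if prev_a else gain  # halved if A was done yesterday
        prev_a = (c == "A")
    return total

via_brute = max(score(c) for c in product("AB", repeat=3))
print(via_matrix, via_brute)  # both 11 (achieved e.g. by the plan A, B, A)
```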

Complexity

  • Time complexity: \(O(\log N)\) (just repeatedly multiplying \(2\times2\) max-plus matrices)
  • Space complexity: \(O(1)\) (only constant-size matrices and vectors)

Implementation Notes

  • Representing \(-\infty\): To represent unreachable states, we use a sufficiently small value (in the code, NEG = -10**80).

  • Max-plus identity matrix: Instead of the usual identity matrix, we use \( I = \begin{pmatrix} 0 & -\infty \\ -\infty & 0 \end{pmatrix} \) (“no transition = adding 0, everything else is impossible”).

  • Floor function: \(\lfloor a/2 \rfloor\) can be computed in Python with a//2.

  • Note that the matrix product is “\(C[i][k] = \max_j (A[i][j] + B[j][k])\)” (think of it as the usual + and * being swapped).

Source Code

import sys

def mat_mul(A, B, NEG):
    C = [[NEG, NEG], [NEG, NEG]]
    for i in range(2):
        for k in range(2):
            best = NEG
            for j in range(2):
                if A[i][j] == NEG or B[j][k] == NEG:
                    continue
                val = A[i][j] + B[j][k]
                if val > best:
                    best = val
            C[i][k] = best
    return C

def mat_pow(M, n, NEG):
    R = [[0, NEG], [NEG, 0]]  # identity in max-plus
    while n > 0:
        if n & 1:
            R = mat_mul(R, M, NEG)
        M = mat_mul(M, M, NEG)
        n >>= 1
    return R

def vec_mul(v, M, NEG):
    res = [NEG, NEG]
    for k in range(2):
        best = NEG
        for i in range(2):
            if v[i] == NEG or M[i][k] == NEG:
                continue
            val = v[i] + M[i][k]
            if val > best:
                best = val
        res[k] = best
    return res

def main():
    N, a, b = map(int, sys.stdin.readline().split())
    NEG = -10**80

    M = [
        [b, a],          # from state 0 (prev not A): B -> 0, A -> 1
        [b // 2, a // 2] # from state 1 (prev A):     B -> 0, A -> 1
    ]

    Mp = mat_pow(M, N, NEG)
    dp0 = [0, NEG]  # start in state 0 before day 1
    dpN = vec_mul(dp0, Mp, NEG)
    print(max(dpN))

if __name__ == "__main__":
    main()

This editorial was generated by gpt-5.2-high.
