C - 1 2 1 3 1 2 1 Editorial by en_translator


The definition of \(S_n\) contains \(S_{n-1}\). When the description of an object refers to the object itself like this, it is called a recursion.
The theme of this problem is how to handle recursion properly. We introduce several ways to solve it.

1. Recursive Function

A recursive function is a function that calls itself within its own body.
The recursive definition of \(S_n\) can be directly implemented with a recursive function.
A sample code follows.

def S(n):
  if n == 1:
    return [1]
  else:
    return S(n - 1) + [n] + S(n - 1)


N = int(input())
print(*S(N))

In the code above, S(N) directly corresponds to the definition in the Problem Statement. The advantage of recursive functions is that we can express the recursive structure directly in code.

However, the disadvantage of plain recursion is that the function is called with the same arguments many times. In this problem,

  • the invocation of \(S(N)\) calls \(S(N-1)\) twice,
  • each invocation of \(S(N-1)\) calls \(S(N-2)\) twice,
  • \(\vdots\)
  • each invocation of \(S(2)\) calls \(S(1)\) twice.

Therefore, when evaluating \(S(N)\), \(S(i)\) is called \(2^{N-i}\) times; especially \(S(1)\) is called \(2^{N-1}\) times.
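To see this concretely, we can instrument the recursive function with a call counter. (The Counter-based bookkeeping below is only for illustration and is not part of the solution.)

```python
from collections import Counter

calls = Counter()

def S(n):
    calls[n] += 1  # record every invocation of S with argument n
    if n == 1:
        return [1]
    return S(n - 1) + [n] + S(n - 1)

N = 10
S(N)
# S(i) is called 2^(N-i) times; in particular S(1) is called 2^(N-1) = 512 times
for i in range(1, N + 1):
    assert calls[i] == 2 ** (N - i)
```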

Since a call to \(S(i)\) generates a sequence of length \(2^i - 1\), the overall time complexity is

\[\mathrm{O}\left(\sum_{i=1}^N 2^{N-i}(2^{i}-1) \right) = \mathrm{O}(N 2^N - 2^N + 1) = \mathrm{O}(N 2^N).\]

Although this is not an issue here, note that the complexity may grow exponentially depending on the problem. (See also the example in 4.)

2. Memoized Recursion

Once you have learned recursive functions, we recommend learning memoized recursion next.

The disadvantage of plain recursion is, as mentioned above, that “it calls the function with the same arguments many times.”
Instead, consider using a dictionary (associative array) whose keys are \(n\) and whose values are the return values of \(S(n)\), storing the return value each time \(S(n)\) is computed. On the second and subsequent invocations, the function returns the value stored in the dictionary instead of evaluating the body again. This optimization is called memoized recursion (memoization).

This way, the body of \(S(n)\) is executed at most once for each \(n\), so the time complexity is

\[\mathrm{O}\left(\sum_{i=1}^N (2^{i}-1)\right) = \mathrm{O}(2^{N+1} - N - 2) = \mathrm{O}(2^N),\]

which improves on the plain recursion by a factor of \(N\).
The following is a sample code.

memo = {}

def S(n):
  if n in memo:
    return memo[n]
  if n == 1:
    ret = [1]
  else:
    ret = S(n - 1) + [n] + S(n - 1)
  memo[n] = ret
  return ret


N = int(input())
print(*S(N))
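As a quick check that the memo works, we can count how often the body of \(S(n)\) actually runs; with memoization it executes only once per distinct \(n\). (The counting code below is for illustration only.)

```python
from collections import Counter

memo = {}
body_runs = Counter()

def S(n):
    if n in memo:
        return memo[n]
    body_runs[n] += 1  # the body below runs only on the first call for each n
    if n == 1:
        ret = [1]
    else:
        ret = S(n - 1) + [n] + S(n - 1)
    memo[n] = ret
    return ret

N = 10
S(N)
assert all(body_runs[i] == 1 for i in range(1, N + 1))
```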

Also, in Python, we can use the lru_cache feature from functools so that memoized recursion can be written a bit more simply, as follows. (The line starting with @, as in Line 3, is called a decorator. Simply prepending this decorator enables memoization.)

from functools import lru_cache

@lru_cache(maxsize=None)
def S(n):
  if n == 1:
    return [1]
  else:
    return S(n - 1) + [n] + S(n - 1)


N = int(input())
print(*S(N))

3. Dynamic Programming

This problem can be solved with Dynamic Programming, or DP. DP is a technique that solves subproblems in order from smaller to larger, using the results of the smaller subproblems when solving the larger ones.
In this problem, we can apply DP by computing the values starting from \(S(1)\) in increasing order of the argument. The time complexity is \(\mathrm{O}(2^N)\), just as with memoized recursion. Here is a sample code.

N = int(input())
dp = [[] for _ in range(N + 1)]
dp[1] = [1]
for n in range(2, N + 1):
  dp[n] = dp[n - 1] + [n] + dp[n - 1]
print(*dp[N])

Memoized recursion and DP do essentially the same thing: during memoized recursion, the subproblems end up being processed from smaller to larger, just as in DP.

Indeed, DP problems can also be solved by memoized recursion (unless special optimizations are required). The advantages of each are:

  • memoized recursion can be implemented by directly writing down the recursive structure, so it is often easier to implement than DP;
  • DP avoids the overhead of function calls and dictionary-based memoization, so it has a better constant factor in execution time.

Many people seem to use both memoized recursion and DP depending on the problem.

  • For example, many people seem to basically use DP but implement complicated DPs, such as digit DP and segment DP, with memoized recursion.

4. Bonus: exercise

When discussing the complexity of recursion, the Fibonacci sequence is a frequently used example:

\[f_0 = 0,\quad f_1 = 1,\quad f_{n} = f_{n-1} + f_{n-2} \quad (n \geq 2)\]

def f(n):
  return n if n <= 1 else f(n - 1) + f(n - 2)

print(f(70))

The code above uses plain recursion to compute \(f_{70}\), but the complexity is \(\mathrm{O}(1.619^{70})\), which is about \(10^{14}\), so it won’t finish in time.
Speed it up with memoized recursion or DP. (The correct answer is 190392490709135.)
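For reference, here is one possible bottom-up DP sketch for the exercise; it keeps only the last two values, since \(f_n\) depends only on \(f_{n-1}\) and \(f_{n-2}\).

```python
def f(n):
    # Bottom-up DP over f(0), f(1), ..., f(n); a and b hold f(i) and f(i+1)
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(f(70))  # 190392490709135
```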
