E - 配送ルートの最適化 / Optimization of Delivery Routes Editorial by admin
Overview
Starting from the delivery center (point \(1\)), we visit all points exactly once and return to point \(1\) at the end, minimizing the total cost (sum of squared distances for each move). This is a classic Traveling Salesman Problem (TSP) solved for \(N \le 16\).
Analysis
- A brute-force approach trying all visit orders has \((N-1)!\) permutations. For example, when \(N=16\), \(15! \approx 1.3 \times 10^{12}\), which is computationally infeasible (TLE).
- However, since \(N \le 16\) is small, we can use bit DP (Held-Karp algorithm), managing the “set of visited points” with a bitmask.
- The cost is \((a-c)^2+(b-d)^2\) (squared distance), and the triangle inequality does not necessarily hold. However, bit DP for TSP simply builds up “optimal values over subsets,” so it correctly finds the minimum even with this cost function.
- Since the starting point is always \(1\), we can reduce the number of states by only including the remaining \(N-1\) points in the subset DP.
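To make the gap concrete, we can compare the number of orderings a brute-force search must try against a rough transition count for the bit DP (a back-of-the-envelope sketch; the exact constant factors do not matter here):

```python
import math

# Brute force for N = 16: the start is fixed, so (N-1)! = 15! visit orders.
perms = math.factorial(15)

# Bit DP over the n = N - 1 = 15 non-start points: O(n^2 * 2^n) transitions.
dp_ops = 15 * 15 * 2**15

print(perms)   # 1307674368000  (about 1.3 * 10^12)
print(dp_ops)  # 7372800        (about 7.4 * 10^6)
```

The DP does roughly five orders of magnitude less work, which is why \(N \le 16\) is the standard regime for Held-Karp.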
Algorithm
Let point \(1\) be the start (index \(0\)), and treat the remaining points as \(0 \ldots n-1\) (where \(n=N-1\)).
1. Precomputation of Distances (Costs)
- \(d0[i]\): cost from point \(1\) → point \(i\) (\(i\) is an index among the “remaining points”)
- \(d[i][j]\): cost from remaining point \(i\) → remaining point \(j\)
- Return cost \(dback[i]\) is the cost from point \(i\) → point \(1\), which equals \(d0[i]\) here (squared distance is symmetric).
Computing the squared distance from \((x,y)\) each time would make transitions slow, so we precompute everything.
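A minimal sketch of this precomputation (the helper name `build_costs` is ours, not part of the solution code):

```python
def build_costs(xs, ys):
    """Precompute squared-distance cost tables.

    xs, ys: coordinates of all N points; point 1 is index 0.
    Returns (d0, d): d0[i] is the cost from point 1 to remaining
    point i (which also equals the return cost, by symmetry), and
    d[i][j] is the cost between remaining points i and j.
    """
    n = len(xs) - 1
    d0 = [(xs[0] - xs[i + 1]) ** 2 + (ys[0] - ys[i + 1]) ** 2
          for i in range(n)]
    d = [[(xs[i + 1] - xs[j + 1]) ** 2 + (ys[i + 1] - ys[j + 1]) ** 2
          for j in range(n)]
         for i in range(n)]
    return d0, d
```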
2. DP States
Define \(dp[mask][i]\) as follows:
- \(mask\): the set of visited “remaining points” (bitmask, \(2^n\) subsets)
- \(i\): the last point visited (one of the remaining points)
- Value: the minimum cost of starting from point \(1\), visiting all points in \(mask\) exactly once, and ending at point \(i\)
Initialization:
- \(dp[1 \ll i][i] = d0[i]\) (go directly from point \(1\) to \(i\))
Transition:
- \(dp[mask \cup \{j\}][j] = \min\left(dp[mask \cup \{j\}][j],\ dp[mask][i] + d[i][j]\right)\)
(where \(j \notin mask\))
Answer:
- After visiting all points (\(mask = (1 \ll n)-1\)), we return to point \(1\), so
\(\min_i \left(dp[all][i] + dback[i]\right)\)
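The states, transitions, and answer above can be written directly with a plain 2D list; this readable version (our sketch, without the flattening and bit tricks used in the final code) follows the recurrence line by line:

```python
def held_karp(d0, dback, d):
    """Minimum tour cost via the subset DP described above.

    d0[i]: cost start -> i; dback[i]: cost i -> start;
    d[i][j]: cost between remaining points i and j.
    """
    n = len(d0)
    INF = float("inf")
    full = 1 << n
    dp = [[INF] * n for _ in range(full)]
    for i in range(n):
        dp[1 << i][i] = d0[i]  # go directly from the start to i
    for mask in range(full):
        for i in range(n):
            cur = dp[mask][i]
            if cur == INF or not (mask >> i) & 1:
                continue  # i must be a member of mask
            for j in range(n):
                if (mask >> j) & 1:
                    continue  # j already visited
                nm = mask | (1 << j)
                if cur + d[i][j] < dp[nm][j]:
                    dp[nm][j] = cur + d[i][j]
    # close the tour: return from the last point to the start
    return min(dp[full - 1][i] + dback[i] for i in range(n))
```

For three points at \((0,0)\), \((3,0)\), \((0,4)\), the only tour costs \(9 + 25 + 16 = 50\) in squared distance, in either direction.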
3. Implementation Tricks (Optimization)
- Instead of a 2D array, \(dp\) is flattened to 1D and accessed as `dp[base[mask] + i]`, using precomputed offsets `base[mask] = mask * n`.
- Bit operations are used to enumerate the elements of a set, scanning efficiently with `lsb = m & -m` (least significant bit). A table `bit_index[1 << i] = i` is prepared so that a set bit can be converted to its vertex index in \(O(1)\).
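The second trick can be demonstrated in isolation: extracting one set bit at a time with `m & -m` and mapping it back to an index via the lookup table, exactly as the transition loop does.

```python
n = 4
bit_index = [0] * (1 << n)
for i in range(n):
    bit_index[1 << i] = i  # map "single set bit" -> vertex index

# Enumerate the members of the set {0, 2, 3} = 0b1101
# via its lowest set bit, as in the transition loop.
m = 0b1101
members = []
while m:
    lsb = m & -m              # isolate the least significant set bit
    members.append(bit_index[lsb])
    m -= lsb                  # clear that bit and continue
print(members)  # [0, 2, 3]
```

`m & -m` works because two's-complement negation flips all bits above the lowest set bit, leaving only that bit in common.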
Complexity
- Time complexity: \(O(n^2 2^n)\) (where \(n=N-1\))
For each subset \(mask\) (\(2^n\) possibilities), we transition from endpoint \(i\) (up to \(n\) choices) to unvisited \(j\) (up to \(n\) choices).
- Space complexity: \(O(n 2^n)\)
To store all states of \(dp[mask][i]\).
Implementation Notes
Exclude point \(1\) from DP: Since the start is fixed, using subsets of only the remaining \(N-1\) points makes the implementation simpler and faster.
Squared distances are always integers: \((a-c)^2+(b-d)^2\) is an integer, so integer DP can be performed without worrying about precision errors.
Special handling for \(N=2\): The visit order is essentially only 1→2→1, so the cost is \(2 \times \text{dist}^2(1,2)\) and can be output immediately (the code also branches for this case).
Setting INF: The maximum possible tour cost is small (each coordinate difference is at most \(2000\), so a single squared distance is at most \(2 \times 2000^2 = 8 \times 10^6\), over at most \(16\) edges), so setting `INF = 10**18` or similar is safe.
Source Code
import sys
from array import array


def main():
    input = sys.stdin.readline
    N = int(input())
    xs = [0] * N
    ys = [0] * N
    for i in range(N):
        x, y = map(int, input().split())
        xs[i] = x
        ys[i] = y

    if N == 2:
        # Only one tour exists: 1 -> 2 -> 1.
        dx = xs[0] - xs[1]
        dy = ys[0] - ys[1]
        c = dx * dx + dy * dy
        print(2 * c)
        return

    n = N - 1  # nodes excluding start(0); indices 0..n-1 correspond to original 1..N-1
    x0, y0 = xs[0], ys[0]
    d0 = [0] * n
    dback = [0] * n
    d = [[0] * n for _ in range(n)]
    for i in range(n):
        dx = x0 - xs[i + 1]
        dy = y0 - ys[i + 1]
        c = dx * dx + dy * dy
        d0[i] = c
        dback[i] = c  # squared distance is symmetric
    for i in range(n):
        xi, yi = xs[i + 1], ys[i + 1]
        row = d[i]
        for j in range(n):
            dx = xi - xs[j + 1]
            dy = yi - ys[j + 1]
            row[j] = dx * dx + dy * dy

    full = 1 << n
    fullmask = full - 1
    base = [m * n for m in range(full)]  # offsets for the flattened dp
    bit_index = [0] * full
    for i in range(n):
        bit_index[1 << i] = i

    INF = 10**18
    dp = array('Q', [INF]) * (full * n)
    for i in range(n):
        dp[base[1 << i] + i] = d0[i]

    for mask in range(1, full):
        base_mask = base[mask]
        rem0 = fullmask ^ mask  # points not yet visited
        m = mask
        while m:
            lsb = m & -m
            i = bit_index[lsb]
            cur = dp[base_mask + i]
            if cur != INF:
                row = d[i]
                r = rem0
                while r:
                    lb = r & -r
                    j = bit_index[lb]
                    nm = mask | lb
                    idx = base[nm] + j
                    val = cur + row[j]
                    if val < dp[idx]:
                        dp[idx] = val
                    r -= lb
            m -= lsb

    ans = INF
    base_all = base[fullmask]
    for i in range(n):
        cur = dp[base_all + i]
        val = cur + dback[i]
        if val < ans:
            ans = val
    print(ans)


if __name__ == "__main__":
    main()
This editorial was generated by gpt-5.2-high.