## D - Intersecting Intervals Editorial by en_translator

Inspecting all pairs of segments to check if they intersect costs \(O(N^2)\) time in total, which does not fit in the execution time limit. We need an appropriate algorithm to speed this up.

Here, instead of counting intersecting pairs directly, let us find the number \(x\) of disjoint pairs of segments and subtract it from \(N(N-1)/2\), the total number of pairs. (This is a typical application of the **complementary event** technique.)

The segments to the left of segment \(i\) that do not intersect it are exactly the segments \(j\) with \(r_j < l_i\). Thus, for each \(i=1,2,\dots,N\), count the number of \(j\) such that \(r_j < l_i\); the sought \(x\) is the sum of these counts.
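As a sanity check, the complement-counting idea can be verified with a naive \(O(N^2)\) sketch (for illustration only; the function name is ours, not part of the problem):

```python
def count_intersecting_naive(segs):
    # segs: list of (l, r) pairs; returns the number of intersecting pairs
    n = len(segs)
    total = n * (n - 1) // 2
    disjoint = 0
    for i in range(n):
        for j in range(n):
            # segment j lies entirely to the left of segment i,
            # so each disjoint unordered pair is counted exactly once
            if i != j and segs[j][1] < segs[i][0]:
                disjoint += 1
    return total - disjoint

print(count_intersecting_naive([(1, 5), (3, 7), (6, 9)]))  # -> 2
```

Here the pairs \((1,2)\) and \((2,3)\) intersect, while \((1,3)\) is disjoint, so the answer is \(3 - 1 = 2\).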

This can be computed quickly using a technique called the **sliding window**.
First, let \(L\) and \(R\) be the sequences obtained by sorting \((l_1,l_2,\dots,l_N)\) and \((r_1,r_2,\dots,r_N)\), respectively.
For each \(i=1,2,\dots,N\), let us find the number \(c_i\) of indices \(j\) such that \(R_j < L_i\). Since \(R\) is sorted in ascending order, these indices are exactly \(1,2,\dots,c_i\).
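Since \(R\) is sorted, each \(c_i\) can also be found by binary search. This is not the sliding-window approach the editorial develops below, but a sketch of the same count using Python's `bisect` module:

```python
import bisect

def count_disjoint(L, R):
    # L, R: sorted lists of left/right endpoints.
    # bisect_left(R, li) is the number of elements of R strictly less
    # than li, i.e. c_i; summing over all i gives x in O(N log N).
    return sum(bisect.bisect_left(R, li) for li in L)

L = sorted([1, 3, 6])
R = sorted([5, 7, 9])
print(count_disjoint(L, R))  # -> 1
```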
Therefore, we can come up with the following algorithm:

- For each \(i=1,2,\dots,N\) in order:
  - Let \(c \leftarrow 1\).
  - While \(R_c < L_i\), let \(c \leftarrow c + 1\).
  - Let \(c_i \leftarrow c-1\).

This algorithm still costs \(O(N^2)\) time in total, because the inner while loop may run up to \(N-1\) times for each \(i\), so we need to devise a way to optimize it.

Here, an important property is that \(c_i\leq c_{i+1}\), since \(L\) is sorted in ascending order. Indeed, because \(R_{c_i} < L_i \leq L_{i+1}\), at least \(c_i\) elements of \(R\) are less than \(L_{i+1}\).

Thus, when finding \(c_{i+1}\), instead of scanning from scratch as \(R_1,R_2,\dots\), we can resume from where the previous scan stopped, inspecting \(R_{c_i+1},R_{c_i+2},\dots\). That is, we can modify the algorithm above as follows:

- Let \(c \leftarrow 1\).
- For each \(i=1,2,\dots,N\), do the following:
  - While \(R_c < L_i\), let \(c \leftarrow c + 1\).
  - Let \(c_i \leftarrow c-1\).

Let us analyze the complexity of this algorithm. Every time the body of the inner while loop executes, \(c\) increases by \(1\), and \(c\) never exceeds \(N\). Thus, the loop body executes at most \(N-1\) times throughout the entire run. Therefore, the time complexity of this algorithm is \(O(N)\) (excluding the sort), which runs fast.

Sample code (Python)

```python
N = int(input())
l = [0] * N
r = [0] * N
for i in range(N):
    l[i], r[i] = map(int, input().split())
l.sort()
r.sort()
ans = N * (N - 1) // 2  # total number of pairs
j = 0
for i in range(N):
    # advance j while segment j ends strictly before segment i begins
    while r[j] < l[i]:
        j += 1
    ans -= j  # subtract the disjoint pairs counted for this i
print(ans)
```
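For testing without standard input, the same sliding-window solution can be wrapped in a function (the name `solve` is our own choice, not part of the problem statement):

```python
def solve(intervals):
    # intervals: list of (l, r) pairs; returns the number of intersecting pairs
    L = sorted(l for l, r in intervals)
    R = sorted(r for l, r in intervals)
    ans = len(intervals) * (len(intervals) - 1) // 2
    j = 0
    for li in L:
        # advance j while the j-th smallest right endpoint is strictly
        # left of li; j then equals c_i, the number of disjoint partners
        while R[j] < li:
            j += 1
        ans -= j
    return ans

print(solve([(1, 5), (3, 7), (6, 9)]))  # -> 2
```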
