## G - Hash on Tree Editorial by en_translator

This problem has multiple solutions. In this editorial, we introduce a solution that uses a data structure called **Static top tree**.

## Objective: manage tree DP on a segment tree!?

First of all, let us consider what kind of operations are required to solve this problem.

First, let us consider solving this problem in \(\mathrm{O}(N)\) time per query. This is easy; we can just perform a typical tree DP (Dynamic Programming). That is, one can write code that evaluates \(f(v)\) bottom-up and call `calc_dp(0)` every time \(A\) is updated (regarding vertex \(0\) as the root). However, this solution costs a total of \(\mathrm{O}(NQ)\) time, which leads to TLE (Time Limit Exceeded).

```
using mint = atcoder::modint998244353;
vector<vector<int>> g; // Adjacency list of the rooted tree
mint A[n_max];
mint calc_dp(int v) {
    if (g[v].empty()) return A[v];
    mint prod = 1;
    for (auto& child : g[v]) prod *= calc_dp(child);
    return A[v] + prod;
}
```

Now, let us regard the original problem as a problem that asks to process the following two kinds of query:

## Problem described in query style

- Given an \(N\)-vertex rooted tree and a sequence \((A_1, A_2, \dots, A_N)\), process the following two kinds of query:
  - Set \(A_v\) to \(x\).
  - Evaluate the tree DP above, and print \(f(1)\).

In other words, this problem asks us to process two kinds of queries about the tree DP: “update the value of a vertex” and “retrieve the value of the whole tree.”

Although such problems are rarely posed on a tree, most of you should know that this kind of problem on a sequence is very popular:

## Typical problem against a sequence

- Given a sequence \(A_1, A_2, \dots, A_N\), process the following two kinds of query:
  - Set \(A_i\) to \(x\).
  - Evaluate \(A_1 \oplus A_2 \oplus \dots \oplus A_N\), where \(\oplus\) is some monoid operation.

The problem on a sequence can be processed fast, in \(\mathrm{O}(\log N)\) time per query, using a segment tree. Even for a problem on a tree, if the query can be boiled down to a query on a sequence (e.g. the sum of the monoid values on the vertices of a path), one can still handle it using HLD (Heavy-Light Decomposition) and a segment tree.
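As a reference point for the sequence case, here is a minimal sketch of such a segment tree (our own illustrative code, not part of the original editorial; as an example monoid we pick multiplication modulo \(998244353\), matching the hashing flavor of this problem):

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch (names are ours): a segment tree supporting point
// update and a fold over the whole sequence. Any associative operation
// with an identity element works the same way.
struct SegTree {
    static constexpr long long MOD = 998244353;
    int n;
    std::vector<long long> t;  // t[1] holds the fold of the whole sequence
    explicit SegTree(int n_) : n(n_), t(2 * n_, 1) {}
    // A_i <- x in O(log N): rewrite the leaf, then fix its ancestors
    void set(int i, long long x) {
        t[i += n] = x % MOD;
        while (i >>= 1) t[i] = t[2 * i] * t[2 * i + 1] % MOD;
    }
    // A_0 * A_1 * ... * A_{N-1} (mod MOD); reading t[1] is valid here
    // because the chosen monoid is commutative, so the exact bracketing
    // at the root does not matter even when n is not a power of two
    long long all_fold() const { return t[1]; }
};
```

The two query kinds above map directly to `set` and `all_fold`.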

However, if you regard the process of evaluating tree DP as a data structure, the original problem requires operations equivalent to element-wise update and segment-wise retrieval. In a sense, this problem asks to “manage tree DP on a segment tree.” (Of course, it cannot be managed in an ordinary segment tree.)

Now let us consider how we can store tree DP on a segment-tree-like data structure.

## Observation 1: solution for a perfect binary tree

As a simple special case, let us try to solve the problem when the rooted tree is a perfect binary tree.

The tree DP described above can be straightforwardly sped up as in the following code:

```
using mint = atcoder::modint998244353;
vector<vector<int>> g; // Adjacency list of the rooted tree
vector<int> P;         // Parent vertex (with P[0] = -1)
mint A[n_max], f[n_max];
mint precalc(int v) {
    if (g[v].empty()) return f[v] = A[v];
    mint prod = 1;
    for (auto& child : g[v]) prod *= precalc(child);
    return f[v] = A[v] + prod;
}
void update(int v, mint x) {
    A[v] = x;
    for (; v != -1; v = P[v]) {
        if (g[v].empty()) { // a leaf: just copy its value
            f[v] = A[v];
            continue;
        }
        mint prod = 1;
        for (auto& child : g[v]) prod *= f[child];
        f[v] = A[v] + prod;
    }
}
```

We explain the outline of the code. First, we precalculate and memorize the values \(f(1), f(2), \dots, f(n)\). Then, when `A[v]` is updated, the values \(f(u)\) change only for \(v\) and its ancestors, so we recalculate them bottom-up while reusing the precalculated data for the other parts.

This solution works very fast for specific cases: when the tree is a perfect binary tree. A perfect binary tree has a depth of \(\mathrm{O}(\log N)\), and each vertex has at most two children, which enables us to update each vertex in \(\mathrm{O}(1)\) time, so the DP can be recalculated in \(\mathrm{O}(\log N)\) per query.

Of course, the worst-case complexity of this solution for a general tree is not improved from \(\mathrm{O}(NQ)\). Specifically, the worst case costs \(\mathrm{O}(N)\) time per query because:

- the worst depth is \(\mathrm{O}(N)\), and
- a vertex has at worst \(\mathrm{O}(N)\) children.

So, in order to improve this solution, it seems one has to convert a general tree into some binary tree of depth \(\mathrm{O}(\log N)\) so that the two issues above are overcome.

## Observation 2: relationship between decomposition of tree and tree DP

In order to develop the observation, we will interpret the tree DP from another viewpoint. In fact, the tree DP can be considered as a process of generating a rooted tree by merging subtrees, while some information is maintained. Now we describe this property.

To consider the process of generating a tree by merging, it is useful to consider the inverse operation: the process of decomposing the tree. By recursively performing steps (1)-(3) below until the tree has no edges, one can decompose a rooted tree into vertices and edges. (See also the figure below.)

- (1) Remove the root vertex. Here, do not remove the edges adjacent to the root; such edges are regarded to have a vertex without information as their endpoint. (We call such a vertex a **virtual vertex**.)
- (2) Split off the subtrees connected by the virtual vertex by cloning the virtual vertex.
- (3) Remove the virtual vertex and its adjacent edge from each subtree. Now each subtree is rooted at an ordinary vertex.

By following this procedure in the reverse order, subtrees are gradually merged and eventually form a single rooted tree.

Let us represent the merging process with functions. Define the following four functions. Hereinafter, we assume that vertices have information attached to them and are distinguishable, but edges do not and are indistinguishable.

`vertex(v)`
: function that generates the vertex \(v\).

`add_vertex(t, v)`
: function that performs the inverse of (1). That is, it assigns \(v\) to a tree \(t\) with a virtual root.

`merge(x, y)`
: function that performs the inverse of (2). That is, it merges two trees \(x\) and \(y\) with virtual roots into one.

`add_edge(t)`
: function that performs the inverse of (3). That is, it adds a virtual root to a rooted tree \(t\).

Then, define a function `generate_tree(v)` as the function that generates the subtree rooted at vertex \(v\) of the rooted tree represented by an adjacency list \(g\). This can be implemented as follows:

```
vector<vector<int>> g; // Adjacency list of the rooted tree
Tree generate_tree(int v) {
    if (g[v].empty()) return vertex(v);
    vector<Tree> children;
    for (auto& child : g[v]) {
        Tree t = generate_tree(child);
        children.push_back(add_edge(t));
    }
    Tree t = children[0];
    for (int i = 1; i < (int)children.size(); i++) {
        t = merge(t, children[i]);
    }
    return add_vertex(t, v);
}
```

The process of merging subtrees to obtain a new subtree coincides with the process of tree DP. For example, the aforementioned tree DP can be abstracted as follows:

```
using mint = atcoder::modint998244353;
using T = mint;
mint A[n_max];
T vertex(int v) { return A[v]; }
T add_vertex(T x, int v) { return x + A[v]; }
T merge(T x, T y) { return x * y; }
T add_edge(T x) { return x; }
vector<vector<int>> g; // Adjacency list of the rooted tree
T calc_dp(int v) {
    if (g[v].empty()) return vertex(v);
    vector<T> children;
    for (auto& child : g[v]) {
        T t = calc_dp(child);
        children.push_back(add_edge(t));
    }
    T t = children[0];
    for (int i = 1; i < (int)children.size(); i++) {
        t = merge(t, children[i]);
    }
    return add_vertex(t, v);
}
```

Notice that the functions `generate_tree(v)` and `calc_dp(v)` have exactly the same form.

From this viewpoint, the tree DP can be considered as a process of generating a rooted tree by merging subtrees, while some information is maintained. More specifically, it can be described as follows:

- Starting from leaf vertices, consider merging subtrees by repeatedly performing the following three kinds of operations: adding a vertex, adding an edge, and merging rooted trees whose roots are virtual vertices.
- For all subtrees generated in the process of merge, define some information about the subtree.
- By appropriately defining a function that merges information, one can obtain the information corresponding to the new subtree generated by merging subtrees.

We summarize the discussion so far:

- One can reevaluate the tree DP in \(\mathrm{O}(\log N)\) time if the tree is a binary tree of depth \(\mathrm{O}(\log N)\).
- A general tree has the following two properties, which obstruct efficient recalculation:
  - The depth is \(\mathrm{O}(N)\) at worst.
  - A vertex has \(\mathrm{O}(N)\) children at worst.

- By the way, tree DP is a process of merging subtrees while maintaining some information.
- The process of merging subtrees can be obtained by following the decomposition of the tree in reverse order.

These two facts yield the following consequence:

- If we can express the merging process as a binary tree of depth \(\mathrm{O}(\log N)\), then the tree DP may be recalculated efficiently.

The realization of such a merging process is the **Static top tree**.

## Static top tree

#### Notes: although Static top tree has “Top tree” in its name, it is a totally different data structure from a Top tree (in the narrow sense). Do not confuse the two if you want to learn about Top trees.

- More precisely, a Static top tree is a static version of “the data structure that supports most of the features of a Top tree, achieved by managing the information maintained by the normal edges of a Link/Cut tree in a Splay tree” (a Top tree in the broad sense).
- By the way, one can also construct a Static top tree by making a Top tree (in the narrow sense) static. Some people call such a data structure a Static top tree as well. We skip introducing it here, but if you are interested, please refer to the implementation and comments (in Japanese) by tatyam.

A Static top tree is a binary tree of depth \(\mathrm{O}(\log N)\) that represents the process of merging subtrees (strictly speaking, not subtrees; this is described later).

In order to describe how to construct a Static top tree, we first explain the inverse operation of the merging process: the procedure of decomposing a tree.
First, apply HLD to the tree to classify each edge as a heavy or light edge. (If you do not know HLD, please refer to the editorial of ABC269 Ex.)

Then, repeat the following steps (1)-(4) until the tree has no edges. (The differences from the previous decomposition are written in **bold text**. See also the figure below.)

- (1) **Choose the heavy path connected to the root, and remove the heavy edges in it.**
- (2) Remove the root vertex. Here, do not remove the edges adjacent to the root; such edges are regarded to have a virtual vertex as their endpoint.
- (3) Split off the subtrees connected by the virtual vertex by cloning the virtual vertex.
- (4) Remove the virtual vertex and its adjacent **light edge** from each subtree. Now each subtree is rooted at an ordinary vertex.

It is worth noting that a graph appearing during the decomposition is not necessarily a subtree of the rooted tree. That is, after removing edges in step (1), there may be a graph that cannot be regarded as a rooted tree, depending on the order of removing edges. Instead, we borrow a term from Top trees and call a graph appearing during the decomposition a **cluster**.

As you can see in the figure, the following two kinds of clusters appear during the decomposition:

- A cluster in which subtrees rooted at non-virtual vertices are connected by zero or more heavy edges.
- A cluster that forms a subtree rooted at a virtual vertex.

Again, we borrow the terms from Top trees to call the former a **path cluster** and the latter a **point cluster**. (A path cluster without a heavy edge connected to the root can hardly be called a path, but we ignore the discomfort here.)

Next, we will follow the inverse operation of the decomposition to come up with a way to merge the clusters in a binary-tree form.
During the decomposition, two kinds of cluster merges occur: merging path clusters, and merging point clusters. Let us further borrow the terms from Top trees to call the former **compress**, and the latter **rake**.

The key of a Static top tree is to keep the depth to \(\mathrm{O}(\log N)\) by applying a trick when merging clusters in a binary-tree form. However, there is no room for a trick in the inverse operations of (2) and (4); what we can possibly improve are the inverse operations of:

- (1) choosing a heavy path and removing the heavy edges from it;
- (3) splitting off the subtrees connected to a virtual vertex by cloning the virtual vertex.

Intuitively, it seems a good idea to merge clusters according to the following strategy:

- Inverse operation of (1): when “compress”-ing path clusters, make sure that the merging process forms a perfect binary tree.
- Inverse operation of (3): when “rake”-ing point clusters, make sure that the merging process forms a perfect binary tree.

This is quite a rational strategy, because the two properties of a general tree that obstruct efficient computation are:

- the \(\mathrm{O}(N)\) depth at worst; and
- the \(\mathrm{O}(N)\) children that a vertex may have,

but the former is resolved by compressing, and the latter by raking.

However, with this strategy the depth of the tree may reach \(\mathrm{O}(\log^2 N)\) in the worst case. (We omit the details here. The reason is the same as the property that a naive implementation of processing a path query with HLD + segment tree costs \(\mathrm{O}(\log^2 N)\) at worst.)

That’s why we need another trick. We replace “merging clusters to form a perfect binary tree” with “merging so that the left and right children hold clusters with (almost) the same number of vertices.” In fact, we can prove that the latter strategy allows us to keep the depth of the entire merging process to \(\mathrm{O}(\log N)\). (We omit the details too. The reason is the same as the idea of processing a path query with HLD + segment tree in \(\mathrm{O}(\log N)\) time at worst. Reference: article by Nachia (in Japanese), chapter “Balanced HLD” in an article by errorgorn.)

By the trick above, the process of merging clusters has been represented by a binary tree of depth \(\mathrm{O}(\log N)\).
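The balancing rule can be sketched as follows. This is our own illustration under assumed names (`merge_depth`, weights `w`), not the editorial's sample code; it only measures the depth that the weighted-midpoint split produces, rather than building the actual top tree:

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Given the vertex counts w[l..r) of a row of clusters to be compressed
// (or raked), split at the weighted midpoint and recurse, so both sides
// of every merge node hold (almost) the same number of vertices.
// merge_depth() returns the depth of the resulting binary merge tree,
// which stays O(log(total weight)) even for very skewed weights.
int merge_depth(const std::vector<long long>& w, int l, int r) {
    if (r - l == 1) return 0;  // a single cluster needs no merge
    long long total = std::accumulate(w.begin() + l, w.begin() + r, 0LL);
    long long prefix = 0;
    int m = l;
    // grow the left part while it still holds at most half of the weight
    while (m < r - 1 && 2 * (prefix + w[m]) <= total) prefix += w[m++];
    if (m == l) m = l + 1;  // keep the left side non-empty
    return 1 + std::max(merge_depth(w, l, m), merge_depth(w, m, r));
}
```

For uniform weights this degenerates to the perfect-binary-tree strategy; for skewed weights the heavy cluster is isolated near the root, which is exactly what keeps the overall depth logarithmic.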

Sample code (lines 5-74): We referred to the implementations by maspy and tatyam.

## Solution to the original problem

Now we describe the solution to the original problem using Static top tree.

The following five functions are required while merging clusters:

`vertex(v)`
: function that generates a path cluster consisting only of the vertex \(v\).

`compress(p, c)`
: function that performs the inverse operation of (1), i.e. merges path clusters \(p\) and \(c\) (with \(p\) being closer to the root).

`add_vertex(t, v)`
: function that performs the inverse operation of (2), i.e. assigns vertex \(v\) to the root of the point cluster \(t\) to form a new path cluster.

`rake(x, y)`
: function that performs the inverse operation of (3), i.e. merges point clusters \(x\) and \(y\).

`add_edge(t)`
: function that performs the inverse operation of (4), i.e. adds a virtual root to the path cluster \(t\) to form a point cluster.

Just as with the tree DP, the problem can be solved by defining the information that should be stored for each cluster, and constructing the DP transitions corresponding to these five functions.

The most difficult among them is `compress`, which merges path clusters. While merging point clusters can be somehow considered an extension of tree DP and achieved by storing the same information when performing rake, merging path clusters means merging objects of a “peculiar shape.”

Here, we adopt a term from Top trees and call a vertex of a cluster that is adjacent to a vertex outside the cluster a **boundary vertex**. A path cluster basically has two boundary vertices: one nearer to the root and one further from it. (The exception arises when the path cluster is a rooted tree, in which case the two boundary vertices refer to the same vertex.) Depending on the problem, one can relatively easily construct the DP transitions by focusing on these two boundary vertices.

After appropriate observation, it turns out that a path cluster should store a pair \((a, b)\) such that “when a subtree with hash value \(x\) is attached to the boundary vertex further from the root, the hash value of the subtree rooted at the boundary vertex nearer to the root becomes \(ax+b\).” By defining the information on a path cluster like this, `compress` can be defined as the composition of affine functions. We can also define the other functions as follows:

```
using mint = atcoder::modint998244353;
vector<mint> A;
struct Path {
    mint a, b;
};
using Point = mint;
Path vertex(int i) { return {1, A[i]}; }
Path compress(Path p, Path c) { return {p.a * c.a, p.a * c.b + p.b}; }
Point rake(Point l, Point r) { return l * r; }
Point add_edge(Path d) { return d.b; }
Path add_vertex(Point d, int i) { return {d, A[i]}; }
```
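As a sanity check, these transitions can be traced by hand on a four-vertex tree. The sketch below is our own (it replaces `atcoder::modint998244353` with plain `long long` arithmetic to stay dependency-free, and uses hypothetical values \(A = (2, 3, 5, 7)\); trailing underscores mark the names as ours). It merges the heavy path \(0\)-\(1\)-\(3\) by `compress` and the light child \(2\) by `add_edge`/`add_vertex`:

```cpp
#include <cassert>

// Tree: vertex 0 has children {1, 2}; vertex 1 has child {3}.
// Heavy path: 0-1-3; the light child 2 is raked into vertex 0.
static const long long MOD = 998244353;
struct Path { long long a, b; };
using Point = long long;
long long val[4] = {2, 3, 5, 7};  // hypothetical values A_v for v = 0..3
Path vertex_(int i) { return {1, val[i]}; }
Path compress_(Path p, Path c) { return {p.a * c.a % MOD, (p.a * c.b + p.b) % MOD}; }
Point rake_(Point l, Point r) { return l * r % MOD; }  // would combine several light children
Point add_edge_(Path d) { return d.b; }
Path add_vertex_(Point d, int i) { return {d, val[i]}; }

long long hash_of_root() {
    Path p13 = compress_(vertex_(1), vertex_(3));  // path cluster 1-3: f(1) = A_1 + A_3
    Point c2 = add_edge_(vertex_(2));              // the single light child 2: f(2) = A_2
    Path p0 = add_vertex_(c2, 0);                  // vertex 0 with its raked light children
    return compress_(p0, p13).b;                   // f(0) = A_0 + f(1) * f(2)
}
```

With these values, \(f(3) = 7\), \(f(1) = 3 + 7 = 10\), \(f(2) = 5\), and \(f(0) = 2 + 10 \cdot 5 = 52\), which is exactly what the composed affine maps return.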

By appropriately abstracting a Static top tree, basically any problem can be solved by defining the five functions above. This is similar to the abstraction of a segment tree, which requires only two functions, isn’t it?

- By the way, the segment tree seems to have originally been defined not only on a perfect binary tree but also on a general balanced binary tree. (Reference, in Japanese.) Under this interpretation, updating tree DP using a Static top tree can be considered somewhat similar to a segment tree.

All that is left is to process the queries; this can be done by appropriately computing and updating the DP.

## Further application of a Static top tree

We briefly summarize applications of Static top tree.

In this problem, a very elementary DP was stored in a Static top tree, but tree DP has many variants. Regardless of the type of tree DP, one can store it on a Static top tree as long as one can come up with a good definition of `compress`.

For example, the following two problems look different from the original problem, but both can be processed in \(\mathrm{O}(\log N)\) time per query. (Use them as exercise.)

(Maximum independent set on a tree): You are given an \(N\)-vertex tree. Each vertex is assigned one of the following three states: `B`, `R`, or `?`. Process the following two types of query:

- Choose one vertex and change its state.
- Print the maximum size of a set of vertices such that:
  - it contains all vertices with state `B`;
  - it does not contain any vertex with state `R`; and
  - no two vertices in the set are adjacent to each other in the tree.

(Diameter of tree): you are given an \(N\)-vertex edge-weighted tree. Process the following two types of query:

- Choose one edge and change its weight.
- Retrieve the diameter of the tree, as well as both endpoints of a diameter.

- Hint: a typical way to store an edge-weighted tree in a Static top tree is either to modify the Static top tree so that it can also store information on edges, or to consider a \((2N-1)\)-vertex tree in which the original edges are also regarded as vertices. (The latter is easier, as it requires less implementation.)
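The latter transformation can be sketched as follows (a hypothetical helper of our own, not code from the editorial):

```cpp
#include <array>
#include <cassert>
#include <utility>
#include <vector>

// Given an edge-weighted tree on vertices 0..N-1 as (u, v, w) triples,
// build the (2N-1)-vertex tree in which edge k is subdivided by a new
// vertex N + k. Edge weights move onto the new vertices; original
// vertices get a neutral value (0 here), so a vertex-based Static top
// tree can handle both kinds of vertex uniformly.
std::pair<std::vector<std::vector<int>>, std::vector<long long>>
subdivide_edges(int n, const std::vector<std::array<long long, 3>>& edges) {
    std::vector<std::vector<int>> g(2 * n - 1);
    std::vector<long long> value(2 * n - 1, 0);
    for (int k = 0; k < (int)edges.size(); k++) {
        int u = (int)edges[k][0], v = (int)edges[k][1];
        int e = n + k;  // the vertex representing edge k
        value[e] = edges[k][2];
        g[u].push_back(e); g[e].push_back(u);
        g[v].push_back(e); g[e].push_back(v);
    }
    return {g, value};
}
```

The neutral value depends on the DP: for the diameter problem, an original vertex behaves like an edge of weight \(0\).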

We also introduce problems in difficult contests for which Static top trees can be applied.

## Examples (UCup, JOI, CodeForces Div.1 Spoiler)

One can utilize a Static top tree not only for tree DP but also for “rerooting” DP, enabling point updates. Specifically, one can solve the following problem:

SPOJ QTREE6: You are given an \(N\)-vertex tree. Each vertex is painted either black or white. Process the following two kinds of query:

- Given a vertex, flip its color.
- Given a vertex, count the vertices connected to it. Vertices \(u\) and \(v\) are said to be connected if and only if the colors of the vertices on the path from vertex \(u\) to vertex \(v\) are all equal (including the endpoints).

Applications of a Static top tree are not limited to optimizing tree DP.

Recall the correspondence between operations on a sequence and on a tree. A Static top tree corresponds to “dividing a sequence to form a segment tree.” The two are alike in that both are tree-structured data structures that merge elements as a binary tree of depth \(\mathrm{O}(\log N)\). With this comparison in mind, one can expect that any algorithm realized by dividing a sequence in segment-tree form can also be applied to a tree. For example, the point update of tree DP, which we have introduced so far, can be regarded as the tree version of point update and range retrieval on a segment tree.

Dividing a sequence in segment-tree style has applications other than the segment tree itself. For example, in the divide-and-conquer trick, a segment is divided in segment-tree form. This implies that we can do divide-and-conquer on a tree by doing so on a Static top tree. Specifically, one can solve the following problems in \(\mathrm{O}(N \log^2 N)\) time using a Static top tree.

- ABC269Ex Antichain: You are given an \(N\)-vertex rooted tree. Define a binary relation such that (vertex \(a\) is an ancestor of vertex \(b\)) \(\iff a \leq b\). For \(k=1, 2, \dots, N\), count the antichains of size \(k\) modulo \(998244353\).

- yukicoder No.2595 Parsing Challenge: Evaluate a mathematical expression of length \(N\) consisting of `1`-`9`, `+`, `-`, `*`, `(`, and `)`. (Hint: consider the syntax tree.)

From this viewpoint, a Static top tree turns out to have a wider usage than merely performing point updates on tree DP.

In modern competitive programming, balanced binary search trees and Link/Cut trees are rarely seen in contests, but their limited, static versions, such as the segment tree or “HLD + segment tree,” appear so frequently that they are counted among the fundamental algorithms.

We do not predict that Static top tree will become a tier-1 algorithm in short-term contests, but it is attractive that its abstraction enables a very concise implementation. In the future, Static top tree may become locally popular as the static version of a Top-tree-like data structure.
