Heavy-light decomposition

The '''heavy-light (H-L) decomposition''' of a rooted tree is a method of partitioning the vertices of the tree into disjoint paths (within each path, every vertex has degree two except the two endpoints, which have degree one) that gives important asymptotic time bounds for certain problems involving trees. It appears to have been introduced in passing in Sleator and Tarjan's analysis of the performance of the link-cut tree data structure.

==Definition==

The ''heavy-light decomposition'' of a tree <math>T=(V,E)</math> is a coloring of the tree's edges. Each edge is either ''heavy'' or ''light''. To determine which, consider the edge's two endpoints: one is closer to the root, and one is further away. If the size of the subtree rooted at the latter is more than half that of the subtree rooted at the former, the edge is heavy. Otherwise, it is light.
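
For concreteness, here is a minimal Python sketch of this classification; the <code>Node</code> class and the helpers are assumptions for illustration, not part of the original text:

<pre>
# Minimal sketch (hypothetical Node class): color every edge of a
# rooted tree heavy or light according to the definition above.
class Node:
    def __init__(self, children=()):
        self.children = list(children)

def size(v):
    # Number of vertices in the subtree rooted at v (recomputed each
    # time for brevity; a real implementation would memoize this).
    return 1 + sum(size(c) for c in v.children)

def classify(v):
    # Yield (parent, child, color) for every edge below v.
    for c in v.children:
        color = 'heavy' if 2 * size(c) > size(v) else 'light'
        yield v, c, color
        yield from classify(c)

# list(classify(root)) enumerates all edges with their colors.
</pre>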

==Properties==

Denote the size of the subtree rooted at vertex <math>v</math> as <math>\operatorname{size}(v)</math>.

Suppose that a vertex <math>v</math> has two children <math>u</math> and <math>w</math> and the edges <math>v</math>-<math>u</math> and <math>v</math>-<math>w</math> are both heavy. Then <math>\operatorname{size}(u) + \operatorname{size}(w) > \frac{1}{2}\operatorname{size}(v)+\frac{1}{2}\operatorname{size}(v) = \operatorname{size}(v)</math>. This is a contradiction, since we know that <math>\operatorname{size}(u) + \operatorname{size}(w) + 1 \le \operatorname{size}(v)</math>. (<math>v</math>, of course, may have more than two children.) We conclude that '''at most one of the edges joining any given vertex to its children may be heavy.'''

At most two edges incident upon a given vertex may then be heavy: the one joining it to its parent, and at most one joining it to a child. Consider the subgraph of the tree in which all light edges are removed. All resulting connected components are paths (some contain only a single vertex and no edges at all) in which the depths of consecutive vertices differ by one. We conclude that '''the heavy edges, along with the vertices upon which they are incident, partition the tree into disjoint paths, each of which is part of some path from the root to a leaf.'''

Suppose a tree contains <math>N</math> vertices. If we follow a light edge from the root, the subtree rooted at the resulting vertex has size at most <math>N/2</math>; if we repeat this, we reach a vertex with subtree size at most <math>N/4</math>, and so on. It follows that '''the number of light edges on any path from root to leaf is at most <math>\lg N</math>.'''

==Construction==

The paths can be obtained with the following Python-style pseudocode, in which <code>size(v)</code> denotes the size of the subtree rooted at <code>v</code> and <code>allPaths</code> is a global list collecting the finished paths:

<pre>
def getPath(node):
    # Returns the heavy path that ends at this node; paths hanging off
    # light edges are finished recursively and appended to allPaths.
    if not node.children:                  # a leaf forms a path by itself
        return [node]
    heavy = max(node.children, key=size)   # child with the largest subtree
    for child in node.children:
        if child is not heavy:             # light edge: child's path ends here
            allPaths.append(getPath(child))
    path = getPath(heavy)                  # heavy edge: extend the child's path
    path.append(node)                      # (list.append returns None, so we
    return path                            #  append and return separately)
</pre>

Then all paths can be obtained by calling:

<pre>allPaths.append(getPath(root))</pre>

Note that by this construction, an edge is heavy if and only if the two vertices it connects lie in the same path. This marks some edges heavy that the definition would leave light (the largest child subtree need not contain more than half of the vertices), but the complexity analysis is unchanged.
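
As a usage sketch under the same conventions (a hypothetical <code>Node</code> class with a <code>children</code> list, plus the <code>size</code> helper assumed by <code>getPath</code>):

<pre>
# Hypothetical driver for getPath above, on a small five-vertex tree.
class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)

def size(v):
    return 1 + sum(size(c) for c in v.children)

#       a
#      / \
#     b   c        b's subtree (3 vertices) is the larger one,
#    / \           so the edge a-b lies on the heavy path.
#   d   e
root = Node('a', [Node('b', [Node('d'), Node('e')]),
                  Node('c')])

allPaths = []
allPaths.append(getPath(root))
print([[v.label for v in p] for p in allPaths])
# e.g. [['c'], ['e'], ['d', 'b', 'a']] -- which of d/e becomes the
# heavy child is an arbitrary tie-break.
</pre>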

==Applications==

The utility of the H-L decomposition lies in the fact that problems involving paths between nodes can often be solved more efficiently by "skipping over" heavy paths rather than considering each edge in the path individually. This is because long paths in poorly balanced trees tend to consist mostly of heavy edges.

To "skip" from any node in the tree to the root:

<pre>
while current_node ≠ root
    if color(current_node, parent(current_node)) = light
        current_node = parent(current_node)
    else
        current_node = skip(current_node)
</pre>

In the above, <code>color(u,v)</code> represents the color (heavy or light) of the edge from <code>u</code> to <code>v</code>, <code>parent(v)</code> represents the parent of a node <code>v</code>, and <code>skip(v)</code> represents the parent of the highest (''i.e.'', least deep) node that is located on the same heavy path as node <code>v</code>. (It is so named because it allows us to "skip" through heavy paths.) In order for the latter to be implemented efficiently, every node whose parent link is heavy must be augmented with a pointer to the result of the <code>skip</code> function, but that is trivial to achieve.
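
As a hedged sketch of that precomputation (the node fields <code>parent</code>, <code>children</code>, and <code>heavy</code> are assumptions; <code>heavy</code> marks a node whose parent link is heavy, and the root has <code>parent = None</code>), <code>skip[v]</code> below plays the role of <code>skip(v)</code> in the loop above:

<pre>
def compute_skip(root):
    # skip[v] = parent of the highest node on v's heavy path, so one
    # assignment current_node = skip[current_node] jumps over the
    # whole heavy path at once.
    skip = {}
    stack = [(root, root)]            # (node, top of its heavy path)
    while stack:
        v, top = stack.pop()
        if not v.heavy:               # v starts a new heavy path
            top = v
        # If the path's top is the root, stop at the root itself
        # (rather than its nonexistent parent) so the walk terminates.
        skip[v] = top.parent if top.parent is not None else top
        for c in v.children:
            stack.append((c, top))
    return skip
</pre>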

What is the running time of the walking loop above? There are at most a logarithmic number of light edges on the path, each of which takes constant time to traverse. There are also at most a logarithmic number of heavy paths encountered, since between two consecutive heavy paths on the walk there is at least one light edge. We spend only a constant amount of time on each heavy path (a single skip), so overall this "skipping" operation runs in logarithmic time.

This might not seem like much, but let's see how it can be used to solve a simple problem.


===Dynamic distance query===

Consider the following problem: A weighted, rooted tree is given, followed by a large number of queries and modifications interspersed with each other. A ''query'' asks for the distance between a given pair of nodes and a ''modification'' changes the weight of a specified edge to a new (specified) value. How can the queries and modifications be performed efficiently?

First of all, we notice that the distance between nodes <math>u</math> and <math>v</math> is given by <math>\operatorname{dist}(u,v) = \operatorname{dist}(root,u) + \operatorname{dist}(root,v) - 2\,\operatorname{dist}(root,\operatorname{LCA}(u,v))</math>. There are a number of techniques for efficiently answering [[lowest common ancestor]] queries, so if we can efficiently solve queries of a node's distance from the root, a solution to this problem follows trivially.
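
In code the reduction is immediate; <code>rootdist</code> (developed below) and <code>lca</code> (any standard lowest-common-ancestor structure) are assumed helpers:

<pre>
def dist(u, v):
    # Distance between u and v, routed through their lowest common
    # ancestor: the two root paths share the root-to-LCA segment twice.
    return rootdist(u) + rootdist(v) - 2 * rootdist(lca(u, v))
</pre>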

To solve this simpler problem of querying the distance from the root to a given node, we augment our walk procedure as follows: when following a light edge, simply add its weight to a running total; when following a heavy path, add the weights of all the edges traversed to the running total (as well as the weight of the light edge at the top, if any, since a skip crosses that edge too). The latter can be performed efficiently, along with modifications, if each heavy path is augmented with a data structure such as a [[segment tree]] or [[binary indexed tree]] (in which the individual array elements correspond to the weights of the edges of the heavy path).
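
Here is one possible arrangement, as a sketch rather than a definitive implementation: a Fenwick (binary indexed) tree per heavy path. The node fields <code>parent</code>, <code>heavy</code>, <code>top</code> (highest node on the node's heavy path), <code>path</code> (that path's Fenwick tree), <code>pos</code> (1-based index of the node's parent edge within the path, counted from the top), and <code>light_weight</code> (weight of a light parent edge) are all assumptions, and the DFS that fills them in is omitted. The sketch steps to the top of a heavy path and lets the light-edge branch handle the edge above it, rather than calling <code>skip</code> directly:

<pre>
class Fenwick:
    # Standard binary indexed tree: point update and prefix sum,
    # each in O(log n) time.
    def __init__(self, n):
        self.t = [0] * (n + 1)
    def update(self, i, delta):        # add delta at 1-based position i
        while i < len(self.t):
            self.t[i] += delta
            i += i & -i
    def prefix(self, i):               # sum of positions 1..i
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i
        return s

def rootdist(v):
    # Sum of edge weights from v up to the root (the unique node
    # whose parent is None).
    total = 0
    while v.parent is not None:
        if not v.heavy:
            total += v.light_weight    # light edge: add it, step up once
            v = v.parent
        else:
            # Heavy path: one prefix query sums every edge from v up
            # to the top of the path; continuing from the top lets the
            # light-edge branch handle the edge above the path.
            total += v.path.prefix(v.pos)
            v = v.top
    return total

def update_edge(v, new_weight):
    # Change the weight of the edge between v and its parent.
    if v.heavy:
        old = v.path.prefix(v.pos) - v.path.prefix(v.pos - 1)
        v.path.update(v.pos, new_weight - old)
    else:
        v.light_weight = new_weight
</pre>

Under this arrangement a distance query is one call to <code>dist</code> above, and a modification is a single <code>update_edge</code> call touching one Fenwick entry or one stored weight.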

This gives <math>O(\log^2 n)</math> time for queries: a query consists of two "skips" and an LCA query, and in the skips we may have to ascend logarithmically many heavy paths, each of which takes logarithmic time. A modification is merely an update to the underlying array structure and takes <math>O(\log n)</math> time.

==References==

* Wang, Hanson. (2009). Personal communication.
* D. D. Sleator and R. E. Tarjan, A data structure for dynamic trees, ''in'' "Proc. Thirteenth Annual ACM Symp. on Theory of Computing," pp. 114–122, 1981.