DEG3MAXT - Editorial

PROBLEM LINK:

DIFFICULTY:

PREREQUISITES:

PROBLEM:

Given a weighted undirected graph whose largest biconnected component has at most 9 vertices, find the maximum weight of a degree three subtree, as well as the number of such subtrees achieving this maximum weight.

QUICK EXPLANATION:

First partition the graph into biconnected components. Based on the constraints specified in the problem we can deduce that each component will have 9 or fewer vertices. For each component we can find the number of maximum weight subtrees satisfying a particular degree distribution using an exponential algorithm (more details in the next section). Using this degree distribution, we can combine the subtrees of two components.

EXPLANATION:

We are given a connected weighted undirected graph with some property, and the task is to find its degree three subtree (i.e., each vertex in the induced subtree has degree 3 or less) of maximum weight. Moreover, the problem also requires finding the number of such maximum weight subtrees.

Solving the problem for general graphs:

Let us first try to solve the problem for general graphs without having any special property.

One possibility could be to enumerate all degree three subtrees of the graph and then pick the ones having maximum weight. This approach is however extremely inefficient, as the number of degree three subtrees can be as many as O(2^(N^2)).

Instead, we categorize the subtrees based on the degree distributions of the nodes, and for each category compute the maximum weight of the subtrees having that fixed degree distribution as well as the number of subtrees with maximum weight.

Consider, for example, two different subtrees of a 6-vertex graph in which vertices 1, 2, 3 and 5 have degree 1, vertex 4 has degree 2, and vertex 6 is unused. The degree distribution of both subtrees is (1, 1, 1, 2, 1, 0), so both subtrees fall in the same category, with signature distribution (1, 1, 1, 2, 1, 0). Note that we include every vertex of the graph in the degree distribution, irrespective of whether it is in the subtree or not. Hence, the degree distribution is always an N-tuple regardless of the size of the subtree (N = number of nodes in the original graph).

We can create a directed hyper graph whose nodes correspond to the signature degree distributions of the subtrees. There is an edge from node u to node v if the signature distributions of u and v satisfy the following property:
signature (v) - signature (u) = (0, 0, 1, 1, 0, ..., 1, r, 0, 1, ..., 1),
i.e., their difference is an N-tuple which has (r + 1) nonzero entries, one of which is r, while the remaining r nonzero entries are 1's.

Informally, we can picture it like this: we take a subtree of the original graph and extend it by adding some new nodes, all of which are connected to a single node of the original subtree. In this case there is a hyper edge from the hyper node containing the old subtree to the hyper node containing the new subtree.

Along with the signature degree distribution, each hyper node also stores the maximum weight over all subtrees in it, and the number of subtrees attaining that maximum weight. We call this maximum weight the cost of the hyper node.

The advantage of defining this hyper graph is that we can now iterate through its nodes in increasing order of cost using Dijkstra's algorithm.
The starting hyper nodes in Dijkstra's algorithm are the ones which contain a single subtree corresponding to an edge of the graph.

However, there is one issue: how do we actually generate the outgoing edges of a hyper node? If we use the method of expanding subtrees described above, then we need to know all the subtrees contained in a hyper node, which are too many. In order to avoid this, we split a single hyper node into multiple smaller hyper nodes in such a way that the outgoing edges of a smaller hyper node can be generated without considering all of its subtrees.

We call these smaller hyper nodes "mini hyper nodes". A mini hyper node inherits the signature degree distribution from its originating hyper node. However, it also carries a binary vector of size N, recording which nodes of the underlying subtrees are still free to connect to outside nodes, and which nodes are already saturated.

More formally, if the binary vector of a mini hyper node is (1, 0, 1, 1, 0, 0, 0), where a 1 marks a saturated node, that means we can expand the subtrees of this mini hyper node only by adding new nodes connected to any of the nodes 2, 5, 6 or 7, but not to nodes 1, 3 and 4, which are already saturated.

It can be observed that using this extra information we can generate the outgoing edges of a mini hyper node without considering all its subtrees. We only need to look at its free vertices (via the binary vector) and the edges which connect these vertices to the nodes not yet in the subtree (the outside nodes are the ones whose entry in the signature distribution is 0). To find an outgoing edge of the mini hyper node, pick a free vertex and choose any subset of edges connecting this vertex to vertices not in the subtree. The signature of the mini hyper node containing this extended subtree can be computed easily; its binary vector is the same as that of the source mini hyper node, except that the picked vertex is now marked as saturated.

We pick the free nodes in a specific order to avoid multiple edges between mini hyper nodes. One strategy is to always pick the free vertex with the smallest index.

Since we are interested only in degree 3 subtrees, we will consider only those mini hyper nodes whose signature distributions have all entries smaller than 4.
The number of such mini hyper nodes is at most 4^N * 2^N (4^N signature distributions times 2^N binary vectors).
Hence, the complexity of Dijkstra's algorithm will be O(N * 8^N). Note that this is a very crude overestimate, and in practice the number of mini hyper nodes is much lower, as not all quaternary vectors are valid degree distributions.

However, still this algorithm will not work for the original graph where N can be as large as 100. In the next section, we show how to make use of the property specified in the problem to improve the complexity.

Note that, even though this algorithm is exponential, it is much better than the naive version of enumerating all degree three subtrees. The following table shows the number of subtrees and the number of hyper nodes of the graph as N varies (all these numbers are for the worst case scenario, when the underlying graph is a complete graph).

How to use the property to improve the complexity:

The problem has a very specific constraint which says that any subset of 10 or more vertices can be made disconnected by removing a single vertex from the graph. This means that the maximum size of a biconnected component (a subgraph which cannot be disconnected by removing a single vertex) in the graph is 9.

There is a fairly standard algorithm which partitions the graph into maximal biconnected components. The biconnected components are connected together by vertices known as articulation points or cut vertices. These cut vertices are the ones whose removal makes the graph disconnected.
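The standard partitioning algorithm mentioned above is the low-link DFS (due to Hopcroft and Tarjan), which finds cut vertices and groups edges into biconnected components using an edge stack. A minimal sketch, with illustrative names (`BCC`, `isCut`) that are assumptions, not taken from the author's or tester's code:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Sketch of the low-link algorithm on a simple undirected graph:
// disc[] is the DFS discovery time, low[] the lowest discovery time
// reachable from a vertex's subtree via at most one back edge.
struct BCC {
    int n, timer = 0;
    vector<vector<int>> adj;
    vector<int> disc, low;
    vector<bool> isCut;
    vector<pair<int,int>> edgeStack;
    vector<vector<pair<int,int>>> components; // edges of each biconnected component

    BCC(int n) : n(n), adj(n), disc(n, -1), low(n), isCut(n, false) {}
    void addEdge(int u, int v) { adj[u].push_back(v); adj[v].push_back(u); }

    void dfs(int u, int parent) {
        disc[u] = low[u] = timer++;
        int children = 0;
        for (int v : adj[u]) {
            if (v == parent) continue;          // simple graph assumed
            if (disc[v] == -1) {
                children++;
                edgeStack.push_back({u, v});
                dfs(v, u);
                low[u] = min(low[u], low[v]);
                if (low[v] >= disc[u]) {
                    // u separates v's subtree: pop one biconnected component
                    if (parent != -1) isCut[u] = true;
                    components.push_back({});
                    while (components.back().empty() ||
                           components.back().back() != make_pair(u, v)) {
                        components.back().push_back(edgeStack.back());
                        edgeStack.pop_back();
                    }
                }
            } else if (disc[v] < disc[u]) {
                edgeStack.push_back({u, v});    // back edge
                low[u] = min(low[u], disc[v]);
            }
        }
        if (parent == -1 && children > 1) isCut[u] = true; // root cut test
    }
    void run() { for (int u = 0; u < n; u++) if (disc[u] == -1) dfs(u, -1); }
};
```

A vertex that appears in more than one popped component is exactly a cut vertex, which is how the block-cut vertex tree below is built.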

Two maximal biconnected components can share at most a single vertex, which must be a cut vertex. Hence, if we choose a subtree that contains nodes from two different biconnected components, it must contain all the cut vertices lying on the path from the nodes of the first biconnected component to the nodes of the second biconnected component.

This is an important observation because using it we can compute the maximal subtrees of each biconnected component locally and then merge them together. However, we can merge the subtrees of two components only if the two components are connected by a cut vertex and this cut vertex is part of both subtrees.
This is where the algorithm explained in the previous section comes in handy, as it gives the maximum weight of a subtree having a fixed degree distribution. During the merging step we can use this degree distribution to ensure that only those subtrees are considered which contain the common cut vertex, i.e., whose degree distributions have a nonzero entry for the cut vertex.

More formally, we can describe the process like this (which is taken verbatim from Mugurel Ionut Andreica's comments):
After identifying the biconnected components and the cut vertices
we construct a bipartite graph which has the biconnected components
on the left side and the cut vertices on the right side. We add an
edge between a biconnected component A and a cut vertex B if B belongs
to A (note that a cut vertex belongs to multiple biconnected components).
This bipartite graph is in fact a tree, called the block-cut vertex tree
(because biconnected components are also called blocks).

We will now perform dynamic programming on the block-cut vertex tree.
To unify biconnected components and cut vertices we use the term "bag".
A biconnected component has multiple vertices in its "bag", while a
cut vertex forms a bag with just 1 vertex. Thus, the block-cut vertex tree
becomes a tree of bags in which the bags with just 1 node in them are the cut
vertices and the other bags are the biconnected components.

We will root the tree at a bag corresponding to a cut vertex (for
simplicity, in order to avoid the special cases in which no cut vertex
exists, we also define node number 1 to be a cut vertex, even if it
may not actually be one).

For each bag we use the algorithm described in the previous section to compute the following tables:
- CMAX[S] = the maximum weight of a degree three subtree containing
only nodes from the current bag and whose degree distribution is S.
- CNT[S] = the number of degree three subtrees having degree distribution S and weight CMAX[S].

Since the size of each biconnected component is at most 9, the exponential algorithm of the previous section fits in the time limit.

Once these tables are computed, we can shrink them by merging some of their entries, as we are only interested in the degree distributions of the cut vertices. If two entries have the same degrees on the cut vertices, we combine them together.

After the reduction step we run a knapsack-like dynamic
programming for the current bag considering each of its children. Note
that for each child we only maintain information for each possible degree
value of the cut vertex which it has in common with the current bag
(thus, only 4 states per child are needed).

We consider each child at a time and combine its 4 states with the states
for the current bag, making sure that the total degree of the common cut vertex
(which is the sum of its degree in the child and in the current bag)
never exceeds 3.
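As a sketch of this merge (all names are illustrative assumptions, not the author's code; modulo arithmetic on counts is omitted, and a child degree of 0 is taken to mean the child contributes nothing), combining one child's four states into the bag's states at the shared cut vertex might look like:

```cpp
#include <bits/stdc++.h>
using namespace std;

// (max weight, count) pairs; ties in weight add their counts.
using State = pair<long long, long long>; // {best weight, #subtrees with it}
const long long NEG = LLONG_MIN / 4;      // "no subtree with this degree"

State combine(State a, State b) {
    if (a.first == NEG || b.first == NEG) return {NEG, 0};
    return {a.first + b.first, a.second * b.second};
}
void relax(State &dst, State cand) {
    if (cand.first > dst.first) dst = cand;
    else if (cand.first == dst.first && cand.first != NEG) dst.second += cand.second;
}

// bag[d]  : best state when the cut vertex has degree d inside the bag
// child[d]: best state when the cut vertex has degree d inside the child
// The cut vertex's total degree (bag + child) must never exceed 3.
array<State, 4> mergeChild(const array<State, 4> &bag, const array<State, 4> &child) {
    array<State, 4> out;
    out.fill({NEG, 0});
    for (int db = 0; db <= 3; db++)
        for (int dc = 0; dc + db <= 3; dc++)
            relax(out[db + dc], combine(bag[db], child[dc]));
    return out;
}
```

Adding tied counts in `relax` is what keeps the number of maximum weight subtrees correct while merging one child at a time, knapsack-style.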

The optimal solution is not necessarily found at the root,
as we might expect. Actually, a potential optimal solution needs
to be considered whenever a state may not be extended further up
in the tree:
- when a state in the current bag has degree 0 for all the cut vertices
which it has in common with its remaining children and its parent
- when the current bag contains just 1 vertex (because this vertex
may be discarded in its parent).

Time Complexity:

O (N * 8^K), where K is the size of the largest biconnected component.

AUTHOR'S AND TESTER'S SOLUTIONS:

Author's solution can be found here.
Tester's solution can be found here.

The most time-intensive part of my algorithm is finding the number of subtrees of maximal cost for every degree distribution, for every biconnected component (of size K ≤ 9).

Let S be a map of the form {node1:degree1,node2:degree2...} where 1≤degree≤3 (we call S a degree distribution). Since the degree of each node in the subtree is at most 3, there are exactly 4^K degree distributions per component.

Now let dp[S] be the maximum cost of a connected subtree whose node set is exactly S.keys() and with exactly that degree distribution.

We add one node at a time, in increasing order of ID. For the current node i:
1. Calculate dp[S] for all degree distributions S such that S[i] = 1:
This involves selecting a degree distribution A from [0,i), choosing a node x to attach i to (A[x] should either be 1 or 2 so i can be attached to x successfully), and setting dp[S] = max(dp[S], dp[A]+Edge[i][x]), where S = A+{x:1,i:1}. (here the sum of two degree distributions A and {x:1,i:1} is formed by summing degrees of corresponding nodes).
This runs in time O(i·4^i).
2. Calculate dp[S] for all degree distributions S such that S[i] > 1:
This involves selecting two disjoint degree distributions A, B from [0,i), and setting dp[S] = max(dp[S], dp[A+{i:1}]+dp[B+{i:d-1}]), where S=A+B+{i:d} and 2≤d≤3.
This runs in time O(7^i): each of the i earlier nodes is either absent from both distributions, in A with one of 3 degrees, or in B with one of 3 degrees, giving 7 options per node.

We also track the number of times such maximum-cost trees can be created, as part of the DP. Care must be taken to ensure that every subtree is counted exactly once. For speedups, a degree distribution S can be encoded as a 32-bit integer, where every two bits 2·i and 2·i+1 correspond to S[i] (this has the added advantage that the sum of two distributions A and B is simply A+B).
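The packing described above might be sketched as follows (function names are illustrative assumptions):

```cpp
#include <cassert>
#include <cstdint>

// A degree distribution over K <= 9 nodes packed into one 32-bit integer:
// bits 2*i and 2*i+1 hold S[i] in 0..3. When two distributions A and B are
// disjoint (or their per-node sums stay <= 3), adding the packed integers
// never carries between 2-bit fields, so the sum of distributions is A + B.
static inline uint32_t getDeg(uint32_t s, int i) { return (s >> (2 * i)) & 3u; }
static inline uint32_t setDeg(uint32_t s, int i, uint32_t d) {
    return (s & ~(3u << (2 * i))) | (d << (2 * i));
}
```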

The overall runtime of all cases 1 and 2 is O(K·4^K) and O(7^K), respectively. Therefore, worst-case complexity is O(7^K) (which can be triggered by a K-clique with edges of cost 0). Across all components, the overall running time of this DP is O(C·7^K), where C is the number of biconnected components.

The rest of the program (calculating biconnected components, dynamic programming on components, and adding "children" subtrees from different components) runs asymptotically faster (at most O(C·K·4^K)), so the overall runtime is O(C·7^K).

The maximum subtrees for all degree distributions can be found in O(F(K) * K) time and O(F(K)) memory, where F(K) is the overall number of degree distributions for the given size K. I believe that F(K) = O(4^K / sqrt(K)) (see P.S.)

First we can enumerate all degree distributions as base-4 numbers by iterating over the number of vertices of degree 2 (call it B) and degree 3 (call it C). For fixed B and C, the number of vertices of degree 1 is A = C + 2 (in a tree the degrees sum to twice the number of edges, which forces exactly C + 2 leaves). Now we can make an array of A ones, B twos, C threes and K-A-B-C zeros and iterate over all actual degree distributions using, e.g., next_permutation in C++. This takes O(F(K) * K) time. Alternatively, some smart recursion can recompute the base-4 mask in O(1) per step and generate them all in O(F(K)) time (e.g., a Knuth permutation generation algorithm that swaps exactly two numbers each iteration). We also need to sort them, which in the lazy case requires O(F(K) * log F(K)) time; with more smartness in the recursion we can generate them already in sorted order.
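A sketch of this enumeration (the function name and return type are assumptions for illustration). Note that not every generated tuple is realizable by a subtree of the given graph; the DP simply never reaches the unrealizable ones.

```cpp
#include <bits/stdc++.h>
using namespace std;

// Enumerate all candidate tree degree distributions over K labelled nodes:
// pick B nodes of degree 2 and C of degree 3; a tree then has exactly
// A = C + 2 leaves. next_permutation walks every placement of the
// multiset {A ones, B twos, C threes, K-A-B-C zeros}.
vector<vector<int>> allDistributions(int K) {
    vector<vector<int>> result;
    for (int C = 0; C + 2 <= K; C++) {
        int A = C + 2;                          // leaves forced by the degree-3 count
        for (int B = 0; A + B + C <= K; B++) {
            vector<int> d;                      // built in ascending order, i.e.
            d.insert(d.end(), K - A - B - C, 0); // already the smallest permutation
            d.insert(d.end(), A, 1);
            d.insert(d.end(), B, 2);
            d.insert(d.end(), C, 3);
            do result.push_back(d);
            while (next_permutation(d.begin(), d.end()));
        }
    }
    return result;
}
```

For K = 3 this yields the 6 distributions of a single edge ((1,1,0) and its placements) and of a 3-node path ((1,1,2) and its placements).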

Now that we have the sorted list of all degree distributions, we iterate over it and compute the DP for each mask, which is the pair (max, cnt), where max is the cost of the tree and cnt is the count of subtrees with cost max. For each mask we find a vertex (say V) with degree one (each distribution has at least two such vertices) and iterate over its neighbors (say U) with degree at least 2 (unless the mask itself has only two ones and no 2s or 3s). For a given U we look up the DP value for the mask with the degrees of U and V decreased by 1 and update our DP accordingly. Clearly in this way we consider each subtree exactly once, so the DP is calculated correctly.

This also shows that we don't actually need the list of masks in strictly sorted order. It is enough for the list to be sorted by the sum of degrees in the mask, which can obviously be done in O(F(K)) time by counting sort. Or we can simply iterate over B and C in a smart way to create this array in sorted-by-sum order on the fly. But in my solution I was extremely lazy and simply used a trivial nested loop over B and C, next_permutation and sort :P

Note that this approach can easily be extended to any other restriction on the degrees of the vertices. The complexity becomes O((D+1)^K * sqrt(K)), where D is the bound on the degree. However, larger D requires a smarter inner DP for aggregating the DPs of children biconnected components, while for D = 3 it is easier to hardcode all possibilities.

You can check out my solution for details.
I tried to make it clean and well-organized, so it should be clear after this explanation.