Single Round Match 743 Editorials

PlayingWithPlanks – Div2 Easy

The intended solution for this problem is simple — test whether C(pieces+1, 2) is at most plankLength or not. If it is bigger, return “impossible”, and otherwise return “possible”.

We now sketch why this approach is correct.

Observe that if C(pieces+1, 2) > plankLength, then we cannot construct a valid solution. Even if all the assigned lengths are the smallest possible distinct positive integers (the i-th piece having length i), their sum is already C(pieces+1, 2).

On the other hand, if C(pieces+1, 2) <= plankLength, we can set the lengths of the first pieces-1 pieces to be 1 through pieces-1, and give the last piece length plankLength-C(pieces, 2) >= pieces.
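The whole check fits in a few lines. A minimal sketch (the function name and signature are ours, not the original method stub):

```python
def playing_with_planks(plank_length, pieces):
    # The smallest feasible total uses distinct lengths 1, 2, ..., pieces,
    # which sum to C(pieces + 1, 2) = pieces * (pieces + 1) // 2.
    minimum = pieces * (pieces + 1) // 2
    return "possible" if minimum <= plank_length else "impossible"
```

For example, with pieces = 4 the minimum total is 10, so a plank of length 10 suffices while a plank of length 9 does not.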

FriendlyRooks – Div2 Medium

This task is a graph theory problem. Namely, consider the graph whose vertex set consists of the rooks. Two vertices, i.e., two rooks, are connected by an edge iff all three conditions are satisfied for that pair (the rooks are in the same row or in the same column, and there are no other rooks between them).

It is not hard to see that in this graph all the rooks that are in one connected component have to be of the same color. To see that, assume that two rooks in the same component have different colors and consider a shortest path between differently colored rooks. On this shortest path there exists an edge whose endpoints have different colors, which violates friendly coloring.

Similarly, if two rooks are not in the same component, then they can be colored by different colors.

This task now comes down to building such a graph. The final output is the number of connected components of this graph.

To count the number of connected components one can simply run DFS or BFS, which is more than fast enough to solve this problem within the time limit.

Note: Observe that out of those three conditions, the third one (that there should be no rooks between two rooks attacking each other) can be omitted without changing the answer: two rooks in the same row or column with other rooks between them are still linked through those intermediate rooks, so the components stay the same. For the sake of easier implementation we can therefore simply ignore this constraint.
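With that simplification, a sketch only needs to link rooks that share a row or a column and count components, e.g. with a small union-find. The board encoding below (a list of strings with '.' for empty cells) and the function name are our assumptions:

```python
def count_colors(board):
    # Collect rook positions; any non-'.' character is a rook.
    rooks = [(r, c) for r, row in enumerate(board)
             for c, ch in enumerate(row) if ch != '.']
    n = len(rooks)
    parent = list(range(n))

    def find(x):
        # Find the root of x with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Link every pair of rooks sharing a row or a column.
    for i in range(n):
        for j in range(i + 1, n):
            if rooks[i][0] == rooks[j][0] or rooks[i][1] == rooks[j][1]:
                union(i, j)

    # One color per connected component.
    return len({find(i) for i in range(n)})
```
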

MaximizingGCD – Div2 Hard and Div1 Easy

We reduce this task to the following subproblem: for a given g, decide whether there is a pairing C such that each number of C is divisible by g. This question can be answered as follows.

Testing whether g divides each number of some pairing:

For a given g, compute A[i]%g for each i. The number of i’s such that A[i]%g == 0 has to be even. If g is even, then the number of i’s such that A[i]%g == g/2 has to be even as well. For every other residue k, the number of i’s such that A[i]%g == k has to equal the number of j’s such that A[j]%g == g-k.
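The residue counting above can be sketched as follows (the function name is ours; it returns True iff a pairing with all pair sums divisible by g exists):

```python
from collections import Counter

def pairing_exists(A, g):
    # Count how many elements fall into each residue class modulo g.
    cnt = Counter(a % g for a in A)
    for r in cnt:
        if r == 0 or 2 * r == g:
            # Residues 0 and g/2 must pair among themselves.
            if cnt[r] % 2 != 0:
                return False
        elif cnt[r] != cnt.get(g - r, 0):
            # Residue r must pair with residue g - r, one for one.
            return False
    return True
```
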

So, we now know how to decide whether there exists a pairing that certifies that the output to the problem is at least g. But how do we find all the relevant values of g for which we should run the above procedure? One choice is to try all the values 1 through 2 * 10^9 in place of g, but that would be too slow. Next we discuss how to select a small but sufficient set of candidates for g.

A small number of relevant g’s:

Observe that A[0] has to be paired with some element of A. So, the final output has to divide A[0]+A[i] for some i. In our solution, for each divisor d of A[0]+A[i] we ask whether there exists a pairing as described above (for g = d). Luckily, the number of positive divisors of an integer X is at most 2 * sqrt(X). In our case, that is at most 2 * sqrt(2 * 10^9) divisors of A[0]+A[i], for any fixed i. Since there are at most 30 candidates A[0]+A[i], the total number of divisors of interest is at most 30 * 2 * sqrt(2 * 10^9). For each of those divisors we invoke the procedure described above. The largest candidate for which the procedure reports that a corresponding pairing exists is the output of the solution.
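Putting the two parts together, a sketch of the whole solution might look as follows; the residue check described earlier is repeated here so the snippet is self-contained, and all names are illustrative:

```python
from collections import Counter

def pairing_exists(A, g):
    # True iff A can be split into pairs whose sums are all divisible by g.
    cnt = Counter(a % g for a in A)
    for r in cnt:
        if r == 0 or 2 * r == g:
            if cnt[r] % 2 != 0:        # 0 and g/2 pair among themselves
                return False
        elif cnt[r] != cnt.get(g - r, 0):  # r must pair with g - r
            return False
    return True

def maximizing_gcd(A):
    # A[0] must be paired with some A[i], so the answer divides A[0] + A[i].
    candidates = set()
    for i in range(1, len(A)):
        s = A[0] + A[i]
        d = 1
        while d * d <= s:              # enumerate all divisors of s
            if s % d == 0:
                candidates.update((d, s // d))
            d += 1
    # The answer is the largest candidate admitting a valid pairing.
    return max(g for g in candidates if pairing_exists(A, g))
```

For instance, maximizing_gcd([1, 3, 2, 4]) pairs (1, 4) and (3, 2), both sums equal to 5, so the answer is 5.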

Div1 Medium

Note that since each element ranges from 0 to 50 and it never pays off to keep subarrays of negative value, there are at most 2501 possibilities for each of the second and the third dimensions of dp. The sequence has length at most 50, so there are at most 50 possibilities for the first dimension. Furthermore, from each state we make only two transitions. So, the overall running time of the solution above is 2 * 50 * 2501 * 2501, which fits in the time limit. However, defining the full dp as described above would require too much memory.

To reduce the memory, we use a standard trick. Instead of storing all 50 layers of the first dimension of dp at each step of the algorithm, we store only two consecutive ones. This is possible because to fill dp[idx] we only need the information from dp[idx + 1]. This cuts the space by a factor of 25, which is good enough to store the compressed dp table in memory.
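The rolling trick is independent of the particular recurrence. A toy sketch with a made-up transition (each state either stays at s or moves to s + 1), just to illustrate keeping only two layers alive:

```python
def rolling_dp(n, width):
    # Base layer dp[n][*]; only two layers are ever stored.
    nxt = [1] * width
    for idx in range(n - 1, -1, -1):
        cur = [0] * width
        for s in range(width):
            # Hypothetical transition: keep state s, or move to state s + 1.
            cur[s] = nxt[s] + (nxt[s + 1] if s + 1 < width else 0)
        nxt = cur               # layers older than idx + 1 are discarded
    return nxt[0]
```

Only nxt (layer idx + 1) and cur (layer idx) are alive at any moment, so memory is O(width) instead of O(n * width).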

Alternative explanation (courtesy of misof):

Another point of view is as follows. We read the input from left to right and run Kadane’s algorithm in parallel on all possible sequences. After each step, the dp values simply represent the probabilities with which each state of the algorithm is reached. Here, a state is the pair

(best solution so far, best solution that ends at the end of what we processed).

PermuteTheArray – Div1 Hard

This is a mix of graph theory and dynamic programming. We split the explanation into two parts. First, we describe how to test whether there exists any permutation satisfying the constraints. Then we show how to use this procedure to construct the lexicographically smallest one.

Verifying whether there exists any permutation:

Make a graph on n vertices, vertex i corresponding to the i-th index of the output. Two vertices u and v are connected iff there exists i such that x[i]==u and y[i]==v. If d[i]==1, then color the edge {u, v} in red and otherwise color {u, v} in blue.

Now, for every component, we start from an arbitrary vertex and assign label 0 to it. Then we run DFS to assign labels to the other vertices as well. If two vertices are connected by a blue edge, then they should have the same label; otherwise, one vertex should have label 0 and the other one label 1. This also implies that, within a component, all vertices with the same label must be assigned values of the same parity. If at any point we discover an edge that does not follow these rules, we conclude that there is no such array P.
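The labeling pass can be sketched as an iterative DFS in which every edge carries a flag saying whether its endpoints must match (blue) or differ (red). All names here are ours, and None signals the contradiction case:

```python
def label_components(n, edges):
    # edges: list of (u, v, same); same=True for a blue edge (equal labels),
    # same=False for a red edge (labels must differ).
    adj = [[] for _ in range(n)]
    for u, v, same in edges:
        adj[u].append((v, same))
        adj[v].append((u, same))

    label = [-1] * n   # 0/1 label per vertex, -1 = unvisited
    comp = [-1] * n    # representative vertex of each component
    for start in range(n):
        if label[start] != -1:
            continue
        label[start] = 0
        comp[start] = start
        stack = [start]
        while stack:
            u = stack.pop()
            for v, same in adj[u]:
                want = label[u] if same else 1 - label[u]
                if label[v] == -1:
                    label[v] = want
                    comp[v] = start
                    stack.append(v)
                elif label[v] != want:
                    return None  # contradictory edge: no valid P exists
    return label, comp
```
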

Once we have all the 0-1 labels, for every component we count the number of vertices labeled 0 and the number labeled 1. For component comp, let those counts be cnt0[comp] and cnt1[comp]. Then, to comp we should assign either cnt0[comp] even and cnt1[comp] odd values of val, or the other way around, i.e., cnt0[comp] odd and cnt1[comp] even values. Moreover, we should make these assignments for all the components simultaneously, and each element of val should be assigned to exactly one component. If this is not possible to achieve, then there is no required P.

Let E be the number of even and O the number of odd values in val. We use dynamic programming to check whether there is a way to distribute them across the components as described. This is a standard DP that can be implemented to run in O(E * number_of_components) time.
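The feasibility check is a subset-sum style dp over components: each component contributes either its cnt0 or its cnt1 to the total count of even values used. A sketch (assuming the caller guarantees sum(cnt0) + sum(cnt1) == E + O, so matching the even count automatically matches the odd one):

```python
def assignment_possible(cnt0, cnt1, E, O):
    # cnt0[c], cnt1[c]: label counts of component c; E, O: how many even
    # and odd values of val are available.
    reachable = {0}  # achievable totals of even values used so far
    for c0, c1 in zip(cnt0, cnt1):
        # Component c takes either c0 evens (and c1 odds) or vice versa.
        reachable = {e + pick for e in reachable for pick in (c0, c1)
                     if e + pick <= E}
    # Every value must be used, so exactly E evens must be consumed.
    return E in reachable
```
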

We now describe how to reconstruct the lexicographically smallest P.

Reconstruction of lexicographically smallest P:

We begin by explaining how to reconstruct P[0]. Let comp be the component containing vertex 0. Without loss of generality, assume that vertex 0 is labeled 0 in comp. Let S_even be the smallest even and S_odd the smallest odd value of val. (If S_even or S_odd does not exist, set its value to infinity.) Assume that S_even < S_odd. Then, we check whether there is a permutation P such that P[0]=S_even. To check that, we test whether we can assign cnt0 and cnt1 of the components other than comp so that the total number of even values and the total number of odd values of val in this assignment is E-cnt0[comp] and O-cnt1[comp], respectively. Here we subtract cnt0[comp] from E (and cnt1[comp] from O) because we are assuming that all the vertices of comp with the same label as vertex 0 are assigned even values of val.

If no such assignment exists, then we set P[0]=S_odd. Note that to verify whether such an assignment exists we can use the same method as in the first part.

Once we set the value of P[0] (to S_even or S_odd), we mark which vertices in comp should receive even values and which odd values. Notice that this is uniquely determined once we decide the parity of vertex 0. Note: at this point we only mark the parity of the remaining vertices in comp; we do not assign any values to them yet.

The analogous approach is applied if S_odd < S_even.

The rest of the reconstruction of P is obtained in a similar way. Namely, assume that so far we have reconstructed P[0], …, P[idx – 1], and next we want to reconstruct P[idx]. If the parity of vertex idx was already decided, then we let P[idx] be the smallest unused value of val which has the same parity as vertex idx.

If the parity of vertex idx has not been decided yet, then we apply the same procedure to idx as we applied to vertex 0 above. We only have to take into account that some of the components have already been processed.