Single Round Match 741 Editorials


SRM 741 was held on 30th October. The problem set was prepared by Blue.Mary. Thanks to misof for testing and preparing the editorials. This was the last SRM in stage 1 of TCO19 Algorithm Qualification. Congratulations to tourist for qualifying for TCO19 Algorithm Finals!

As is usually the case with the easy problem in Division 2, the constraints are so small that any correct solution will pass. And as is usually the case, the easiest way to write a correct solution is to use brute force.

In this problem, brute force looks as follows: We’ll generate all possible substrings of S. We can do this simply by trying all possibilities for the indices of its beginning and end. For each of the substrings, we will check whether it starts with a non-zero digit, and then we convert it to an integer and compare it to X.

In order to convert a string to an integer, you can either use a built-in function (most languages will have one), or you can iterate over the string one character at a time, as follows:
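In the latter case, the conversion loop can look like the following sketch (the helper name is ours, not from the reference solution):

```cpp
#include <string>

// Hypothetical helper: convert a string of digits to a number,
// one character at a time, without using a built-in parser.
long long toNumber(const std::string &s) {
    long long value = 0;
    for (char c : s) {
        value = 10 * value + (c - '0');  // shift left one decimal digit, add c
    }
    return value;
}
```

Each substring of S can then be passed through this function and the result compared to X directly.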

The function value f(n) is defined as the sum of g(n,p) over all primes p. However, if p > n, then clearly p does not divide n, which means that g(n,p) is zero. Thus, when computing f(n) we only care about primes that are less than or equal to n.

The most straightforward way to solve this task was to actually compute the functions f and g as specified in the statement. Computing g is easy: we simply compute powers of p until we are about to exceed n. This runs in O(log n) time, which is very fast. (In the worst case, for n=444,777 and p=2, the function will still make fewer than 20 iterations.)

long long g(long long n, long long p) {
    // assumes that we already checked that p divides n
    long long answer = 1;
    while (answer * p <= n) answer *= p;
    return answer;
}

Now we need to implement f. In order to compute f(n), we need to sum all non-zero g(n,p). In other words, we need to find all primes p that divide n, and we need to compute g for each of them. How can we do that quickly?

One simple way to find all prime factors of n (and, in fact, the full factorization of n) uses one simple observation: among all the primes in n’s prime factorization, at most one can be bigger than the square root of n. Why? Because the product of even two such primes would already exceed n.

Based on this observation, we can now write the following implementation of f:
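A sketch of such an implementation (g from above is repeated so that the snippet is self-contained):

```cpp
// g(n, p): the largest power of p not exceeding n
// (assumes that we already checked that p divides n)
long long g(long long n, long long p) {
    long long answer = 1;
    while (answer * p <= n) answer *= p;
    return answer;
}

// f(n): the sum of g(n, p) over all primes p dividing n
long long f(long long n) {
    long long original = n;  // g must be evaluated with the original n
    long long answer = 0;
    for (long long d = 2; d * d <= n; ++d) {
        if (n % d != 0) continue;
        answer += g(original, d);    // d is guaranteed to be prime here
        while (n % d == 0) n /= d;   // remove every copy of d from n
    }
    if (n > 1) answer += g(original, n);  // at most one prime above sqrt remains
    return answer;
}
```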

If we don’t count the calls to g, this implementation of f obviously runs in O(sqrt(n)). Thus, for any n <= 444,777 it will only require at most sqrt(444,777) <= 667 iterations, and usually it will be even fewer than that.

Two things to note in the above implementation: First, whenever we find a d that actually divides n, we can be sure that this must be a prime number. It cannot be the product of smaller prime numbers, because we already tried all of those and we divided n by each of those, so the current value of n isn’t divisible by anything smaller than d. Second, note that the value of n changes during the computation, and that new, smaller n is then used as the upper bound for the cycle. This is still correct, because the same argument still applies: the value that remained in n is either a prime, or it still has a divisor that is at most equal to its own square root.

To finish the solution, we now just run a simple for-cycle to compute the sum f(1) + … + f(X).

Another reasonably simple solution is based on the following steps:

Use the Sieve of Eratosthenes (or a primality test similar to the algorithm used above) to generate all primes up to X.

For each prime p in that set, go through n=p, 2p, 3p, … until you reach X, and for each such n compute g(n,p) and add it to the total.
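The two steps above can be sketched as follows (the function name is ours; g(n,p) is taken to be the largest power of p not exceeding n, as in the solution above):

```cpp
#include <vector>

// Sum f(1) + ... + f(X) by sieving for primes p and then walking over
// the multiples n of each prime, adding g(n, p) to the total.
long long sumOfF(long long X) {
    std::vector<bool> isComposite(X + 1, false);
    long long total = 0;
    for (long long p = 2; p <= X; ++p) {
        if (isComposite[p]) continue;  // p is prime
        for (long long n = p; n <= X; n += p) {
            if (n > p) isComposite[n] = true;  // sieve step
            long long power = p;               // g(n, p): largest power of p <= n
            while (power * p <= n) power *= p;
            total += power;
        }
    }
    return total;
}
```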

There are much faster solutions, but those were not needed to get accepted in the Division 2 version of this problem. If you want to see them, one will be presented below.

The main observation needed to solve this problem is that regardless of the shape and orientation of a tromino, its perimeter is always 8. Hence, each tromino is adjacent to at most 8 other unit squares. And as we have 10 available colors, we can color the trominoes greedily. More precisely, we can do the following:

for each tromino on the given board (in any order):
    look at the colors already used on squares adjacent to this tromino
    pick a color that is not among them
    use that color for this tromino

All that remains is to implement the above algorithm in a painless way. Let’s do one more iteration of writing pseudocode, but now with more details.

for each row r:
    for each column c:
        if the square at (r,c) belongs to an uncolored tromino:
            find the rest of the tromino
            find all squares adjacent to those
            collect all their colors, etc.

In order to find the rest of the tromino, we can use any standard graph search (e.g., BFS or DFS), but I opted for a different technique: I wrote a function that would generate all adjacent cells to a given cell by simply trying all four directions. Then, you can generate the tromino by essentially iterating this function twice. More precisely, you take all the neighbors of the original cell that share the same color in the input, and then you take all their neighbors with the same property, and you have the tromino. As a bonus, you can then use the same function in the next step, only now you take the cells that had a different color in the input.
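A sketch of the whole greedy pass. We assume here (this is our format, not necessarily the exact one from the statement) that the board is given as a grid of characters in which the three cells of a tromino share a character and touching trominoes never share one; the output recolors every cell with a digit '0'–'9'.

```cpp
#include <set>
#include <string>
#include <utility>
#include <vector>

const int DR[4] = {-1, 1, 0, 0};
const int DC[4] = {0, 0, -1, 1};

std::vector<std::string> recolor(const std::vector<std::string> &in) {
    int R = in.size(), C = in[0].size();
    std::vector<std::string> out(R, std::string(C, '?'));
    for (int r = 0; r < R; ++r)
        for (int c = 0; c < C; ++c) {
            if (out[r][c] != '?') continue;  // this tromino is already colored
            // find the rest of the tromino: take same-labeled neighbors,
            // then same-labeled neighbors of those (two rounds suffice)
            std::set<std::pair<int, int>> cells = {{r, c}};
            for (int round = 0; round < 2; ++round) {
                std::set<std::pair<int, int>> next = cells;
                for (auto [cr, cc] : cells)
                    for (int d = 0; d < 4; ++d) {
                        int nr = cr + DR[d], nc = cc + DC[d];
                        if (nr >= 0 && nr < R && nc >= 0 && nc < C &&
                            in[nr][nc] == in[r][c])
                            next.insert({nr, nc});
                    }
                cells = next;
            }
            // collect the colors already used on adjacent trominoes
            std::set<char> used;
            for (auto [cr, cc] : cells)
                for (int d = 0; d < 4; ++d) {
                    int nr = cr + DR[d], nc = cc + DC[d];
                    if (nr >= 0 && nr < R && nc >= 0 && nc < C &&
                        in[nr][nc] != in[r][c] && out[nr][nc] != '?')
                        used.insert(out[nr][nc]);
                }
            // pick the smallest color not used by any neighbor
            char color = '0';
            while (used.count(color)) ++color;
            for (auto [cr, cc] : cells) out[cr][cc] = color;
        }
    return out;
}
```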

Exercise: Above, we have shown that 10 colors are enough. However, this algorithm would actually work with fewer than 10 colors. What is the smallest number of colors C for which the above algorithm still works? And is that number of colors worst-case optimal, or is there another algorithm that will always find a coloring with fewer than C colors?

There are 2^47 ways to erase characters from a 47-character string, and we cannot afford to check all of them. Thus, we need some clever way of counting.

Almost all the ways to erase characters will be good. As long as we make sure that the first character we leave is not a ‘0’ and that we leave at least 10 characters, we will certainly have a number that’s big enough (as the largest valid X has only 9 digits). However, this observation is still not enough for a fast algorithm: there are (47 choose 9) = 1,362,649,145 ways to choose which 9 of the 47 characters we should leave, and in general we would need to examine each of those, even if we could handle all the bigger numbers by some formula.

Another line of thought would be to try some dynamic programming. Imagine that we go through the input string from the left to the right, and for each character we’ll recursively try out both possibilities, one after another: either we keep it or we erase it. How can we describe our state somewhere in the middle of this recursive search? One possibility looks as follows. We need to specify:

How many input characters we already processed.

How many of them we kept.

If we kept at most nine, what is the number they form.

Sadly, the above is still too slow, as for a random string of digits we can eventually get very many different 9-digit numbers, so we still have way too many states.

Luckily, this approach can still be saved; we just need to make one more observation. Suppose X = 456789 and you already kept three digits. The state where your current number is “123” and the state where it is “455” are exactly the same — in either case, appending any 4 more digits gives a number bigger than X, but no 3 appended digits will ever be enough. Similarly, “457” and “989” are equivalent in that any 3 appended digits will give you a number bigger than X. In between these there is a third case, “456”: in other words, the digits we kept so far form a prefix of X.

Thus, we can reduce our state space as follows. Let D be the number of digits of X. Then, each state can be described by the following:

a = How many input characters we already processed.

b = How many of them we kept.

c = 0/1/2, where:

c = 0 means that the number we kept is smaller than the number formed by the first b digits of X

c = 1 means that it is equal

c = 2 means that it is bigger (and we also use this state whenever b > D).

This leaves us with just 47*47*3 states, and for each of them we can compute the answer simply by trying out two possibilities: whether to keep or to erase the next character of the input string.
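A sketch of that DP, with memoization over the (a, b, c) states. We assume here (the exact rules live in the problem statement, which is not reproduced) that the remaining string must be non-empty, must not start with '0', and must read as a number strictly bigger than X:

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Count the ways to erase characters of S so that the remaining digits
// form a number strictly bigger than X (no leading zero allowed).
long long countWays(const std::string &S, const std::string &X) {
    int N = S.size(), D = X.size();
    // memo[a][b][c]: a = processed chars, b = kept chars (capped at D+1),
    // c = 0/1/2 as the kept digits compare to the first b digits of X
    std::vector<std::vector<std::vector<long long>>> memo(
        N + 1, std::vector<std::vector<long long>>(
                   D + 2, std::vector<long long>(3, -1)));
    std::function<long long(int, int, int)> go =
        [&](int a, int b, int c) -> long long {
        if (a == N)  // done: bigger iff more digits than X, or equal length and bigger
            return (b > D || (b == D && c == 2)) ? 1 : 0;
        long long &res = memo[a][b][c];
        if (res != -1) return res;
        res = go(a + 1, b, c);           // erase S[a]
        if (!(b == 0 && S[a] == '0')) {  // keep S[a], unless it is a leading zero
            int nb = std::min(b + 1, D + 1);
            int nc;
            if (b + 1 > D) nc = 2;       // longer than X, hence bigger
            else if (c != 1) nc = c;     // already decided on an earlier digit
            else nc = (S[a] < X[b]) ? 0 : (S[a] == X[b] ? 1 : 2);
            res += go(a + 1, nb, nc);
        }
        return res;
    };
    return go(0, 0, 1);  // the empty prefix equals the empty prefix of X
}
```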

In my implementation I used a slightly different approach that is easier to code. I simply noticed that, for example, if you have X = 456789, then any prefix that is between “457” and “4566”, inclusive, is equivalent. Then, I just implemented the previous solution, but whenever my current number falls between two prefixes of X, before doing memoization I simply change it to (the shorter prefix of X) + 1. This is implemented in the function “fix” below.

This problem is a generalization of Golomb’s famous problem: show that any square board with side length 2^k and one black cell can be tiled. That version has a short and beautiful recursive solution, and if you don’t know it, try solving that problem before this one.

An obvious necessary condition for a solution to exist is that the number of squares on the n times n board must be of the form 3k+1 for some k. In other words, n cannot be divisible by 3. It turns out that this condition is also sufficient — all remaining boards can be tiled. Even better: all rectangles that have both dimensions at least 2 and area of the form 3k+1 can be tiled.

A good strategy for problems like this one is to reduce the bigger problems to smaller ones: find a way to take any large instance and construct a partial solution (in our case, a partial tiling) in such a way that the part that remains to be solved is another valid and solvable instance. If you can do that, you can then take an arbitrarily large instance and apply the above steps repeatedly, until you get one of finitely many constant-size instances that are too small for the rule to apply.

Another good strategy: when left with finitely many cases, before you solve them manually, use symmetry to reduce their number. In our case, we can flip the board diagonally, horizontally, and/or vertically to get it into one of very few “canonical positions”.

For the first part of this solution, note that if you have to tile a large rectangle, one thing you can do is choose a side that is far from the black square, and tile the first 3 rows/columns along that side by laying down a sequence of I-shaped trominoes. This leaves you with a smaller rectangle to tile. If you repeat steps of this form, you will eventually be left with one of finitely many tiny rectangles.

The pseudocode of the resulting solution follows. (R and C denote the number of rows and columns of the current board, rb and cb denote the coordinates of the black square within that board.)

while True:
    if R >= 5 and you can tile the first or the last three rows, do so
    if C >= 5 and you can tile the first or the last three columns, do so
    if you didn't tile anything new, break

let “board” denote the part that remains to be tiled

flip the board horizontally and vertically, if needed, to get the black cell into its upper left quarter

apply a manually-constructed tiling for this small canonical board, then flip that solution to undo the flips you did above

once you have the full tiling, solve the Div2 version of this task (i.e., just color the trominoes greedily, always using the smallest available color)

Can you find a solution with fewer cases that need to be solved by hand? There certainly are such solutions. The main reason why I picked this one was that it’s conceptually as simple as can be, even if it may require a bit more manual work.

Obviously, this is a problem about primes. Also obviously, 3,333,377,777 is quite a large number.

After parsing the problem statement it’s quite obvious that the sum f(1) + … + f(X) only involves primes up to X, and one possible strategy is to consider each of those primes separately. Our hopes die in a fire as soon as a quick query to Wolfram Alpha confirms what we already estimated: there are over 159 million primes in that range and we cannot afford to generate them and consider each of them separately.

On the other hand, the square root of X is small enough (less than 60,000) and the number of primes up to sqrt(max X) is less than 6,000. This observation gives us a possible line of attack:

Find all primes up to sqrt(X).

For each of these primes, count its contribution to the result.

Do something magical to find the contribution of all those larger primes.

Well, steps 1 and 2 are easy, but what about step 3? The intuition why we can be hopeful that this approach can succeed is that these primes are “easier” than the ones in the general case: if you have a prime p > sqrt(X), p^2 is already too large, and therefore each n that is divisible by p only contributes p to the total. In other words, the contribution of this p to the sum of all f(n) is simply (p times (the number of n divisible by p)).

Thus, the sum we need is sum( p*(X div p) ) over all primes p in (sqrt(X),X]. The question remains: how can we find this sum without finding all of the primes?
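This claim is easy to sanity-check by brute force for small X (the function names are ours, and isPrime and both sums are deliberately naive):

```cpp
// Naive primality test, fine for small sanity checks.
bool isPrime(long long n) {
    if (n < 2) return false;
    for (long long d = 2; d * d <= n; ++d)
        if (n % d == 0) return false;
    return true;
}

// The formula: sum of p * (X div p) over primes p in (sqrt(X), X].
long long largeContrib(long long X) {
    long long total = 0;
    for (long long p = 2; p <= X; ++p)
        if (p * p > X && isPrime(p)) total += p * (X / p);
    return total;
}

// The actual contribution of those primes to f(1) + ... + f(X),
// where g(n, p) is the largest power of p not exceeding n.
long long bruteContrib(long long X) {
    long long total = 0;
    for (long long n = 1; n <= X; ++n)
        for (long long p = 2; p <= n; ++p)
            if (p * p > X && isPrime(p) && n % p == 0) {
                long long power = p;
                while (power * p <= n) power *= p;
                total += power;  // always equals p, as p*p > X >= n
            }
    return total;
}
```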

Let’s split these primes into smaller groups according to the value of (X div p). For each of these groups, we will compute the sum of primes it contains, and we will then multiply their sum by the corresponding value.

Since each of these primes is bigger than sqrt(X), the value of (X div p) lies in the interval [1,sqrt(X)). For each d >= 1, the primes p with X div p = d lie in the interval (X/(d+1),X/d]. (Both endpoints are reals, not necessarily integers.)

Let S(n) denote the sum of all primes that are less than or equal to n. We can compute the sum of primes in each of our intervals by computing S(n) for the sqrt(X) values of n that are of the form X/d, and taking differences.

Additionally, let T(n,x) denote the sum of all numbers between 1 and n that do not have any divisor between 2 and x, inclusive. (The number 1 always qualifies, and so does every prime greater than x.) If p is the largest prime <= sqrt(n), no composite number up to n passes that test, so S(n) = T(n,p) + S(p) – 1, where the –1 accounts for the 1 that T always counts. The values S(p) we need in this step can be easily precomputed, so all we need is T(n,p).

A clever way to compute these values is based on the following recurrences:

T(n,1) = n(n+1)/2

T(n,p) = T(n,p’) – p*T(n/p,p’), where p’ is the biggest prime smaller than p, or 1 if there is no such prime

(For the recursive step, we need to subtract the sum of all numbers that are <=n and whose smallest prime factor is exactly p. When we divide each of these numbers by p, we get precisely the numbers that are <=n/p and whose smallest factor is p or more: exactly the numbers summed by T(n/p,p’). Again, note that the first argument n/p is a real, not necessarily an integer.)

If we precompute answers with n <= sqrt(X), the remaining states are rather sparse. There are sqrt(X) possible values for the first argument: X, X/2, …, X/floor(sqrt(X)). If the first argument is n, for the second one we only need to consider primes up to sqrt(n). Summing up those, we can show that the total number of states we’ll need to evaluate is O(X^(3/4) / log(X)).
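One compact way to realize these recurrences is to evaluate them iteratively over the O(sqrt(X)) distinct values of X div d, sieving prime by prime (a Lucy_Hedgehog-style sketch; this is our code, not the reference solution):

```cpp
#include <cmath>
#include <vector>

// S(n) = sum of primes <= n, evaluated simultaneously for every n
// of the form X div d, by applying the T-style recurrence prime by prime.
long long sumPrimes(long long X) {
    long long r = (long long)std::sqrt((double)X);
    while (r * r > X) --r;
    while ((r + 1) * (r + 1) <= X) ++r;
    std::vector<long long> V;  // all distinct values of X div d, descending
    for (long long d = 1; d <= r; ++d) V.push_back(X / d);
    for (long long v = X / r - 1; v >= 1; --v) V.push_back(v);
    auto idx = [&](long long v) -> size_t {  // position of the value v in V
        return (v > r) ? (size_t)(X / v - 1) : (V.size() - (size_t)v);
    };
    // S[i] starts as the sum 2 + 3 + ... + V[i]; after sieving with all
    // primes p <= r it equals the sum of the primes up to V[i]
    std::vector<long long> S(V.size());
    for (size_t i = 0; i < V.size(); ++i) S[i] = V[i] * (V[i] + 1) / 2 - 1;
    for (long long p = 2; p <= r; ++p) {
        if (S[idx(p)] == S[idx(p - 1)]) continue;  // p was sieved out: composite
        long long sp = S[idx(p - 1)];              // sum of primes below p
        // remove the numbers whose smallest prime factor is exactly p
        for (size_t i = 0; i < V.size() && V[i] >= p * p; ++i)
            S[i] -= p * (S[idx(V[i] / p)] - sp);
    }
    return S[idx(X)];
}
```

For X around 3.3 * 10^9 all intermediate sums still fit into a signed 64-bit integer, which is why plain long long suffices above.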