As usual, the round will be rated for Div. 2. It will be held under extended ACM ICPC rules. After the contest ends, you will have one day to hack any solution you want; you will be able to copy any solution and test it locally. You will be given 7 problems and 2 hours to solve them.

In Educational rounds, after those 2 hours of solving, hacks are open for 24 hours. What I was asking was: if I register but don't solve anything and only make some hacks during that 24-hour period, will my rating change?

Lazy propagation can be applied by keeping track of the number of times you want to apply the D(ai) operation on a range. For each range [l : r], store how many numbers have values > 2, because D(2) = 2 and D(1) = 1, so applying the update operation to those two values is redundant. When updating a range, pass the value of Lazy[range] as an argument so that you can apply the D(ai) operation Lazy[range] times to the numbers > 2 in the range, and then return the updated numbers. I hope this helps.

My idea is: first, form a graph from the non-edges. Insert every element from 1 to n into a set. For every element from 1 to n, if it is still in the set, perform a DFS. DFS: take a vertex and iterate through all the remaining elements in the set, checking whether there is an edge between this vertex and the vertex from the set (i.e., the pair is not one of the listed non-edges); if there is no edge, continue; otherwise delete that vertex from the set, and keep track of all the elements you deleted. Main: for every element you deleted (pushed into the vector), perform the DFS — that gives one connected component. Then repeat the procedure for the remaining elements. Here is the solution: https://ide.geeksforgeeks.org/0JuIOaiKWO — but I don't know whether it is correct.

What I think is: 1) You are not erasing from the set, so it will be O(n^2); you need to erase inside the DFS. (It isn't really a DFS — I only called it that because I couldn't think of a better name.) You are doing it by brute force, since you call it for each vertex. Just check my code: https://ide.geeksforgeeks.org/0JuIOaiKWO 2) You are using a map, which has O(log n) access, while a vector has O(1).

If the complement graph is not connected, then the original graph is connected, so the number of connected components is 1. Otherwise, run a BFS on the complement graph in a different manner: starting from a vertex, push into the queue all the still-unreached vertices adjacent to it and continue like this; for each node we use a DSU to count the number of sets, which in the end is the number of connected components.

My solution: build 2 segment trees, one to maintain the sum of elements in segments of array a, one to maintain the count of floor numbers in segments of array a.

Here, let's say that floor numbers are numbers for which D(x) = x. (People say only 1 and 2 satisfy that rule, but I'm not so certain about that, so just check the criterion manually — it won't cost much actually ;) ).

You can do both REPLACE and SUM queries without lazy propagation — simply traverse the trees. However, to avoid TLEs, you can skip traversing a segment if all of its elements are floor numbers.

For example, if a segment spans the 3rd to 6th elements and its floor-number count is 4, you can skip descending into it (these numbers can't be replaced by any distinct integers anyway). For a REPLACE query, simply ignore the segment and stop; for a SUM query, add the segment sum stored at the node, then stop the current traversal.

I knew the maximum number of times an element can be processed, which is why I said I don't need lazy propagation for my segment trees — just traversal, with worst-case complexity O(segment_length) (but the average is a whole lot lower, thanks to skipping elements that can no longer change within a segment).

In E, can there ever be more than 100000 components? I think for that you would have to provide something like 100000*(100000-1)/2 forbidden edges. I actually overlooked the constraints. If there can be more components than that, there is a free hack for you. submission link

Logic: I built an index array for the input and stored the indices of the 0s in a vector. I traversed the array, and whenever I found an element with a lower index than a smaller element, I checked whether there is a zero between them.

Ex : 1 3 4 5 6 2

Here, 5 has 2 (a smaller element) at a greater index, so I checked whether there is a 0 (an unswappable position) between them.

Given that if you pour all the tanks' water into one tank, you can then take out any amount of water that is a multiple of k, the problem becomes producing the remaining (V % k) amount. Build a knapsack from all the tanks over an array of all values below k. The crucial idea is that you may need to reduce modulo k while summing tank values during the knapsack process.

First of all, it is mathematically provable that if G is disconnected OR if diam(G) >= 3, then !G is connected (where !G is the complement of G). So if either of these holds for !G, we conclude that G is connected. We quickly eliminate these cases.

Now we are left only with cases in which !G is a connected graph with diam(!G) <= 2. If we analyze this, we can conclude that the original G is almost connected (i.e., it has one very big connected component). We want to brute-force the answer for the original G, but we can't do that yet (it might have a huge number of edges). So we choose some number (O(M)) of random edges from G and brute-force on this partial graph. We remove all nodes in the largest component and all nodes adjacent to them (remembering the size of this component). Finally, we build the sub-graph of the original G containing only the remaining nodes, brute-force on that sub-graph, and output its answer plus the size of the big component discovered before.

It seems like this always gives the correct answer, and the runtime shouldn't be too high, provided we discover a large enough part of the big component during the brute force on the partial graph. I'm curious whether the idea is hackable.

It seems to work (gets AC) without the initial elimination (I guess the condition that one very big component exists holds anyway for larger graphs), but it might be easier to hack with RNG prediction. Maybe.

Search for a node with fewer than sqrt(n) adjacent nodes; call it x. There must be at least one, or the overall graph is small. Now connect to x all the nodes that are not explicitly disconnected from x. You then have a component with at least (n - sqrt(n)) nodes. Brute force on the remaining nodes.

Using brute force: there are n*(n-1)/2 - m edges. If n is less than 200, it's easy to solve directly; otherwise we can prove that the number of connected components never exceeds 200. So we can sort the edges in order and use a union-find set to merge connected components. This is my submission: 34858691.

There is a simple way, like this: use set A to store vertices and set B to store the pairs of vertices that are not edges. Then do a DFS: for every vertex u, push into a vector all v that are in set A such that the pair <min(u, v), max(u, v)> is not in set B. Of course, all vertices in the vector are adjacent to u. Then erase all of the vector's vertices from A and start a DFS from each of them. Complexity: log n for std::set, n for visiting n vertices -> O(n log n).

Sorry, there was a mistake. My solution was accepted, so I thought its complexity was O(n log n). It is actually O(n^2 log n) in the worst case (for every vertex, we check all the vertices remaining in the set, so that part is n^2), but it somehow passed the pretests (maybe because of the constraints).

In CF Educational Round 37, Problem C, I wrote the following code, where I computed a cumulative sum of the 1's and checked whether all bits of the string between i and arr[i]-1 are 1 (and, if arr[i] < i, between arr[i] and i-1).

It is failing on test case #5, and that test case has length 200,000!

I have spent close to 5 hours trying to figure out why it fails :P. It would be really helpful if someone could help me figure this out (maybe provide a simple test case!).

Also, I have tried all permutations of numbers up to length 10 together with all 2^10-1 bit combinations for them, and compared the results of my incorrect program with a correct program, but could not find any test case that produces different results. (The program took 30 minutes to run.)

Could anyone provide a small but tricky test case for problem F? I am having trouble finding the bug in my logic/implementation (submission link: 34884739), and all test cases except #1 seem intractable.

Could someone explain why I am getting TLE on E with an O(m·log(n)) algorithm? Here is my code: 34891034, and here is an explanation of the algorithm and a proof of its complexity. I use a set to store the nodes that are adjacent to the current node and not yet marked. Note that with BFS I only need one set: every time a node is visited, I delete it from the set, and when it is node A's turn to traverse its adjacent nodes, I delete from the set all nodes that are not adjacent to A and add them back afterwards. Proof of complexity: deleting and re-adding the nodes that are not adjacent to the current node takes at most O(m·log(n)) time, with a constant factor of 4 because each such node is both erased and re-inserted. Note that we iterate at most once over every node that can be in the set, so that part is linear. So the maximal number of operations is 4·m·log(n) + n < 1.5·10^7. What did I get wrong?

Right now Codeforces may be having some issues. It's very slow, I can't submit my solution, and (maybe) the leaderboard of Educational Codeforces Round 37 is showing results from about 10 hours ago.

In the statement of problem 920E - Connected Components? it is said: "denoting that there is no edge between x and y. Each pair is listed at most once; (x, y) and (y, x) are considered the same (so they are never listed in the same test)". But in test 85 we can find the edges 1 2 and 2 1 in the same test. Why?

I don't have any permission to rejudge the whole problem, so I tried to find all solutions that failed on this test and rejudge them manually. If your solution still fails on test 85, please tell me so I can rejudge it.

I think the data of problem E in test #85 is wrong. The description of problem E says "Each pair is listed at most once; (x, y) and (y, x) are considered the same (so they are never listed in the same test).", but it seems both (3, 6) and (6, 3) appear in test #85.