I wrote up some quick solution sketches for the past three rounds. Please let me know if you need clarification or elaboration. :)

2.1 Wandering Billy
Though we could come up with a formula to get this to run in constant time, it's unnecessary here. It's just an easy simulation: write a loop that increases the length of each consecutive run of steps, and keep a step counter. Terminate when you reach the appropriate number of steps.
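For concreteness, here's a minimal Python sketch of the simulation. It assumes the k-th run is k steps long; adjust that rule (and any direction handling) to the actual problem statement.

```python
def simulate(target_steps):
    """Walk in runs of increasing length until target_steps steps are taken.

    Assumes the k-th run is k steps long (problem-specific); returns the run
    we stop in and the total steps taken by the end of that run.
    """
    steps = 0       # step counter
    run_length = 0  # length of the current run
    while steps < target_steps:
        run_length += 1
        steps += run_length
    return run_length, steps
```

For example, simulate(10) finishes during the fourth run, since 1 + 2 + 3 + 4 = 10.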

2.2 Scratch Card
A greedy algorithm will suffice. Keep a position for the next character to reveal, starting at the first character. Go through the grid; when a character doesn't match, '#' it out. When it does, put a '.' and advance the "next character" position. If you reach the end of the word before the end of the grid, '#' out the rest of the grid. If you reach the end of the grid before the end of the word, there is no solution.
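A sketch of the greedy pass in Python, treating the grid as one string in reading order (the actual input format may differ):

```python
def scratch(grid, word):
    # '#' = scratched out, '.' = revealed, matching the scheme above
    out = []
    nxt = 0  # position of the next character of word to reveal
    for ch in grid:
        if nxt < len(word) and ch == word[nxt]:
            out.append('.')
            nxt += 1
        else:
            out.append('#')
    # if we ran out of grid before finishing the word, there is no solution
    return ''.join(out) if nxt == len(word) else None
```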

2.3 Anonymous Shopping
You wish to count the number of people who share an ID with a group of people (possibly of size 1) who all have the same drink preference. First, partition the people into groups with the same ID. Sorting is one way to do this; then, each equivalence class will be a contiguous set of people. Then, determine whether each equivalence class shares the same drink preference. The input is small, so any absurdly inefficient algorithm could do this.
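A sketch in Python, assuming the input boils down to (ID, drink) pairs; a dictionary makes the partitioning trivial, with no sorting needed:

```python
from collections import defaultdict

def count_matching(records):
    # records: list of (person_id, drink) pairs (assumed input shape)
    groups = defaultdict(list)
    for pid, drink in records:
        groups[pid].append(drink)
    # count the people in every ID-group whose members all share one drink
    return sum(len(drinks) for drinks in groups.values()
               if len(set(drinks)) == 1)
```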

2.4 Bear Trees
Root the tree: pick one vertex to be the root, let its neighbours be children of the root, and repeat on the subtrees rooted at each child vertex. Now, let's try to find a recursive formula:

- If the root vertex has no children, then you only need one element in the queue to explore the tree.
- Otherwise, the problem is reduced to exploring the root and the subtrees rooted at each child of the root. Of course, we must explore the root first; if it has C children, then that adds C vertices to the queue. It can be shown that it always pays off to finish exploring a subtree completely before moving on to the next one. So, we explore each subtree one by one.

But in what order? Let f(v) be the cost of exploring the subtree rooted at v. Let c[1], c[2], ..., c[C] be the C children of v, in order of exploration. Then, while we explore the subtree rooted at c[1], the queue will get however big exploring it requires, and C-1 vertices ({c[2], c[3], ..., c[C]}) will be left in the queue. While we explore c[2], C-2 vertices ({c[3], c[4], ..., c[C]}) will be left in the queue. So, it pays off to explore the subtrees in increasing order of f.
This leads to the recursive formula: f(a leaf) = 1; otherwise, we compute f(v) by sorting the children of v by f and taking the max of f(c[i]) + C - i, where c[i] is the child with the ith smallest f.
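The formula translates almost directly into Python (here children maps each vertex to its list of children in the rooted tree):

```python
def min_queue(children, v=0):
    # cost f(v): the queue capacity needed to explore the subtree rooted at v
    costs = sorted(min_queue(children, c) for c in children[v])
    if not costs:
        return 1  # a leaf only ever needs one slot in the queue
    C = len(costs)
    # exploring the i-th cheapest subtree leaves C - i siblings waiting
    return max(cost + C - i for i, cost in enumerate(costs, start=1))
```

For a star with three leaves, min_queue([[1, 2, 3], [], [], []]) is 3; for a path, it is 1.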

2.5 Portals Check
This is a standard data structure problem, which we won't summarize here. Read about the union-find (also known as disjoint-set) data structure.
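For reference, a minimal Python union-find with path compression, which is plenty for input sizes like these:

```python
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        # walk to the root, halving the path as we go
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

    def connected(self, a, b):
        return self.find(a) == self.find(b)
```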

3.1 Weighted Presents
Keep track of the best match so far, and go through the items, updating the best match when the weight difference is smaller than that of your current best match, or when the weight difference is the same and the current item is heavier.
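In Python, assuming we're matching item weights against a target weight:

```python
def best_match(target, weights):
    # smallest weight difference wins; ties go to the heavier item
    best = None
    for w in weights:
        if best is None:
            best = w
            continue
        diff, best_diff = abs(w - target), abs(best - target)
        if diff < best_diff or (diff == best_diff and w > best):
            best = w
    return best
```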

3.2 Magical Ponds
We had intended for this to be solved by brute force, by guessing the final number of apples and working backwards. The correct solution is found when none of the ponds you encounter have a fractional number of apples. However, when I wrote the test data, my brute force solution ran in time for the large test case, so I kept it. There is, however, a nice solution. Notice that when you multiply the starting number by a constant, the number of apples in each pond multiplies by the same constant. In other words, the problem scales up. So, if you assume that you are to end up with one apple in each pond, you can compute the (possibly fractional) number of apples at each step. Then, you want to find the smallest scaling factor such that all these numbers become integers. To find this, reduce the fractions and find the LCM of the denominators using the Euclidean algorithm. Too difficult for #2, as I'm sure you'll agree.
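The scaling step is easy with exact fractions: work backwards from one apple, collect the (possibly fractional) counts, and scale by the LCM of the denominators. A Python sketch of just that last step:

```python
from fractions import Fraction
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def smallest_scale(counts):
    # counts: the possibly-fractional apple counts obtained by assuming a
    # final count of one; returns the smallest factor making them all integers
    scale = 1
    for c in counts:
        scale = lcm(scale, Fraction(c).denominator)
    return scale
```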

3.3 Combo Discounts
I think the intended solution was brute force: to try every way to apply discounts. This was Tony's problem, and he didn't specify the number of items, which would be quite important.

3.4 ABCA Maze
The Floyd-Warshall algorithm suffices.
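For completeness, the algorithm itself is a triple loop over an adjacency matrix; the maze-specific part (building the matrix from the grid) is omitted here:

```python
def floyd_warshall(dist):
    # dist: n x n matrix; dist[i][j] = edge weight, float('inf') if absent,
    # and dist[i][i] = 0. Updated in place to all-pairs shortest distances.
    n = len(dist)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```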

3.5 Tautology
The expected solution was to try all (potentially 2^10) combinations of variables. To evaluate the expressions after substitution, you could use your programming language's eval function, or recursive descent, where the lowest-priority binary operator splits the input into two substrings.
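Using eval, the whole check is a few lines of Python; this assumes the expression is already in (or has been translated to) Python's boolean syntax:

```python
from itertools import product

def is_tautology(expr, variables):
    # try every assignment of True/False to the variables
    for values in product([False, True], repeat=len(variables)):
        if not eval(expr, dict(zip(variables, values))):
            return False
    return True
```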

4.1 Crossword Count
Hopefully the numbering system used by crossword makers gives you a clue about how to approach this. Let's only consider the horizontal direction; the vertical case follows by symmetry. Every word corresponds to its leftmost square, which is guaranteed not to have a white square to its left; otherwise, it wouldn't be a word, by the second property. Conversely, almost any white square without another white square to its left corresponds to a word; we just need to verify that the word has length at least 2. So, we simply count the white squares preceded by a black square (or the edge of the grid) and followed by a white square.
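Counting both directions in Python, with '.' for white and '#' for black (the actual input characters may differ):

```python
def count_words(grid):
    n, m = len(grid), len(grid[0])
    count = 0
    for r in range(n):
        for c in range(m):
            if grid[r][c] != '.':
                continue
            # horizontal word starts here: no white square to the left,
            # and a white square to the right (so the word has length >= 2)
            if (c == 0 or grid[r][c-1] == '#') and c + 1 < m and grid[r][c+1] == '.':
                count += 1
            # the same test, vertically
            if (r == 0 or grid[r-1][c] == '#') and r + 1 < n and grid[r+1][c] == '.':
                count += 1
    return count
```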

4.2 Prime Time
Generate the primes up to 10000 using the Sieve of Eratosthenes. Now, let's answer the question: how many times does the prime p occur in the factorization of n factorial? There are floor(n/p) multiples of p. Then, we need to count the numbers that contribute p twice: there are floor(n/p^2) multiples of p^2. And so on; we get the formula floor(n/p) + floor(n/p^2) + ... + floor(n/p^w), where p^w is the largest power of p not exceeding n (all later terms are zero).
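Both pieces in Python:

```python
def sieve(limit):
    # Sieve of Eratosthenes: all primes up to and including limit
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]

def prime_exponent(n, p):
    # number of times prime p divides n!: floor(n/p) + floor(n/p^2) + ...
    total, power = 0, p
    while power <= n:
        total += n // power
        power *= p
    return total
```

For example, 2 divides 10! exactly floor(10/2) + floor(10/4) + floor(10/8) = 5 + 2 + 1 = 8 times.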

4.3 Breaking Bonds
This is the well-known bridge problem. It can be (quite famously) done in linear time, but for these pathetic input sizes, you can just try removing every edge and seeing if you've disconnected the compound. The latter can be done by finding how many carbons you can reach from carbon 0 with a depth-first search, and seeing if this is equal to the total number of carbons. Note that you don't even need to check redundant edges, since they never disconnect the graph.
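A brute-force sketch in Python, with the compound given as an edge list over carbons numbered from 0:

```python
def bridges(edges):
    # delete each edge in turn and re-check connectivity by DFS from carbon 0
    def n_reachable(skip):
        seen, stack = {0}, [0]
        while stack:
            u = stack.pop()
            for i, (a, b) in enumerate(edges):
                if i == skip:
                    continue
                if a == u and b not in seen:
                    seen.add(b)
                    stack.append(b)
                elif b == u and a not in seen:
                    seen.add(a)
                    stack.append(a)
        return len(seen)

    total = n_reachable(-1)
    return [e for i, e in enumerate(edges) if n_reachable(i) < total]
```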

4.4 Lego Ladder
Some of you got the recursive solution; the correct one was very close! Let's represent a state of the game as the subset of blocks left. We define a function f(S), which answers the question "in the state S, does the player to move have the winning strategy?"
For some S, we already have a ladder, so f(S) = false. Otherwise, we consider all states reachable from S. If there is some state T reachable from S such that f(T) = false, then f(S) = true; our winning strategy is just to take the move that brings S to T. If there exists no such move, then f(S) = false.
To get this brute-force solution to run in time, simply note that there are only 2^n states (at most 2^15 here). So, once we've computed f(S), we can put it in a lookup table and access it in constant time afterwards. Then, the running time will be something like O(n * 2^n), which is fast enough.
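The memoization is one decorator in Python. Since the ladder test and the legal moves depend on the exact problem statement, they are taken as parameters in this sketch:

```python
from functools import lru_cache

def first_player_wins(initial, is_ladder, moves):
    # is_ladder(S): the position already contains a ladder (player to move loses)
    # moves(S): iterable of states reachable from S in one move
    @lru_cache(maxsize=None)
    def f(S):
        if is_ladder(S):
            return False
        return any(not f(T) for T in moves(S))
    return f(initial)
```

As a toy check, in the subtraction game "remove 1 or 2 stones; you lose on 0", the multiples of 3 are losing positions, which this reproduces.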

4.5 Comet Vomit
Oops! There was one solution that received full marks, and it was not the intended one. Basically, the obvious solution is to check all pairs, but this would not run in time for 100000 points. Sirui's heuristic was to check only the endpoints u and v. Of course, this is not correct in general; it's very easy to think of a counterexample. But congratulations, Sirui, on outsmarting my test data! :)
Now, let's take a look at the intended solution. Manhattan metric furthest pair is a classical problem. Unlike related problems, it can actually be solved in linear time!
Let (x, y, z) and (x', y', z') be two points. We wish to find the points that maximize |x - x'| + |y - y'| + |z - z'|. Let X = x - x', Y = y - y', and Z = z - z'. Now, notice that |f| = max(f, -f). So, we can simplify:

|X| + |Y| + |Z| = max(X, -X) + max(Y, -Y) + max(Z, -Z)

Next, notice that expanding the maxes lets us take each combination of signs of X, Y, and Z, and maximize the sum:

|X| + |Y| + |Z| = max(X+Y+Z, X+Y-Z, X-Y+Z, X-Y-Z, -X+Y+Z, -X+Y-Z, -X-Y+Z, -X-Y-Z)

There's a very easy way to find the points that maximize each of these quantities. If we want to maximize X-Y+Z:

- Compute x - y + z for each point.
- Find the point with the smallest x - y + z.
- Find the point with the largest x - y + z.
- The maximum X-Y+Z is the difference between that largest and smallest value.

So, just do this for each combination of signs, and take the largest difference. This algorithm runs in O(n * 2^d) time, where d is the dimension of the space.

Of course, finding the maximum difference for X-Y+Z is equivalent to finding it for -X+Y-Z; if you invert all of the signs, the quantity is just negated. So, you really only need to do this for four quantities: X+Y+Z, X+Y-Z, X-Y+Z, and X-Y-Z. Done!
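Putting it together in Python, for any dimension d, using the trick above of fixing the first sign to +:

```python
from itertools import product

def manhattan_diameter(points):
    # points: list of equal-length coordinate tuples
    d = len(points[0])
    best = 0
    # fix the first sign to +; the mirrored combinations give the same answer
    for rest in product((1, -1), repeat=d - 1):
        signs = (1,) + rest
        vals = [sum(s * c for s, c in zip(signs, p)) for p in points]
        best = max(best, max(vals) - min(vals))
    return best
```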

Thanks for the solutions, Cyril. Could you possibly give pseudocode for number 3 from today? I don't really understand that solution. (I have to look 4 and 5 up, but that's just due to my lack of algorithm knowledge.)

Just a comment for 4.5, the runtime complexity can be O(n*2^(d-1)).
You can eliminate half of the possible combinations by making the following observation:
The smallest value for -x-y-z is the highest value for x+y+z.
Likewise, the highest value for -x-y-z is the lowest value for x+y+z.

Thanks Cyril, I think I understand the implementation from that. I'm finally starting CS next semester, in about 2 weeks, and at that point I'm going to look at coding the solutions for all of these DWITE problems; I just wanted to understand how to go about it.


Could anyone help me with a problem that I have?
So I made a program that uses the above algorithm, and it works for all the test cases.

However, I have one major problem with representing the states of the game. Right now, for a game that initially has n blocks, I have 2^n states, each a different combination of blocks in the row. Each state is represented by a decimal integer from 0 to 2^n - 1, so that the binary expansion of the integer represents the combination of Lego blocks in the game (0 = not present, 1 = present).

For example, say there are 4 blocks in a row, state 6 will be:
-> 0110 in binary
-> the state where only the second and third blocks are left.

So I have a function f(n) that determines whether the player to move has a winning strategy at state n:
1. Convert n to binary, and check whether it forms a ladder.
2. If not, generate a list of binary numbers that have exactly one fewer '1' than the binary form of n.
3. Call f on the decimal representation of each of those binary numbers.

And as you can tell, it's kind of redundant, as I have to constantly convert the numbers between binary and decimal. So is there any way to do this more efficiently?
Thanks.

Also for #5, depending on the language, you might or might not need to use the quadratic formula and a bunch of if statements.
This submission (in Python): http://dwite.org/uploads/7ypdjhf1de1lrnj/2583/187/0/d5.py
C version: http://pastebin.com/TL285AK2
I get about a 200x speedup in C (.3 seconds instead of 1 minute).
In fact, just starting Python and reading in the test data takes about as much time as calculating the answer in C.


I covered this in the last line of my solution.

And pshhh, O(n * 2^(d-1)) and O(n * 2^d) are exactly the same thing, hahaha.

Nice reasoning. Your strategy of mapping subsets of a set of size N to integers from 0 to 2^N - 1 is a standard technique, and for testing purposes, we used it in exactly the way you described. There is a nice way to do this!

Use bitwise operators.

x << n: shifts x to the left by n bits (you can think of this as multiplication by 2^n)
x >> n: shifts x to the right by n bits (you can think of this as integer division by 2^n)
a | b, a & b, a ^ b: takes the or/and/xor of corresponding bits
~x: inverts every bit in x

So, if you have such a representation stored in an integer N:

N & (1 << r)
is 0 if the rth bit of N is 0,
and non-zero otherwise.

N -= 1 << r
removes item r (assuming bit r is currently set).
Or, if you like, N &= ~(1 << r), which is safe either way. You can think of ~(1 << r) as a "mask" that clears the rth bit.

(1 << n) - 1
should be your initial state, where n is the number of blocks.

They are actually more primitive (and faster) than your conventional arithmetic operations.

If you use Java, and want to investigate further (in particular, to find out how negative numbers are treated), be sure to take a look at the unsigned right shift operator >>> as well.
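Putting those tricks together, a quick Python demo (the same operators exist in Java and C, with identical meaning for non-negative values):

```python
n = 4                          # number of blocks
state = (1 << n) - 1           # initial state: all blocks present (0b1111)
state &= ~(1 << 1)             # remove block 1 with a mask
assert state == 0b1101
assert state & (1 << 2)        # block 2 is still present
assert not (state & (1 << 1))  # block 1 is gone
blocks_left = [r for r in range(n) if state & (1 << r)]
assert blocks_left == [0, 2, 3]
```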