Category Archives: programming problems

My labmate pointed out to me that Facebook has some programming puzzles. I must have really been aching for Haskell again, because I decided to write up the first one, Evil Gambling Monster (Gamblor); it’s pasted below (TODO: upload the original problem somewhere for posterity). If we let A(t,x) = max possible score at time t at node x, then the recurrence relation is A(t,x) = delta score + (max over all possible previous positions px of A(t-1,px), or 0 if all of those are negative, which captures the possibility of starting fresh at (t,x)). This one is actually fairly easy to reason about, since any attempt to map out a table starting at (0,0) will take you straight to the correct representation, which is the hardest part.
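Before the pasted code, here is a minimal sketch of just that recurrence as a lazy-array DP in Haskell. This is a hypothetical reconstruction, not the pasted solution: `payoff`, `preds`, and the shape of the answer (max over every cell) are stand-ins for whatever the actual puzzle specifies.

```haskell
import Data.Array

-- Hypothetical setup: `payoff t x` is the score delta for being at node x at
-- time t, and `preds x` lists the positions from which x is reachable.
-- A(t,x) = payoff t x + max over preds of A(t-1,px), floored at 0 so we can
-- also start fresh at (t,x). The lazy array memoizes every subproblem.
gamblor :: Int -> Int -> (Int -> Int -> Int) -> (Int -> [Int]) -> Int
gamblor tmax nodes payoff preds = maximum (elems table)
  where
    table = array ((0, 0), (tmax, nodes - 1))
              [ ((t, x), cell t x) | t <- [0 .. tmax], x <- [0 .. nodes - 1] ]
    cell t x = payoff t x + best t x
    -- At t = 0 there is no previous row; otherwise the 0 in the list both
    -- handles empty `preds` and discards all-negative predecessors.
    best 0 _ = 0
    best t x = maximum (0 : [ table ! (t - 1, px) | px <- preds x ])
```

With two fully connected nodes and payoffs [[3,-5],[2,4]] over two time steps, the best run is 3 at (0,0) followed by 4 at (1,1), for a total of 7.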

My first and only other Haskell DP is for the simple coin denomination problem, also pasted below. Haskell was fine for that, but I believe an imperative solution lends itself more naturally to Gamblor, because then we could avoid referencing ‘undefined’ cells of Nothings. Right now I’m a bit too sleepy to rewrite this in a mixed imperative/functional style (using, e.g., Scala or Python), but I’ll briefly try to explain what I mean.
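For reference, one common reading of the coin problem (the pasted original may formulate it differently) is the minimum number of coins summing to a target, with Nothing for unreachable amounts; it fits the same lazy-array style, and the Nothings here are exactly the kind of ‘undefined’ cells mentioned above.

```haskell
import Data.Array

-- Minimum number of coins drawn from `coins` summing to each amount up to n;
-- Nothing where no combination exists. A sketch of one common formulation,
-- not necessarily the author's original.
minCoins :: [Int] -> Int -> Maybe Int
minCoins coins n = table ! n
  where
    table = listArray (0, n) (map go [0 .. n])
    go 0   = Just 0
    -- Pull each reachable smaller amount; the pattern match on Just filters
    -- out unreachable subproblems.
    go amt = case [ c | d <- coins, d <= amt, Just c <- [table ! (amt - d)] ] of
               [] -> Nothing
               cs -> Just (1 + minimum cs)
```

For example, with denominations [1,5,10] the amount 12 takes three coins (10+1+1), while 3 is unreachable from [5] alone.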

The reason I believe an imperative solution is “superior” is that the data dependencies here are selective and resemble a sort of index-based permutation, and we’re trying to express this data flow by “pulling.” However, it could be more efficient (and clearer) to express the propagation by “pushing,” i.e. destructively updating state (in this case, from the earlier row along the time dimension to the later one). This brought me to the realization that functional programming is “pull,” and imperative programming is “push.” This jibes with my previously held views on FP’s relative weaknesses – for instance, an imperative histogram builder is known to be more efficient because we push into a hash table instead of pulling tree map after tree map from the source data, and an imperative permutation inverter directly expresses the indices to mutate. In fact, both of these exhibit dictionary indirection in the push – perhaps that’s a more useful conclusion.
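The histogram contrast can be made concrete in Haskell itself. A sketch: the “pull” version folds the source into a persistent tree map, allocating a new map per element, while `accumArray` gives a “push”-shaped accumulation into a flat array (it mutates internally, behind a pure interface). The function names are mine, not from the pasted code.

```haskell
import qualified Data.Map as Map
import Data.Array

-- "Pull": each element is folded into a persistent Data.Map, so every
-- insertion rebuilds a path of the balanced tree.
histPull :: [Int] -> Map.Map Int Int
histPull xs = Map.fromListWith (+) [ (x, 1) | x <- xs ]

-- "Push": accumArray accumulates (+) destructively into a flat array
-- indexed by value (bounds 0..hi), closer to the imperative hash-table
-- version described above.
histPush :: Int -> [Int] -> Array Int Int
histPush hi xs = accumArray (+) 0 (0, hi) [ (x, 1) | x <- xs ]
```

Both compute the same counts; the difference is where the indirection lives – in the tree structure pulled along by the fold versus the index arithmetic pushed into the array.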

(To be sure, this is realized as a pretty small difference in the particular problem at hand – I doubt an imperative solution would be substantially better, especially in the face of a declarative style pervasive throughout the rest of the code. But use your imagination!)

Oh, one quick note about the following: my first attempt (Rec) ends up recursing too deeply. This is not a problem with the performance of the “DP” per se (since the subproblem results do get memoized), but rather with the laziness; it only occurs once you try evaluating for n >= 9999 or something. The way around this is to unwind the schedule of thunk evaluation with seqArray, which I should add to my Haskell Commons library. Some day! Some day my libraries will see the light of that day.
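One plausible definition of seqArray, assuming its job is simply to force every element to weak head normal form before the array is handed back (so the recurrence’s thunks get evaluated on schedule instead of piling up): this is a guess at the intended helper, not code from Haskell Commons.

```haskell
import Data.Array

-- Force each element to WHNF before returning the array: demanding the
-- result of seqArray chains a `seq` over every element, so evaluating it
-- row by row flattens the thunk schedule the lazy DP would otherwise build.
seqArray :: Array i e -> Array i e
seqArray a = foldr seq a (elems a)
```

Calling seqArray on each completed time-row before moving to the next keeps the pending-thunk depth bounded by one row rather than the whole table.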