MTD(f) is a minimax search algorithm developed in 1994 by Aske Plaat, Jonathan Schaeffer, Wim Pijls, and Arie de Bruin. Experiments with tournament-quality chess, checkers, and Othello programs show it to be the most efficient minimax algorithm. The name MTD(f) is an abbreviation for MTD(n,f) (Memory-enhanced Test Driver with node n and value f). It is an alternative to the alpha-beta pruning algorithm.
MTD(f) was first described in a University of Alberta Technical Report authored by Aske Plaat...
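In outline, MTD(f) calls alpha-beta repeatedly with zero-width windows, using each result to tighten lower and upper bounds until they meet at the minimax value. Here is a minimal sketch on a toy nested-array game tree — not the program from this thread, and with no transposition table, so each zero-window pass re-searches from scratch:

```ruby
INF = 1 << 30  # effectively infinite for these toy scores

# Plain alpha-beta on a nested-array game tree: an Integer is a leaf score
# (from the maximizing player's point of view), an Array is a list of
# child positions, with the players alternating by level.
def alpha_beta(node, alpha, beta, maximizing)
  return node unless node.is_a?(Array)
  if maximizing
    best = -INF
    node.each do |child|
      best  = [best, alpha_beta(child, alpha, beta, false)].max
      alpha = [alpha, best].max
      break if alpha >= beta   # cutoff
    end
  else
    best = INF
    node.each do |child|
      best = [best, alpha_beta(child, alpha, beta, true)].min
      beta = [beta, best].min
      break if alpha >= beta   # cutoff
    end
  end
  best
end

# MTD(f): starting from a first guess f, run zero-window searches whose
# fail-high/fail-low results tighten the bounds until they converge.
def mtdf(root, f)
  g = f
  lower, upper = -INF, INF
  while lower < upper
    beta = (g == lower) ? g + 1 : g
    g = alpha_beta(root, beta - 1, beta, true)
    if g < beta then upper = g else lower = g end
  end
  g
end

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]  # textbook 2-ply example
p mtdf(tree, 0)   # => 3, the minimax value of the root
```

A better first guess means fewer passes, which is why the solution below carries `@guess` over between iterative-deepening steps; a transposition table matters in practice because each pass revisits the same positions.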

Unfortunately I couldn’t seem to get my transposition tables to work
correctly. The garbage collector kept crashing. I think that to get a
solution like this to work well you need to store the transposition
tables on disk. Pity I haven’t finished cFerret yet.

By the way, I’m afraid my solution probably isn’t very Rubyish. You
can probably tell I’ve been doing a lot of C coding lately. If anyone
wants to clean it up and improve it, please do and let me know about
it.

  return @move_list.pop if @move_list and @move_list.size > 0
  # If the next move is from the top then we rotate the board so that all
  # operations would be the same as if we were playing from the bottom
  if (@game.player_to_move == KalahGame::TOP)
    # We do iterative deepening here. Unfortunately, due to memory
    # constraints, the transposition table has to be reset every turn so we
    # can't go very deep. For a depth of 8, one step seems to be the same as
    # two but we'll keep it for demonstration purposes.
    @depth.each do |depth|
      @guess, @move_list = mtdf(board[7,7] + board[0,7], @guess, depth)
      @previous_transposition_table = @transposition_table
      @transposition_table = {}
    end
    @move_list.size.times {|i| @move_list[i] += 7}
  else
    @depth.each do |depth|
      @guess, @move_list = mtdf(board.dup, @guess, depth)
      @previous_transposition_table = @transposition_table
      @transposition_table = {}
    end
  end
  return @move_list.pop
end

    # that we found last time
    first_move = (bounds.upper_move||bounds.lower_move).last
  else
    # We haven't tried this board before during this round
    bounds = @transposition_table[board] = Bounds.new(-1000, 1000, nil, nil)

    # If we tried this board in a previous round see what move was found to
    # be the best. We'll try it first.
    if (prev_bounds = @previous_transposition_table[board])
      first_move = (prev_bounds.upper_move||prev_bounds.lower_move).last
    end
  end

        next_guess *= -1
        move_list = []
      end
      if (next_guess > guess)
        guess = next_guess
        best = move_list + [i]
        # beta pruning
        break if (guess >= upper)
      end
      #lower = guess if (guess > lower)
    end
  end
  # record the upper or lower bounds for this position if we have found a
  # new best bound
  if guess <= lower
    bounds.upper = guess
    bounds.upper_move = best
  end
  if guess >= upper
    bounds.lower = guess
    bounds.lower_move = best
  end
  return guess, best
  end
end

Here are my Kalah Players.
My first trick was to create a subclass of Player so that my players are
always playing from bins 0…5. This got rid of a lot of messy
conditions. It also contains a move simulator.

AdamsPlayers.rb contains some very simple test players, as well as 2
reasonably good players:
DeepScoreKalahPlayer tries to find the biggest gain in points for a
single move, and APessimisticKalahPlayer does the same, but subtracts
the opponent’s best possible next move.
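The difference between the two strategies can be shown with a toy sketch (the gain tables are made-up numbers, not real Kalah positions, and the method names are mine, not the actual players'):

```ruby
# MY_GAIN[m]    = points move m scores immediately (made-up numbers)
# REPLY_GAIN[m] = the opponent's best possible answer after move m
MY_GAIN    = { a: 5, b: 3, c: 4 }
REPLY_GAIN = { a: 6, b: 1, c: 3 }

# DeepScoreKalahPlayer's idea: take the biggest immediate gain.
def deep_score(moves)
  moves.max_by { |m| MY_GAIN[m] }
end

# APessimisticKalahPlayer's idea: subtract the opponent's best reply.
def pessimistic(moves)
  moves.max_by { |m| MY_GAIN[m] - REPLY_GAIN[m] }
end

p deep_score(%i[a b c])    # :a -- 5 points now, but it walks into a 6-point reply
p pessimistic(%i[a b c])   # :b -- net 3 - 1 beats 5 - 6 and 4 - 3
```

With these numbers the greedy player takes the flashy 5-point move, while the pessimistic one avoids it because of the reply it concedes.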

AHistoryPlayer.rb contains a player which ranks the results of each
move, and keeps a history. The idea is that the more it plays, the
less likely it is to choose bad moves.
It stores the history in a Yaml file, which definitely causes a
slowdown as it gets bigger.
That’s one thing I’d like to improve if I have time. I also added a
line to the game engine to report the final score back to the players.
AHistoryPlayer still works without it, but I think it’s less accurate,
since it never records the final result.
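The YAML persistence might look something like this sketch — the file name and layout are my guesses, not AHistoryPlayer's actual format:

```ruby
require 'yaml'
require 'tmpdir'

# Hypothetical history store: a hash mapping a board-state key to
# per-move scores, persisted as YAML between games.
HISTORY_FILE = File.join(Dir.tmpdir, 'kalah_history.yml')

def load_history
  File.exist?(HISTORY_FILE) ? YAML.load_file(HISTORY_FILE) : {}
end

def save_history(history)
  # Rewrites the whole file on every save -- this is the slowdown the
  # post mentions; the cost grows with the number of recorded states.
  File.write(HISTORY_FILE, history.to_yaml)
end

h = load_history
state = '4 4 4 4 4 4 0'
(h[state] ||= {})[2] = (h[state][2] || 0) + 1   # reward move 2 for this state
save_history(h)
p load_history[state]   # the recorded per-move scores survive a reload
```

A more scalable design would append deltas or use a keyed store (PStore, DBM) instead of rewriting the full YAML document each time.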

None of these players pay much attention to the number of stones
remaining on the opponent’s side, which is one factor in why they fail
miserably against Dave’s player. But they do tend to beat Rob’s
players. I’m hoping some more people submit solutions.

Cool.
I’m surprised RemoveRight did better than DeepScore.
I was looking more at HistoryPlayer (which should do better than
Pessimistic, since it uses the same choice for any unknown situation)
and I realized that when scoring a move, it is giving too much weight
to the subsequent turns. So it can choose the absolute best move on
turn 2, for instance, then make a bad move 3 turns later, and end up
ranking the turn 2 choice as the worst possibility. So for now, the
history information it keeps is mostly useless, except for a speedup.
My history algorithm needs some tuning (if it can be salvaged at all).

I’d be curious to see what happens if you add the other submitted
players to the tournament. Can you post the tourney framework?

I’m gonna actually have to see some real code with Ruby’s unit test framework.

There are many examples on the Ruby Q. site.

Rubyforge would also be an excellent place to search. I can tell you
at least two projects that have a complete test suite (because they
are mine): HighLine and FasterCSV. Download the source and take a
peek.
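For a flavor of what such a suite looks like, here is a minimal Test::Unit sketch — the `add_up` method is invented for illustration and is not from HighLine or FasterCSV:

```ruby
require 'test/unit'

# A trivial method under test (made up for this example).
def add_up(*nums)
  nums.inject(0) { |sum, n| sum + n }
end

# Test::Unit finds every method whose name starts with test_ and runs
# it automatically when the file is executed.
class TestAddUp < Test::Unit::TestCase
  def test_empty
    assert_equal 0, add_up
  end

  def test_several
    assert_equal 6, add_up(1, 2, 3)
  end
end
```

Running the file with `ruby` executes both tests and prints a pass/fail summary; real suites group one TestCase per class under test.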