However, for the 5x5 board I just terminate it after a while because it fails to find a move. With alpha-beta pruning, should it just take this long for a 5x5 board, or am I missing something in my code?

The branching factor of a 3x3x3 board at the initial move is 27. For a 5x5x5 board, it is 125. At a depth of four ply (two moves per player), the 3x3x3 program will have examined 27 × 26 × 25 × 24 = 421,200 states; the 5x5x5 program, 125 × 124 × 123 × 122 = 232,593,000 states.

So even at that modest search depth, the 5x5x5 program will be roughly 552 times slower than the 3x3x3 program. And that's assuming you have no bugs in your code.

EDIT: of course, since you are using alpha-beta pruning, you're really looking at roughly the square root of that number (sqrt(552) ≈ 23), but that assumes your move ordering is absolutely perfect, which I doubt it is.
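The state counts above are easy to verify: each move removes one empty cell, so the number of move sequences is a falling product. A quick sanity check (standalone snippet, not from the question's code):

```python
def states(cells, depth):
    """Number of distinct move sequences of the given depth (in ply)
    starting from an empty board with `cells` empty cells."""
    total = 1
    for i in range(depth):
        total *= cells - i  # one fewer empty cell each ply
    return total

small = states(27, 4)   # 3x3x3 board, four ply
big = states(125, 4)    # 5x5x5 board, four ply
print(small)            # 421200
print(big)              # 232593000
print(big // small)     # 552
```

The ratio grows even faster at deeper depths, since each extra ply multiplies by roughly the branching factor again.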

So the program is now working as expected? Could you post the updated code, then? I'd like to have a look at what can be done to make it faster.

It is almost completely pointless to prune the game tree with alpha-beta if you're not going to order the move list, since in the worst case you will be searching a plain minimax tree (i.e. no cut-offs). So that would be a good place to start.

One very easy way to get some kind of move ordering is to implement iterative deepening in your search. This is simply the process of starting the search with maxDepth = 1, then repeating it at maxDepth = 2, then maxDepth = 3, and so on, each time reusing the results of the previous iteration. After each iteration you can order the list of available moves from the root position so that the one with the highest score is searched first on the next iteration, saving a lot of time by reaching a cut-off as early as possible. Very nifty!
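The iterative-deepening idea can be sketched in a few lines. Since the original code isn't posted, the snippet below uses a toy "take a number" game as a stand-in (players alternately take a number from a pool; the score is taker 1's total minus taker 2's); all names here are illustrative, and the root-move re-ordering is the part that carries over to the tic-tac-toe search:

```python
def alphabeta(pool, score, alpha, beta, depth, maximizing):
    """Plain alpha-beta over the toy game; `score` is the running
    total from the maximizing player's point of view."""
    if depth == 0 or not pool:
        return score
    best = float('-inf') if maximizing else float('inf')
    for i, n in enumerate(pool):
        rest = pool[:i] + pool[i + 1:]
        delta = n if maximizing else -n
        val = alphabeta(rest, score + delta, alpha, beta,
                        depth - 1, not maximizing)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if alpha >= beta:
            break  # cut-off
    return best

def iterative_deepening(pool, max_depth):
    moves = list(range(len(pool)))  # root moves, as indices into the pool
    best_move = None
    for depth in range(1, max_depth + 1):
        scores = {}
        for i in moves:
            rest = pool[:i] + pool[i + 1:]
            scores[i] = alphabeta(rest, pool[i], float('-inf'),
                                  float('inf'), depth - 1, False)
        # Re-order root moves so the highest-scoring one from this
        # iteration is searched first on the next, deeper, iteration.
        moves.sort(key=lambda i: scores[i], reverse=True)
        best_move = moves[0]
    return best_move

print(iterative_deepening([3, 1, 2], 3))  # 0 (take the 3 first)
```

In a real engine the shallow iterations cost almost nothing compared to the final deep one, so the ordering information is essentially free.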

It is almost completely pointless to prune the game tree with alpha-beta if you're not going to order the move list, since in the worst case you will be searching a plain minimax tree (i.e. no cut-offs).

Alpha-beta still does better than minimax even with a random move ordering. You don't get worst-case performance until the move ordering is maximally bad (i.e., exactly the reverse of the optimal ordering).

With random move ordering you could still achieve a cutoff just by chance. But yes, to get MAXIMUM benefit from alpha-beta you need a perfect ordering.
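This is easy to demonstrate empirically. The sketch below (synthetic random game tree, illustrative names, not anyone's actual engine) counts leaves visited by the same alpha-beta search under best-first, random, and worst-first child ordering, using an exact-minimax oracle to construct the best and worst orders:

```python
import random

def minimax(node, maximizing):
    """Exact subtree value; used here only as an ordering oracle."""
    if not isinstance(node, list):
        return node
    vals = [minimax(c, not maximizing) for c in node]
    return max(vals) if maximizing else min(vals)

def alphabeta(node, alpha, beta, maximizing, order, leaves):
    if not isinstance(node, list):
        leaves[0] += 1  # count each leaf we actually evaluate
        return node
    best = float('-inf') if maximizing else float('inf')
    for child in order(node, maximizing):
        val = alphabeta(child, alpha, beta, not maximizing, order, leaves)
        if maximizing:
            best = max(best, val)
            alpha = max(alpha, best)
        else:
            best = min(best, val)
            beta = min(beta, best)
        if alpha >= beta:
            break  # cut-off
    return best

def best_first(node, maximizing):
    return sorted(node, key=lambda c: minimax(c, not maximizing),
                  reverse=maximizing)

def worst_first(node, maximizing):
    return sorted(node, key=lambda c: minimax(c, not maximizing),
                  reverse=not maximizing)

shuffler = random.Random(1)
def random_order(node, maximizing):
    kids = list(node)
    shuffler.shuffle(kids)
    return kids

tree_rng = random.Random(0)
def make_tree(branching, depth):
    if depth == 0:
        return tree_rng.random()
    return [make_tree(branching, depth - 1) for _ in range(branching)]

tree = make_tree(4, 6)  # 4^6 = 4096 leaves
for name, order in [("best", best_first), ("random", random_order),
                    ("worst", worst_first)]:
    leaves = [0]
    alphabeta(tree, float('-inf'), float('inf'), True, order, leaves)
    print(name, leaves[0])
```

With perfect ordering the leaf count approaches the Knuth–Moore bound of roughly b^(d/2) + b^(d/2) − 1 (here 127 of 4096); random ordering lands well between that and the full tree, which is exactly the point made above.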