In the minimax tree for tic-tac-toe, every leaf has a value of -1, 0, or +1. Min always selects the minimum of its children's values, i.e. -1 where it can; Max selects the maximum of what is left after Min has made its best reply, i.e. 0 or +1. So at the root (the current game state) Max might see child values like -1, -1, 0, 0, +1, +1 and picks the first +1, because that move guarantees a win no matter what the opponent does. The tree has already worked out "if the opponent plays this, I can answer with that", always assuming the opponent plays the strongest counter-move.

Note that there is no summation of values along a path, nothing like adding up +3, +4, +1 so that Max selects +3. That kind of scoring belongs to a different approach, used when you don't have time to evaluate the whole tree: you expand the tree only to some fixed depth (say the 4th level) and then rank the resulting states with a heuristic, e.g. 3 rows in which I can still complete three X's gives a score of +1000, 2 rows = +100, 1 row = +10, with negative values for the opponent's positions. That produces leaf values like +1200 or -2300, from which you select the most favorable move. Such a player is not perfect and can lose: it has not evaluated all the possibilities, it has only done a limited amount of computation to find some reasonable move in limited time.
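The value-propagation described above (leaves scored -1/0/+1, Min taking minimums, Max taking maximums) can be sketched roughly like this. This is an illustration of the idea, not the answer's original code; the board representation (a list of 9 cells holding 'X', 'O', or ' ') is my own assumption, with 'X' as the maximizing player:

```python
# All 8 winning lines on a 3x3 board (rows, columns, diagonals).
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has a completed line, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, to_move):
    """Value of `board` with `to_move` to play: +1, 0, or -1."""
    w = winner(board)
    if w == 'X':
        return +1          # leaf: Max has won
    if w == 'O':
        return -1          # leaf: Min has won
    if ' ' not in board:
        return 0           # leaf: board full, draw
    # Try every empty square, recurse, then take max (X) or min (O).
    scores = []
    for i in range(9):
        if board[i] == ' ':
            board[i] = to_move
            scores.append(minimax(board, 'O' if to_move == 'X' else 'X'))
            board[i] = ' '  # undo the move
    return max(scores) if to_move == 'X' else min(scores)

# From the empty board, perfect play by both sides is a draw:
print(minimax([' '] * 9, 'X'))  # → 0
```

Note that values are only compared, never summed: a parent's value is exactly the min or max of its children, which is why the root ends up with one of -1, 0, +1 rather than an accumulated score.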

This AI is a perfect player, i.e. it can never lose: it either wins or draws the match.

Source: The code isn't really hard. The function names are self-explanatory, and comments are added wherever necessary. I've assumed that player 2 is the human playing the game and player 1 is the computer. If you want to see the actual calculations, just uncomment the commented lines.

Update:
Here is an improved version. We don't need to keep evaluating once we have already found a winning move, so we skip the rest of the evaluations for a state as soon as one of its children returns +1 (for Max) or -1 (for Min).