Well, different authors might use them in different ways, but here's how I learned it...
"Average case" running time applies to a deterministic algorithm (quicksort is the most familiar example). It is the expected value of the running time when the input is drawn at random from some distribution over inputs of a given size (usually uniform).

"Expected" running time usually applies to a randomized algorithm (Karger's algorithm, for example). It is the expected value of the algorithm's running time on a particular input, where the expectation is over the algorithm's own random choices, not over the input. You might also speak of expected worst case running time, which is the maximum, over all inputs of a given size, of the expected running time.
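To make the distinction concrete, here's a minimal sketch in Python (the comparison-count cost model and the helper names are my own illustration, not something from the definitions above). The same quicksort skeleton is run two ways: once with a deterministic pivot rule averaged over random inputs (average case), and once with a randomized pivot rule on a single fixed input (expected time).

```python
import random

def comparisons(a, pick_pivot):
    """Comparison count for quicksort with a pluggable pivot rule.

    We charge len(a) - 1 comparisons per partitioning step, the
    standard cost model used when analyzing quicksort.
    """
    if len(a) <= 1:
        return 0
    pivot = a[pick_pivot(len(a))]
    less = [x for x in a if x < pivot]
    greater = [x for x in a if x > pivot]
    return len(a) - 1 + comparisons(less, pick_pivot) + comparisons(greater, pick_pivot)

n, trials = 200, 50
rng = random.Random(0)

# Average case: DETERMINISTIC pivot rule (always the first element);
# the randomness comes from the input (random permutations of 0..n-1).
avg_case = sum(
    comparisons(rng.sample(range(n), n), lambda m: 0) for _ in range(trials)
) / trials

# Expected running time: RANDOMIZED pivot rule, on one fixed (sorted)
# input; the randomness comes from the algorithm's own coin flips.
expected = sum(
    comparisons(list(range(n)), lambda m: rng.randrange(m)) for _ in range(trials)
) / trials

# On that same sorted input, the deterministic rule hits its worst
# case: n*(n-1)/2 = 19900 comparisons.
worst_det = comparisons(list(range(n)), lambda m: 0)
```

Both averages come out around 2·n·ln(n) (roughly 2100 comparisons for n = 200), while the deterministic rule on the sorted input pays the full quadratic cost, which is exactly the point: the sorted input is only "bad" for the deterministic algorithm, whereas the randomized algorithm's expected cost is small on every input.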