Suppose you have an object *x, and you call a method x->meth(args...) (I'll be using C++-style pointers to objects for the examples, as there the difference is most evident). Suppose further that x was declared as X* x, for some polymorphic base class X. Then the compiler (usually) cannot know at compile time which method meth should be called!

Instead, it uses dynamic binding: the object *x carries enough information for the correct method x->meth() to be found at run time. In C++, this typically takes place by storing in *x a pointer to its class's vtable, a table of function pointers, one per virtual method. (Organising a vtable in the presence of multiple inheritance is decidedly non-trivial!)
The compiler produces code which, at run time, fetches the function pointer for meth from the vtable of *x and calls it. In particular, it cannot inline the call, since it cannot even know which function to inline!

Naturally, all this takes time, so dynamic binding is slower than static binding. Assuming the vtable model, the code first has to follow a pointer from the object to the vtable (which is not too bad, but it's still another memory access), and then call the method through a function pointer. Calls through function pointers are slow compared to direct function calls: the CPU cannot predict the target as easily, and a mispredicted indirect call stalls the pipeline. Still, if your problem genuinely needs dynamic binding, there'd be no way around some form of function pointer -- you're in inherently slower territory.

Some programming languages employ dynamic binding exclusively: Smalltalk, Perl (see the discussion of @ISA), Python. Of course, any language with a compiler that can deduce types may sometimes be able to replace dynamic binding with static binding. For instance, in the following C++ code:

the compiler could prove which function the method call (*) must invoke (presumably Y::colour or, if Y doesn't define one, X::colour by inheritance) and bind that method at compile time. But it's not required to do so.