In normal, statically typed programming, you say things like, "I know foo is an A, and bar is a B, so I know what type I'll get when I write foo + bar." This is pretty efficient: you know what type everything will be, the compiler does too, and if you're wrong there's a clear and efficient way of telling you so. (i.e., the compiler can just say, "type A is not type Q," and that will be that.)

However, suppose you want to say, "I know (or believe) foo has some member baz, and bar has some members twiddle and fiddle, but I don't know or care what type they actually are. Let me write a function that takes foo and bar and does something useful." In languages like C, you can...write something else, because C doesn't let you do anything about it. A is A, and nothing else is A, no matter how much it dreams of being A at night. (Unless you do some crazy type-punning. Don't do that.) In other, more developed OO languages, you have (up to) three solutions:

Interface-based programming, or thinking far enough ahead and giving your objects a relevant base type to do what you want them to. This is nice when possible, but may have problems when you can't modify the interface after the fact. (In these cases the Adapter design pattern may help.) Languages with interfaces also often have some kind of generics to apply the concept to containers and the like.

Duck-typing. This basically says, "Screw type-safety, figure it out when you try to run the code. If foo doesn't have a baz, deal with it." This is extremely common in scripting languages, and the source of endless headaches and bugs that I won't go into here. (I have to save something for my Python rant.)

Templates.

Templates are a solution to generic programming the same way a hammer is a solution to a Rubik's Cube. Take the following function:

Looks kind of like duck-typing, right? Sure...if duck-typing generated more code every time you called the function with a different set of types. Further, the header-source model of C++ becomes impossible with templates, since the compiler needs the definition (and not just the declaration) to see if it will compile or not. But, you know what? None of this is that bad. You could argue this is actually a good way of doing things and, on a good day, I might agree with you. No, let's go where it actually gets fucked up.

Numerical Templates

If you're a Java guy or girl (and I know you are), you're probably going, "Oh neat, templates are just like generics with less obvious error messages." And that means you've never seen templates on anything else, like, say, numbers.

Please pick up your jaw. It doesn't belong on the floor.

Sure, instead of typename, you can put size_t or bool or even MyMessedUpEnumType inside the angle brackets. Ostensibly for fixed-size arrays (or something?), numerical templates have a not-so-nice unintended feature: template metaprogramming. As in, you can write programs inside template declarations. Neat, sure, but unclear as hell. For instance, here's a way of computing factorials of compile-time constants:

Put Fact<5> in your program, and it'll just be replaced by 120 seamlessly. Put Fact<7000> and, well, it might compile. Eventually. (GCC won't compile it; it just grumbles about "maximum template instantiation depth" and mutters curses from the Old World under its breath.)

The great thing about numerical templates is that they play some part in making compiler errors completely unreadable. The compiler doesn't know you. It can't assume you won't template on an enumeration. It doesn't know your life and what you've been through. GCC can't afford to take you out to dinner and learn your favourite band. Hell, you might've even wanted the template not to compile...

SFINAE and type-traits

C++ has this neat (read: horrifying) idiom called "Substitution Failure Is Not An Error," or SFINAE for short. It basically says to the compiler, "I've defined this template in two (or more) ways. Figure out which one I mean." While (ostensibly) included for good reasons (for instance, you might want to handle pointer types and value types differently), it's usually used in ways that fit more into dynamically typed languages: determining attributes of types after the fact. Which, y'know, defeats the original purpose of templates.

But you know what? Compile-time duck-typing isn't the worst solution. It's strictly better than runtime duck-typing, even if you have to know medieval Cyrillic to understand why the compiler thinks you're wrong when you are. It's also more flexible (twitch) than Java-style generics in a lot of ways. Lastly, you have to remember who you're dealing with: C++ developers will run wild with whatever feature you give them. A way to make the compiler Turing-complete? That's a small price to pay.