What happens when you call C.DoIt<string>? Many people expect “string” to be printed, when in fact “everything else” is always printed, no matter what T is.

The C# specification says that when you have a choice between calling ReallyDoIt<string>(string) and ReallyDoIt(string) – that is, when the choice is between two methods that have identical signatures, but one gets that signature via generic substitution – then we pick the “natural” signature over the “substituted” signature. Why don’t we do that in this case?

Because that’s not the choice that is presented. If you had said

ReallyDoIt("hello world");

then we would pick the “natural” version. But you didn’t pass something known to the compiler to be a string. You passed something known to be a T, an unconstrained type parameter, and hence it could be anything. So, the overload resolution algorithm reasons, is there a method that can always take anything? Yes, there is.

This illustrates that generics in C# are not like templates in C++. You can think of templates as a fancy-pants search-and-replace mechanism. When you say DoIt<string> in a template, the compiler conceptually searches out all uses of “T”, replaces them with “string”, and then compiles the resulting source code. Overload resolution proceeds with the substituted type arguments known, and the generated code then reflects the results of that overload resolution.

That’s not how generic types work; generic types are, well, generic. We do the overload resolution once and bake in the result. We do not change it at runtime when someone, possibly in an entirely different assembly, uses string as a type argument to the method. The IL we’ve generated for the generic type already has the method it’s going to call picked out. The jitter does not say “well, I happen to know that if we asked the C# compiler to execute right now with this additional information then it would have picked a different overload. Let me rewrite the generated code to ignore the code that the C# compiler originally generated...” The jitter knows nothing about the rules of C#.

When the compiler generates the code for the call to ReallyDoIt, it picks the object version because that’s the best it can do. If someone calls this with a string, then it still goes to the object version.

Now, if you do want overload resolution to be re-executed at runtime based on the runtime types of the arguments, we can do that for you; that’s what the new “dynamic” feature does in C# 4.0. Just replace “object” with “dynamic” and when you make a call involving that object, we’ll run the overload resolution algorithm at runtime and dynamically spit out code that calls the method that the compiler would have picked, had it known all the runtime types at compile time.

Using dynamic in this way is how I now choose to implement the C# equivalent of partial specialization. Prior to dynamic, the only alternatives were to use extension methods (which was fragile and confusing) or implement your own dynamic dispatch using delegates.

The constraint on U in D’s version of M<U> is the sealed type string, which would not be legal in any other situation. This oddity hits some corner cases in the CLR and causes a great deal of difficulty in type analysis and code gen; I’ll blog someday about how I’ve screwed it up multiple times. — Eric

dynamic will NOT come close to duplicating what C++ templates do….it is something completely different [not better or worse…but very different]

C++ templates do ALL of their work at compile time. There are NO runtime decisions made. So from a performance perspective, templates and (pre-dynamic) generics have very similar performance characteristics.

When dynamic is used, all of the work gets moved into the runtime. Of course much will depend on the internals of C#/CLR and other aspects will depend on usage; but just consider what would happen (to performance) if a person writes image processing code where each pixel gets passed to a method as a parameter that is "dynamic"…..

I am just waiting until we see an "explosion" of places where dynamic is used because it "seemed like a good idea", and the application suffers enough that my company gets called in to address "performance issues".

TheCPUWizard said: So from a performance perspective, templates and (pre-dynamic) generics have very similar performance characteristics.

This is definitely not true. An array type meets the constraints for IEnumerable. If I write a foreach loop against a type parameter (constrained to IEnumerable), the behavior is very different. With templates, I get efficient array iteration with no bounds checking. With generics, I get virtual calls to MoveNext and Current. If the template wants to do something special for arrays and std::vector (because it knows the data are stored contiguously) it can. Generics can’t.

.NET manages to do spot optimization of generics only when working with type parameters which are value types, otherwise the optimizer is pretty much helpless.

And this doesn’t even begin to address the things that parameters that aren’t types enable for templates… many of which are huge perf wins.

Of course, templates being fully instantiated at compile-time, don’t provide any mechanism for extensibility. Dynamic does (at a price of course).

Hubs repeat all incoming traffic to all other ports. Bridging hubs are a little better: they work at the packet level and can queue up traffic when there’s a collision instead of discarding it (needed, say, for translating between segments with different speeds).

Switches look at the layer 2 (ethernet, typically) destination address to decide which port to forward a packet through. Usually the list of addresses reachable via each port is learned automatically by observing traffic. Packets where the connectivity of the destination isn’t known are flooded to all ports, like the hub.

Routers look at the layer 3 (IP, typically) destination address to decide which port to forward a packet through. The list of addresses (grouped into binary blocks) reachable via each port is managed either through manual configuration or exchange with peer routers. Packets where the connectivity of the destination isn’t known are dropped.

So was the choice to give them a syntax very much like templates in C++ done out of a desire to deliberately confuse programmers?

Something I’ve picked up in my years as a developer is to not create something, e.g. an API, that looks and feels a lot like an existing something else which is similar (call it "A"), but whose use cases and/or implementation is actually different in subtle and confusing ways. Either make it so the implementation is not different from "A" at all, or make it so that the implementation notices and squawks loudly if you try to use it as if it were an "A", or if none of those are possible make it so that it looks as different from "A" as you possibly can.

Ben Voight wrote: "An array type meets the constraints for IEnumerable. If I write a foreach loop against a type parameter (constrained to IEnumerable), the behavior is very different. With templates, I get efficient array iteration with no bounds checking. With generics, I get virtual calls to MoveNext and Current. If the template wants to do something special for arrays and std::vector (because it knows the data are stored contiguously) it can. Generics can’t."

You are 100% correct. Your example brings in other differences between C++ and C# (e.g. how arrays are handled, differences between value and reference types). Also the fact that std::vector is itself a template, and C++ can do "wonderful" things when templates are combined.

On the other hand, if you use an example where you have (pseudocode)

class Base { public: virtual void f(); };

class Leaf : public Base { public: void f() override; };

and create a template <Leaf &> or generic <Leaf>, then the effects of calling "f()" will be identical between the two (both will be virtual calls)

The main point I was trying to illustrate is that NEITHER C++ templates nor C# generics make any "decisions" at execution time of a method.

Steven said: "C# has other confusing areas. For instance, C# uses the C++ ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C++ programmers."

Why not: "C++ has other confusing areas. For instance, C++ uses the C# ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C# programmers."

Over the past 37 years, I have programmed in dozens of "high level" languages, not to mention close to 50 different assembly languages. I really hate to think how convoluted things would be if each environment avoided syntactical constructs (e.g. assembly language mnemonics) that had previously been used, every time there was a difference in behaviour……

‘Why not: "C++ has other confusing areas. For instance, C++ uses the C# ~destructor syntax, but the behavior is (as we know) very different. This is very confusing for C# programmers."’

Uh, because C++ was written first, and C#’s syntax was clearly based on C++, not vice-versa. C++ could not have been written differently in order to avoid confusion with C#, as C# syntax had not been invented yet. It has something to do with the linearity of time, cause and effect, and other related concepts.

Karellen, the comment you quoted had nothing to do with decisions made at the time the language was authored. It had everything to do with the experience of a person who has worked in one of the languages and sees the other for the first time.

Given recent (past 5 year) trends, I am willing to bet that there are significantly more C# programmers who have never analyzed a single line of C++ code than there are C++ programmers who have never analyzed a single line of C# code.

And I think I know the answer Eric would give to Karellen: Contrary to (still, unfortunately) popular belief, the C# language designers were not attempting to replicate the functionality of C++ or even use it as a model. They were creating a new language, period, and any resemblance to any other language is purely coincidental.

C# generics have the same syntax as Java generics, which work similarly. So what might be confusing to one class of users is perfectly natural to others. The bottom line is that the world at large does not necessarily see everything the same way you do – most of us have learned to deal with that.

Ben Voigt [C++ MVP] said: "With templates, I get efficient array iteration with no bounds checking. With generics, I get virtual calls to MoveNext and Current."

I wouldn’t assume that. If the JIT can figure out statically what the type is, it may in theory use static calls instead of virtual, and it may even do some inlining. And if it doesn’t do it today, it may do in a future version. So the architecture of generics doesn’t rule out optimisation. On the contrary, by capturing a high-level description of our intentions, it potentially has *greater* scope for optimisation, by doing it later, when the maximum amount of information is available.

(I don’t use Java much in anger, but I believe Sun has been very aggressive with this kind of thing in the ‘server’ flavour of their VM.)

Re: the syntax debate, regardless of the (very different) details of how generics/templates work in Java, C# and C++, there is one thing they all have in common. They all need a way to specify a list of type parameters or arguments: <T1, T2> or <int, double> – so naturally they all use the same syntax for that basic idea. It would have been perverse, given C++ is the most widely used generic system and is their closest syntactical ancestor language, for them to use different syntaxes for a particular small piece of the puzzle that happens to be common to them all, even though the rest of it differs. I wouldn’t expect C# to invent a different symbol for addition because its operator overloading works differently. Also note that neither C# nor Java uses the keyword ‘template’ to introduce a declaration, so they don’t pretend to be precisely the same as templates.

I think you’ve missed the core difference between a template and a generic — at least for C++ people. The parameter of a template can be anything — a number, a type, a global variable, etc. And it can apply to a free function or a class. This allows us to do computations at compile time. See boost and loki for examples of the power this allows us — inline LL parsers, simple lambda functions, and precompiled regular expressions for just a few examples. These things are not possible using generics — they require other language features. They aren’t necessarily better in all cases, but they are certainly different at a more fundamental level than pointed out in the article.

So while some might enjoy the flexibility of a language-within-a-language that templates offer, others dislike the unmaintainability and undecipherable error messages that accompany it. I think C# generics strike a happy medium between C++ templates and Java generics.

> I’m still not super-clear in my head on the differences between a hub, router and switch and how it relates to the gnomes that live inside of each.

You have a right to be confused: despite the (completely valid) definitions given above, they keep changing what different bits of equipment do.

So, theoretically, Hubs are layer 1 (hardware) level devices, Switches are layer 2, and Routers are layer 3. Which doesn’t really account for Layer 3 Switches, which are like Routers but faster. Once upon a time, there were just Hubs and Routers. Then someone came up with Switches, which were designed to make networks more efficient by only sending data where it was needed. Then ‘Switch’ became a marketing term and at that point you can kiss any technical validity goodbye.

Regarding the gnomes: Hub gnomes have been lobotomised, Router gnomes are a little slow, but have huge memories. Switch gnomes are hyperactive and schizophrenic.

@Joel Redman – I think Eric does explain that essential difference, in this quote:

"When you say DoIt<string> in a template, the compiler conceptually searches out all uses of “T”, replaces them with “string”, and then compiles the resulting source code."

The great value of CLR generics is that they allow us to define a generic type or method and expose it to other languages, without those languages needing to reparse the code of the language in which the generic entity was defined. So C++/CLI, C#, VB.NET, F# and countless other languages can share the same generic libraries.

And although I think C++ compile time metaprogramming is great, like many things in C++ it is mostly great relative to the other limitations of C++. std::tr1::shared_ptr is great if you don’t have GC! And similarly, C++ doesn’t have a standard API for type reflection or code generation.

The CLR has an extremely rich and powerful model for doing reflection and code generation at runtime, and the "wizards" in the C# world frequently come up with new ways to perform "magic" using this, and (perhaps surprisingly) still maintaining static type safety and high performance. It allows them to effectively extend the language, and blur the boundary between compilation and interpretation, much as templates do for C++.

So despite C# lacking the full power of templates, it is no less of a playground for advanced library authors who enjoy generating and picking through incomprehensible error messages.

He does say that, but completely misses the major implications of it. The template is fundamentally more powerful than the C# generic, but at the same time, it cannot be used cross-language. It isn’t better or worse.

Furthermore, shared_ptr is more suitable for C++ applications than mark-and-sweep garbage collection since it is completely synchronous. Every scope-bound object in the system has a predictable lifetime, and when it goes out of scope or gets deleted, cleanup is done immediately. While this may not be quite as efficient overall as the C# model, for many applications it is preferable to be predictable. Real-time apps and OSes come to mind. shared_ptr is a compromise in this direction, and certainly has its place. Again, for some apps, better; for others, worse.

That’s why we like .NET. It allows us to use the paradigms we need to get the job done.

"C# generics have the same syntax as Java generics, which work similarly." – uh? Java generics and C# generics work completely differently!

Java generics are purely a compile time illusion: there is no such thing, at runtime, as an ArrayList<String> – just an ArrayList. The compiler keeps track of the compile time types and gives you some helpful error messages if you bypass type-safety, but you can bypass the error messages – e.g. by casting your ArrayList<String> to an ArrayList and then to an ArrayList<Object>, and putting in something other than strings – and they will be *accepted* at runtime, because the runtime isn’t even aware that ArrayList was generic in the first place!

Also, Java allows types to be partially unbound, so ArrayList<?> is also a legal type which you can have instances of and perform operations on.

C# does not have any concept of List<?> at compile time except as an argument to typeof(). Not only that, but it doesn’t have a concept of List<?> at runtime, either. Any given instance of List<T> has a *particular* T that it applies to. Attempting to cast that list to a List with a different T would fail. Attempting to put any kind of object other than String into a List<String> would be impossible (because to do so you’d have to write code that would fail with a class cast exception before it even got to the Add() call).

Java and C# generics are just as different as C# generics are from C++ templates. They use the same syntax because they have more in common than different. And that’s as it should be. Sure, it’s confusing when you encounter one of the comparatively obscure things that they do differently. But that’s outweighed by the fact that you can get a general sense of the code based on what they are doing the same.

A halfway decent programmer learns a new language, initially, by learning how to map its syntax to concepts he or she is familiar with from other languages, ANYWAY. Even if C# had decided to say that generics should be expressed as List~T~, C++ programmers coming to C# would still mentally map that to List<T> to start with to be able to get the basic idea of what it’s for. And only AFTER that get a deep enough understanding to grasp the subtle differences.

Templates like in C++ are a bit more than fancy-pants search&replace you know 🙂

I’m just writing to ask you:

What has stopped the C#/CLR designers from actually allowing specialization of generic methods? Having method<T>(T arg), method(string arg) and method(int arg), a compiler knows about ALL possible overloads. The easiest way would be to emit a prologue for method<T>(T) that checks "if arg is string", "if arg is int", and thus calls the appropriate ‘overload’ at runtime even if someone calls method((object)string). Now programmers must do all that by hand if they want any specialization-like features :/

IMHO, it is not the C# compiler but rather the CLR that should perform such a lookup and choose the right, tightest match at runtime; but with all the other languages on the platform, I can understand the option taken there. But why not in C#? Rarely do you have more than 1-6 specializations, so it is not that big a performance hit at all.

@ComeOn

Agreed with that. Its very name and symbol claim that. An early marketing joke was that C# is actually C++++ (two rows with ++). The evolution of the language shows the heavy impact of changes made to the independently evolving C++ and Java. Heh, I even remember that when C# was born, it was claimed that it was being created in such a way that existing C++ and Java programmers would not notice much difference and could start coding in C# right away with little fuss. Maybe it’s still written somewhere on Microsoft’s pages?

@Joel Redman – this is now off topic and probably too late for you to read it, but I gotta take issue with this:

"Furthermore, shared_ptr is more suitable for C++ applications than mark-and-sweep garbage collection since it is completely synchronous. Every scope-bound object in the system has a predictable lifetime, and when it goes out of scope or gets deleted, cleanup is done immediately."

I hear/read that a lot, and I don’t agree. When you use shared_ptr (or any kind of ref-counting smart ptr) you’re doing so because you *don’t* know locally whether it is time to destroy the object. Therefore when the program exits the scope of the shared_ptr, you *don’t* know for sure that the object has been destroyed. So in practice there’s nothing deterministic about it. It may happen sooner than it would with real GC, but it’s no more deterministic in reality. If you want to know for sure that an object is definitely destroyed when you exit a specific scope, then use a normal variable or auto_ptr. Don’t use shared_ptr.

C++/CLI is a great resource for clarifying the distinction and integration point between RAII and GC, because it quite beautifully combines the two in a way that makes me rather envious. I wish C# could write Dispose methods for me! So far it only has the ‘using’ statement to make it easy to consume disposable objects, but nothing much to assist with implementing them.

Whoa. Where do you see sensible RAII in C#? In using/Dispose? The ‘dispose’ thing is a patch for the nonexistence of destructors, and ‘using’ for the lack of deterministic cleanup, and it does no more than call Dispose anyway. I’ve lately heard a lot from other programmers of using-this, using-that, use-using-don’t-call-dispose-manually; they really think that using() is doing some cleanup and frees memory. Please, finally stop publishing this rubbish; people may really start believing in it.