Still more on virtual by default…

In a bid to keep my blog hits high, I've decided to revisit this again.

The comments (I read all comments, though I don't respond to all of them) have been fairly split.

There is one group that agrees with me, and another group that is opposed to my perspective. I'd like to address those who disagree with me, as there are a few points I didn't cover last time.

I do want to state up front that this is an area in which there is room for honest disagreement.

I think the point I didn't make very well earlier was this: one of the reasons we're somewhat - okay, perhaps “paranoid” is too strong a term, but it has the right “feel” - is that if we shipped a virtual method on a class but didn't support overriding it, there's a likelihood that it will break in the future. (How likely that is makes an interesting side discussion; some may argue that it's very likely, others that it's not likely at all.)

If that break happens, I don't expect the customer response to be, “Oh, it broke, so I guess I shouldn't do that.” I expect it to be more along the lines of “Why didn't Microsoft build this thing correctly in the first place?” - which is why we work very hard to avoid that situation.

There's one other point I'd like to raise. When you look at a class and there's a method that isn't virtual, you rarely have any idea why it isn't virtual. It could be that the designer just didn't want to support it as a virtual method. But it could also be that making it virtual would raise a security issue, or cause your cat's fur to fall out.

It’s true that in your example the customer made a poor choice of class to inherit from. However, I remain unconvinced that it’s a good idea to force customers, who have made the correct choice to inherit from your String class, to re-implement code – which you have tested and know to work correctly – because you have made the functions final to protect yourself from the minority of customers who don’t know what they’re doing.

One of the beauties of OO programming is code re-use through inheritance, but to achieve it programmers have to actually let other programmers re-use the code. That’s why I say virtual by default and only make functions final if you *know* there’s a problem with overriding them.

Having said that, I do appreciate the arguments put forward by the "other side", I’m just not convinced by them.

I’ve been thinking lots more about this, and think the two camps are splitting because of two very separate underlying issues:

a) protection/freedom for the coder

b) inheritance as a code extension tool

— a —

won’t go over this again, lots of people have said lots, but it’s a bit of a religious issue… personally, if I’ve written a library, I’d MUCH prefer to be able to say "you have to jump through a few hoops to use it, but it will NEVER fall over" – than to say "yeah, it’ll do whatever you want – just be very careful"… in my experience your average programmer, or even your great programmer, just ain’t always careful!! – but I can see people might have greater faith in their perfection than I have in mine 🙂

— b —

inheritance is a VERY poor extensibility mechanism! – the way people sound like they are using it is quite scary. There are many, many reasons, but here are two:

the conceptual – "loose coupling is good" – we’ve all heard it, we all agree? Well, inheritance is just about the tightest coupling you can envisage. You have suddenly not just EXPOSED the entire inheritance tree, so anyone can do very bad things if they aren’t careful, but more importantly, you are now DEPENDENT on it! A simple example: you are using a workflow product. You have an object called workitem exposed to you, which you inherited from. Quick, sure… but now your company makes a corporate decision to standardise the workflow system it uses across all its products (btw, this is a real example, which happened in a previous place where I was architect) – anyway, you are royally stuffed… you just can’t do it. Suppose instead your objects had just had the corresponding workitem object inside YOUR base object (as fortunately was the case) – then changing to a different workflow product is just a change of a few lines, and it all magically works…
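That containment approach can be sketched in a few lines – here in Java, with invented names (VendorWorkItem standing in for the workflow product's class, OurWorkItem for your own base object):

```java
// Hypothetical vendor class we do not control.
class VendorWorkItem {
    String vendorStatus() { return "OPEN"; }
}

// Our own base object: it CONTAINS the vendor's work item
// instead of inheriting from it.
class OurWorkItem {
    private final VendorWorkItem inner = new VendorWorkItem();

    // Only the operations we actually need are exposed,
    // in our own vocabulary.
    String status() { return inner.vendorStatus(); }
}
```

Swapping to a different workflow product then means rewriting only the private field and the few delegating methods; everything written against OurWorkItem is untouched.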

This may seem an extreme example, but at every level the principle is the same – the more we make and use simple, clean, single level interfaces, the more robust, extensible, reliable and maintainable our system is. 10% extra effort early on for 10000% gain in maintainability 2 years into the future!

But another example of why containment is a much better extensibility method is actually more relevant to this discussion, and goes straight to the issue of overriding individual functions.

Suppose we make a list box to store Employees. We call it EmployeeListBox. Now – you are the designer. You have two quick choices: 1) inherit from ListBox. Yay – easy, we only have to write one function – showEmployees(list)… easy. 2) inherit from Control, and contain a ListBox. Ok – a few more clicks – we need to write four or five properties that expose the underlying listbox properties… much more work, right?

This is used all the way through the system. And this is what happens (and this is also a real example – unfortunately, in this case we did it the wrong way, and inherited):

anyway… first, the system crash in production. Someone along the way was building a summary list and noticed the employee summary was similar… so what they did was bypass the ShowEmployee function, and DIRECTLY ADDED TO THE LISTBOX… sounds alright so far?

well, months later, we decided that every time you see a list of employees, you can right-click and edit one. So we put it in, and changed the show employee function to associate the employee object with the tag. And of course the click event "knows" that the tag should be an employee, right? Except on this one machine the application falls over in production with a null reference exception, because no one had a clue that this person had done this! If we’d used containment like we should have, there would have been no way to directly add an item to the listbox, and everything would have been happy.
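A minimal sketch of the containment version – in Java, with invented stand-in classes rather than real UI controls: because the inner listbox is private, the only way to add a row is through showEmployee, so the "every row's tag holds an Employee" invariant cannot be bypassed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical stand-ins for the real UI classes.
class ListItem {
    final String text;
    final Object tag; // the "tag" the click handler relies on
    ListItem(String text, Object tag) { this.text = text; this.tag = tag; }
}

class ListBox {
    final List<ListItem> items = new ArrayList<>();
    void addItem(ListItem item) { items.add(item); }
}

class Employee {
    final String name;
    Employee(String name) { this.name = name; }
}

// Containment version: the inner ListBox is private, so callers
// CANNOT bypass showEmployee and sneak in untagged rows.
class EmployeeListBox {
    private final ListBox inner = new ListBox();

    void showEmployee(Employee e) {
        // The invariant is enforced in exactly one place:
        // every row carries its Employee in the tag.
        inner.addItem(new ListItem(e.name, e));
    }

    Employee employeeAt(int row) {
        return (Employee) inner.items.get(row).tag; // always safe
    }
}
```

Had EmployeeListBox inherited from ListBox instead, addItem would have been public, and the production crash above becomes possible again.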

the second example, with the same listbox, and just as real – we decided we wanted a HEADER on the listbox everywhere it appears. Oh s*** – there is no way… I mean maybe we could override the paint, and resize the painted area or something… (I’m not serious, of course) – the only way we could see of doing it, and what we actually did, was to change our control so that it inherited from Control and just contained a ListBox – exactly what it should have been in the first place – and then the header was just a matter of adding a label. Unfortunately, this BREAKS everything that referenced anything inherited from listbox – fortunately for us we actually do have a pretty good team of coders, so in this case it was only one or two places – but it could have been a lot worse.

Basically, when it really comes down to it – when you are trying to EXTEND THE FUNCTIONALITY OF A CLASS – in 98% of cases containment is vastly better than inheritance any way you care to look at it. It just takes slightly more thought when you are building.

Very good arguments, but you are sketching the library coding scenario. You create something which is for others to use.

Coming from a VB6 world where containment is the only option, I am used to thinking in containment. Or more correctly, I think in interfaces and signatures. I design and code my objects from what their clients need. In your case with the listbox I would only add the members needed to use it.

But once you are inside your library, there is a whole different world. You want short, clear, readable code without duplication. Inheritance is a wonderful duplication remover.

Whenever you have a Thingy1 inside one container and a Thingy2 inside another and you see they share common functionality, what do you do? If they are internal to your library, then no harm is done when you introduce Thingy, move the shared code to it and let both Thingy1 and Thingy2 inherit from it.
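As a tiny illustration with those same made-up names – sketched in Java, where the inherited method is overridable by default: the shared code moves into Thingy, both classes inherit it, and the duplication disappears.

```java
// Shared behaviour extracted into a common base class.
abstract class Thingy {
    // Formerly duplicated in both Thingy1 and Thingy2.
    int doubled(int x) { return 2 * x; }
}

class Thingy1 extends Thingy { }
class Thingy2 extends Thingy { }
```

Because both classes are internal to the library, the refactoring is invisible to library users; no published contract changes.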

In this application coding scenario, the nice restrictions for the library coder are really a PITA.

But why waste more energy on this issue? The C# team will just keep on saying: "Well, the crowd divides cleanly into two camps. We think one is safer than the other, so we will keep C# as it is."

I *love* to have to declare both virtual and override. It catches typos that would otherwise lead to late nights of frustrated debugging. The Java way makes it far, far too easy for me to type "MySerailizeMethod" and then wonder why my override isn’t getting called. Or to rename the method in my base class and somehow miss renaming it in all the descendants.
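The hazard is easy to reproduce in a virtual-by-default language such as Java (class and method names invented for the illustration): the misspelled method compiles cleanly as a brand-new method, and the base version keeps getting called.

```java
class Message {
    String serialize() { return "base"; }
}

class FancyMessage extends Message {
    // Typo: this declares a brand-new method instead of overriding
    // serialize(), and nothing at compile time points it out.
    String serailize() { return "fancy"; }
}
```

Calling serialize() through a Message reference to a FancyMessage still returns "base", and the author is left wondering why the override never runs. A mandatory override keyword (or Java's later @Override annotation) turns this silent bug into a compile-time error.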

No, C#’s use of virtual and override is exactly right. (Maybe it’s because they copied it from Delphi!)

I never meant to remove both, just the virtual part. The compiler will still warn when you try to override something which does not exist.

So:

class MyCommand
{
    public void Execute()
    {
    }
}

class MyDerivedCommand : MyCommand
{
    public override void Execute()
    {
        // this will override
    }

    public override void Excecute()
    {
        // typo, and will cause a warning/error
    }

    public override void NoSuchMethod()
    {
        // this will not work; there is nothing to override
    }
}

This would bring us to middle ground. We can have virtual by default, and fewer things will break by default, because nothing will override by default. It is not as secure as final by default, but I still feel the author has the responsibility to seal off dangerous methods.

Best of all would be to let this be a tool support thing: Go to Tools – Options – Languages – C# and select Virtual by default.

There can be at least three implementations:

1) Ditch final by default. When a new method is created and Virtual by default is *not* selected, the tool inserts the sealed keyword

2) Keep final by default. The tool will insert virtual on new base methods.

3) Keep final by default. The tool inserts virtual at the proper place when you override a method not marked virtual.

you realise of course that there is a vast implementation difference between virtual and non-virtual (what you are calling final)? It’s not just that the compiler puts a boolean on the class saying "this can be overridden" or "this can’t".

virtual functions are a pointer indirection – i.e. when you declare something "virtual", it adds an entry to the virtual table. Calling that function looks up the virtual table, gets the pointer to the function it needs to call, and calls it.

Most "normal" – ie – non-virtual functions – are just called directly – ie the compiler just inserts a "call" to the address of the function.

Now, I know performance considerations are secondary in this new world, but there are some performance implications – not so much in how slow the function call itself can be, but in the power you are giving to the optimiser. It can directly inline a lot of normal function calls, but couldn’t possibly do that with virtual functions, because it doesn’t know which function is going to be called…
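The runtime indirection is visible at the language level. In this Java sketch (invented names), the call site inside describe cannot know which name() will run until the actual object arrives, which is exactly why a compiler cannot statically resolve or inline a virtual call:

```java
class Shape {
    String name() { return "shape"; }
}

class Circle extends Shape {
    @Override
    String name() { return "circle"; }
}

class Dispatch {
    // The static type here is Shape; the target of s.name() is
    // chosen by the object's runtime type, not by the compiler.
    static String describe(Shape s) { return s.name(); }
}
```

The same call expression yields different targets for different runtime types, which is the whole point of virtual dispatch, and also the reason it costs the optimiser its ability to inline.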

However, it’s important to remember that we are not just twiddling a bit in some class table. When you say virtual vs final, you are actually talking about two completely different underlying code implementations, and there are a lot of potential physical implications, beyond even the important logical question of whether virtual or final should be the default.

Darren, even if I know nothing about compiler technology, I am aware that the things which are going on behind the scenes are different than what we see.

If not, there would not be any need for high level languages, would there?

But virtual tables and inlining are not the issues here; the language is.

I believe designing a language is similar to designing any library API. We design it to empower our users to do certain things easily and cleanly. After we have designed what we wish to accomplish, then we find a way to implement it.

A library user doesn’t care about how things are implemented. I couldn’t care less about how the classes in System.Collections or System.Xml are implemented. In the same way, I don’t care about how virtual methods are implemented or that non-virtual calls are easier to inline. I design my code after what I need. If that requires a virtual method, then it does.

To clarify a little: two of my suggestions were about tool support. Let’s say you have a base class CommandBase with the non-virtual Execute() method.

You then add a new class MyCommand : CommandBase and write:

public override void Execute()

The tool, when enabled, would either change or ask you to change the base to: public virtual void Execute()

I agree with those who appreciate the symmetry of the virtual and override keywords. I want my features to be explicit and this is one of the things I appreciate most about C#, coming from a C/C++ background. Please don’t make virtual a default!