Meta

One of the other things that listening to Bjarne Stroustrup reminded me of is an idea that I’ve had kicking around in my head for quite some time about the way programming languages evolve. I can’t exactly quote Bjarne here, but I think he said something to the effect of, “One of the reasons I give this talk is because people still think of C++ as it existed back in 1986, not as it exists today.” Which reminded me of something I read a long time ago about avalanches and the way that they work.

The interesting thing about avalanches is that, should you ever be unlucky enough to be caught in one, they go through two distinct phases. In the first phase, while the avalanche is making its way down the slope, it behaves almost as if it were a liquid. Everything is moving, and it’s theoretically possible (if you don’t get bashed in the head by a rock or tree or something, and you can tell which way is “up”) to “swim” to the top of the avalanche and sort of float/surf on top of it. And it turns out that it’s pretty important to do this if you happen to be caught in an avalanche. Because once the avalanche hits the bottom of the slope and stops moving, it suddenly transitions to another state: a solid. At this point, if you aren’t at least partially free of the avalanche (i.e. on top of it), you’re totally screwed because you are now basically encased in a block of concrete. Not only will you be totally unable to move, but you’re quickly going to suffocate because there’s no way to get fresh air. Unless someone is close by with a locator and a shovel, you’re basically dead.

Programming languages, as far as I can tell, seem to evolve in a way similar to avalanches. Once a programming language “breaks free”–and, to be honest, most never do–it starts accelerating down the slope. At this point, the design of the language is very fluid as new developers pour into the space. And by “design of the language,” I don’t just mean the language spec but everything that makes up the practical design of a language, from the minute (coding standards, delimiter formatting, identifier casing) to the large-scale (best practices for APIs, componentization, etc.). Everything is shifting and changing and growing and it’s very exciting!

And then… boom! It all stops and freezes into place, just like that. Usually, this happens around maybe the second or third major release of the language, long enough for the initial kinks to get worked out and for things to cool and take shape. And the interesting thing about it is that it’s not so much that the language designers are done, as it is that from that point on the language design effectively becomes fixed in the minds of the public. There are a number of reasons, but I think it has to do with reaching a critical mass of things like books, blog posts, articles, samples, course materials, etc., and a critical mass of developers who were trained in a particular version of a language. Once enough of that stuff is out there and enough people have adopted a particular usage of the language, those usages effectively become the “standard” for the language.

And from that point on, I think, the language designer is essentially like those poor souls trapped at the bottom of the avalanche–they’re still alive and kicking (well, for a while, at least) but they increasingly find that they can’t really move anything. They can pump out new features and new ideas, but smaller and smaller slices of the developers in those languages are even aware of those features, much less willing to retrain (and rethink) to take advantage of them.

It makes me seriously wonder about all the work we did in VS 2008 to add things like LINQ to VB and C#. I suspect the .NET development avalanche largely came to rest around the time of VS 2005 (or maybe even VS 2003), and while I think a lot of high-end developers like and use LINQ, I don’t know that it ever penetrated the huge unwashed masses of .NET developers. I’m not saying we shouldn’t have done it–I think, at the very least, it helps push the language design conversation forward and influences the next generation of programming languages as they start their descent down the slopes–but just that I wonder.

And I also have to say that I’m very much a participant in this process. I originally learned C++ all by myself back in the ancient year of 1989 by buying myself Bjarne’s first C++ book and a copy of Zortech C++ for my IBM 386 PC. For a long, long time, C++ was effectively the C++ that I learned way back when C++ compilers were just front-ends for C compilers. Even with lots of exposure to more modern programming concepts while working on VB and so on, it’s taken me a long time to break the old habits and stretch within the C++ language. And, I have to admit, it’s really not a bad language once I did it. But I suspect I’m part of a somewhat small portion of the C++ market.

Anyway, all it really means is that I expectantly scan the metaphorical slopes, waiting for the large “BANG!” that will herald the descent of a new avalanche and the chance to try and surf it once again…

A little over a year ago, I asked “How on earth do normal people learn C++?” which reflected some of my frustration as I re-engaged with the language and tried to make sense of what “modern” C++ had become. Over time, I’ve found that as I’ve become more and more familiar with the language again, things have begun to make more sense. Then a couple of days ago I went to a talk by Bjarne Stroustrup (whose name, apparently, I have no hope of ever pronouncing correctly) and the secret of understanding C++ suddenly crystallized in my mind.

I have to say, I found the talk quite interesting, which was a huge accomplishment because: a) I usually don’t like sitting and listening to talks under the best of circumstances, and b) he was basically covering the “whys and wherefores” of C++, which is something I’m already fairly familiar with. However, in listening to the designer of C++ talk about his language, I was struck by a realization: the secret to understanding C++ is to think like the machine, not like the programmer.

You see, the fundamental mistake that most teachers make when teaching C++ to human beings is to teach C++ like they teach other programming languages. But with the exception of C, most modern programming languages are designed around hiding as many details of how things actually happen on the machine as possible. They’re designed to allow humans to explain to the computer what they want to do in a way that’s as close as possible to the way humans think (or, at least, how engineers think). And since C++ superficially looks like some of those languages, teachers just apply the same methodology. Hence, when you learn about classes, teachers tend to spend most of their time on the conceptual level, talking about inheritance and encapsulation and such.

But the way Bjarne talks about C++, it’s clear that everything C++ does is designed while thinking hard about the question, “How will this translate to the machine level?” This may be a completely obvious point for a language whose main purpose in life is systems programming, but I don’t think I’d ever really grokked how deeply that idea is baked into C++. And once I really looked at things that way, things made a lot more sense. Instead of teaching classes at just the conceptual level, you really need to teach classes at the implementation level for C++ to make sense.

For example, I’ve never been in a programming class that discussed how C++ classes are actually implemented at runtime using vtables and such. Instead, I had to learn all that on my own by implementing a programming language on the Common Language Runtime. The CLR hides a lot of the nitty-gritty of implementing inheritance from the C# and VB programmer, but the language implementer has to understand it at a fairly deep level to make sure they handle cross-language interop correctly. As such, I find myself continually falling back on my CLR experience when looking at C++ features and thinking, “How is this supposed to work?” I can’t imagine how people who haven’t had to confront these kinds of implementation-level details figure it out.

It makes me wonder if a proper C++ programming course would actually work in the opposite direction of how most classes (that I’ve seen) do it. Instead of starting at the conceptual level, start at the machine level. Here is a machine: CPU, registers, memory. Here’s how basic C++ expressions map to them. Here’s how basic C++ structures map to them. Here’s how you use those to build C++ classes and inheritance. And so on. By the time you got to move semantics vs. copy semantics, people might actually understand what you’re talking about.

Mary Jo Foley pointed today to a very interesting post by David Sobeski entitled Trust, Users and the Developer Division. Written from the perspective of a guy who was in the Windows division through the climactic shift to .NET, he brings up a lot of really good criticisms about exactly what went on during that time. (In fact, I was just wondering last night whether it’s time yet to sit down and write the “what I learned in the VB to VB.NET transition” blog post that I’ve been putting off for a long time.) But there was one really crucial point that he missed that I think made one of his central theses–that DevDiv managers went all Colonel Kurtz on developers and decided with .NET they were going to go build their own platform–a little on the unfair side. Somehow he, and most of the comments I’ve seen so far, have managed to overlook the monster that was striking fear into the hearts of the Microsoft management chain back then. The monster that was going to come along and destroy everything they’d worked so hard to build. Can no one remember? Has it really been so long since Java was relevant?

Yes, before Android and iOS and so on there was Java, and at the time it really scared the hell out of a lot of important and powerful people at Microsoft. Here was a programming layer that was supposedly going to: a) be free, b) be cross platform, c) not be controlled by Microsoft, and d) abstract away the underlying operating system. I mean, now we all can look back and see exactly how the dream of “write once, run anywhere” worked out (pace HTML5), but at the time Java looked like the Trojan Horse that was going to slip into Windows via the browser and hollow out the Windows ecosystem from the inside. In just a few years Scott McNealy was going to be standing over the corpse of Windows laughing in Bill Gates’ face. I was just a lowly functionary on the VB team at the time (OLE Automation, FTW!) but since I had some fingers in the VB runtime pie I got included on some email chains from some very high level people (not just in DevDiv!) talking about just how worried they were about what Java might do to Windows and how it might allow Sun to wrest control of the app market away from Microsoft.

(One interesting angle, too, about this was that since the Developer Division had a Java compiler project, they also got to see firsthand just how much better development could be on a modern runtime as opposed to the creaky, old, inconsistent Win32/COM API set. The VB team lost a bunch of very bright developers who jumped at the chance to work on something as enjoyable and freeing as Java as opposed to the jumbled mess Windows programming had become. I think this added extra wind in the sails of the managers who weren’t in denial about the fact that as much as developers were tied to the Windows ecosystem, they could still be pried away if you gave them a much better way to program.)

So, anyway, this is the big piece of missing context when looking at the motivation behind .NET. The .NET initiative was, fundamentally, a way to answer the threat that Java posed to Windows, not just some way for a bunch of DevDiv middle managers to go play OS designer (although they did seem to enjoy doing that just a little too much at times). And it’s not surprising that this context would be missing from the worldview of someone who worked in Windows. I never got the impression that Java scared the Windows guys half as much as it scared the DevDiv guys because at that time Java was still mostly a language and a language runtime. It was something that we understood in a much more visceral way than they did. One can certainly argue that the Windows guys were right not to worry about Java, but hindsight is 20/20. And, based on what’s happened since then, maybe the Windows guys should have been a bit more worried about someone coming along and stealing away their position at the top of the app development heap. Just sayin’.

All that being said, I want to emphasize that I agree with much of the criticism that he levels at Microsoft’s and DevDiv’s approach to the question of trust. But I think that’s another post for another time.

One of the great things to do in my neighborhood is to head over to Central Cinema, a small movie theater that serves food and drinks, and which has decidedly eclectic tastes. The best thing they do, speaking as a parent, is their “Cartoon Happy Hour” every Thursday where they show a wide range of cartoons for kids (and parents) to enjoy. It’s been a great opportunity to introduce my kids to cartoons that I watched as a kid (Hong Kong Phooey! Tom and Jerry! Looney Tunes!) that they might not otherwise get to see.

Recently they’ve shown several episodes of Speed Racer, a cartoon that my mom would never let me watch because it was “too violent.” (Compared to modern cartoons, it’s positively milquetoast.) So it was fun to finally watch it with the kids, but when the theme song came on it triggered a nostalgia of an entirely different sort.

It reminded me of that day, long ago, when I actually got to pick the codename for a product. Looking back now, I’m not exactly sure why I was so keen to pick a codename–it’s not like it’s prestigious or anything–but I definitely was. I wanted to set the name for a whole product cycle, really wanted to. And when the cycle for Access ’97 came along, I got my chance, for two reasons. The first was that most of the team had split off to do a re-write of Access that was supposed to ship after Access ’95 but then got pushed back to after Access ’97 (and then cancelled), so pretty much all of the senior people were no longer on my team. And because Access ’97 was supposed to be a placeholder until the rewrite was finished, it was decided that the bulk of the work on the release was going to be solving the major performance issues we incurred moving from 16-bit Windows 3.1 to 32-bit Windows 95/NT.

Since I was heading up the performance work, I saw my chance and pushed to pick the codename. Of course, picking a codename wasn’t that easy–what would be a cool name that would go with the idea of speed? That’s when Speed Racer flashed in my mind, and so the codename for the release became “Mach5,” named after Speed Racer’s car. In the pre-YouTube days, I even got a VHS tape of a Speed Racer episode and went over to Microsoft Studios to get them to convert the theme song to an AVI so I could send it around to the team. (Boy, was I a bit of a geek.) Mission accomplished.

Now, of course, you could never pick a codename like that. In our litigious society, people sue over codenames. I actually saw this in action–there was an old internal project at Microsoft that borrowed its name from a well-known company that makes little plastic bricks. Even though the tool was completely internal and never used or discussed publicly, the codename somehow ended up on some stray public document. The company in question was alerted to the use of the name, the lawyers got involved (so I heard), and in short order the tool was renamed to an innocuous acronym.

In my junior year of college, I took the standard programming languages course that goes over fun stuff like how programming languages are put together. The main project for the class, of course, is to build a compiler for a small language. The twist was that the professor for this class happened to be one of the designers of the Haskell language, so you can guess what programming language the compilers had to be written in.

The thing is, this class took place probably in the fall of 1990 or the spring of 1991. According to Wikipedia, Haskell debuted in 1990, so first and foremost the tools we were working with were… uh, primitive at best. I think the Haskell interpreter was written in Common LISP, and basically you took your program, invoked the interpreter on it, went and got yourself a nice cup of tea (so to speak), and then came back to an answer (if you were lucky), an undecipherable error message (if you were only kind of unlucky), or just nothing (most of the time). Definitely honed the skill of “psychic debugging.”

Anyway, with a brand-new language (that, of course, had no manuals or books written about it yet) and an alpha-level (at best) compiler, as I remember it at least half the class never even managed to get something working. I somehow managed to grok enough of Haskell to be able to write a functioning compiler that took our toy language in and produced correct output. I was one of the lucky ones. But here’s the thing…

It was, without a doubt, one of the most beautiful programs that I’ve ever written. Just a real work of art, with the data flowing through the code in one of the most natural ways I’ve ever seen. Just awesome. It’s the one piece of code that I look back on and wish that I still had a copy of.

So I’ve always had a soft spot in my heart for Haskell. Even if nobody else could actually understand what it was doing or why.

This happened a while ago but I’m just getting to it now because of my blog hiccup. A sharp-eyed reader pointed out a problem with my C# and VB Chakra hosting samples (on MSDN and on GitHub). When passing a host callback from managed code, I forgot to hold on to a managed reference to the delegate I was passing out to Chakra. So if the CLR ran a GC, it would collect the delegate, and then a callback on that delegate would jump into hyperspace. The samples are now correct. Ah, the joys of coordinating GCs.

If anyone has been trying to leave comments or send me a message or anything, I’m afraid my blog has been on the fritz for… a while. Not sure exactly how long. Apparently my hosting tier with Azure gives me a 20MB limit on my MySQL database for this blog, and when you hit it, it just starts failing your INSERT and UPDATE queries. So it wasn’t actually clear anything was wrong for quite a while and then it wasn’t clear WHAT was wrong. Thank goodness for the wisdom of the Internet, that’s all I have to say. Anyway, should be back up and running now, thanks.