Jon Blow is the designer of Braid and one of the first of the new-wave indie auteurs. Back in September of 2014 he made a video talking about his ideas for a new computer language focused on game development.

I wanted to write about this when the video came out, but I was in the middle of a move. Then there was Christmas. Then other projects. But now almost half a year later, I’m finally coming back to this. His project has moved on and I don’t know where it is now, but it’s this first video I want to talk about.

That’s an hour and a half talk about why he thinks game development needs a new language, why the existing languages don’t quite cut it, and a few things he thinks the new language ought to do. It’s pretty heavy-duty in terms of technical jargon, so if you’re not a coder I don’t know if you’ll get much out of it.

I’ve seen people criticizing his suggestions saying that other languages already do what he wants, or that he’s not qualified to design a language. I’m not really qualified to comment on that and not really interested in that debate. I’m more interested in his talk as a sort of “Everything annoying, frustrating, inefficient, or scary about the C languages”.

So I want to comment on what he’s said, and I’m going to do my best to say it in language that should be comprehensible for non-programmers. We’re not so much discussing his language ideas as using them as a launching point for talking about things that make programming less fun.

Consider this a consolation for the fact that we didn’t get an annotated version of the Carmack QuakeCon address last year like we have in years past. Sadface.

Timestamps are approximate:

1:00 Inertia keeping us using C++

C++ is very much the language of AAA game development. Lots of languages have come along over the years, but nothing has really challenged it. The inertia comes from a lot of factors that form a positive feedback loop that keeps C++ on top:

Lots of libraries. There are tons and tons of C++ toolkits, libraries, and code snippets floating around in the wild. Do you need a sound library? Font loading? Access to rendering hardware? Support for gaming controllers? You don’t need to write that code yourself, because there’s a really good chance that someone else has already solved the problem and provided the source code for free. Of course, adding their code to your project is often a lot harder than it ought to be, but spending six hours pulling out your hair playing “dependency scavenger hunt” is faster than writing everything from scratch, even if it is a dumb miserable way to spend an evening.

Lots of programmers. Since C++ is the big important language, everyone learns it. Which makes it easy to hire people to work on your project.

Lots of help. Yes, answers to forum questions often take the form of abusive condescension and nerd peacocking, but at least a C++ programmer can get their question answered after their tag-team humiliation. If you’re using one of the more obscure languages, then you might not get any answer at all.[1]

No dominant alternative. It would be one thing if there was another language out there to play Pepsi to C++ Coke, or be the Apple to the C++ Windows. But there’s no clear contender. Java is good for some tasks, Python is good for others, but none of the challengers works as a broad general-purpose language. And that’s fine. There’s a lot of value in specialization. But that focus helps drive the C++ feedback loop of ubiquity.

When creating a new language, you can either break from the old ways entirely…

You fool! Nobody is going to give up the universe of libraries to screw around with your language! It’s useless!

…or you can try to have some sort of compatibility with C…

You fool! If people wanted C they would have just used C! You’re just dragging those old problems into the new language! It’s useless!

…or you can whine about how much C sucks and not do anything about it. That’s sort of the strategy I’m using.

3:30 Not a Big Agenda Language

We run into a lot of really common problems in programming, and every once in a while someone invents a language that tries to solve one of these problems. Blow offers two examples of “Big Idea” languages: functional programming, and a language where buffer overruns are impossible.

Taking the last one first:

Buffer Overruns are impossible.

A buffer overrun is when you allocate space for N objects, but then try to interact with an object beyond that range. If I create a buffer of 20 space marines in memory and then (because I’ve got a dumb bug in my program) I try to do something with space marine #25, then I’m actually reaching into a block of memory beyond what I set aside. What is in that space of memory? Maybe some other stuff. Maybe garbage. This leads to crashes and random behavior. It’s a pretty common mistake, and so we’re always looking for ways to protect against it. Some people are even attempting to solve this by inventing new languages where you can’t do it.

Making buffer overruns impossible sounds useful, but it doesn’t magically stop you from writing bugs. If I access space marine #25 in my list of 20, I’m still making a mistake. The language might give me #20, or it might ignore all the stuff I attempt to do to #25, but I’ve still made a programming mistake. Maybe I’d rather have the program crash so I can find the bug. If I was dealing with a list of bank customers instead of videogame space marines, then maybe I REALLY don’t want the program to attempt to quietly carry on.

So protecting against buffer overruns is an interesting feature in a language, but it’s not something everyone wants or needs. And different people might have different ideas on what the protection should do.

Functional programming is an approach to programming that tries to solve the problem of unmanageable complexity. Modern software projects – and games in particular – are terrifyingly complex beasts. You might be dealing with millions of lines of code that manipulate thousands of different ideas, and it’s not at all possible for any one programmer to understand how the whole thing works. Consider this pseudo-code:
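Something along these lines (a reconstruction — the original snippet didn’t survive, so names like FireButtonIsDown and Reload are stand-ins, but FireBullet and the line positions match the walkthrough that follows):

```
1   SpaceMarineUpdate ()
2   {
3       if (FireButtonIsDown ()) {
4           FireBullet ();
5           bullets = bullets - 1;
6       }
7       if (bullets == 0) {
8           Reload ();
9       }
10  }
```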

So if the player is holding down the fire button, we shoot a bullet and subtract 1 from our current supply. Then on line 7 we see if the gun is empty. If so, we trigger a reload.

However, what this programmer is overlooking is the staggering number of side-effects that happen inside of FireBullet (). Maybe the bullet hits a red barrel, which blows it up, which kills our space marine. So let’s say the player uses their last bullet to shoot a barrel to kill themselves. In that case, they would die on line 4, and then line 8 would cause the SpaceMarine to begin reloading, even though they’re currently DEAD. That’s at least buggy behavior, if not a crash.

This kind of thing happens all the time. In the above example, maybe the programmer should have done a check right before attempting to reload to make sure the marine was still alive. The problem is that it probably didn’t occur to them that it was possible for FireBullet () to result in deathI mean, other than the hundreds and hundreds of other space marines this space marine is trying to kill.. Once a project gets above a certain size, it’s impossible to know all the things that might happen when you call FireBullet (). That one function might cause thousands of lines of code to execute before it’s through. Stuff will blow up, particles will be created, sound effects will start, and things will be damaged or killed. These are called side effects. These systems are not part of a space marine (much less a bullet) but they are changed when a bullet is fired.

Functional programming attempts to fix this. In a completely functional program, no side-effects are allowed. In a completely functional language, no side-effects are possible. When you call SpaceMarineUpdate (), it doesn’t change any part of the game state. Instead, the program creates a new SpaceMarine object with new properties, and some other part of the program (the one that called SpaceMarineUpdate()) will (one assumes) replace the old one with the new. The program has to say to the SpaceMarine, “Here is a bullet. Shoot yourself with it and then return a copy of yourself exhibiting the result.”

It’s actually really hard to get stuff done, and I’m not even sure how you go about handling actual side-effects that need to take place. (Like people getting killed by splash damage from exploding barrels.) And it’s not clear to me how you’re supposed to go about creating bullets and things.

Functional programming is a new idea. It needs to be proven and we need to figure out how / if we can use it to model complex games with lots of inter-connecting systems. It might be big someday, but right now it’s not ready for AAA development.

So I just spent 1,500 words on the first 3 minutes of video. Don’t worry. It goes faster after this. This series will continue until I make it through the whole thing. Probably about three parts. Maybe.

Footnotes:

[1] You’ll still get mocked, though. Mostly by jackasses asking, “Why didn’t you use C?”

[2] I mean, other than the hundreds and hundreds of other space marines this space marine is trying to kill.

Lisp can’t really be considered a strict functional language. It has features that support similar mechanisms, such as first-class functions, closures, and dynamic function creation. But at its core it’s a general-purpose language: You are free to update the values of slots, and you can freely insert side-effects into Lisp functions without restriction. The strength of Lisp is more in the ability to write domain-specific mini-languages and tools that feel powerful and like part of the core language.

I’d go as far as saying that if you think (Common, specifically) Lisp is functional language, you’ve missed most of its utility.

Among its strengths are “run-time compilation of newly generated code” (yes, it’s a thing and it’s well-defined and kinda frightening until you get used to it), an object-orientation paradigm that is super-convenient for weird edge cases and no worse than the “derived from message passing” paradigm that C++ exhibits so well.

I mean, the ability to define “around”, “pre” and “post” methods alone is a huge productivity win, once you grasp them.

Well, Erlang is a functional programming language (though not in such a pure way as, for example, Haskell). It was developed by Ericsson and is used, among other things, in their high-performance network switches. It’s also used by quite a bunch of other big companies (Amazon, Facebook, T-Mobile). It is by no means widespread, and doubly so in gaming, but it is used.

According to one of the last two (can’t remember which) addresses by John Carmack at QuakeCon, he’s very supportive of functional programming in games … up to a point.

Meaning: As of now, doing things the functional way seems to be more a question of style than of language choice. And there are some cases where the non-functional way is the only way that doesn’t drive someone mad.

That said: I’ve become a great fan of functional loops and such in Python, and use them whenever my ability lets me. Also, having functional objects (i.e. ones whose properties are never changed) makes a lot of things so much better. That, of course, varies extremely with what you’re trying to do with your program…

TL;DR: I wouldn’t be surprised if a semi-functional language was the next big thing in programming, but I doubt it would be strictly functional.

I want to say that Yahoo’s webstore was (nominally) written in LISP, but dim memories of writings on that subject indicate XKCD summarized that one pretty well. Paul Graham has written on the topic, I’m pretty sure.

ML and some of its derivatives are said to be pretty big in certain circles.

It’s not very often used for software sold commercially, but a number of firms use it for a lot of in-house development, particularly in financial product pricing and risk. It’s also somewhat popular for writing language tools and high-performance data processing middleware at places like Facebook and AT&T.

Clojure is less strictly functional, but does use immutable data structures, and tries to identify and avoid side effects to make its STM system safe. It’s used in the Datomic database engine.

OCaml is another member of the ML language family that Haskell descends from, and is also used in finance, as well as for some tooling in the Xen hypervisor. Jane Street is probably the best-known user, with an $8 billion per day trading business built entirely in OCaml.

You’re right in that functional programming is less popular for the traditional “build large binaries and sell them” form of professional software development, but the in-house development space is still commercial software in my mind.

Actually tons, especially if you include a few newer languages that incorporate functional designs, but are not purely functional. Twitter runs a lot of Scala, F# is used by banks and insurance companies, Lisp and Scheme are widely used in pretty much every sector, and a quick search for Haskell turned up a list full of really big brand names too.

To be a bit critical: When you write about programming, your knowledge is sometimes outdated by a decade or two. Did you know that C++ has had lambda functions, reference counting, and type inference for four years (two revisions of the standard!) now?

Be fair to Shamus. C++0x is not anywhere near a decade old, and it only recently started getting solid support from compilers. It’s still not completely supported by any compiler I use, though I avoid C++ like the plague so I might be dated too.

Those are called C++11, and C++14, and both are finished, and the major compilers (VC, Clang, GCC) support the vast majority of the features. In fact, most compilers supported a significant number of those features before the standards were even finalized. tr1 contained a ton of stuff that went into C++11, and tr1 was drafted between 2003 and 2005.

Shamus’ use of C++ fits with the 1998 standard, which came out seventeen years ago.

I don’t know about C++, but in ObjC there are plenty of reasons why you might not want to use the fast enumeration syntax – any case when you need to access an additional element at the same time as the current one, for example.

In the current iteration of Blow’s language, it’s more like
for vec
{
    it.DoAThing();
    vec[it_index+1].DoAThingOnTheNextElement();
}
With it and it_index being the default names of things in the loop (which you can of course name differently if you want).

Nested iteration occurs all the time. Lists of lists, for instance. You could get around that by writing a method to cover the inner loop, but that may or may not be desirable from a readability standpoint.

It’s not so much about syntax; more that if you’re using fast enumeration and then decide you need the index of your current position in the array, a language like ObjC will happily go off and iterate over the entire array doing equality comparisons on every element until it finds the object you’ve already got, and then proudly tell you the result having wasted a huge amount of its own time. That’s why I don’t use the fast enumeration syntax unless I’m really, really sure it isn’t going to encourage me to do something dumb.

I don’t know about C, but in Java you can’t use for (Foo a : list) if you want to resize or reorder the list, and, more relevantly for cases where a for loop is an appropriate choice, I don’t believe you can reference list position. I have written a lot of programs using nested for loops to iterate over two-dimensional arrays that depend on that property.

Also, I would point out that the 1998 version iterates over the first ten elements of the array while the 2011 version iterates over every element of the array. Those are very different. To get the same functionality from the 1998 version you’d check i < arr.length or however checking that works in C++.

For arrays declared in current scope you always know the length of the array. For arrays passed in by a caller (different scope, array decays to pointer as a function parameter), the caller can provide the length.

Either way, it’s not a problem for maybe 99% of uses. The advantage of a vector over a primitive C-type array is the ability to resize (just don’t store objects needing deep copying in them, store references).

Yeah, sorry Shamus. Using functional programming in shipping products is nothing resembling new. We have sent things to Mars with Lisp code on them. Lisp is one of the oldest languages anyone still uses. OCaml is used in high frequency trading and has been for more than a decade. Games programming is just not a place where it’s had a heavy presence.

There is a sizable enough community for this that there’s a Commercial Users of Functional Programming conference that has been running for some years now. For Haskell specifically, you could look at the Haskell Wiki “Haskell in Industry” page, which lists a bunch of companies that use it. Mostly not stuff that consumers would buy; it tends to be used for a lot of internal stuff where reliability is important (there are a fair number of investment groups using it, for example). Facebook and Google also use it internally for some applications.

It really doesn’t have so much to do with the hardware; it’s more that compiler design has caught up to it. If you follow the discussion regarding improvements in compiler optimization in GHC (the Glasgow Haskell Compiler), you’ll see that there are a significant (and gradually increasing) number of cases where functions written in idiomatic Haskell now compile to identical (or near-identical, i.e. with one value stored in a different register) machine code as hand-optimized C that performs the same computation.

A lot of commercial software uses functional languages, mostly those that focus on data streams and data crunching, since the functional paradigm (and functional-lite features such as C#’s LINQ or list comprehensions or even GPU programming) works really well with that kind of data.

I would actually be really happy if C# and .NET took a more central role in the programming universe. I know it’s a Microsoft thing, but things just come together for me so so cleanly in C#, in a way that they never quite do in C++ or Java.

Hopefully, the new open sourcing of the .NET runtime and Roslyn compiler will drive further adoption, and maybe even edge MS out of their current role as benevolent dictator of .NET.

.NET performs better than lots of horrible languages that have been running on computers for ages. From what I’ve seen, it performs better than Java, the very same Java that the amazing Minecraft was implemented in.

Actually, calling C/C++ procedures from their libraries in C# code is not that difficult, since interoperability was one of the key ideas behind C#. Pretty much the only thing you need are dummy methods in a class with matching datatypes and an attribute saying which functions they represent. The biggest time loss is in the fact that you might want to wrap those into methods that convert from pointer arrays into C# arrays, but even that is pretty trivial.

I do agree somewhat about garbage collection, and JIT leaves something to be desired, but doesn’t Java also have GC?

To make a sweeping statement:
RAII is nearly always the best way to do memory management in a realtime system.
It might even be the best way to do memory management that has yet been discovered.

It’s the only predictable* memory management that doesn’t force the programmer to manually (de)allocate.
* Manual/static allocation is often faster, but at the expense of brain time and greater risk of getting it wrong.

In my opinion, GC was an early attempt at automating memory allocation that removed much of the risk, but didn’t work very well. While a worthy attempt, its time has passed and there isn’t any situation where it’s now the best option.

IMO the ideal language would take C#, disable the GC and leverage the IDisposable interface (or create a moral equivalent) for RAII memory management.

The newest GCs are pretty good, though. Unity is saddled with an incredibly old and inefficient runtime and GC (because they can’t license a new version, because of some nonsense involving Xamarin on Apple platforms, I think. Licensing costs?).

I’ve noticed in the past few years more people starting to look at how functional programming could be used for common everyday software. I personally don’t know how it works yet, but friends who do like it find it good for complex systems, but a pain for setting up simple things like the GUI.

Although complexity is a pain for programs, taking the concept of functional programming when you design your software is really useful. For example, objects should only ever be able to edit themselves. So in this case, the SpaceMarine should not be able to talk to the weapon; instead a high-level controller should be saying “update the SpaceMarine, now update the weapon, now update what has happened to that box over there”. The idea is that your object links look like a tree rather than a web, so if something in SpaceMarine should affect the weapon, then the controller can detect that and have all the necessary safeguards in one place.

That’s more of my own opinion though; I find that sort of segmented structure is common in most areas of software development. But soooo many new programmers do not understand why it is important to have strictly designed architecture until after they already need it.

In my admittedly limited experience with functional programming languages, they make moving information around annoyingly inconvenient. If you want to actually track the overall state of the program, you have to pass it and return it all over the place. However, the upside is that a function’s behavior will only depend on the parameters passed to that function.

The one thing I really miss from functional languages in Java is functions as parameters. You can sort of do it in Java by messing around with superclasses, by making a FooDoer interface with method foo([parameters]) and then passing instances of various subclasses so you can call their version of foo, but it’s much less convenient.
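The workaround described above looks roughly like this (a sketch with made-up names — FooDoer and applyTwice — and note that Java 8’s lambdas now make this much shorter):

```java
public class Main {
    // A one-method interface stands in for a function parameter.
    interface FooDoer {
        int foo(int x);
    }

    // "Pass a function" by passing an object that carries it.
    static int applyTwice(FooDoer f, int x) {
        return f.foo(f.foo(x));
    }

    public static void main(String[] args) {
        FooDoer doubler = new FooDoer() {
            public int foo(int x) { return x * 2; }
        };
        System.out.println(applyTwice(doubler, 3));
    }
}
```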

This has some ups and downs. It makes it much easier to reason about the state of a function, because state changes become isolated to parameters passed in and values returned. Reasoning about code is most of what we do, really. Functional programming also lends itself well to method chaining in fluent interfaces, but that’s just a syntactical side-effect. Immutable state makes ‘undo’ functionality trivial to implement: you place each state value on a stack, and pop them off to undo. Immutable state also makes threads easier to deal with, because immutable objects are (generally) thread-safe by default.

One downside is that this means that manipulating state is often kind of awkward and cumbersome. Objects have to be passed around and returned all over the place, and mutators have to return copies of the object (which is awkward and expensive). In many languages, this results in large parameter lists, out parameters, and other assorted ugliness.

I’m under the impression that functional programming does not lend itself especially well to conventional UI work (it ought to work fine for a lot of web-based stuff, but I don’t know if any current web APIs are built that way). How much of that is the nature of UIs and how much is the nature of APIs, I’m not sure.

At its most basic level, functional programming mostly combats the problems associated with mutable, global state by replacing it with immutable, localized state. (There are other perks, too, but I’d be way out of my depth if I tried to talk about composability, etc.)

Maybe I'd rather have the program crash so I can find the bug. If I was dealing with a list of bank customers instead of videogame space marines, then maybe I REALLY don't want the program to attempt to quietly carry on.
Yes, that’s very much the point of making buffer overruns impossible. A buffer overrun is what happens when it doesn’t crash, and does just attempt to quietly carry on, operating on garbage (or malicious) memory contents. Which is what C does.

So a modern language spends a little performance on adding checks to make sure it doesn’t happen. In some, you can turn this checking off for specific bits of code that are performance-critical, but in the general case it is far more valuable to be protected.

Ah. Yeah, okay. When it was described to me (probably verbally) I thought it sounded like the idea was to make the program more “durable” or tolerant of faults. That makes a lot more sense.

I wonder: How high is the performance cost on bounds-checking? It can’t be much these days. I can think of a few cases where I really, REALLY wouldn’t want it (like iterating over a massive array of pixel data with millions of elements) but for crap like space marine lists it ought to be super-cheap.

In our company we use Delphi (yes, the language is still alive and well (-ish)), which has optional bounds and overflow checking among other niceties. What is nice though is that you can turn those features on or off with pre-processor directives, but also in the build config. So we have them on during development and testing (in the debug and testing build configs), but turn them off for the release version. This means that we are able to catch most of such bugs in development while retaining the performance on the clients’ computers.

In my experience the costs of array bounds checking are usually:
1. that you need to have a length stored somewhere
2. before you do the pointer arithmetic to access an element you first check if the index is valid using that length

You may be able to avoid some of the cost if:
– the length is a known constant at compile time
– you are accessing elements in sequence or some other pattern the compiler understands

That being said, the cost of doing the check is often negligible compared to the work you are actually doing with the items you access from the array.

Storage cost:
* Some extra bytes for the bound to be stored (one per array)

On each access:
* One comparison.
* Possibly one branch.

Call it “vaguely 1% more RAM required” and “roughly 2-10 instructions extra per access”. Unless you decide to do it using the MMU, in which case the cost is lots and lots of wasted virtual address space (not needed to be backed by RAM) and a small possibility that the buffer overrun is sufficiently huge that you accidentally another array.

Branch prediction is insanely useful here. In all cases except the error, branch prediction will execute the “not an overflow” branch first, and then roll back if necessary. So in reality, most code would run just as fast.

Yes, but that puts the total number of instructions into the single digits for the whole iteration.

As a comparison: If you miss your cache and have to access DRAM, you spend ~200 cycles waiting for the RAM access alone. Compared to these numbers, spending three or four instructions on a boundary check is irrelevant for all but the most hardcore of cases.

Games need a lot of performance (AAA games, anyway), but they don’t need that much, especially because production speed and code quality have just as much impact as performance on the end-result, if not more.

Performance is almost always bound by CPU cache access anyway nowadays. So assuming the length fits into a register or at least doesn’t cause additional cache misses, it won’t make a noticeable difference.

But exception handling only triggers if you’re already in a failed state and the overflow has happened. It’s okay to be slow when you crash anyway. try{}catch{} does not slow down if there is no exception.

That’s actually what I meant: handling the error if it occurs is more expensive by far than any of the business required to generate it behind the scenes.

… which is much more ‘duh’-tastic than I thought it was when I wrote that comment while I was eating breakfast. I was still working on my first cup of tea, which I had to make by hand, like a savage, since the coffee robot went tango uniform.

I was initially thinking of the quirk that existed in (very early) Java releases, when the optimizer was poor, so the quickest way to iterate through an array was something like
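(The snippet itself appears to have been lost; if memory serves, the trick was caching array.length in a local so it wasn’t re-read every iteration — something like this sketch:)

```java
public class Main {
    public static void main(String[] args) {
        int[] array = { 1, 2, 3, 4, 5 };

        // Cache the length once instead of reading array.length
        // on every iteration -- the old-Java micro-optimization.
        int sum = 0;
        for (int i = 0, len = array.length; i < len; i++) {
            sum += array[i];
        }
        System.out.println(sum);
    }
}
```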

I don’t recall if array.length was an O(n) value that was calculated on-the-fly each time it was accessed or what, but I do recall the matter coming up in conversation.

Anyway. The cost of bounds-checking seems to be pretty cheap. I can see where it might still be expensive enough to be an issue in specific circumstances, but if you can afford automatic garbage collection, you can probably afford some of the other bells and whistles that come with modern programming environments.

IIRC, the original idea in the video is the “durable” one. When you describe it as “buffer overruns are impossible,” it makes it sound like the language is actively making sure that you can’t ever overrun a buffer, not that it is generating an exception when you try to overrun a buffer.

I would call what Alex is describing “buffer overruns are automatically detected,” which is a lot more reasonable.

In Java, you don’t have raw arrays. You have an array object that you can access like an array, but in practice it’s more like a C++ vector that can’t increase in size. And the vector class already throws an error if you’re out of bounds. So by that standard, it’s not even new in C++.

Yeah, I was going to mention various C++ data structures, but decided to keep it simple, since in Java almost everything is an object of some sort and so that distinction doesn’t really matter that much.

Right. Like I said, it’s a lot more reasonable to toss out an error whenever you have an overflow versus making it impossible to overflow, so that’s been implemented in numerous (non-PHP, because PHP doesn’t count while we’re talking about “reasonable” implementations) languages.

Making it impossible to ever overflow is also doable, but less preferred.

Yeah, it’s so bad that pretty much any system you care to name will kill a program that tries this hard. Windows has had memory protection since 3.1, and most other systems since the 70s. (And people were saying that Shamus is out of touch…)

Ada has had bounds checking forever, and you can turn it off. The model for the only Ada project I worked on was that you test thoroughly with it on, then test thoroughly with it off (where it’s causing performance problems as measured by your profiler).

I came here to say this, so I’m going to sit in that corner over there and silently bemoan my lateness.

As for the performance issues Shamus asked about below:
Bounds checking should be negligible on the performance side, if the language in question is designed properly.

Back-of-the-napkin design:
Keep an index of your variable sizes stored somewhere else. Before doing any access (writing data to a variable, accessing an array, etc.) check the size in that index. Is what you’re attempting to do bigger than the size? Crash, and if you’re nice, crash with a useful error message.

Worst case you’ve doubled the overhead of looking at any variable, which is trivial nowadays. And if you’ve designed the lookup rules a bit better, then the overhead is less than that. You’ll use a bit more memory, but memory is cheap, and it shouldn’t be THAT much anyways.

Well, checking a variable value is pretty fast nowadays and the overhead might be considered negligible if we’re talking about a couple hundred or thousand values in a loop. And if what you do with them is significantly more expensive, then yeah, the difference might as well be nonexistent. But if you have the case of a large number of elements (I’m talking millions here) and a cheap operation performed on them, then the overhead starts getting significant. As soon as that gets to 5-10% overhead, I would consider it a significant impact on overall performance.

And it is quite easy to get there, you don’t need some high performance application. I’m a developer on an application for designing and ordering of personalized photobooks (go to your nearest mall and I’m sure there are at least three shops offering them) and we have a fair bit of code that gets called millions of times, where the user would still expect to get the results instantaneously.

“But if you have the case of a large number of elements (I'm talking millions here) and a cheap operation performed on them, then the overhead starts getting significant.”

What I meant was something akin to:
“How big is this array? 1 billion elements? OK, how far will this for statement run? To 2 billion? Well, better throw an exception now!”.
Check once before execution instead of on each iteration. I suck at compiler design, so I don’t know how feasible that approach is and how they ACTUALLY do that, but a lot of more modern, not-as-near-the-metal languages nowadays do stuff like that.

Well, it’s not always possible to determine in advance how long a loop will run and it’s not always possible to determine bounds at compilation time and they might even change within the loop. But you could detect some cases.

I hate to be the guy who says modern features are too slow, and ordinarily I’m all for automatic bounds checking. However, in some situations that extra read is a cache miss and in inner loops sometimes you really can’t afford that. I worked on a system once with a pretty bad caching architecture (2-way) and it wasn’t unheard of to have cache thrashing cut (essential) performance by a factor of 10.

You can help this a lot by improving the locality of the boundary metadata and doing compile-time checks, but sometimes you really do need to turn off the bounds checking or use an unsafe language and be super careful.

It’s perfectly OK to do stuff like that if you do it on purpose because your hands are tied. You do this with the knowledge that what you do is bad, and you should feel bad, and you (and everyone of your colleagues that heard you talk after one too many beers) will REMEMBER what you were forced to do. You’ll remember it when you wake up screaming in the night, and you’ll remember it the instant your program will misbehave, and you’ll check for it. Which is good.

But that should be the exception, not the default. Have bounds-checking enabled, disable when absolutely forced to, not the other way around.

Well, AFAIAC punctuation is far more important than spacing. Don’t get me wrong, I’m as bothered by wrong spacing as the next obsessive nerd, but spacing is still more or less decorative, while punctuation serves a functional purpose in the sense that it conveys (and can change) the meaning of text. Just contrast “its” vs. “it’s” (oh, how many people get it wrong, don’t even!..) and the much maligned “then” vs. “than” (I have to stop and think. Every. Damn. Time.).

It is entirely possible that I think way too much about this. Still better than working, though. :-)

New guy? I resent the implication. I’ve been reading the site since at least 2008 (I was pointed here by Darths & Droids). I do realize that I haven’t been very active when it comes to commenting; I’ve been mostly lurking in the shadows of the comments. So I’m not so much “the new guy” and more “the weird quiet geek who’s been sitting in the corner at a party without saying a word and whom nobody even noticed”. But when people started talking programming I decided to crawl out and make pedantic commentaries.

I don’t know if Haskell-style pure functional programming is really suitable for game development – you’re dealing with a system that is, in a very fundamental sense, all about the constantly changing state of the gameworld. But the simplest way to use it (apart maybe from sticking IO everywhere and using lots of mutable variables, at which point you may as well be using C for all the safety Haskell will give you) is probably to have a big world object representing the game state, and have a function like

world update(float dt, world oldWorld)

that you call at each frame to update the game state. Rather than mutating the world that exists, you just construct a new game state. The issue with this is that it requires you to chuck around a lot of data (you’re making a new game world object every frame), which can be really inefficient.

That’s why an entirely functional approach is impractical. After all, the functional approach to rendering is, for each triangle you want to render, to take a frame buffer, create a copy, add in the triangle and return the copy. Which is obviously massively impractical.

But ‘functional’ality isn’t a binary proposition, where you either create 100% functional code or you give up on the whole idea. Each section of code you can reasonably make side effect free improves the encapsulation and stability of your code.

So while your top level UpdateWorld loop can’t reasonably be functional, and probably quite a few management / aggregate classes below that can’t either, you should try and aim for more functional programming at lower object levels.

And even if you can’t be 100% functional within a class, you can actually gain a lot of the benefits by e.g. limiting side effects to updating a state variable (if your class is implemented as a state machine) and keep all the fun side effects of reacting to the state change in a single place. It’s not perfect, but it keeps side effects contained to small areas, and vastly reduces the web of complex interdependent side effect behaviour that you’re really trying to avoid. Anything that reduces it helps, even if you can’t go all the way.

Bearing in mind that I watched the video back in September, and I’m not going to sink another 1.5 hours into it before I comment…

I think the point being made is that a “Big Idea” language is usually one where the language developers have a coding philosophy that they want to encourage in anyone who wants to use the language. So, while any practical functional language is going to have ways to break away from ‘function’ality, the prevailing philosophy is going to be “you shouldn’t be doing this”, which means jumping through hoops whenever you break from the paradigm. Since games have so many different systems interacting on so many levels, your code is going to have all sorts of cludgy exceptions where you have to divert from the Big Idea all over the place.

The net effect is more complexity, as you try to make metaphorical “non-functional” peg fit in a “functional” hole. It’s better to implement a language that is more general, and developers can engineer their own Big Ideas into their class definitions if they make sense within the application. I mean, it’s not like you couldn’t code in C++ in a functional way if you wanted…

All true. A functional language almost inevitably makes it harder to do non-functional things, and as you say, games tend to be pretty non-functional at heart – especially ones hard up against deadlines, where you can bodge a bug fix in 10 minutes or spend half a day doing it properly. The bodges tend to win out (and always will until people are prepared to pay three times as much for games as they do at the moment).

I’ve often wished there was a ‘pure’ qualifier for C++ methods, like the const one, to make it easier to enforce functional behaviour where you want it. It’d be nice to be able to see a ‘pure’ qualifier and know that that function (and all functions called from it) has no side effects.

I think there’s a bit of a misunderstanding about how much data needs to be written when you update the world. Because you know the object can not be edited you only have to replace the structure between the root and the point being edited.

Say you have a world, containing two characters, each with a hitpoint count and an inventory. If you want to change the hitpoints of char1, you create a new world struct (two pointers, and I’m using the equivalent C terms for comprehensibility), set the char2 pointer to point to the original char2 struct, make a new char1 struct with a pointer to the old char1 inventory and the new hitpoint value. The total new memory allocated is 3*sizeof(pointer) + sizeof(hitpoints), even if there are many thousands of items in each player’s inventory. If you want to have this work effectively with many characters you would have to use a sensible data structure, but most of the structure and the other character objects could still inhabit the same unaltered memory.

This gets really useful when you get to concurrent programming, when you can give the same data structure to two different threads, and know that if one of them “alters” the structure that can’t change what the other thread sees, but if you do want to alter the structure you don’t have to copy the whole thing, only the path.

I’m no expert here, but functional languages usually have some sort of escape mechanism for when you need to handle things in a stateful way. In Haskell, it’s things like IORef and State, in Clojure it’s mutable object references (which may still point to immutable values but the value of the reference can be replaced), or just using a plain Java class, which are totally unprotected.

The essence of functional programming is usually "Ok.. try to write the majority of your program as data transformations without side effects, but we understand that at some point you need to do this other stuff and here are the tools for that: …". The upshot is that you can do stateful stuff, but the language now expects you to be explicit about your intentions. State has a nasty habit of creeping outwards from the point at which it is introduced, and so the type checkers in functional languages might feel awkward and nitpicky, but they're just trying to make sure you understand where the impure procedures are leaking into pure functions.

The pattern I've seen used for functional games is usually to have some sort of structure representing the entire world state (which itself manages the other actors in the model), and when something happens to an actor, the world is informed about it, and a new copy (or partial copy) of the world is created. One benefit is that if you hold onto some of the historical world states, you have a rollback model that is supported by the language.

One of my hobby projects lately is an attempt to implement the Entity Component System concept with a functional core, and a (restricted) imperative interface. It works out to, you write some functions that expect some magic objects with a well-defined, restricted interface, and you manipulate them according to that interface. So long as you don’t leak state from the function, you can otherwise do whatever you want.

It’s mostly a going-to-be-a-proof-of-concept, since the tech to run it efficiently in its current planned form doesn’t actually exist yet (PyPy-STM supporting Python 3.4 is the kind of thing this needs), but it’s fascinating designing algorithms for stuff like output when the idea of mutable global state doesn’t make sense from the context of the algorithm primitives.

The problem with trying to create a new language that’s as fast as C/C++ but isn’t C/C++ is that C and C++ are the way they are because they aim to provide the most direct model for how your computer actually works. Most of the big problems that you’d “fix” would mean making the language less powerful.

For example, removing buffer overruns means removing the ability to arbitrarily access memory. In C++, accessing an array is just hiding the syntax for pointer arithmetic. An array in C++ is basically just a pointer to the beginning of a block of memory that you set aside at some point. So how do you know that I’m adding the wrong number to that pointer? At best, you could create a special looping syntax that adds in the check, but then you need to know how large the array is. Which means turning the array into an object, like Java does, since otherwise someone can always change which block of memory that array variable is pointed to. And that cuts down on the flexibility of raw arrays. Rather than creating a new language, you would be better off just creating an Array class in C++ and using that when you don’t need raw arrays.

Manual memory management? A pain in the ass sometimes, but also critical when you’ve got a limited block of memory and you can’t tolerate the uncertainty of a garbage collector. Pointers? Confusing for a lot of people, but they’re a powerful tool that can make an otherwise complicated task simple. (Try putting an integer value in a float type in Java. With C++ you just have to cast the pointer type. With Java you have to screw around with ByteBuffers.)

There’s probably room for improvement with C++, but it would be more on the “make the syntax less obtuse” level, not a fundamental shift in how the language works. I also question the usefulness of a programming language “For Games!”. Video games are an incredibly complex and broad set of applications. What they need is a general-purpose language suitable for any task. If we’re going to make the jump to managed code, then we already have C#.

I suppose you could create some sort of “middle ground” language that isn’t managed, but abstracts things just a little more than C++ in order to cut down on the hairier parts of it. I’m just not sure if you’re going to see any real advantages over managed code.

There are already languages that solve the problems you list there, and they do it by letting you turn off the nice features when you really can’t use them. I’m going to use D as an example because I’ve used it most recently, but there are others.

Manual memory management sucks. People are terrible at it and it causes all sorts of headaches. Meanwhile, garbage collectors are getting really good. That said, sometimes the garbage collector isn’t good enough for what you need. That’s why D lets you use C’s malloc and free directly for unmanaged memory living in a managed app, or you can turn off the GC entirely.

Pointers confuse newbies, but I don’t think that’s the reason they’re awkward to get at in modern languages. They’re dangerous because dereferencing random data or randomly typed pointers can cause crashes or hard-to-find memory corruption bugs. C++ tries to fix this with “smart” pointers, but languages with more higher level data structures built in don’t need pointers as much. Really, the pointer issue is closely tied to memory management. If you don’t need to allocate untyped blocks of memory, you don’t need to refer to them with C-style pointers. It’s also impossible to have pointer aliasing issues if pointer aliasing is disallowed by the language (or it manages your reference counts for you).

C++’s syntax is a problem, but it’s not because it’s obtuse. Its grammar is an atrocity. Compilers literally cannot figure out what a given piece of C++ code is supposed to do without looking at all the included headers. Sometimes, because of operator overloading, it’s just not knowable at compile time. The solution here is “don’t use those features”, but tell that to your library authors. Decidable grammars aren’t just a luxury.

I still haven’t had the hour and a half to watch this video, so maybe Blow’s talking about something that doesn’t exist. I’m pretty sure the ideal language for game development is out there though. It may need some love in its implementation, but there are several languages that don’t have the huge problems C++ has.

“If you don't need to allocate untyped blocks of memory, you don't need to refer to them with C-style pointers.”

C-style pointers have the advantage of being very simple, and as such, very fast. You can have indirection without the overhead of an object, which is something you can’t do in Java.

For most applications, the overhead is worth having more structured code, but in the guts of a graphics engine you don’t want to have to instantiate an object just so that a method can return two arrays of non-determinate size to you.

Doesn’t OO encapsulation solve a lot of these issues already? You wouldn’t have a function that has a lot of side effects, but would instead have a function that changes something and then tells a lot of related objects what’s happened so they can update properly. So in the Reload() case, you’d ask the Space Marine to reload, and it would handle any side effects appropriately. If the Space Marine is in fact dead, then either it is gone from the list of things to update and so won’t reload at that point, or else it is still in the system but since its state is “Dead” it wouldn’t reload. Or it would if it didn’t matter otherwise (ie there’s no looting allowed and it would only be transferring its ammo from one variable to another, or to a weapon object).

Essentially, in a case like this the Space Marine object would tell the Environment that it has fired a bullet, and go on its merry way. The Environment would gather up a list of objects that are affected and ask them to figure out what that means for them, which could include the Space Marine. When that’s done, the Game Engine would handle post-bullet updates like reloading, or something like that, at which point the Space Marine tries to reload if it can. There’s still complexity there, but for the most part the impact is limited to the object that knows what’s going on, and no copies need to be passed around or made. I can’t see how a functional language could hide that complexity by eliminating side effects; it seems like it’d just end up having ONE object calculate the impact on everything else, which is probably not what we want.

The code as Shamus has it would fail in some manner in an object-oriented language, though that’s because it’s the wrong way of doing things. It’s got firing a bullet and reloading in the same method, so it will fire and then check if it needs to reload after the bullet is resolved. The reload method itself might gracefully handle attempting to reload while dead, but without doing horrible things to the stack it’s going to get called after firing the last shot no matter what happens.

The proper way of doing things is to make it so either it won’t reload in the same update cycle it fires in, or writing FireBullet() such that it won’t cause a barrel to explode before the rest of the space marine update happens.

Actually, again I’d say that it’s because it wouldn’t be well written OO code. What you’d have in OO is that the input would detect that the fire button was pressed, and then the context would decide that this meant that the space marine fired, and the space marine would fire a bullet. If a space marine can reload when dead, then the code can just reload right after and everything’s fine. If not, then you don’t call the reload from the fire and make the environment call back to it as part of the post-action handling. While there are still issues, this decision isn’t one that you make for reloading, but that you make in general, for purposes including determining if you died after firing yourself.

This can involve a lot of notifications, though, so can be slow. But I don’t see how a functional language would solve it any cleaner.

Yes and no. It made them much more manageable than procedural programming, but you still wind up with this problem where you need to alter an object’s state in complex ways, and just doing it through a public interface doesn’t remove the possibility of screwing it up.

Let’s say I have a character. I need to damage him, so I write a damage(int damage) function that does things like make sure his hitpoints don’t go below 0 and that he collapses when they do reach 0. But now I’ve got a spell that’s supposed to set his hitpoints to 1. So I can write a setHitPoints(int hitpoints) function that does the same stuff. But now maybe I want to add armor. Armor reduces the amount of damage, so let’s put that in the damage() function. But, oops, some stuff pierces armor, so what do we do? Your interface becomes more and more complex until you either have to give more direct access to state to simplify it or your interface itself becomes a problem.

And then there’s the problem of needing side effects to happen, and needing the code that causes those side effects to be agnostic as to whether side effects happen or not.

In the example you give, that works fine as long as firing a bullet is one simple thing that has to be looked out for. What happens when you want to make your weapons more flexible? Do you tell every object in the game how to react to every weapon? What if you’ve got different types of ammo? You wind up passing this massive, complex “effect” state around and every time you add something to that state you have to re-write every function that relies on it.

Making it so that fireWeapon() affects objects through their general interface is much simpler, since writing a new weapon effect just means writing code for that effect. But it’s a catch-22. If your code allows for flexible enough access to state in order to do that, it allows for flexible enough access to state to make it difficult to tell which part of your code is improperly setting an object’s state.

The root problem is that games are trying to create a clean model of systems that aren’t clean to begin with.

Let's say I have a character. I need to damage him, so I write a damage(int damage) function that does things like make sure his hitpoints don't go below 0 and that he collapses when they do reach 0. But now I've got a spell that's supposed to set his hitpoints to 1. So I can write a setHitPoints(int hitpoints) function that does the same stuff.

The proper way to do this is to write a private changeHitPoints method that does the checking for zero stuff, which both the damage and setHitPoints methods call.

But now maybe I want to add armor. Armor reduces the amount of damage, so let's put that in the damage() function. But, oops, some stuff pierces armor, so what do we do?

In the example you give, that works fine as long as firing a bullet is one simple thing that has to be looked out for. What happens when you want to make your weapons more flexible? Do you tell every object in the game how to react to every weapon? What if you've got different types of ammo?

You tell every object in the game how to react to every weapon, but you do it by making them all inherit from a single superclass.

Now, if you do Object Oriented Programming badly, it can become a ridiculous mess of spaghetti code. I speak from experience when I say functional languages are also entirely capable of becoming ridiculous messes of spaghetti code. Granted, the one I used was called Scheme, and its list interface is basically designed to become a ridiculous mess of spaghetti code without any help from the user. You want the nth item of a list, you have to nest n functions.

This is only solving the immediate problem of “I need to do this”, not “my class now has 1000 public methods with subtle differences between some of them”. What happens when armorPierce is more complex than an integer? What happens when type of damage matters? What happens when I need to know how much damage I’ve actually done (after armor/resistances were applied)? What happens when weapon X has an effect that procs when I’ve done half of your remaining hitpoints in damage?

You can create a new method to solve each of those problems, but as a general rule of thumb as soon as you write a method that does X to a class because class Y needs to do X to it rather than because X is an obvious behavior of that class your encapsulation starts breaking down and the question of “which class should this code be in” starts becoming muddier.

“You tell every object in the game how to react to every weapon, but you do it by making them all inherit from a single superclass.”

Trying to force everything into a single inheritance chain has its own headaches. And it’s not really a solution – the OO principle is that the weapon should contain all of the behavioral logic for firing that weapon. But here you are putting it in a different class, because the OO principle is also that another class shouldn’t expose the inner workings needed for the weapon class to manipulate it properly.

Then you find yourself in a situation where the weapon needs to do no more than 100 total damage, so it needs to start with the closest character to wherever it hit, try to do 20 damage to them, reduce that total by however much it actually did to them, then move on to the nearest target and repeat. But at the same time one of the characters it hits has his own special rules that the weapon doesn’t know about that should result in that damage total being reduced.

That kind of thing is why OO winds up being messy. No matter how slick you think you are with your “correct” way of doing things, something comes along and breaks it.

This is only solving the immediate problem of “I need to do this”, not “my class now has 1000 public methods with subtle differences between some of them”

Well, if you want to do thousands of different things you’re going to have horrible complexity somewhere. And it’s really not a big deal if you have a method that does nothing but call an identically-named method with more parameters that are automatically filled in. Happens all the time in the Java standard libraries.

What happens when armorPierce is more complex than an integer?

Basically the same thing, except you pass whatever represents having no armor piercing in the Damage(int damage) method. If armorPierce can be an integer or not an integer, have two damage classes that take the different kinds of armor piercing and convert it to one kind.

What happens when type of damage matters?

Same idea.

What happens when I need to know how much damage I've actually done (after armor/resistances were applied)?

Make them int methods and add return statements.

What happens when weapon X has an effect that procs when I've done half of your remaining hitpoints in damage?

First you fire either the man who told you attacks would only deal damage or the man who knew attacks might be more complex and decided to write a public damage method anyway, and then you make your damage methods private and have a single getHit(Attack incoming) method. And you make Attack an abstract class and have it have methods for every kind of special effect, and have the default behavior set so nothing of interest happens unless a subclass overrides it.

And it's not really a solution- the OO principle is that weapon should contain all of the behavioral logic for firing that weapon. But here you are putting it in a different class because the OO principle is also that another class shouldn't expose the inner workings needed for the weapon class to manipulate it properly.

The weapon generates an attack. The target resolves suffering an attack. This is in no way a problem.

That kind of thing is why OO winds up being messy. No matter how slick you think you are with your “correct” way of doing things, something comes along and breaks it.

See, that’s why you plan ahead. If you know in advance that your combat system is going to have all those things, you should not write a public damage method. However, if you do make changes, they don’t have to cascade.

Well, if you want to do thousands of different things you're going to have horrible complexity somewhere. And it's really not a big deal if you have a method that does nothing but call an identically-named method with more parameters that are automatically filled in. Happens all the time in the Java standard libraries.

“Happens all the time in the Java standard libraries” is not a great sign of elegance. And you are going to have to do thousands of things, because that’s what modern AAA games do.

Basically the same thing, except you pass whatever represents having no armor piercing in the Damage(int damage) method. If armorPierce can be an integer or not an integer, have two damage classes that take the different kinds of armor piercing and convert it to one kind.

Same idea.

Make them int methods and add return statements.

…and every time you’re making your interface more and more complicated. Also, adding a return value to a command is one of the “sins” of proper OO development. The “correct” way is to have one method that tells you how much damage the attack would do, and another to actually do the damage. Of course, this doesn’t work, both because it’s slow and because it can rely on the state of any number of things in the game. But it goes to show how complex systems sometimes require you to break your ideal patterns.

First you fire either the man who told you attacks would only deal damage or the man who knew attacks might be more complex and decided to write a public damage method anyway, and then you make your damage methods private and have a single getHit(Attack incoming) method. And you make Attack an abstract class and have it have methods for every kind of special effect, and have the default behavior set so nothing of interest happens unless a subclass overrides it.

I think you’ve confused yourself. Character.getHit() would need to call Attack.hit(Character character) in order for Attack to be able to perform an arbitrary function. If all of the logic is in Character, then Attack doesn’t need any behavior at all, just the state of the attack. If all of the logic is in Attack, then you haven’t improved upon fireWeapon().

What you’d *actually* need to fully solve the problem is for Weapon, Attack, and Character to all have a turn to run arbitrary code, and for each to know each other’s state. Character.getHit(Attack incoming) might well affect the character the attack comes from, which means that your encapsulation is shot to hell, no matter how you structure it.

The weapon generates an attack. The target resolves suffering an attack. This is in no way a problem.

This is just silly reductionism. It looks good until you realize that it’s just restating the problem, not solving it.

See, that's why you plan ahead. If you know in advance that your combat system is going to have all those things, you should not write a public damage method. However, if you do make changes, they don't have to cascade.

Game development is far too iterative to have that attitude. Especially when you’re more on the experimental end. And planning isn’t even going to help there – the system to do what you want needs complexity regardless.

No, here is how it would work: FireWeapon generates an attack. Then it probably goes to the general map, which determines what it will hit. The general map then calls Character.getHit(attack). In getHit, it gets the damage value and type from attack.baseDamage() and attack.damageType(). Then it calculates the damage dealt. Then it passes that into attack.onDealDamage(int damage), which returns an enum defining the rider. Then getHit applies the rider condition as appropriate and also handles your “deals 100 damage” rule. If putting those in the same method offends your sense of propriety you can make them separate methods and call them in order and it doesn’t really matter. At no point in this do you have to let anything access variables of other objects without using that object’s methods.

And yes, it can be difficult to plan ahead in game development. However, you wouldn’t usually start out planning to make a game where attacks only had damage values and wound up with attacks as complex as you’re describing. If you told me “we’re making a JRPG!” I would probably make an Attack object before you described any specific attacks that did something besides straight damage, because we’re obviously going to need one eventually. The advantage of object-oriented programming, though, is that if you told me “this particular boss takes half damage from all weapons,” I could just change that boss.

No, here is how it would work: FireWeapon generates an attack. Then it probably goes to the general map, which determines what it will hit. The general map then calls Character.getHit(attack). In getHit, it gets the damage value and type from attack.baseDamage() and attack.damageType(). Then it calculates the damage dealt. Then it passes that into attack.onDealDamage(int damage), which returns an enum defining the rider. Then getHit applies the rider condition as appropriate and also handles your “deals 100 damage” rule.

How is getHit() sending the attack on to other Characters? Is it generating more Attacks? What about the side effects that come from things happening when an Attack is generated? Are you giving the character who was hit a reference to the hitting Character so that it can process any effects or conditions? Are you going to model that as an attack too? Which class keeps track of how much damage the Attack has done so far? You’re asking the Attack what to do when it deals X damage, but what if the damage amount is not what’s important to it? What if it really wants to know if you were standing when you were hit or if you were jumping? What if it needs to activate its effect *before* its damage can be determined?

Your solution is already highly complicated compared to the procedural approach, and in return it requires all Attacks to be processed in something of a fixed way.

This is a general pattern that plagues OO: the very structure it enforces to keep itself clean results in doubly dirty methods being needed to subvert that structure when it doesn’t allow for a needed effect. Your overall state tends to wind up being more complex, since it requires a web of objects to pass effects around, and is less transparent to boot.

“If putting those in the same method offends your sense of propriety you can make them separate methods and call them in order and it doesn’t really matter.”

How does that help? You’re still winding up with weapon firing logic in the Character class.

“At no point in this do you have to let anything access variables of other objects without using that object’s methods.”

That doesn’t matter, because you can do just as much damage accessing variables of other objects through that object’s methods if they’re permissive enough.

Yes, if you change the question it changes the answer. And yes, you can keep proposing entirely new things which are not handled by the answers I have already given.

Now I will ask a question: say you have a procedural setup with several dozen different attacks. A character suffers 50 attacks, and then you determine that it is at negative hit points and not dead. Where is the error?

Because in all of the OO setups I have been describing, the error cannot possibly be outside of Character, and if you’ve been doing things tidily in there it’s in the private changeHitPoints method because nothing else alters the value.

I work on insurance software, but if this was done in my workplace, there would be many more objects.
The Character object would be a subclass of the Entity class. It would contain references to a CharacterState object, one or multiple Weapon objects and so on. The damage would be done by passing a DmgType object to the character one. The DmgType object would be a subclass of a Dmg class, which would be a subclass of an abstract EffectToEntity class.
There would be a table of object references of all entities in the World class. It would not determine who got hit, it would just send the Effect object to all entities, whose WasIAffected method would pass it on to the methods that actually do stuff.
The big problem with this is that making the software architecture would actually take much more time than writing the code that does stuff. It would be much more costly, unless you can reuse the system. The first game could take 5 years but churning out sequels would get even faster!

You are making the same mistake here as Shamoose did in his quick example: putting too much stuff into a single class. Your Character class doesn’t need to know how much damage every weapon does, or how all the spells work, or whatever. It should have a simple set of attributes, like hit points, speed, resistance to this and that, etc. Then your Weapon object (or bullet, if you don’t use hitscan weapons) will handle stuff like damage, armor piercing, (de)buffs, etc. It will call for necessary stuff from the Character object, do all the calculations in its method, and send back just the raw numbers that the Character object will use to change its stats with its method.

Now, whether you make a single Weapon class that will have all the attributes for all the various weapons, or specific classes for every different one (for example Axe, Gun, Fire bullet, …) depends on how diverse the weapons are. If all weapons just fire bullets, but with different damage and ROF, there’s no need to create different classes for rifle and magnum, but if you have swords and bows and magic, then you should probably group these separately.

So all the complexity comes from how complex your game is, not the code itself. If you have a simple set of rules, your code will be simple. That is, if you properly planned everything in advance, like guy said. This is why quick and dirty examples are usually wrong when dealing with OO programming.

And now that I think a bit more about this specific example, it seems much more efficient to have an intermediate Damage class that will take stuff from both the object dealing damage and the object receiving damage, and use that to do the calculations. This way, it doesn’t matter if the damage is dealt by weapon, environment, magic, or something else.

Yeah, another intermediate class just for damage. Now we’re getting into the interesting problems.

This takes that dense low-level complexity that was a tangle of side-effects in one function and trades it in for a bunch of high-level complexity in the class hierarchies and relationships.

Of course, the complexity has to go SOMEWHERE. But this ability to solve low-level messes by building high-level class relationships is an interesting choice. As a programmer you’re still going to need to read a bunch of code before you can accomplish anything. In C, you’re reading one long, dense function and then chasing all the called functions all over the place trying to figure out what’s going on. In proper OOP, you’re sort of stuck at the top and you end up reading 20 headers before you can figure out which class you’re looking for.

I think OOP gets a bad rep because of purists. If someone mindlessly adheres to OOP orthodoxy you end up with a project with hundreds of stupid little classes that do one simple thing. You have to sort through 100 50-line source files. It really is okay to just call some shit once in a while without building a class interface.

Related: These two extremes of programmers (low-level complexity vs. high-level complexity) seem to roughly map to coding styles. The former likes super-compact code like they think moving the characters closer together will make the program run faster, while the latter seems to view code as a thing to occasionally break up the vast expanse of white space.

Indeed, you don’t really need to make a new class that gets called once in a blue moon and does just a single conversion. That’s the importance of proper planning: seeing in advance what can be grouped together, and what should be separate.

Also, as with everything, the best solution is in between the two extremes. You should have discipline, but be flexible as well, depending on what the situation dictates.

By the way, comment sections are still broken. Could it be that you made their width static while you were changing the width of the blog? Or were they always static, but we didn’t notice because they were smaller?

My software engineering course has left me inclined to go for a moderate number of classes and a method explosion where each method is small and calls others. Makes them amazingly more readable, and easy to unit-test. Once you’ve unit-tested a method properly, you don’t have to hunt for errors inside it if another method that calls it is buggy. Of course, you might be using the method wrong, but they can be given intuitive names and clear documentation. Having if (isInBounds(xToCheck, yToCheck, xMax, yMax)) is much more understandable than writing the branch statement it performs inside your method to visit adjacent squares.
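That helper might be nothing more than this (a sketch; the four-argument signature follows the comment):

```cpp
#include <cassert>

// True when (x, y) lies inside a grid xMax wide and yMax tall.
bool isInBounds(int xToCheck, int yToCheck, int xMax, int yMax) {
    return xToCheck >= 0 && xToCheck < xMax
        && yToCheck >= 0 && yToCheck < yMax;
}
```

So visiting an adjacent square becomes if (isInBounds(x + dx, y + dy, width, height)) instead of four inlined comparisons at every call site.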

Mind, it could be an IDE thing. The ones I’m used to make it really easy to get to the method you’re calling.

Re: designing a game, I was inclined to put the damage calculation in the character object because I’d expect most of the numbers that adjust it to be associated with the character. It’s of course possible to do it in either. Also, my private method explosion would make it easy to write special cases.

” In proper OOP, you're sort of stuck at the top and you end up reading 20 headers before you can figure out which class you're looking for. ”

we need a better source code standard, imho.

because really you should be looking at the UML class hierarchy, not the classes, when you’re trying to figure them out the first time. But that means an attached image, which can’t just be dropped into the source code but has to be outside of it. Blah.

Lists in Scheme (and LISP, from which Scheme is derived) are singly-linked lists. Access is awkward because access is expensive.

There are probably some convenience functions to go with car and cdr, though. ISTR that Chez Scheme had, a decade and a half ago, six or seven methods like cadr, caddr, cadddr, etc, and it would have been trivial to write something like this E-Lisp example in Scheme. Probably like

Well, yeah, the built-in stuff contains nesting up to like six. However, I was writing an interpreter for class and had to extract stuff from lists of lists that could themselves contain lists and also it had to be tail-recursive.

I am absolutely positive there was a better way of writing the thing, but it didn’t become insanely unmanageable until the fifth assignment added “Objects and their runtime types” to things I needed to interpret and I had to either go into insane spaghetti code or redo the previous four projects. Though what really pushed up the insane spaghetti code was that I never really figured out how adding things to lists vs. putting the new thing and the list into a new list worked.

Granted, I have also written a compiler in Java (shut up) and it became a crazy mess for a similar reason. Since one step involved creating an abstract syntax tree, I figured I’d just represent it as a tree and have nodes contain the relevant information about themselves and propagate stuff up and down. This worked brilliantly until the part where I needed to convert it into an intermediate language.

Many programming paradigms are non-exclusive. A language can be both functional and OO. Common LISP is both. So is F#. C is not really either, though you could probably torture it into a semblance of functional behavior by using function pointers and structs all over.

Regardless, a functional approach wouldn’t really hide that complexity. It would tend to isolate it, and do so in ways that may make it easier to reason about the state of the system and of individual objects in the system.

I don’t have a lot of time right now, and I haven’t watched the video, but I need to correct your misapprehension about languages where buffer overruns are impossible. They don’t just quietly carry on when you try to access random memory. They crash with nice stack traces to show you exactly where your error was. Nobody ever wants programs to quietly carry on after doing something illegal, which is why Javascript is such a mess. There are also higher level structures in these languages like foreach, which make it a lot less likely that you’ll try to access space marine #25 in the first place.

I realize that I’m way late here, and most of the stuff I wanted to say was already said by others, but I still want to give my 2 cents about functional programming. The thing is that working with pure functions in an immutable and side-effect-free way is not at all everything there is about FP. There are lots of ways for a pragmatic programmer to use FP techniques to improve his/her code.

When working with my hobby game project I can’t stay away from state. However, I use the FP mechanisms provided by my current favourite language (Scala) to help me reduce the amount of code I need to write, and also avoid the constructs I find it easiest to make small errors with: loops and if checks. For a trivial example, here is the function on my game world object that runs NPC AI:

def updateNPCs = npcs filter { _.alive } foreach AI.update

I find this way of programming very terse and elegant, even if it might not pass muster for a Haskell expert.

One issue though, Shamus, is that at some point the game programming language is no longer a language but a game engine with a game script language.
And the engine has bounds checks and race condition checks instead.

That being said, even a Point’n’click game IDE can be used (misused?) to create a buggy game.

Also, for the record, the only “competitor” to C++ is actually C.
Myself, I dislike C++, especially all the object-oriented stuff, or rather how OO stuff is misused by people.

I also prefer unmanaged over managed (.NET) languages; I kind of like being able to manually allocate and free my memory, for example.

He mentions Rust, and sort of dismisses it; but if I see any language that he talks about becoming big for game development, I think it’s Rust. Granted, I haven’t used Rust, just studied it, so take this opinion with more salt than usual but from what I know it seems like a good fit.

His arguments against Rust seem to be

1) “it’s new”, which is fair; it’s going to hit 1.0.0 pretty soon, so obviously it’s pretty early to be throwing a ton of resources at… but then a hypothetical new language is obviously even more problematic in this regard; you have to invent the thing and then wait until it’s stable. Plus, Rust is already being used in a large project, the Servo browser engine, which I imagine is somewhere on the same order of magnitude as game dev.

2) that it’s “high friction”, which is also probably true, but the benefit of that friction is no memory management errors. If “low friction” were really your goal, then you’d be using a GC’d language. For the same reason it makes sense to do C++ style management, even though it’s higher friction than Java-style memory management, I think it makes sense to do Rust-style memory management over C++ style memory management.

It seems a bit like he wants a free lunch: a language that basically has the same benefits as Rust but without the overhead that makes that possible… and sure it’s great to talk about that hypothetically… but I’m not sure it’s realistic.

The thing is that Rust is “really high friction”. I like Rust, but it’s an absolute bugger to program in. You have to get used to thinking really differently, and lots of things which are easy or logical to do in C or C++ are really difficult or impossible to do in Rust.

I thought that Rust covered a lot of his points pretty well. He spends a lot of time talking about lifetimes and their management, which is the truly interesting thing that Rust brings to the table.

A game typically has a few entity lifetimes:
– things that are allocated at the beginning of time, and live forever (the UI, basic infrastructure, rendering buffers)
– things that are allocated at level load time and stay more or less immutable while you’re on that level (trees and rocks)
– things that are instantiated at some frame, may live for a while, then need to be freed up (monsters, bullets, particles, AI states, etc)
– things that are allocated and freed within a frame (geometry buffers, temporary AI calculation, etc)

You can write a “game specific” language that basically codifies these lifetimes and cuts down on a bunch of work. Or you could go with Rust, which completely generalizes this concept and then lets you do a ton of other interesting things with it.

The particularly nice thing Rust does with this is extend it to multithreading code, so that if the lifetime of something extends into another thread, then it makes it a compile-time error to touch it in a non-threadsafe way. It’s really awesome.

I do a pile of programming in C, and about 50% of the bugs that we hit are “dumb C crap” that Rust just makes into compile errors you can just fix. (The rest are actual algorithmic/spec/complex bugs that would happen no matter what, but having a richer type system can go a way to helping with those too.)

That would depend on if the game models “chambered rounds” at all. A game shouldn’t try to simulate reality, it should only implement the rules that make sense for the game model. A game like “Receiver” where the whole point is showing the gun as a mechanical device would, but most other games wouldn’t bother.

I remember seeing this talk and not being swayed by him. I guess it’d be interesting to see how far he’s gotten with it.

When I started looking around for languages to replace C++, the only one that drew my attention was Rust. The reason is, I have a number of things about C++ that I really like, and that I was shocked to learn other languages did not use.

One, destructors. Destructors are awesome, in that they solve the issue that garbage collectors try to solve in a straightforward way that’s applicable to every kind of resource, not just memory. There’s no reason why you should ever have a leak in C++, thanks to destructors (which enable the RAII idiom). With destructors, your resources look after themselves. You never need to remember to free things (file handles, memory, whatever). You just make a smart class to hold that resource.
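As a minimal sketch of what RAII buys you (the FileHandle wrapper here is invented for the example, not a standard class):

```cpp
#include <cassert>
#include <cstdio>
#include <stdexcept>

// RAII: the constructor acquires the resource, the destructor releases it.
class FileHandle {
public:
    FileHandle(const char* path, const char* mode) : f_(std::fopen(path, mode)) {
        if (!f_) throw std::runtime_error("could not open file");
    }
    ~FileHandle() { std::fclose(f_); }  // runs on scope exit, even during exceptions
    FileHandle(const FileHandle&) = delete;
    FileHandle& operator=(const FileHandle&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

void writeGreeting(const char* path) {
    FileHandle out(path, "w");
    std::fputs("hello\n", out.get());
}   // no explicit fclose anywhere: ~FileHandle does it here
```

Every exit path from writeGreeting, including a throw from fputs-adjacent code, closes the file, which is exactly the “resources look after themselves” property.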

Two, exceptions. Exceptions, when used correctly, provide code that’s faster and cleaner than the alternative error returning code. “When used correctly” is a pretty big caveat, though, so let’s look at a practical advantage. Exceptions do the right thing by default (crashing when you have a problem), while the main alternative, error checking, does the wrong thing by default (ignore the error and carry on).

In order to ignore an error, exceptions require that you write code to catch the exception and then do nothing about it. In order to ignore an error code, you just need to forget to check for it.
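The asymmetry in defaults can be shown in a few lines (a sketch; the function names and the disk-full condition are invented):

```cpp
#include <cassert>
#include <stdexcept>

// Error-code style: ignoring the failure takes no effort at all.
int saveWithCode(bool diskFull) {
    return diskFull ? -1 : 0;   // a caller can silently forget to look at this
}

// Exception style: ignoring the failure takes deliberate effort.
void saveWithException(bool diskFull) {
    if (diskFull) throw std::runtime_error("disk full");
}

void carelessCaller() {
    saveWithCode(true);         // compiles and runs; the error just evaporates
    // saveWithException(true) here would terminate the program unless caught
}
```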

The point of destructors is to clean up when the lifetime of the object ends. Managing the lifetime of an object is a separate matter. Objects that require dynamic storage duration (“on the heap”, living past the scope that instantiated them) can still use destructors to clean up after themselves. You just need to control that lifetime with a smart pointer.

As for them “not being used”, I’d say that depends on the studio. The one I worked in made use of them, although code quality was poor overall anyway. But that’s a separate matter. In any case, error checking is tedious, but it doesn’t impact performance significantly. And it helps productivity, since actually catching errors as soon as you make them instead of letting them pass until the application crashes some time later lets you fix the problems quickly. It results in less buggy code in less time. Any studio that sacrifices that for “performance” is absolutely doing the wrong thing.

The truth about performance is that unless you are working on a AAA game that’s pushing the boundaries of the hardware, you have performance to spare. If your game is running slow anyway, it’s because you have poor data structures all around and possibly too much indirection in critical areas. Error checking has nothing to do with it, and not using it is leading you to produce poor quality code.

Absolutely agreed on destructors. Well… not exactly on *having* destructors, but on *predictably invoking* them. (In C++, when the variable holding them falls out of scope or has operator delete called on it. Not sure how Rust is doing it.)

.NET had destructors as well, but they only got called when the GC eventually came around and destroyed the object (because it had no other choice, being a garbage collected language), so you couldn’t create something like an RAII mutex holder (which is forced to always release the mutex on function exit). Well, unless you also implemented IDisposable and had all your call sites invoke the Dispose method. But that’s not automatic, either.

They are often considered the most important feature of C++. RAII is an incredibly powerful tool that very few other languages have, and some of them try and fail miserably (e.g. Java has destructors, but due to how the GC works they are never guaranteed to be called).

The problem with Exceptions comes up when you do want the software to keep going. Crashing informatively is generally the correct response in development, but when in actual use it’s sometimes more important for the software to keep going than for every part to work perfectly. Also, you need to be careful about how you structure things if you want to react to errors but not crash. If you aren’t careful, throwing an Exception can result in a function stopping halfway through editing something.

The way I’ve been taught to write Exception throwing is to start the function by checking the parameter values. If you’re passed a negative number for a parameter you’re eventually going to use as an array index, just throw an IndexOutOfBoundsException right at the start. Also, only write a try-catch statement if you actually know what to do about the Exception, which is ideally simple because the throwing statement did nothing except throw it.

Also in most cases you want to use Exceptions to rewrite your code until they stop getting thrown. That’s where they shine.

This is why I added the caveat “when used correctly”. But even if you don’t know how to make good exception safe code, just throwing on bad inputs already provides the benefit of getting an unavoidable crash as soon as you make the first mistake, which gives you a chance to fix the problem.

Learning to use exceptions safely, however, takes no more than two hours. Here’s a link to resources and a video tutorial that covers the important bits: http://exceptionsafecode.com/

Your code should offer one of three guarantees: basic safety means that even if you throw, you remain in an indeterminate but valid state. No resources are leaked and no objects are left in invalid states. Strong safety introduces transactional semantics to your calls: they either succeed, or fail and everything remains as it was. No throw guarantee means that the operation can’t fail.

You can always offer the basic guarantee for free. You just need to wrap your resources in smart classes and have destructors that don’t throw. The strong guarantee requires that you reorganize your code a bit, which might introduce some cost. The goal is to not update persistent objects until you’ve done all the required work, and then make updates with non-throwing operations. Generally, you create an auxiliary object to hold the new state, and then swap it with your actual object. No throw is nice to have but can be impossible. You should only enforce no throw on move assignment, move construction, swap and destructors.
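The “auxiliary object, then swap” recipe looks roughly like this (a sketch; Settings and Config are invented for the example):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>
#include <utility>

struct Config {
    std::string name;
    int volume = 0;
};

class Settings {
public:
    // Strong guarantee: either the new config is fully applied,
    // or *this is left exactly as it was before the call.
    void apply(const std::string& name, int volume) {
        if (volume < 0) throw std::invalid_argument("bad volume");
        Config fresh{name, volume};   // do all the throwing work on a side object
        std::swap(config_, fresh);    // then commit with a non-throwing swap
    }
    const Config& config() const { return config_; }
private:
    Config config_;
};
```

If apply throws, config_ was never touched, which is the transactional semantics described above.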

As for guy’s claim that “in actual use it's sometimes more important for the software to keep going than for every part to work perfectly”, I strongly disagree. Program correctness is always the number one priority. Your program should never do the wrong thing just to “keep going”. If you keep going when your program is in an invalid state, then you could be destroying your user’s data, or generally not doing what you promised the user you would do.

You shouldn’t crash for a user, of course. Crashing is for developers. If someone wants to save a file and there’s no room in their hard drive, show a dialog that tells them “no space available” and let them free some space up and try again. But don’t just pretend like you saved it and ignore the error because it was important to keep going.

If the software controls an airplane in flight, it is completely unacceptable for it to suddenly stop working. Ideally you respond to a problem in some sensible manner, but with a complex program you could miss a bug during development because you didn’t consider that you might cross the International Date Line.

As I pointed out above, you should not crash for an end user (pilots, in this case).

It’s important to distinguish error signaling from error handling. Exceptions and error codes are the two basic mechanisms you can use for error signaling. I favor exceptions, because they require active effort to ignore them, whereas error codes require active effort to check them.

In either case, the immediate reaction in a well formed program is the same. You unwind the stack and pass the error to your caller. Either by letting the exception pass through you or by checking the error and returning early on error.

What you are talking about is error handling. When to catch the exception, or do something meaningful about the error. You seem to think that discarding an error is good, which would be equivalent to catching an exception and doing nothing about it. If your problem is an aileron is not responding because your hydraulic system failed, pretending that didn’t happen is not going to be of help to the pilot. Worse, you might update the logical model of the plane, and display in your instruments that the aileron is moving, but not actually move it and so confuse the pilot. If you instead tell the pilot “Sorry, can’t move the aileron. Is there anything else I can do for you?”, he can at least take some other mitigating action, instead of continuing to pull on the stick uselessly.

But you can be equally unhelpful with either exceptions or error codes. Your error handling strategy is only related to your error signaling mechanism by implementation details. Writing “try{}catch{}” for exceptions instead of “if (e != SUCCESS){}” for errors.

But with Exceptions, if you don’t anticipate an error and don’t put in a try-catch the thing crashes. Obviously you want to handle every possible error correctly, but in highly complex programs produced to deadlines there are going to be errors you didn’t think of. Airplane systems failing from crossing the International Date Line was an actual example; a squadron of F-22s nearly had to ditch in the middle of the Pacific. The choice isn’t “tell the pilot something is wrong” or “do nothing”, it’s “give bad information about one system” or “cause everything to stop working”.

Exceptions are not magical. They can’t happen unexpectedly. An exception reports the same failure that returning an error would report. If you have an unhandled exception, the equivalent code would have an unhandled error case. If you let either an exception or an error slip through and do nothing about it, both will eventually crash your program and exit with an error.

In order to do what you seem to be defending, you have to never check error codes. That “protects” you from unexpected errors, by not letting you do anything about expected errors either.

Imagine you have ten million lines of code. Now imagine that you forgot to check one error. It’s quite possible it will find a matching catch statement somewhere… that you put in to deal with a different error in some other part of the tree of called functions from the line or lines you surrounded with the try block. I am certainly not saying you should leave anything unchecked on purpose, but these things do happen.

I am also given to understand that they break the proof-of-correctness methods that academics use to conclusively demonstrate that a program cannot enter an invalid state, though at present those are ludicrously time-consuming to do. Probably going to be a bigger issue in the future, though, since DARPA is working on automated verifiers.

Admittedly I personally am rather fond of Exceptions because they’re really handy during development. They’re great for adjusting your code until errors stop happening at all. But there are legitimate concerns with their use.

Same applies to exceptions. You generally build an inheritance hierarchy of exception classes, and base catchers can catch any of the derived exceptions. So a general case catcher closer to main (or a generic “std::exception” one in main itself) can catch it and react as appropriate.
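Concretely, a sketch (NetworkError is an invented application exception type; the point is the base-class catcher near the top of the program):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// A derived exception for one specific failure.
struct NetworkError : std::runtime_error {
    using std::runtime_error::runtime_error;
};

std::string run(bool fail) {
    try {
        if (fail) throw NetworkError("connection lost");
        return "ok";
    } catch (const std::exception& e) {   // base-class catcher, as near main() would
        return std::string("recovered: ") + e.what();
    }
}
```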

By the way, planes are probably a bad example. Real-time systems tend to have very strict requirements, and for example the JSF coding standard (http://www.stroustrup.com/JSF-AV-rules.pdf) forbids them. The explanation given, however, is poor tool support, not any inherent weakness in exceptions themselves, and the document is ten years old. It would be interesting to see if they’d still impose that limitation today.

I don’t work in developing software for aeroplanes, but it seems to me that even in that extreme case an unhandled exception is better than an ignored error. Would you prefer the autopilot crash, or the plane crash? Because if it goes wrong but keeps running, the plane could do anything, while crashing is highly predictable: an alarm goes off, and the pilot flies manually while the autopilot reboots.

Well, he is now actually implementing the compiler for his own language, and he’s making great progress.

You should definitely check out some of his recent demos – there are some really really awesome ideas in there I have never seen in any other language before, like making SOA (struct-of-arrays) structures usable like AOS (array-of-structs).

Yeah I’ve been watching them all as they’ve been coming, the main focus of the language seems to be making it easy to do things that games developers do every day.
The arrays you mention are a prime example, so for people who haven’t watched the videos it’s basically this:
vec3 : struct{
x : float;
y : float;
z : float;
}
array1 : vec3[3]; // Laid out in memory as {x1, y1, z1, x2, y2, z2, x3, y3, z3}
array2 : vec3[3] SOA; // Laid out in memory as {x1, x2, x3, y1, y2, y3, z1, z2, z3}

The idea being you write what you had before, then if you decide you need to change how it’s laid out in memory, you can easily do it without changing code everywhere else.
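For comparison, getting both layouts in C++ today means writing the SOA version by hand, and every access site has to know which one it is dealing with, which is exactly the churn the keyword removes (a sketch):

```cpp
#include <cassert>
#include <vector>

// AOS: one struct per element; memory is {x1, y1, z1, x2, y2, z2, ...}.
struct Vec3 { float x, y, z; };
using AosArray = std::vector<Vec3>;

// SOA: one array per field; memory is {x1, x2, ...}, {y1, y2, ...}, {z1, z2, ...}.
struct SoaArray {
    std::vector<float> x, y, z;
    void push(float px, float py, float pz) {
        x.push_back(px);
        y.push_back(py);
        z.push_back(pz);
    }
};
```

Switching a codebase from v[i].x to v.x[i] touches every access site; in the proposed language it is a one-word change to the declaration.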

Also simplifying default values of structures:
vec3 : struct {
x : float = 4.6; // will be default initialised to 4.6
y : float; // will be 0 by default
z : float = ---; // Will not be initialised, will be whatever was in memory beforehand
}
a : vec3; // Will be {4.6, 0, anything}
b : vec3 = ---; // Completely uninitialised
A lot of the time you’ll want things 0 initialised, if your class wants another default you set it in the definition, if late in development you decide you don’t want to initialise your memory for performance reasons you make a minor change to the definition and it’s done.

Also features like:
var = new Foo;
defer delete var; // Will delete when we exit this scope, safe with early return statements

for i 1..5 { for j 1..5 { if j == 3 { break i } } } // break i will break from outer loop not inner loop

Just basically lots of things that would make your code cleaner, faster to write and in some cases faster to execute as the compiler can be more aware of what optimisations it can make.

Ehh, I don’t like it much. It seems like it helps you code sloppily. For example, C++ is actually moving away from uninitialized variables with the auto keyword and the Almost Always Auto idiom. The idea is, you can declare a variable as

auto x = some_expression;

and x will infer the type of some_expression and be of that type, initialized to that value. For example:
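Say you have something like this (a minimal sketch; LoadNames is invented for illustration):

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Stand-in for some function whose return type might change later.
std::vector<std::string> LoadNames() { return {"alice", "bob"}; }

std::size_t countNames() {
    auto names = LoadNames();   // deduced as std::vector<std::string>
    auto count = names.size();  // deduced as the container's size_type, not int
    auto ratio = 1.0f / 3.0f;   // deduced as float; no silent narrowing
    (void)ratio;
    return count;
}
```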

This provides three benefits. First, type inference means that if I change the type my function returns, I don’t need to retype that type. It’s nifty. Second, I actually get the exact type of the function, without casting to a different type. For example, if I write “int x = FunctionReturnsFloat();”, I’ll get an int and lose the decimal part of the number returned by my function. This might not be what I wanted to do, however. With auto, I can’t make that mistake. Third, you can only declare the variable when you have a value to assign to it (since you can’t infer a type if you don’t have an object).

And if you think about it, what is the point of declaring an object before you actually have the stuffing for it?

Eh, I’d rather stick with explicit initialization and have the compiler generate a Possible loss of precision: float to int warning. The variable is declared in only one place, but used in many.

On the other hand, I really don’t like default initialization. Typing = 0 is only two extra characters per variable, while defaulting to 0 because you forgot to type = 1 somewhere is a nightmare of runtime errors.

Ah, right, I’m used to mostly doing Java, where the compiler is at least by default a lot touchier. Though I recall that you can get away with not initializing fields of an object in the constructor, and it just crashes on you if you try to use an uninitialized field — not as convenient as a compiler error, but at least easier to diagnose than a wrong default.

Well, the point of Jon’s initialization syntax is that you can use the old C/C++ undefined behavior initialization if you want to, but have to explicitly specify that.

As for an example where you don’t want to default initialize: Don’t think objects but think arrays of primitive values. Say you pre-allocate a buffer of 1024 float values. In C/C++, you will only pay the cost of allocation, whereas in languages like Java/Python you will also pay the additional cost of pre-filling that data with default 0 values.

If you know that you are going to fill that array with values generated in some way, but you want to allocate it up-front in one big chunk, pre-initializing to a default value is going to be wasted time.

Of course, you wouldn’t use this kind of code in higher level areas, but in performance critical sections, like the guts of a rendering engine.

Arrays are the only place where uninitialized memory makes sense, I agree on that. Although then you have to remember that you can’t use that memory until you initialize it, and there’s no way for the compiler to catch all errors related to that.

Is it not feasible/possible to check before accessing to prevent buffer overflows?

I’ve only used PHP in recent years so I’m spoiled by functions like array_key_exists($key, $array) and other various ways to determine if I’m accessing a valid element in the set. I know PHP has way better text processing functions (as it should for its purpose), but surely it’s possible to check if the index you want is in the array or list you’re referencing. Is it just too cumbersome?

It’s possible, but it’s easier not to. Since programmers are lazy, not doing things wins out over doing them. Especially when it comes to safety, where doing dangerous things is fine so long as you’re careful. Most people overestimate how careful they are.

Slightly more seriously, buffer overflows are really not that common in modern C++ code. Between std::array, std::string, std::vector and range based loops, there’s no reason why you should ever find yourself in a situation where you run out the end of an array, or access items that don’t exist, etc. That mostly just happens when interacting with old code, or by old-school programmers that haven’t been introduced to the newer, safer ways of doing things.

I don’t believe it’s ethical to write code that accesses the Internet without the language doing bounds checking. You are one bug away from adding your customer’s computer to a botnet. Even if you only connect to your servers, it’s quite possible to stick a proxy in the middle if they connect to a Wi-Fi network they shouldn’t have trusted. It’s been 25 years since a buffer overflow in the Unix finger server let a worm bring down the Internet; it’s simply unacceptable not to learn from that.

I still like C. No bloody ++, # or whatever postfix. And it has been updated; there’s a 2011 version (C11) where they actually worked with the C++ people.

I spent a few days recently really diving into C++, and I ran into some newer C++11/14 stuff that was supposed to make things better (unique pointers), and it drove me insane! I mean, it was bad enough that the tutorial series I was following insisted on typing out std::cout rather than cout (it didn’t use “using namespace std”), but once you start using your own classes and libraries, suddenly I’m typing libraryx::function::thing and so on, and the lines of code got insane. The guy wasn’t even using the constructors and destructors (it was no wonder he wanted to use smart pointers, because he was too stupid to use a destructor). The only thing I think I liked about C++ was the vector class; I found it really handy, so I sat down in C and wrote my own functions to do what I liked about it. Pretty straightforward, really. And I don’t have a problem remembering to free() memory and handle my pointers properly, because I don’t have my hand held by C++ doing it all for me with bloated, lengthy lines of code.

I don’t know… the newer versions of C++ “functionality” turn me off of it even more. I guess that whole template structure annoys me; there has to be a better way to do them.

After spending some time on C++, actually creating a usable graphics engine written in C++, I was content to move back to my beloved C with all its pointer goodness, where I can just use a single * rather than a 40-word essay. ;)