I switched to one-step initialization using constructors a couple of years ago and never looked back. I don't use exceptions (in C++) because in my games, if I need something, I really need it to keep the game going; so if something goes bad, I just crash with a call-stack log and go fix the bug. The rare things that might fail and can be recovered from (e.g. an optional data file that isn't there) get pre-tested, then constructed only if the situation allows.
But the rule is: right after an object is constructed, it has to be ready to go and fully initialised. I have the feeling this has dramatically decreased the amount of runtime crashes in my software, because construction order and object relationships are much better planned. No init( bla, bla ), no setMyImportantPointer( bla* bla ). If one object needs another one to be fully constructed, the relationship is explicitly expressed in the constructor, so everything is forced to be top-down.

Consider the C++ standard library. With a std::fstream you can specify a file name as a constructor argument, but if the file can't be opened, it doesn't throw an exception. In comparison, if any of the std::string constructors fail, an exception is thrown.

That was not a design decision; it's because the iostreams library was written before exceptions were added to C++. You can enable exceptions in iostreams by calling the stream's exceptions() method (std::basic_ios::exceptions).

And I wouldn't refer to the iostreams library for any kind of design advice. It's not exactly a shining example of C++ engineering.

All my classes are initialized in their constructors and destroyed through their destructors.

I've worked on countless large-scale projects, and claiming that some objects "need" two-stage construction because they are too complex just means you're putting too much stuff in a single class. One class, one purpose. Using methods like Init() and Shutdown() is just sloppy style and could be avoided by properly designing your object model. Yet for some reason some C++ programmers appear to be scared of adding classes, usually with excuses like overhead, performance, binary size and whatnot.

C++ classes are designed to be usable without overhead. You can declare a struct with some methods and a std::uint16_t in it, and it will have a size of 2 bytes. If you stack-allocate it, the result is the same as if the code were implemented in its owner class. That's why there's really no point in writing silly classes like CGraphics with InitWindow(), InitD3D() and stuff like that.

Same goes for exceptions: I don't do error handling without them. That includes Windows, the Xbox 360, Android (Crystax NDK) and WinRT (Win8 + tablets + phones).

It's not only those cases where someone mistyped an asset filename (and even then exceptions would be the appropriate choice: the OpenFile() method can't resolve the error, so it goes up the call stack; the LoadImage() method can't resolve it either, so up it goes again; the ReadAssets() method finally could catch it, log the error and use a pink placeholder asset). Back to the paragraph's opening line: there are tons of other cases where errors can occur and you can't do a thing. Failed to initialize Direct3D. No compatible graphics adapter. Unsupported pixel format. Swap chain resize failed. And of course all those little API calls that usually work, but where due diligence requires us to check that they really did their job.

The point many C++ programmers don't get is that exceptions aren't fatal; they merely indicate that the current scope can't reasonably deal with the error. Yes, there's the concept of exception safety, forcing you to employ RAII; if you ignore it, funny things may happen. Without exceptions, there's the risk of forgetting to check result codes, and avoiding that risk forces you to make your code unreadable by littering it with error checks; if you ignore it, funny things may happen, too. Given the choice between tedious result-code checking with unreadable code and equally tedious RAII programming with nice code, guess which I'll pick.

Leaving aside the exception part of the "safe code" vs. "fast & stable code" philosophy and practices being discussed, I would like to raise a point related to building an actual game:

Your game will eventually get to the point where a level is restarted (maybe because the player died), or a full game session is restarted. Sometimes the one-step approach works fine, because objects from the previous level/session get destroyed and recreated.
In fact I use this a lot (minus exceptions; I don't use them).

But it may be possible that some of these objects must, for some reason, stay persistent throughout levels or even game sessions. So you can't delete them and create new ones, but you'll probably need to reinitialize most of their variables to a default value.
For these cases a two-step approach is much better suited, where you just call init( bool firstTime ) rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit is failing (because you pasted more code than you should have, or forgot to copy something).
I also use this two-step approach when suitable.
Furthermore, the multi-step initialization design adapts more easily to multithreading (leave the concurrent part in one or more passes, execute the serial code in another pass).

That was not a design decision, it was because the iostreams was written before exceptions were added to C++.

C++ went through a process of standardization where features were added to the language and things like the CFront library designs were changed in response to those feature changes. The new versions of the standard library were given extensionless header names and the old CFront style implementations continued to be available for a while from most compilers in the form of the .h header files. Ex: iostream vs. iostream.h, fstream vs. fstream.h, etc. There are actually some interesting differences in the interfaces between the old iostream library and the newer iostream library, such as the removal of the noreplace and nocreate flags. Another is that the old-style iostream library gave istream and ostream protected constructors. To extend the library you would derive from istream or ostream, and in your constructor use the default constructor for the base class and call the init() function from the base class. Whether or not you believe the interface of the iostream portion of the standard library to be a good design decision, it was nonetheless an actual design decision.

But it may be possible that some of these objects must, for some reason, stay persistent throughout levels or even game sessions. So you can't delete them and create new ones, but you'll probably need to reinitialize most of their variables to a default value. For these cases a two-step approach is much better suited, where you just call init( bool firstTime ) rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit is failing (because you pasted more code than you should have, or forgot to copy something). I also use this two-step approach when suitable.

And that is, in my opinion, where your design went bad. It only seems that the two-step approach is "much better suited" because earlier on, you failed to separate those parts of your game object that survive a map change from those that are map specific.

By knitting them both into the same class, you created a weird amalgamation which will cause raised eyebrows in many situations: if you want to save the game's state, you would have to carefully check what is permanent (= you save it) and what is level-specific (= don't save it, but pull it from somewhere after the object was loaded). If the map is unloaded, only a subset of the methods of those game objects may be used (= be careful to document which methods you may use when displaying e.g. player stats in the menu without a loaded map).

Whether you implement such two-stage initialization with Init()/Shutdown() or Reinit() is just a detail: both are two-stage initialization. Code duplication would be avoidable in both cases.

For functionality such as level resets, this makes me think of functions like std::vector<T>::clear, where std::vector could be implemented with either one-step or two-step initialisation, regardless of its ability to be reset back to a default state -- this seems an orthogonal issue, and should be equally possible under either idiom...

For times where you want to set a large amount of state back to some very particular values (e.g. loading a save game, re-loading the default level state), I'd just make sure that all of that data is POD, in as few contiguous blocks as possible, and uses offsets instead of pointers, so that it can be set/reset with just a single (or a few) memcpy calls. Comparing this simple approach to fancy two-step deserialisation systems with init order dependencies and pointer-patching... those systems just make me think of the pejorative "enterprise software"...

I remember learning about the fundamental usefulness of invariants and pre- and post-conditions, and how they can make reasoning about software easier and more correct. I remember that as object-oriented tools became available outside academia in the 1980s, these engineering concepts were applied there and became ideas like object invariants.

Then PCs became popular and everyone typed in BASIC from magazines, and we were thrown back to the spaghetti code of the 1950s and 1960s. Then along came the web and everyone wrote JavaScript. Now we see arguments over whether good engineering practices developed through peer-reviewed journals over decades of experience are good (i.e. using C++ constructors to construct objects), or whether it's best to throw spaghetti at the fridge and see what sticks because that's the way you've always done it (using C/Fortran/BASIC-style initialization functions to set values in your structures).

I guess there are a large number of factors to take into consideration when you decide which methodology to use, ranging from your age and experience to whether you're going to be maintaining the software in the long term, and how long it's been since your manager has actually touched code (the latter is usually the most important factor in any tech decision).

The Builder pattern solves this. Pass only the data or private implementation from the builder to the constructor, and in the constructor do only no-throw member initialization. The builder can be multi-step, polymorphic, and can throw exceptions or return null values, whichever one likes. And the constructed objects are always usable and valid.

But it may be possible that some of these objects must, for some reason, stay persistent throughout levels or even game sessions. So you can't delete them and create new ones, but you'll probably need to reinitialize most of their variables to a default value. For these cases a two-step approach is much better suited, where you just call init( bool firstTime ) rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit is failing (because you pasted more code than you should have, or forgot to copy something). I also use this two-step approach when suitable.

And that is, in my opinion, where your design went bad. It only seems that the two-step approach is "much better suited" because earlier on, you failed to separate those parts of your game object that survive a map change from those that are map specific.

By knitting them both into the same class, you created a weird amalgamation which will cause raised eyebrows in many situations: if you want to save the game's state, you would have to carefully check what is permanent (= you save it) and what is level-specific (= don't save it, but pull it from somewhere after the object was loaded). If the map is unloaded, only a subset of the methods of those game objects may be used (= be careful to document which methods you may use when displaying e.g. player stats in the menu without a loaded map).

Whether you implement such two-stage initialization with Init()/Shutdown() or Reinit() is just a detail: both are two-stage initialization. Code duplication would be avoidable in both cases.

I won't deny that things went wrong there. But unless you're coding Tetris or Pac-Man, design issues will always come out given enough scope. I do actually separate persistent from non-persistent data, but the combinations were trickier than I anticipated.

These are all different kinds of reinits you should take into consideration (although some may not apply to all kinds of games). They're not the same and are treated differently:

Level reloading: the player died. Reloading should be very quick to prevent frustration. Of course, a well-balanced game should prevent a player from dying often, but that's not a technical issue. Anyway, YOU are going to die very often while balancing the game, and high reloading times don't help.

Object reloading because a new level was started or a different area was reached: this is usually taken into consideration.

Object reloading for memory & performance: it's still the same level/area/whatever, but memory usage is going off the charts, and your engine is capable of destroying objects which are no longer needed until the player gets close to them again. This is usually taken into consideration but rarely implemented the right way.

Reloading for in-place editing: this is often the most overlooked, the most versatile one (which is what makes it hard), and one of the most relevant! Iteration becomes very important in making a great, fun game, and real-time editing is key to improving iteration. The point of this kind of "reload" is to prevent the designer from having to close and reopen the program every time he makes a change. This could be a GUI modification, a stat value change, a different placement of an object, or a change of size. It can get worse: it could be a change to a value used to precompute something at level-loading time. And you need to implement this kind of reload to be faster than closing and reopening the game, without crashing it (dangling pointers? division by zero?) and without inconsistent states (most objects still using the old values).

These are all reloads that may be treated differently (especially the last one). And given enough complexity, they start to become a bit contradictory, in the same way that GPUs are faster when sorted front to back, traversing by shadow buffer to save switching render targets, traversing by surface type to save switching shaders, and traversing by skeleton to keep the animation caches nice and warm, supposedly all at the same time (yes, I just quoted TomF's blog's Scene Graphs article).

Oh, and I forgot... keep it FAST. In-place editing can be done the right way, but then you get Blender-like or Maya-like performance. It's good, but nowhere near good enough for a real-time game. Or you can build your game to run very efficiently, but then there's a lot to preprocess or tag as "read only". And make sure your reloading for "memory & performance" is done in the background; framerate spikes are very bad for the gameplay experience.

This is, among many reasons, why some engines opt for two different executables for the game editor and the game itself rather than one. That's ok, but just make sure you have the resources (namely time & money) to keep two different projects up to date. A brilliant design can minimize the effort to keep them both synchronized, but who said it was easy?

And like you said, avoiding code duplication is key, and I can't emphasize it enough. Maybe that part of my post was misunderstood? I never argued in favour of duplicating code, or tried to imply I ended up doing that. It's the other way around: I was trying to show how to prevent it.

I prefer one-step initialization, unless you have a thing for extra typing when creating objects. Also, I find that one-step initialization tends to make you think more about the object graph and its dependencies, so you are less likely to end up with co-dependent / mutually dependent objects and order-of-initialization problems.

The issue of how to handle initialization failure (exception, flag, or something else) is completely independent of whether you do one- or two-step init. A constructor can flag an invalid state or throw, and so can ::initialize(). Error handling is important, but for this particular topic it is a red herring.

The only practical consideration here that I can think of (besides extra typing) is whether or not you want to be able to put your object (and not a pointer to it) into STL or similar containers. Many/most containers require that your object have a no-argument constructor. If a no-arg constructor makes sense and you want to put the object into a container, then one-step is much more practical.

If you aren't sure, write a separate ::initialize() method and have your ctor call it. Be sure to do the right thing if ::initialize() is called twice. If the object in question lends itself to object pooling, then you'll want ::initialize(), ::reinitialize(), ::clear() or similar so that you can tidy it up before you put it back in the pool. Speaking of pooling (I realize this is going a bit off topic), if you think you are likely to end up with pooling, then make all your ctors private and use a static class method to get an object instance.

I use one-step almost exclusively; the code is simply cleaner/shorter that way. I do very rarely use two-step, almost always because I wish to reuse memory (say the type in question contains a few dynamic buffers), but even here I hide the fact that it is two-step initialization from the user of the object (template magic).

@Matias Goldberg: I got that. I didn't want to imply you recommended copy&paste programming. When you wrote "...rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit is failing (because you pasted more code than you should, or forgot to copy something).", in my opinion reinit() is a) just another form of two-stage initialization and b) can avoid code duplication just the same (it's not like reinit() would somehow force it).

I would still strongly prefer one-stage initialization in all 4 cases you listed. But instead of just criticizing your design, let me explain how I would have done it, and you can criticize mine. :)

Object reloading for memory & performance: I have a logical game object which maintains the persistent state (eg. class ScarySpider with int health, Item loot, int xp). Add AI, physics etc. via composition. This thing gets loaded and saved with a level and in savegames. With a client/server model, the server would only work with this logical game object. To actually render it, there is a class ScarySpiderPresenter which creates and maintains the visual and audible representation of the logical game object in the game's scene graph. If I wanted to reclaim memory this way, I'd destroy the ScarySpiderPresenter but leave the ScarySpider.

Reloading for in-place editing: Kill ScarySpiderPresenter, create ScarySpiderEditorPresenter. The latter could derive from the former or both could derive from a common base class as appropriate if there is shared functionality. This presenter could draw bounding boxes, overlay the AI's current patrol path / attack target or display scale and rotate widgets or whatever you like. Normal gameplay is not burdened with dragging the editor-specific state variables along as dead code.

Level reloading: Either be very clever and mutate the active world's state into the state loaded from the saved game / level (this is how I understood your approach) which will neatly cause the presenter hierarchy to create/destroy presenters as needed, all without requiring the save/load code to get clogged dealing out references to graphics devices, input managers and audio managers to the newly created objects. Or alternatively go ahead and kill the world with its presenters, then recreate it, taking into account that long loading times primarily result from needlessly reloading resources from disk - which can be solved by a resource manager with a LRU list and a memory budget.

So in short, whenever two-stage initialization is used, there are actually two classes being glued together. Yes, design gets more complicated as a project's scope grows, and the effort to change late in the process becomes ever larger. All the more reason to pay attention to it, especially if one is not just coding Tetris or Pac-Man!

@Matias Goldberg: I got that. I didn't want to imply you recommended copy&paste programming. When you wrote "...rather than having to refactor everything or resort to copy-pasting the constructor into a reinit() function, then debug why reinit is failing (because you pasted more code than you should, or forgot to copy something).", in my opinion reinit() is a) just another form of two-stage initialization and b) can avoid code duplication just the same (it's not like reinit() would somehow force it).

Oh, I see the confusion. Let me rephrase and be clearer about what I was trying to say: when using one-step, eventually something may pop up that requires a reinitialization that is inconsistent with, or hard to solve within, the one-step initialization solution you've designed from the beginning. To solve that issue, you have three choices:

Refactor everything so that you still end up using one-step init. The most elegant solution, but it requires time to refactor.

Modify the affected part so it ends up as a two-step init (for this/these particular case(s)). For example, the first pass may contain immutable data, while the second one contains data that is supposed to be reinitialized. When it's time to reinit, just call the second pass's function. Much faster to code and still elegant.

Copy/paste the constructor into a different function and use that function when you have to reinit, as an exceptional case --> avoid this, it's error-prone.

I would still strongly prefer one-stage initialization in all 4 cases you listed. But instead of just criticizing your design, let me explain how I would have done it, and you can criticize mine. :)

Cool

Object reloading for memory & performance: I have a logical game object which maintains the persistent state (eg. class ScarySpider with int health, Item loot, int xp). Add AI, physics etc. via composition. This thing gets loaded and saved with a level and in savegames. With a client/server model, the server would only work with this logical game object. To actually render it, there is a class ScarySpiderPresenter which creates and maintains the visual and audible representation of the logical game object in the game's scene graph. If I wanted to reclaim memory this way, I'd destroy the ScarySpiderPresenter but leave the ScarySpider.

It's pretty much how my system already works. ;) There will be a few (solvable) issues I ran into; for example, the AI needs bounding-box information for its calculations, which needs to be handled correctly during the composition or else they'll end up tied together. The physics needs to access the animation data to extract motion, and so on. Unfortunately, in my case destroying ScarySpiderPresenter is hard (but not impossible) because the graphics is manipulated from a different thread than the logic. However, I do have full control of what happens inside ScarySpiderPresenter; but since I use Ogre I start being tied to what happens inside Ogre::Entity. A trade-off from using existing tech rather than rolling my own or modifying an existing one. Unfortunately Ogre is not very multithread-friendly and I don't have time to spend on it, but I've already submitted a proposal for 2.0. It would be very easy for me to implement those features, but it takes time, and currently I'm committed to finishing my own project.

Reloading for in-place editing: Kill ScarySpiderPresenter, create ScarySpiderEditorPresenter. The latter could derive from the former or both could derive from a common base class as appropriate if there is shared functionality. This presenter could draw bounding boxes, overlay the AI's current patrol path / attack target or display scale and rotate widgets or whatever you like. Normal gameplay is not burdened with dragging the editor-specific state variables along as dead code.

That's a very interesting method (and surprisingly simple, I like it). I'll take it into account! Thanks.

Level reloading seems fine; there are hundreds of ways of doing it. Like I said, that's the one that most often gets the attention.