A game developer's field journal

Most of the art that comes my way is in Flash format, so I’ve been using Flash’s built-in sprite sheet exporter. It works well enough, as long as you work within its limitations. As I’ve been getting into pixel art recently, I’ve finally had a reason to try out TexturePacker, from CodeAndWeb.

They are kind enough to offer a free copy of the software to developers with active blogs, so I was lucky enough to get a license at no cost. Having used the software, though, I can say it would be well worth the US$39.95 it sells for.

My favourite feature so far is “Smart Folders”, which automatically detects and adds new assets to the sheet. Just export image sequences from Photoshop to the target folder, and they are immediately packed and available to be published into your game. There is excellent support for a wide variety of game engines and formats, and the ability to create a custom output format.

I’m a big believer in the idea that it’s worth spending a little money to avoid hours of time messing around with inferior tools. Texture Packer is a professional tool for developers who want to set up a smooth art pipeline, and I can happily recommend it.

UPDATE: Since writing this post, mainly as a note-to-self, I’ve come across this much more thorough post covering the same material and more.

Inferior post follows. See bottom of post for bonus links:

I’ve recently spent a huge amount of time trying out pretty much every major (and many minor) pixel art animation tool available, and have been quite surprised to find that there is no perfect tool out there. There are plenty of choices, but they are all lacking in one way or another.

After all is said and done, I usually find myself falling back on Photoshop. Although Photoshop has pretty awkward animation tools, they can be made to work for you, especially if you can manage to locate a couple of key settings that make your life a lot easier.

To get started, create a canvas of small size, such as 32×32. Under the ‘Window’ menu, select ‘Timeline’. This will open the timeline panel. Select frame animation mode (rather than video mode), and click ‘Create new frame animation’.

You should now have a single frame of animation on the timeline. An important thing to realise is that the frame animation editor does not work like most pixel art editors. The majority of editors basically duplicate the whole document for each frame, allowing you to edit each frame separately without affecting the others. Photoshop does not insulate frames from each other in this convenient way: when you make a second frame and edit the image, that change will be visible on the first frame too. Here’s the reason why:

Each frame of animation is basically just a group of layers. You can choose which layers are visible for a particular frame, and you can also offset their position, but if you edit their pixels, that edit will show on every frame where that layer is visible. For this reason, you need to think of layers as your basic unit for a frame, and make a separate layer per frame. Some people also animate by making lots of layers and moving them around separately. It’s a very different workflow to that found in dedicated pixel art animation software, but it can be made to work.

Headaches and their Cures

Probably the first major headache you will encounter with this setup is that when you add a new layer, it gets automatically added to all the previous frames, requiring you to painfully go back and hide it on every previous frame, a job that gets more and more annoying as the animation grows.

You can fix this by using the option in the timeline menu called ‘New Layers Visible In All Frames’.

While you’re there you may want to turn on ‘Create New Layer For Each New Frame’. This option saves a bit of time, depending on your process. You still have to hide the previous frame’s layer on each new frame, but note that you can drop its opacity before hiding it and use it as a kind of simulated onion skin. (Another basic feature that’s sadly missing from Photoshop’s frame animation.)

Another headache is caused by a weird setting that can be found on the layers panel, called “Propagate frame 1”.

This will make any changes you make on frame 1 propagate across the whole animation, which is not what you want. The tick box seems to default to being selected, and must be unticked for every layer (if you intend to animate using layer offsets).

So those are my tips to make an awkward and frustrating experience a bit less aggravating. To keep things simple I try not to use too many layers per frame, but with time and practice it is clearly possible to develop a surprisingly complex process.

UPDATE:

Check out this great video by Barney Cumming, the animator of Crawl, showing his pixel art animation process in Photoshop:

I’ve been doing a big survey of all the available pixel art and animation tools.

I have been a bit surprised to discover that there just isn’t any single killer app for doing pixel art animation. Although there are tons of retro pixel art games coming out all the time, for some reason the tools are lacking. I don’t know what people are using to make their artwork, but it seems that you are doomed to suffer one inconvenience or another no matter which software you choose.

What I Want Out of an Application:

There are a few things I want out of a piece of software. Some things are just “nice to have”, and others are essential to my process.

Cross Platform – not absolutely essential, but I would much prefer an option that allows me to work on any platform.

Frictionless drawing experience – drawing tools must not be fiddly or time consuming to use.

Useful animation tools – there are a lot of tools that do drawing well, but for making games the animation workflow is of vital importance. I want a smooth animation experience, with features like onion skinning etc.

Good colour selection and management – this is an area where a lot of software is weak. This may be because pixel art comes from an old school, palette-based mindset, but if it’s not painless and immediate to select the colours I need, I just can’t work with a program, no matter what its other strengths are.

Offline – there are a few online editors around, but I prefer not to be reliant on the internet.

These are the apps that I tried out, and my take on their strengths and weakness:

Aseprite is cross platform, which is a real selling point for me. It uses what is, to me, the best approach to pixel animation: each frame is basically a duplicate of the entire document, and can be edited separately. Its weak point for me is the colour selection. It uses a palette-based system, which might suit some people, but I’d much prefer a simple colour picker. Aseprite forces you to dial in colours using a collection of sliders, which makes it essentially unusable, to my mind. Apart from that, it’s close to being awesome. So close…

The Good

Cross platform.

Decent animation flow.

The Bad

Colour management and selection is not great. (Focus on limited palettes.)

Pixelated UI is kind of freaky. (Not really an issue though, I kind of like it.)

Pixen is a great tool with great features. It is as close to being the perfect tool for pixel art animation as I’ve found. I have not used the very latest version, but I’ve had very positive experiences with it in the past. The version I used (the last open source version available to build yourself) took the right approach in all areas. It does everything you need, and nothing more, allowing you to focus on being productive. Sadly Pixen is only available for OSX which is not ideal, and prevents me from committing to it.

The Good

Great tool with great features.

The Bad

OSX only.

At first glance this app looks great. I bought the licensed version and was feeling pretty happy until I started trying to do some animation. This is where things started to fall apart. The app was obviously written with tileset creation in mind. It is great for this, but it tries to force animation into the same grid-based system used for tiles. It really needs a separate animation module that works the way most of the other apps do, with a separate canvas per frame. With that, this would be a top contender for best pixel art tool on the market. Without it, it is unfortunately not very useful for animation. It does have strengths though, such as an especially good colour selection tool.

The Good

Cross Platform

Nice UI

Great colour selection tool

Great tile creation features

The Bad

Beta software (some glitches present)

Poor animation workflow

Basic grid-based layout not suitable for animation

Can’t copy more than one layer at a time from frame to frame, making layer-based animation impractical

GraphicsGale is an older app, with a quirky, non-intuitive interface and a fair bit of weirdness. Once you get through the learning curve and figure out how to make it do what you want, it is a pretty strong contender. I find the colour management system a bit too limiting, and the colour selection tool is just not that great. There are a few third-party applications around for creating palettes, but that is definitely a layer of friction it would be nice not to have to deal with. Another quirk is that it does not support true transparency. In order to export a transparent image you have to select a colour to be the transparent colour. This also means there is no eraser tool; the only choice is to draw with the transparent colour to erase unwanted pixels.

The Good

Well established

Free

Decent animation setup

The Bad

No true transparency. Have to jump through hoops to select a custom transparent colour

Fairly horrible interface

Not intuitive to learn, but that is easily fixed by reading a few forums, etc.

Photoshop, if you can afford it, is an awesome piece of software. It’s the ultimate tool for image editing, but it’s not an animation tool. Every professional artist I’ve talked to says the same thing – Photoshop’s animation tools are terrible. Although they are high friction, it is possible to develop a process that works. There are definitely professional pixel art animators using Photoshop, and in the long run you would expect access to Photoshop’s full feature set to pay off.

The Good

Awesome drawing experience (Obviously… it’s Photoshop!)

The Bad

Animation tools are limited and annoying. (Can be made workable with a few basic setup tips)

Piskel is an open source web app, but there is a downloadable offline version. It is a simple app, but has all the features you need to make pixel art animation, including a few little innovations that make it special. It has a very nicely designed interface and excellent colour selection and management tools. The current drawback of the offline app is that it is basically just the web app, packaged with Node-Webkit. Because it doesn’t use the underlying Node.js filesystem API available in Node-Webkit, it still has the same security sandbox limitations as a web page. That means it can’t save to disk without opening a save dialog every time. (I miss Ctrl-S!) Since it is open source, I’ve forked the code and have begun to add the file IO features that Node-Webkit allows. Hopefully these changes will eventually make their way back into the master branch. This is a good project to keep an eye on, as it is already great software, and can only get better.

The Good

Open Source

Cross Platform

Nice interface and features

Good animation workflow

The Bad

Offline version does not have true file IO capability (yet)
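For what it’s worth, the change is small: Node-Webkit exposes Node’s fs module to the page, so a Ctrl-S-style save can write straight to disk, asking for a path only the first time. A minimal sketch of the idea (all names here are my own, not Piskel’s actual code):

```typescript
import * as fs from "fs";

// Sketch of Ctrl-S style saving via Node-Webkit's Node integration.
// All names here are invented for illustration, not Piskel's actual code.
let currentFilePath: string | null = null;

// Ask the user for a path only on the first save; afterwards
// write straight to disk with no dialog.
export function save(pngBytes: Buffer, askForPath: () => string): string {
    if (currentFilePath === null) {
        currentFilePath = askForPath();
    }
    fs.writeFileSync(currentFilePath, pngBytes);
    return currentFilePath;
}
```

The same code would not run in a plain browser tab, which is exactly why the packaged app needs to opt in to the Node API rather than reuse the web code path.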

Conclusion

There seems to be no perfect app for pixel animation. I seem to find myself falling back on Photoshop, but the pain of animation keeps pushing me away again. If you work exclusively on Mac, then I recommend Pixen. I think Piskel has great potential, and just needs a little love on the desktop version.

Have I missed an app? Has anything I’ve said become wildly out of date? Get in touch and let me know!

Thinking back, some of my most memorable gaming experiences have been played in co-operative mode. I have a special thing for local co-op in games, although it is surprisingly rare to find it done well, and even less common for it to be central to the design of the game.

Co-op is often tacked on to games by just adding a second player instance and doubling the bad guys. This can be enjoyable, but co-op shines most when players control characters who are not alike. Having differing capabilities makes co-op more fun. When the only way to progress is to work in tandem, a unique feeling of shared flow occurs.

Probably my first experience of this was back on the C-64, playing Wizball with my next door neighbour. The second player could play the Cat, a very different character from the Wizball itself. It zipped and zapped, and played a unique role in the game mechanics. The characters felt totally different to play, and each role seemed as fun as the other.

I recently picked up Lara Croft and the Guardian of Light for PS3. I've always enjoyed the Tomb Raider games, often playing through with a particular tomb-raiding buddy of mine. Guardian of Light is a well designed game, but the thing that really brings it to life is the co-op mode. This is one of those rare games that has been designed with co-op as the main intended mode of play. The single player game works, but is clearly a compromise on the primary design.

The two characters, Lara Croft and Totec, have differing capabilities, and it is impossible to progress without co-ordinated use of these abilities. It is often very clear how the two should co-operate, but there are enough different interactions that time pressure and other dangers can really push the excitement level. The pair are continually saving each other, protecting each other. For me, this is where the essence of the co-op experience lies.

Multi-player games bring people together; they connect people and create emotional exchange. Unlike competitive multi-player, the co-op experience is one of companionship, friendship, altruism, kindness, and mutual protectiveness.

Journey takes this sentiment much further than most. The co-operative experience in Journey is so subtle that its primary role becomes simply to keep each other company through what is otherwise a quite lonely and even frightening landscape. Players can aid each other in small ways, leading each other or exchanging energy, but it is this sense of not being alone that makes the co-op so effective in Journey.

Brothers is an interesting example of a co-op game that is not really a co-op game: a "single player co-op" experience, where one player simultaneously controls two characters with the two analogue sticks. The emotional experience of the connection between the two characters is so vivid that when I recently revisited the game I had actually forgotten that it was a game for one player.

There are surprisingly few co-op games. Competition is far more common, as is single player. Gaming can be a lonely experience. Sometimes just standing side by side destroying hordes of minions can create an emotional connection between two people that is far deeper than any to be had in a single player game.

It seems to me that there is a niche here that calls out to be filled. I so often hear gamers complain that they don't have games to play with their significant others. I am always on the lookout for interesting co-op games to play with my wife and children, and have found the market sadly lacking, especially in games that appeal to both men and women.

I’m going to say it up front. Most free to play games available on mobile are little better than casino slot machines. They are engineered to be psychologically addictive, to manipulate the player, to appeal to the weaknesses and mental illnesses of players. They lack all merit.

If you love games, then the free to play business model is a corruption of what games are for and about. The only way to make money from a free to play title is to design a crippled game, with artificial barriers, harsh difficulty spikes, and obsessive repetition built in. Worse, the games themselves are now so designed around psychological manipulation that there is very little actual game play left. Sometimes there is none left at all. Players are caught, like junkies, in a cycle of grind and reward. Like slot machines, the process simply feeds an addiction, firing pleasure senses in the brain with small but regular rewards. This pattern is not a game, it is a vice disguised as a game.

Perhaps the worst thing about the current situation is that companies that might otherwise make great, fun games with real value are forced to go down the freemium route in order to stay afloat. Only the most ruthless will survive, and unfortunately ruthless in many cases means making inferior games, games that are not worth playing by anyone who really loves games. Loyal audiences, built up by making great games, are treated with disrespect, reduced to a resource from which to extract “whale” players who will pay far more than anyone should.

Happily, there are some free to play games that hit the mark. They have found a business model that works for them while maintaining integrity. This model can be found in a number of entertainment industries. It is often referred to more honestly as “pay what you want” rather than the very dishonest title “free” or “freemium”. In this model, a product such as a complete game, a music album, or a book is given away for free, or by donation. Games may also have micro-transactions, but they are often for token elements that do not affect gameplay, and are generally designed as an opportunity for players to “tip” developers for a great product. However, this kind of tip or donation will only ever be awarded to developers with real integrity. It must be clear that the game has been created with love, with full respect for the players, and without compromising game play. Perhaps there can only be a few success stories in this arena, but maybe that’s how it should be. Cream rises, and only the best games will succeed. There is a huge glut of horrible games, and only a few good games. Who needs the inferior junk?

Games, like art, should be made firstly for love and only secondly for money. Games made purely from the perspective of business will always suffer. Freemium titles, like slot machines, are an expression of this greed for money. They disrespect the wellbeing and precious time of people.

Rust is an exciting new language from Mozilla that seems very well suited to game development. It is a compiled language that offers safe, GC-free memory management at near C/C++ performance. For game development it will allow us to work in a language with some of the convenience of a higher-level language, but with the low-level control and performance of "less convenient" languages like C++. In this sense it starts to look like a bit of a holy grail, particularly for game development, where performance and low-level control over memory are high priorities.

Rust seems to solve many of the dangers and problems of C++ by simply making it impossible to make many of the terrible mistakes that are possible in C/C++. By being simpler, and by removing some of the "ultimate powers" we are used to in C++, it should let us be more productive and spend less time doing mental backflips and debugging the results.

Mozilla are clearly working very hard to make adoption of Rust easy for new users. Part of their strategy for easing the on-boarding process is an extremely strong commitment to backward compatibility once version 1.0 comes out. (Full stability is promised from around February 2015.) Another part of their strategy is Rust's package manager, Cargo. It makes setting up and checking out projects very easy, and reminds me a lot of using Node/npm. You can check out a project from GitHub, call 'cargo build', and watch it download and build all dependencies from their respective repositories before your eyes.

Unfortunately, at the moment the language is still in a state of high churn, constantly evolving. I have found it hard to get many of the open source projects on GitHub to build, presumably because it is near impossible to keep them stable while the language itself is constantly shifting. Come February, if Mozilla's promises pan out, we can expect all that to change.

There are already many libraries available, and lots of wrappers around existing C / C++ libraries. These include several multimedia libraries and game dev kits. There is a nice list of multimedia related projects on the Rust wiki.

So far I've talked mainly about what Rust promises. Looking at the language itself, it is relatively simple and easy to comprehend. It does not support classes or OOP in the classical way that many are used to, although OOP style programming is possible using the traits system. My own experience with Rust is still limited to a few learning experiments, and it would take developing a much larger project to get a real sense of its strengths and weaknesses.

I'm quite excited about the future of Rust. At this stage it's a waiting game. Until it stabilises, there is only so much you can do, beyond learning the language. I have a feeling Rust is going to explode though, and that it will become very popular for game development. I have a similar feeling about it as I did when I first started learning about Node.js in 2010. It feels like it fills a niche where there is a huge amount of demand. Sentiment on the internet appears quite universally supportive, and I often see remarks that it is "better than Go" and "better than D". Time will tell, but I'm feeling like backing this horse, and will be putting some time into working with this language.

I was thinking through ways to do pathfinding on a grid, and this is what I came up with.

I’m a big fan of flow fields, and had always thought they would be a great way to efficiently control the movements of a lot of entities at the same time. Unlike an algorithm like A*, which requires every entity to find its own best path, this technique calculates a distance field for the entire grid, so that any number of individual entities can share the same data. For any given location on the grid, an entity can find out which direction to travel by simply choosing the adjacent tile with the lowest distance value. There is no need to calculate the entire route. The resulting flow field can be calculated on the fly, or when the initial distance field is generated.

Assuming the target is the player, the flow field only needs to be recalculated when the player moves from one tile to the next.

The algorithm is best visualised as a flood that extends from the origin, out through the tile map.

– Mark the origin tile with a distance of zero, and add it to a process list.

– For each tile in the list, mark every unmarked, accessible neighbour with the current tile’s distance plus one.

– With each iteration, add the newly marked tiles to the end of the process list.

– Process the list until all accessible tiles have been marked and processed.

This animation illustrates the process in action:

In the final step you see the directional information that is available to entities.

Below is an example implementation running in JavaScript. Move your mouse around the grid to see the field recalculating. Note that I am using a wrapping tile-space, so entities will often head out of one edge and into the opposite.

Click to regenerate the solid tiles:

I had intended to go into more detail, but as I began writing this post I came across this great article by Sidney Durant about this exact technique. I highly recommend his article and video on the subject. He describes how to take the technique one step further by calculating a vector field. This results in much smoother movement, as entities will travel diagonally where appropriate. In the above demo I am using the “lowest adjacent tile” approach. Each entity has a “hint” to resolve the problem that arises when there is an equilibrium, or more than one choice of tile. Because my rectangular entities can overlap more than one tile at a time, they tend to move in a somewhat diagonal direction, although not as smoothly as with the full vector field.

This is the source code for my implementation of the path field generator, in TypeScript:
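In outline, the generator is a breadth-first flood of the kind described above. The following is a condensed sketch rather than the full listing (names are illustrative, and the wrapping tile-space used in the demo is omitted for clarity):

```typescript
// Sketch of a distance-field generator: a breadth-first flood out from the
// target tile. walls[y][x] marks solid tiles; the result holds each tile's
// step distance to the target, or Infinity if it is unreachable.
function generateDistanceField(
    walls: boolean[][], targetX: number, targetY: number
): number[][] {
    const h = walls.length;
    const w = walls[0].length;
    const dist = walls.map(row => row.map(() => Infinity));
    dist[targetY][targetX] = 0;
    const queue: [number, number][] = [[targetX, targetY]];
    while (queue.length > 0) {
        const [x, y] = queue.shift()!;
        // Flood into the four orthogonal neighbours.
        for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
            const nx = x + dx;
            const ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (walls[ny][nx] || dist[ny][nx] !== Infinity) continue;
            // Newly marked tile: one step further out than the current tile.
            dist[ny][nx] = dist[y][x] + 1;
            queue.push([nx, ny]);
        }
    }
    return dist;
}

// An entity simply moves to the adjacent tile with the lowest distance value.
function nextStep(dist: number[][], x: number, y: number): [number, number] {
    let best: [number, number] = [x, y];
    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
        const nx = x + dx;
        const ny = y + dy;
        if (dist[ny]?.[nx] !== undefined && dist[ny][nx] < dist[best[1]][best[0]]) {
            best = [nx, ny];
        }
    }
    return best;
}
```

Because every entity shares the one `dist` array, the cost of the flood is paid once per target move, no matter how many entities are navigating.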

Cocos2d-x has reached version 3.0 final. This version looks like it has some nice new features and improvements, including better performance and a focus on C++11 style code.

Ricardo Quesada, the creator of the original Objective-C version of Cocos2d, now works full time on Cocos2d-x at Chukong Technologies. I’m really enthusiastic about this new focus on the C++ version and think it’s a great sign for the engine.

Check out Ricardo Quesada talking about the new features of Cocos2d-x:

My recent post on the many available game development platforms was prompted by a pressing existential dilemma. My favourite platform for making games (JavaScript / the “Open Web Runtime”) was giving me development pain that I wasn’t willing to tolerate. I had hit that wall with JavaScript where the scale of my project made things unmanageable, and I was wasting far too much time debugging annoying little mistakes. I was over it, and was beginning to think that if I was going to experience this much discomfort, I might as well develop in C++.

I started delving into the most recent version of Cocos2d-x, and it was good. Building for desktop and the iOS simulator worked well. Cocos2d-x is a great engine, and I’d happily use it for a serious project, but after a few unfruitful hours lost trying to get it to build for my Android device I was remembering the true pain of working with C++. This got me thinking again… my goal is to MAKE GAMES, to enjoy making them, and to get them out in the world for people to play. Right now I’m more interested in iterating my ideas than making a large scale game. Surely there was some kind of middle ground? Flash or OpenFL sit around that middle zone, but for reasons stated in my last post, they don’t work for me.

Then I started to reconsider my stance on the TypeScript language. I have a gut-level reaction against Microsoft technology, but TypeScript is open source, and outputs JavaScript that is close enough to hand-written code that you can read it and understand how it fits in with your own. It doesn’t feel like too severe a lock-in, especially since the code you write will eventually be roughly compatible with ECMAScript 6 when it finally arrives.

I figured, if I was prepared to walk away from JavaScript, maybe TypeScript could allow me to keep all the breezy ease of development and creative expression, while giving me the features I craved, such as auto-completion, jump-to-definition, etc.

The announcement that TypeScript has reached version 1.0 was made less than two weeks ago, so this was the perfect time to take a real look at what it had to offer.

Finding an IDE

My first obstacle was that I am developing on OSX, and TypeScript is only really supported in Visual Studio at this stage. On any OS, my favourite code editor is Sublime Text. It has some support for TypeScript, but it is incomplete: you get partial auto-completion and error highlighting, but no in-code messages to tell you what errors you have made, as you would in Visual Studio.

Another cross platform editor for TypeScript is CATS. The project is promising, and autocomplete and error warnings are functional, but the editor itself, at least on OSX, has some problems. It is still Alpha software, so it’s not really ready for serious use. As a side note, it is built on Node-Webkit, so +1 for that. ;)

In the end, the best setup I was able to find for now was an Eclipse plugin from Palantir. I’m not a fan of Eclipse, but I’m willing to use it until something better comes along.

It seems very likely that more and more IDEs will support TypeScript, as support has been made relatively easy to add using TypeScript Tools. From the GitHub page:

This approach is a good move and I’m sure it will help the language to flourish.

Let’s Do It

Once I had my environment set up I found it was easy to create a hybrid TypeScript and JavaScript project. For me the most important proof of concept was to be able to work with the pixi.js rendering library. In order to work with JavaScript libraries you need type definition files, which are like interfaces that describe classes and functions so TypeScript can do its thing.

A valuable resource for TypeScript developers is DefinitelyTyped, a repository of TypeScript definition files for a large number of popular JavaScript libraries. Pixi.js was in there, which made me happy. Type definition files are nothing special, and you can easily write them yourself if you want to work with your own JavaScript code.
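As a small (and entirely hypothetical) illustration, a hand-written declaration file for a little JavaScript library might look something like this. It is pure type information, with no implementation; the real JavaScript is loaded separately at runtime:

```typescript
// bunnies.d.ts — describes the shape of a hypothetical "bunnies.js" library
// so the compiler can type-check calls and drive autocompletion.
// There is no implementation here, only declarations.
declare namespace bunnies {
    interface Sprite {
        x: number;
        y: number;
        rotation: number;
    }
    // Factory and render functions exposed by the JavaScript library.
    function createSprite(textureName: string): Sprite;
    function render(sprites: Sprite[]): void;
}
```

With that file referenced in the project, a call like `bunnies.createSprite("bunny.png")` autocompletes and is checked at compile time, even though the library itself is plain JavaScript.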

Using the DefinitelyTyped PIXI type definition file, I quickly got some bunnies spinning on the screen, and immediately felt the benefits of autocompletion and jump-to-definition, the two IDE features no programmer should ever really have to do without. (At least if they want to stay sane.)

I can really see this setup working for me. It will allow me to go a lot further with this runtime than I would be willing to go otherwise.

I came to the games industry from a web development background, originally as an ActionScript 3.0 programmer. Over the last couple of years the casual games industry I got started in has become less and less web-focused. Mobile is where the market is now. Working at a game studio that has traditionally made Flash games for the web, I find myself participating in a huge amount of discussion about different game development platforms, and which ones are the best / most suitable / most productive etc. Is it Unity? Is it C++? HTML5? Should we write custom code or use a pre-existing engine or framework?

For my personal game development, there are several factors that define my decisions:

– Strong preference for open source technologies.

– Extreme lack of time to waste re-inventing the wheel, or doing anything other than making a game.

– Need to deploy to multiple platforms, both desktop and mobile.

– The goal of building a long-term body of game code that I can re-use and iterate for future work.

This article is a comparison between the various game development platforms that I’ve considered over the last few years.

Note: Since I’m primarily interested in making 2D games, I’m not mentioning any 3D engines at all.

JavaScript / “Open Web Runtime”

As Flash fell from favour on the web, I transitioned from ActionScript to JavaScript, and that’s where I’ll start this technology showdown. I think of JavaScript as much more than a web technology. The “Open Web” is a runtime capable of deploying to pretty much any platform, and is in many ways the most portable runtime of all.

In recent years I’ve been really obsessed with JavaScript. After Flash went into decline I fell in love with the language, and have followed the growth and development of the “Web Runtime” very closely. (I avoid the term “HTML5” because it is too limited and excludes other important technologies such as WebGL.)

Projects like Node-Webkit, Crosswalk, CocoonJS, Ejecta, and XDK make it possible and practical to deploy applications to every major platform as “native” apps. Certainly for desktop applications the runtime is sufficient to build many or most of the indie games that I love best. Using WebGL frees the CPU to do important game logic, and V8/Chromium-based wrappers have very good performance everywhere but iOS, where JIT is disabled.

Up until recently I was really close to feeling like I was willing to go all-in with this technology for my personal work. I could accept the performance limitations in exchange for the benefits, especially ease of deployment. Then I had a sudden change of heart. After coming back to a quite large code base after a break of a few months, I kept finding myself asking, “what was the name of that function?”, “what was that variable called?”, “why is this object prototype not inheriting properly from this parent class?” and so on. I realised I really missed auto-completion and code-intel. Now that the project had reached a certain size, debugging was also feeling very drawn out and tricky without compile-time error messages and warnings to show me the way to problems before they occurred. I’ve been so in love with JavaScript for so long that this experience actually represented quite an existential shift for me, and was responsible for this reassessment of the available alternatives.

PROS:

– Effortless deployment to many platforms.
– JavaScript is fun, expressive, and quick.
– Great libraries like PIXI.js help you get stuff done.

CONS:

– Performance is an issue, especially on iOS.
– After a certain point, large JavaScript projects become hard to manage and development isn’t so much fun.

A lot of people really seem to love Unity. I have not really used it, but from the stories I hear I can see why it is an excellent choice for a lot of teams. The artists can get involved straight away, and the integration between editor and code IDE has a lot of benefits. However, I have always had a strong resistance to using it: it just isn’t compatible with my obsession with open technologies. I don’t like the idea of being locked into proprietary technology, or of developing in C#. I want to iterate on my code base over the course of my lifetime, and C# simply isn’t the language I want my code to be written in. For an individual or company who really wants to get the job done and ship a product for multiple platforms, Unity is probably the best choice, and worth the price tag. For my personal projects I’m just not interested in it.

HAXE/NME, now rebranded as the OpenFL platform, is another option for developers who want to deploy to multiple platforms. HAXE is a nice programming language, especially if you are coming from an ActionScript and Flash background. When I evaluated OpenFL I found it easy to set up and build the test projects for the various targets, including directly to an Android device. I’ve heard that things can get fiddly at times, though, and you have to be aware of which APIs will perform well on your target platforms, since each target implements them differently.

I’ve always thought HAXE was really cool, but when I weigh up using any language or technology over the long term, I’m not willing to spend my time on it unless it is widely used and backed by at least one large company with a big investment in its success. For the right project it could be a great choice, especially if you know ActionScript well.

PROS:

– Easy to set up.
– Cool language with good balance between power and ease of use.
– Deploy to many targets.
– Use familiar APIs if you have an ActionScript background.
– Enthusiastic scene.

I recently evaluated SFML, and initially really liked it. It has a very clean and simple API that reminded me a little of PIXI.js. It supports gamepads out of the box, as well as audio, networking, and of course graphics rendering.

SFML is nicely broken down into several modules for doing separate things. This seems like a great design choice.

One of the deal-breakers for me was the lack of batch rendering support. Implicit batching is supposedly planned behind the scenes, but not allowing explicit batching felt like a missing feature. You could set up your own batching if you liked the framework enough to invest the time. There was also no built-in support for sprite atlases or animations, so you would have to write those yourself. Another feature that would require implementation is a scene graph, if you are used to having one (as Flash developers often are).
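The frame-stepping logic behind a do-it-yourself sprite animation is simple enough, at least. Here is a minimal sketch in JavaScript (the function name and the rectangle shape are my own assumptions, not anything from SFML); the same logic would translate directly to C++, with the returned rectangle fed to something like `sf::Sprite::setTextureRect`.

```javascript
// Minimal sprite-sheet animation helper: assumes a single horizontal
// strip of equally sized frames, the simplest kind of atlas.
function makeAnimation(frameCount, frameWidth, frameHeight, fps) {
  return {
    elapsed: 0,
    // Call once per game tick with the delta time in seconds.
    update: function (dt) { this.elapsed += dt; },
    // Source rectangle for the current frame, wrapping around to loop.
    currentRect: function () {
      var frame = Math.floor(this.elapsed * fps) % frameCount;
      return { left: frame * frameWidth, top: 0,
               width: frameWidth, height: frameHeight };
    }
  };
}

var anim = makeAnimation(8, 32, 32, 12); // 8 frames of 32x32 at 12 fps
anim.update(0.25);                       // a quarter second into the loop
console.log(anim.currentRect().left);    // 96 (frame 3)
```

It is only a few lines, but multiply this by atlases, batching, and a scene graph, and SFML starts to look more like an engine-building kit than an engine.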

SFML does not currently support mobile, but this is planned for the near future (version 2.2).

PROS:

– Clean, simple API.
– Very well documented.
– Modular design.
– Does most of what you want without dictating too much how you should do things.
– Great starting place for building your own engine.

CONS:

– Missing a few key features you need if you want to actually make a game.
– Mobile support not quite there.
– Small dev team – who knows when feature X will come out?

I haven’t personally used SDL much, but it is often compared to SFML in terms of the features it offers. It is a good place to start if you are interested in writing a game engine, but probably not the best choice if you really want to make a game. I believe it is a bit more widely used than SFML, so you might have more luck finding open source classes that work with it.

For those willing to work in C++, the big player in open source, cross-platform game development is Cocos2d-x. Like Unity, it does in some ways force you to do things the “Cocos2d way”, but in terms of the features it supports it is not really lacking. It is widely used, and shares many APIs with its Objective-C cousin, Cocos2d, which has been used for hundreds of commercial games.

Because the engine is focused on mobile, it does lack a few features on desktop, most notably support for gamepad and keyboard input. Luckily it turned out to be easy to integrate SFML into the desktop build to get these features.

The original developer of Cocos2d in Objective-C, Ricardo Quesada, appears to have left Zynga, where he was hired to work on the Objective-C version. He has moved to Chukong Technologies, the company behind Cocos2d-x in C++. I think this is a great sign for the engine, and indicates that the C++ version is likely to overtake the Objective-C version as the leading open source engine for mobile games. After all, why develop for iOS only when you can get all of those extra platforms for only a little more effort?

Chukong Technologies is a very successful company, at one point earning 6 million a month on their game Fishing Joy, made with Cocos2d-x. It’s reassuring to know that the engine has that kind of commercial backing.

PROS:

– Very complete feature set.
– Good support for multiple platforms.
– Backed by a successful company.
– Widely used, with growing user base.
– Code base written in C++ may have the most ongoing value.
– Emscripten deploy to Web mostly functional.

CONS:

– C++ is a more challenging and less productive language to develop in.
– Much more work to maintain a multi-target build.

Conclusion

I find myself swinging back and forth between the two extremes of JavaScript and C++. I keep coming back to JavaScript for its ease of deployment and high-level programming fun. C++, on the other hand, gives you the best possible performance, but at the cost of extra work when it comes time to port and deploy to your target platforms.

My feeling is that programming always involves a bit of pain. You just have to decide which kind of pain you find the most tolerable. Development pain, deployment pain, porting pain, debugging pain, every technology has weaknesses you’ll have to work with. You have to decide what your objectives are, both in the short and the long term.