Category: Code

@MarbleItUp has gone public. We will be launching soon on Switch. I wanted to talk a little bit about the game and how it came to be… and share some sweet GIFs (in high framerate black and white for download size reasons).

An extremely early baby shot of the game. Almost everything here – including the code – was later replaced. And that new stuff runs at 60hz on Switch in handheld mode.

Just over a year ago, Mark Frohnmayer came to me with a proposal – let’s build a sweet marble game and put it on Switch. The Switch at that time was a brand new platform, but the potential was immediately obvious to both of us. I agreed to partner with Mark and all of a sudden – after several years away – I was back in the game development world.

I didn’t know it on day one, but making Marble It Up! was going to be a rare privilege. I am incredibly proud of this game – the team, the levels, the visuals and the gameplay are all amazing, and I count myself privileged to be involved. This is really a great game.

As we near release, I feel confident it will be a hit. I am just hoping it will be a blockbuster Michael Bay hit, not an indie niche Kevin Smith hit. Anyway, on with the story…

The Team

Like Danny Ocean, we had to get the old crew together. I already had Miha, an awesome coder, on my team from past projects, so he and I got rolling (hah) right away. Meanwhile, we immediately reached out to Alex, the original level designer for Marble Blast, and Todd, one of the most talented (and hardest working) artists I know. We also pulled in the incredible Solovox to make an awesome soundtrack and Mike Jones to do the rest of the audio.

Along the way we ran into long-time fans of our work on the original Marble Blast series. We brought in the team behind QuiVR, Jonathan and Fletcher, as well as long-time community members, speed runners, and developers Matan and Conor.

With the team thus completed, we went into full production. Development slowed down a little when, a few months in, my Loom technology was acquired and I had to devote my focus to a roll (ho ho ho) with the acquiring company. This ended up being short-lived, and I was able to refocus my attention after about 10 months of distraction (so to speak).

All of our development was self funded. This gave us total control over direction, but also required all of us to really commit to the game. We’re using a studio model for the game itself – it has its own entity that holds the IP and pays out royalties. Maybe I’ll talk about the basic structure we used for that some day down the road.

Development Path

For this project, we used Unity 2017. It comes free with Switch developer access, and it looked like the best path forward on short notice. I briefly considered using Torque, the first commercial game engine I worked on. Because it was long in the tooth, it would require a solid porting effort before development could begin in earnest. We just didn’t have that luxury.

Although older versions of Unity were shaky, it has really matured in recent years and 2017 was easy to work with. Very quickly we had something up, and I started taking little progress GIFs (in black and white so they’d show up in Slack quickly). We got a ball rolling around in very short order:

Very early rolling behavior.

The movement took a while to get right (more on that later). But ProBuilder made it super easy to throw a level together, and with a little work we got elevators going.

One of the first elevators. And collapsing platforms!

Being able to easily tie any geometry into a mover was a huge benefit for level design. It did require some discipline with prefabs to create consistent reusable elevators, but one-off showpieces like those found in Cog Valley or Staying Alive were super easy to make.

The physics simulation was very detailed, and one of the elements needed for both physics and networking was a capacity for rewind. Instead of only allowing for a tick or two of rewind, I generalized it to allow for arbitrary rewind. This made level testing MUCH easier:

Check out the hook while my DJ revolves it.
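The post doesn't show how the rewind was built, but the usual approach is a ring buffer of full-state snapshots, one per simulation tick. A minimal sketch of that idea (all names here are hypothetical, not from the actual game code):

```python
from collections import deque

class RewindBuffer:
    """Keeps the last `capacity` simulation snapshots so the sim can be
    rewound an arbitrary number of ticks, not just one or two."""

    def __init__(self, capacity=600):  # e.g. 10 seconds of history at 60 Hz
        self.snapshots = deque(maxlen=capacity)

    def record(self, tick, state):
        # `state` must be an immutable copy of the full sim state;
        # the deque silently drops the oldest snapshot when full.
        self.snapshots.append((tick, state))

    def rewind(self, ticks_back):
        # Discard the newest `ticks_back` snapshots and return the
        # snapshot we land on; the simulation resumes from there.
        for _ in range(ticks_back):
            if len(self.snapshots) > 1:
                self.snapshots.pop()
        return self.snapshots[-1]

buf = RewindBuffer()
for t in range(5):
    buf.record(t, {"pos": (float(t), 0.0)})
tick, state = buf.rewind(2)   # jump the marble back two ticks
```

Because each snapshot is self-contained, the same buffer trivially powers replays and ghost races: play the stored states back instead of popping them.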

And it also set the stage for two other features: ghost racing and replay. (I love ghost races way more than anyone else on the team, but I'm the lead programmer, so they had to ship them.) Being able to watch and race against other players' routes, as well as compete against yourself, adds loads of replay value to the game.

What better challenger for you to face…. than YOURSELF!

That's not to say everything was wine and roses. We quickly found that Unity's physics were completely inadequate for our needs. In the end, about half of the game code was our custom marble physics engine. I feel comfortable claiming that no other marble game comes close to our level of fidelity. Part of the reason we built our own physics was to make them unrealistic, too – sometimes bending the laws of physics in juuust the right way makes things way more fun.

An extreme case of gravity surfaces.

We also found that the default Unity renderer was really not appropriate for the game. Shadows provide important depth cues, but Unity’s dynamic shadows were fuzzy and required very careful level design and lighting to be present in all gameplay situations. In the end, a simple projected blob shadow gave us a crisp look and perfect visibility to support the gameplay. It also saved us many man hours tuning levels.

Particles were also a huge pain – especially mesh particles. Unity does horribly slow things with these, which cost us massive amounts of performance on the CPU-light Switch hardware. In the end, we had to write small custom systems to handle the costliest cases in order to meet our performance goal of 60hz in handheld mode.

Without custom rendering, this effect could bring the Switch to its knees.

We also provide a full reflection on the marble. When I implemented this for Marble Blast Ultra, it took me a week or so to modify the scene graph to re-render the level in all six directions. Performance took some care, but ultimately it ran fine using more-or-less the default engine code. This was on the Xbox 360, which has way less RAM, a slower GPU, and a weaker CPU. Under Unity, on the more powerful Switch, we ended up having to write our own custom CommandBuffer-based renderer and our own set of lightweight shaders in order to hit 60hz. This was an unwelcome surprise late in development.

In general, Unity was excellent for prototyping and could be coaxed into viability for production.

Conclusion

Man, I am stoked to see this game go out into the market. It has been a year of my life, some great challenges, some great fun, and hopefully, a game that brings a little bit of joy into the world. Let’s marble it up!

Lately, we’ve been doing a lot of code caving at The Engine Co. Expert code explorers, like Fabien Sanglard, have an amazing ability to take very complex codebases and understand their core concepts. They can extend them and make them turn all kinds of tricks.

While you shouldn’t miss Fabien’s analysis of Quake 3, you don’t have to be an expert game developer working with major engines to gain huge benefits from a little code literacy. You are far more likely to work within the confines of a big codebase than you are to be working on a blue sky project with no dependencies or baggage.

Being able to dig through layers of code and understand complex existing systems is a huge benefit to your capabilities as a coder. So let me lay out my guidelines for becoming an awesome code caver able to dig through the muddiest of codebases to get things done. Starting with…

Look Deeper Than The Surface

“But wait,” you say, “I’m a JavaScript coder. My language is so simple and elegant! All I have to know is jQuery and I’m set.”

You poor, poor fool. What you aren’t seeing when you declare your very clever closure-based MVC microframework is the 5 million lines of C++ and 1500 man-years that went into the browser that is running it. The JIT that converts your template parser to machine code so it runs fast. The highly tuned GC that lets you kind of forget about memory management. And the vast cross platform rendering codebase that actually lets you draw “Hello World” and funny mustache pictures on screen.

And that browser sits on top of the operating system, which is ten times bigger in lines of code and years of effort. Now hopefully you don’t have to worry about the operating system. But suppose you have a neat little node.js server, and you run a neat little JS powered web service with it, and you get a neat little denial of service attack from some jerk in Eastern Europe. Now all of a sudden those unimportant details like how the operating system handles sockets become vastly important.

Tools of the Trade

How do you cut through a huge codebase like butter? How can you home in on the insights that make you look like a genius? How can you do all this without breaking a sweat?

grep to grok

The first essential tool is grep. grep looks for a string in a set of files. There are many equivalents to grep (WinGrep, find-in-files in a favorite IDE, find on Windows/DOS), but the key is you can give it a folder or set of files to search and a regular expression to look for, and get back a list of results and the files that contained them. I like to use find-in-files in Sublime Text 3 for this because I can click search results and jump to them.

Grepping is super powerful. Suppose you get some weird error message – “Floorblop invalid.” – and nothing else from your app or service. You might be left with some questions. What’s a floorblop? Why is it invalid? Well, it might not matter. Just grep the error string and look at the surrounding code to determine a cause. Sometimes errors are generated dynamically, so you might have to search for just part of the string – say, ` invalid."` – if Floorblop was determined at runtime. With a little bit of cleverness, you can almost always reduce the number of potential sites for the error in the codebase to a manageable number – then just inspect the search results manually to find the place triggering the error.

Now you don’t care about the text of the error, but instead can start looking for functions that might be failing and causing the error to be emitted. In effect you can think of the error string as a unique identifier for some situation rather than a human-relevant string. (And in many codebases, error messages are cryptic at best.)
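The whole narrowing-down workflow fits in a few lines of any scripting language. Here's a toy find-in-files – a bare-bones `grep -rn` sketch, making no assumptions about any particular codebase:

```python
import os
import re

def find_in_files(root, pattern):
    """Walk `root` and yield (path, line_number, line) for every line
    matching the regular expression `pattern`."""
    rx = re.compile(pattern)
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    for number, line in enumerate(f, 1):
                        if rx.search(line):
                            yield path, number, line.rstrip()
            except OSError:
                continue  # unreadable file; skip it
```

To chase the dynamic-error case above you'd search for the fixed tail of the message, e.g. `find_in_files("src", r' invalid\."')`, then read the handful of hits by hand.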

In the course of investigating your error message, you might encounter some variables or functions that seem to be related. Grep comes to the rescue once again: grep for them! Generally symbols are unique in a project, so you can quickly find all the code relating to a variable or function. You can also take advantage of the language’s syntax to help narrow down your results – for instance, in C++ function implementations in a class or namespace often look like this:

void Foo::bar()

So if you want to find the definition of bar, you can grep for “::bar”, or if it’s a global function, use a return type, e.g., “void bar”, and go right to the definition. Think about fragments that would match lines you want to see – you don’t have to get an exact match, just one that’s close enough you can find what you want quickly.

It should be obvious that these techniques can apply to any language, not just C/C++ – in Ruby you might use “def” in your search strings, in JS you might use “function”, in assembly, colons. The goal is to just filter down the number of results you have to consider, not take them to zero.

Binary Spelunking with strings

It’s easy to think of a binary as an opaque file. So it can be frustrating when you have a codebase with binaries in it. But fear not: DLLs, EXEs, .SOs, and a.out files have a ton of useful information in them. They have to – otherwise the operating system wouldn’t be able to run them!

One of the first tools to pull out is strings. strings simply scans any binary file looking for byte runs that look like they might be a string (displayable ASCII characters ending with \0). When it finds a match, it prints it out. You can store the results in a file, or run them through grep or another tool to help narrow results.
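The tool itself is simple enough to sketch in a few lines. This version looks for runs of printable ASCII of a minimum length (rather than requiring a trailing \0, which is roughly how the real strings(1) behaves by default):

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return all runs of at least `min_len` printable ASCII bytes
    found in `data` -- a toy version of the Unix strings(1) tool."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# A fake binary blob: header bytes, two embedded identifiers, padding.
blob = b"\x00\x7fELF\x02\x01\x00hello_world\x00\x01\x02set_floorblop\x00"
print(extract_strings(blob))
```

Run it over a DLL or .so and pipe the output through the find-in-files habit from the last section, and you can tell in seconds whether a mystery symbol lives in a prebuilt binary.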

So suppose you have a function for which you cannot find the implementation – but you notice that the codebase includes a binary .so or DLL. You can check if the function is prepackaged in the binary by running strings and grepping for the name. If you get some hits there and nowhere else, you now have a good hint that you might not have all the source code.

Pulling Back the Hood: strace and wireshark

Maybe you’re getting a weird failure when interacting with your app. The first thing to do is establish exactly what’s happening! strace is a super powerful tool on Linux that dumps every system call a program makes. With this, you can see when and what is being read from disk, what network activity is happening, if it’s requesting memory from the system, and so on. All of a sudden a cryptic dynamic linker error will be easy to diagnose – you can see if libraries are being loaded and if symbols are being found or not. Or you can see what OS call is hanging the process. Super useful, if a bit tedious.

On Windows, SysInternals offers similar tools to inspect process activity.

Cross platform Wireshark is a packet sniffer that can really help you understand network activity. Suppose you are getting a cryptic error relating to some HTTP service. You might not be sure if the service is up, if you have a bad header, if the connection is being dropped… Wireshark will let you tell right away if data is making it to the server and what it is exactly, as well as what the server is sending back to you. It will identify things like corrupt TCP packets that are very rare failures but can totally break your app.

Once you’ve established the lowest levels are working properly, it’s easy to move up the stack to HTTP inspection tools like Charles or Telerik Fiddler. These can inspect HTTPS traffic and give you a higher level view of your app’s behavior. Chrome and Firefox also have built in tools that are similar.

Abusing Your Profiler and Debugger

You can also abuse profiling tools like Instruments to find out what an app is doing. On Windows, Windows Performance Analyzer and xperfview are the tools to try first. These tools allow you to attach to a running process on your system and see the callstacks and where time is spent in them.

In other words, they show you HOW the code is running in various situations and which are most important (due to being at top of call stack or being called often). It’s like a self-guided tour through an unknown codebase – how convenient!

You can also attach a debugger and use the time honored trigger of hitting pause a lot to see where time is spent during execution – or stepping through function calls to see what calls what. This is a bit more tedious but also very useful in understanding a large codebase.

Study the Fundamentals

The best way to learn to understand big codebases is to… wait for it… study big codebases. While every project is unique, after a while you start to internalize common layouts and development philosophies. As you learn to recognize patterns and see how bigger systems fit together, you gain building blocks that let you map out unfamiliar systems and build your own systems to higher standards.

A few books I’ve found educational in this area:

Operating Systems Design and Implementation – A walkthrough of Minix, a small self contained POSIX environment. This goes through every detail of a “pocket” operating system and explains it in detail. Great introduction for the next three books…

Mac OS X Internals – A great guide to OS X’s structure with detailed examples and code walkthroughs. Understand the fundamentals of OS X while building valuable code analysis skills.

Conclusions: Fighting Dirty & Getting to the Bottom

At the highest level of mastery, you should be prepared to reason about the behavior of any program from its highest levels of UI down to the behavior of the CPU. Every abstraction is leaky, and even though you want to preserve abstractions to keep development manageable, you shouldn’t let that limit your understanding. Get to the bottom of the stack first, then work your way back.

Don’t be afraid to fight dirty. Do you think something is important? Do you have an assumption about how some code works? Try breaking it to see how it fails. You’d be amazed how many problems can be solved by asking “Is it plugged in?”. Use version control or backups to make sure you can safely make dangerous changes and roll back to a safe state.

Finally – optimize for your sanity and development time. Don’t spend 12 hours when you can spend 1. Think about how you can narrow down your problem and check your assumptions quickly so you can focus on gaining knowledge and making the system do what you need. There is a universe of tools to make it possible to wrangle complex systems. Use them!

This code is hard to understand. It’s hard to understand because error handling is distant from the error checks (for instance, the check for nil is at the beginning but the error and return are at the end!). It’s hard to understand because the important parts are deeply indented, giving you less headroom. If you want to add additional checks, it’s hard to know where to add them – and you have to touch lots of unrelated lines to change indent level. And there are many exit points scattered throughout. GROSS.

Whenever I see code like this I cringe. When I get the chance, I like to untangle it (or even catch it in code review). It’s soothing, simple work. To be sure, the functionality of the code is fine – it’s purely how it is written that annoys me.

There’s a key thing to be aware of in the structure of this code – it has a bunch of early outs related to error handling. This is a common pattern so it’s worth walking through the cleanup process. Let’s pull the first block out:

It’s a LOT better, but now we have a return that can never be run. Some error handling code is still far from the error detecting code. So still a little messy. Let’s do the same cleanup again on the second block:

See how much cleaner that is? Beyond saving indents, it also exposes the structure of the algorithm a great deal more clearly – check it out:

Check for nil productId; bail if absent.

Log productId if it is present.

Check if we can make payments/IAP is active; bail if not.

Submit the product info request.

Return success!

The code and its “flowchart” now match up nicely, and if you modify one, it’s easy to identify the change in the other. This might seem like a little thing, but I find it shows that the purpose + structure of the function is well set up. And if you can’t write the function without violating this rule, it’s often a very solid clue you need to introduce some more abstraction – tactics such as breaking stuff up into helper methods, reorganizing your data structures a little bit, centralizing lookups/checks, and so on.
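The code samples from the original post didn't survive into this archive, but the before/after shape is easy to illustrate. Here's a hedged sketch – the names mirror the StoreKit-flavored steps above but are entirely hypothetical:

```python
def log(message):
    print(message)

def request_product_nested(product_id, can_make_payments, submit):
    # BEFORE: the happy path is buried at maximum indent, and the
    # error handling sits far from the checks that trigger it.
    if product_id is not None:
        log(f"requesting {product_id}")
        if can_make_payments:
            submit(product_id)
            return True
        else:
            error = "payments disabled"
    else:
        error = "nil productId"
    log(error)
    return False

def request_product_flat(product_id, can_make_payments, submit):
    # AFTER: every check is an early out, so the code reads exactly
    # like the flowchart: check, bail, check, bail, do the work.
    if product_id is None:
        log("nil productId")
        return False
    log(f"requesting {product_id}")
    if not can_make_payments:
        log("payments disabled")
        return False
    submit(product_id)
    return True
```

The two functions behave identically; only the flat one can be read top to bottom without holding a stack of pending conditions in your head.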

Something to keep in mind next time you find yourself hitting tab more than a couple times – flatten your conditionals!

The hardest part of software development is often the road from code in a repo to an artifact in the user’s hands.

There are a million ways you can ruin yourself along that long and winding path. You miss a DLL or dependency. You mark a flag wrong and it won’t run on a non-developer system. A setting gets changed on the system doing the compile and builds mysteriously fail. On multiple platforms (and who isn’t on at least a couple?), you forget to test on all 5 of your platforms and find out the build is broken – obviously or subtly – on one of them.

One building block that helps cut down on this pain is a build tool – a tool to manage what files are built in what order and with what settings. If you can fire off your build from a single command line command, it dramatically reduces the risk of breakage due to outside factors – and helps a lot with setting up build boxes. At this point I’ve worked with nearly every option: make, msbuild, xcodebuild, rake, Maven, Ant, CMake, premake, qmake, and even a couple of home brew systems. Here are my thoughts on each of them:

GNU Make. The granddaddy of all build tools. Cryptic syntax, most widely used on POSIX-compatible environments like Mac or Linux. It can do anything you want it to, if you’re willing to dive deep enough into it. Provides very little hand holding. Tools like automake and autoconf expand capabilities quite a bit, but they are anything but intuitive, and if your goal isn’t a UNIX command line tool, they may be frustrating to work with. Makefiles are generally shippable if you are willing to put enough smarts in them (since they are fundamentally built on top of the shell). Makefiles are also easy to generate, and many tools exist to programmatically do so (more on those later).

MSBuild. The successor to nmake (with a brief detour to devenv.exe), it owes a lot of its legacy to make. However, it’s fully integrated with Visual Studio, so if you have a Visual Studio project, it’s easy to drive. In general, vcprojs are pretty easy to programmatically generate, and also easy to ship to other Windows developers, which is a big bonus. No viability for sharing cross platform, except possibly in the context of Mono development.

xcodebuild. The command line tool for compiling Xcode projects. It works just like Xcode does, minus the goofy UI. Great for doing OSX/iOS builds, nothing doing for any other platforms. Xcode project files are relatively easy to ship to people, although there can sometimes be little subtleties that screw you up. One nice thing about Xcode’s build model is that it’s fairly easy to call your own scripts at various points in the build process. The downside is that xcodeproj’s are finicky and hard to generate.

Rake. Ruby is pretty sweet, and Rake builds on it in the Ruby way – that is, with a domain specific language tailored to the task at hand. The downside is that the docs are inscrutable – you pretty much need to be prepared to look at a lot of examples and dive into the code to understand it well. But it responds well to hacking and generally gets the job done. Since Rake just sequences commands, it works great for non-Ruby projects – it’s basically a much better Make.

Maven. I have seen Maven used very well in real Java projects, and abused heavily in non-Java scenarios. If you grok the Maven way and are prepared to conform to its view of the world, you can get good results. But in general I think it is much more trouble than it’s worth in anything but enterprise Java contexts.

Ant. I’ve used Ant several times on non-Java projects, to good results. Ant is powerful and has some nice capabilities for filtering/selecting actions. However, it also has an obtuse XML syntax that becomes cumbersome in complex build scenarios, and it can be finicky to set up all your Ant tasks properly.

CMake. CMake is ugly, but it’s an effective kind of ugly. The CMake language is gross and its codebase is complex, with important features often being driven by subtle combinations of settings. But the docs are pretty decent, the community is large, and it has good update velocity. It also generates pretty good project files for most IDEs. And it is pretty easy to hook arbitrary commands into key points in the build process, which is a big win. CMake is bad if you want to do a lot of file processing or complex logic, but good for making and running project files that work across many platforms – including iOS and OSX.

premake. Of all these technologies, I most want premake to rock. It uses Lua, which is an easy and low-dependency language, and it has a pretty good set of modules for emitting different projects. Most of the time, projects can be shipped, which is big, too. However, the core generators are finicky, and we had compatibility issues. And development velocity isn’t as high as we’d like. So we ultimately had to drop it. However, I think it’s worth a look again in the future.

QMake. QMake is mostly associated with Qt development, and exists to facilitate the preprocessing that Qt requires to generate all of its binding + convenience features. It takes a simple configuration language, and can be effective. However, its support for mobile platforms appears to be rudimentary, and it does not produce project files – it just sequences build commands.

Homebrew. My main experience here was a custom project generation tool I developed at GarageGames. (Ultimately, many others have touched it, and I believe that it is still in use as of this writing.) We decided to go the homebrew route because we needed to ship high quality project files to our customers. None of the existing tools could produce these (premake is now probably the closest). And our existing process of hand-tweaking projects resulted in a lot of broken releases. We ended up using PHP to process hand-written project file templates. It worked because we had a large enough team to be able to spend a few man-months refining it until it was good enough. The main takeaway from that experience was that it’s not as hard to do as you’d think – it’s just a matter of patience and groveling through exemplar build files to learn the format. The real cost is maintaining ongoing compatibility with all the different versions of all the IDEs. I hope that someday GarageGames releases this tool as open source.

So, with all those out there to consider – what am I using today? Well, we are using a hybrid of Rake and CMake. We use CMake for all the project generation + compilation, while Rake deals with sequencing calls to CMake and make or xcodebuild or what have you – mostly for the build box’s benefit. Our project targets iOS, Android, Mac, and Windows, and so far this combination has worked out well.
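A sequencing layer like our Rake one can be sketched in any scripting language. Here's a dry-run version that just assembles the per-platform command lines rather than executing them – the generator names and project paths are illustrative, not our actual setup:

```python
def build_commands(platform, build_dir="build"):
    """Return the shell commands needed to configure and then build one
    platform target: project generation first, then the native build tool."""
    generators = {
        "windows": ("Visual Studio 15 2017",
                    ["cmake", "--build", build_dir, "--config", "Release"]),
        "mac":     ("Xcode",
                    ["xcodebuild", "-project", f"{build_dir}/Game.xcodeproj"]),
        "linux":   ("Unix Makefiles",
                    ["make", "-C", build_dir]),
    }
    generator, build_cmd = generators[platform]
    configure = ["cmake", "-S", ".", "-B", build_dir, "-G", generator]
    return [configure, build_cmd]

for cmd in build_commands("linux"):
    print(" ".join(cmd))
```

A real driver would hand each list to subprocess and stop on the first failure – that single entry point is what keeps the build box honest.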

Ultimately, you want a tool that builds from a single command and doesn’t require user intervention to produce final build artifacts. Otherwise, you will be constantly chasing your tail as you move from developer to developer or platform to platform. Any of these tools can achieve this, so it’s a question of choosing the tool or combination of tools that fit your situation the best. Good luck!


The book is called Video Game Optimization, and it covers everything you need to know to get maximum performance from any software project – but especially games. If you’re struggling with getting a great framerate out of your game, I highly recommend checking it out. 😉

Video Game Optimization goes all the way from high-level concepts like planning for performance in your project’s timeline, to determining which broad area of your system is a bottleneck, down to specific tips and tricks for optimizing for space, time, and interactivity. Based on the course that Eric Preisz taught at Full Sail University on optimization, it isn’t the only book you’d ever want to read on the subject, but we think it is a great introduction!

The journey from that initial conversation where our mutual friend Jay Moore introduced us and suggested I would be a good co-author, to the day when we finished and shipped the book was a long but rewarding trek. Eric moved across the country from Florida to Nevada, as he moved from teaching at Full Sail University to running the Tech and Tools group at InstantAction. He also became a father with the arrival of his son, Grant. I left after 5 years at GarageGames and helped build a new company, PushButton Labs.

A lot has changed while we wrote it, but it still felt really good to arrive at GDC, visit the bookstore outside the exhibition hall, and find a big stack of Video Game Optimization sitting front and center. 🙂


I spent a little time with Flash 10’s 3d features recently. Since Flash 10.1 is imminent and FP10 has been at 90%+ penetration for a while now, it’s probably safe to start looking at using FP10 stuff in my projects. 🙂

My main concern was the transformation pipeline – I think there might be some benefits to using 3d positions for the rendering pipeline in PBE. So I wanted to do a brief survey of the built-in 3d capabilities, then look more closely at the transformation routines.

My first test was making a DisplayObject rotate in 3d (minimal source code). It runs well, and if you turn on redraw region display, you can see that it’s properly calculating the parts of the screen that need to be modified.

This was easy to write, but it revealed the primary flaws with the built-in Flash 3d capabilities. First, look closely – the edges of the shape are sharp, but the interior detail is aliased. This is because whenever the 3D rendering path is engaged, it turns on cacheAsBitmap. This is fine for simple scenarios (say, taking a single element in a UI and giving it a nice transition) but not for more complex situations.

In addition, because it’s cached, moving objects further/closer from the camera results in an upsampled/downsampled image. So you tend to get mediocre visual results if your objects move much.

My next step was to stop using the 3d capabilities of DisplayObjects, and just position them in x/y based on their 3D position (source code, notice it is two files now). This gave a massive performance gain. At low quality, 1000 particles runs at 1440×700 at acceptable FPS. Most of the overhead is in the Flash renderer, not the code to update DisplayObject positions, but it still takes a while to do all the transformation, and it causes a lot of pressure on the garbage collector from the 1000 temporary Vector3D instances created every frame. (600kb/second or so – not insignificant.)
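That GC pressure is the classic per-frame allocation problem, and the fix looks the same in any language: preallocate scratch objects and write transform results in place. A sketch of the pattern (Python rather than ActionScript, and the names are my own, but it carries over directly):

```python
class Vec3:
    __slots__ = ("x", "y", "z")  # fixed fields, no per-instance dict

    def __init__(self, x=0.0, y=0.0, z=0.0):
        self.x, self.y, self.z = x, y, z

def transform_into(matrix, v, out):
    """Apply a row-major 3x4 affine matrix to `v`, writing the result
    into the preallocated `out` instead of allocating a new vector."""
    m = matrix
    x, y, z = v.x, v.y, v.z
    out.x = m[0] * x + m[1] * y + m[2] * z + m[3]
    out.y = m[4] * x + m[5] * y + m[6] * z + m[7]
    out.z = m[8] * x + m[9] * y + m[10] * z + m[11]
    return out

# One scratch vector reused across every particle, every frame:
scratch = Vec3()
identity = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0]
result = transform_into(identity, Vec3(1.0, 2.0, 3.0), scratch)
```

With 1000 particles per frame, the difference between 1000 fresh allocations and zero is exactly the 600kb/second the collector no longer has to chew through.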

Next I figured it would be helpful to make my camera move around (sample code).

This required that I understand the coordinate space all this operated in. What are the coordinate spaces? According to the official docs, screen XY maps to world XY. So forward is Z+, up is Y-, right is X+. Once I figured that out, I had to prepare a worldMatrix with the transform of the camera, then append the projectionMatrix. The PerspectiveProjection class always seems to assume screen coordinate (0,0) is the center of the projection so you will have to manually offset. Maybe I was not using the projection right, since the docs imply otherwise.

There were two other details to sort out. First, I had to reject objects behind the camera, and second, I had to scale objects correctly so they appeared to have perspective. The solution revolved around the same information – the pre-projection Z. By hiding all objects with Z < 0 and scaling by focalLength / preZ, I was able to get it to behave properly.
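That scale rule is just the standard perspective divide. A sketch of the per-object test and scaling described above (the function and argument names are my own, not Flash API):

```python
def project(x, y, z, focal_length, cx, cy):
    """Project a camera-space point to screen space around center
    (cx, cy). Returns (screen_x, screen_y, scale, visible); points
    with z <= 0 are behind the camera and hidden, per the rejection
    rule above."""
    if z <= 0:
        return 0.0, 0.0, 0.0, False
    scale = focal_length / z          # perspective divide
    return cx + x * scale, cy + y * scale, scale, True

# A point 100 units ahead of the camera, viewed at 1440x700:
sx, sy, scale, visible = project(10.0, -5.0, 100.0, 500.0, 720.0, 350.0)
```

The same `scale` value drives both the screen position and the DisplayObject's scaleX/scaleY, which is what makes sprites shrink convincingly with distance.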

Next up is Matrix3D.transformVector… which is slow. Transforming 1000 vectors eats 3ms in a release build! This is really slow in absolute terms (Ralph Hauwert has a good example of the same functionality running much much faster). I didn’t really want to introduce Alchemy for this project. But we can use AS3 code that avoids the allocations, saving us GC overhead and getting us an incremental improvement in performance.

Time for some conclusions. I think that the 3D capabilities built into DisplayObject are OK, but very focused on light-weight graphic design use. Building a significant 3D application requires you to write your own rendering code built on Flash’s 2D capabilities (either DisplayObjects or drawTriangles and friends). The 3d math classes are OK, but immature. Some things are very handy (like the prepend/append versions of all the methods on Matrix3D), but the tendency for Flash APIs to implicitly allocate temporary objects limits the usefulness of some of the most central API calls. In addition, important assumptions like the order of the values in Matrix3D.rawData were not documented, leading to frustrating trial and error. I am excited to see Flash’s 3d capabilities mature.

