Making Games That Haven't Been Played

We ran into a peculiar issue while working with Meteor at Gummicube. We had two separate Meteor applications that both needed to access the same MongoDB database, since they shared many of the same collections, but each needed its own users collection. This was a problem: the users collection always has the default name of “users”, and there was no official way to customize that collection name.

After digging around in the Meteor source code, we came up with the following workaround. Put this in a .js file in your /lib folder in your Meteor project:

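Something along these lines; “app1_users” is just an example name here, and the index calls mirror the ones accounts-base sets up on the default collection, so adapt this to your Meteor version:

```js
// lib/custom-users.js
// Replace the default "users" collection with one using our own name.
// "app1_users" is an example name -- use whatever fits your app.
Meteor.users = new Meteor.Collection("app1_users");

if (Meteor.isServer) {
  // Recreate the indexes that accounts-base normally builds on "users".
  Meteor.users._ensureIndex("username", { unique: 1, sparse: 1 });
  Meteor.users._ensureIndex("emails.address", { unique: 1, sparse: 1 });
  Meteor.users._ensureIndex("services.resume.loginTokens.hashedToken",
    { unique: 1, sparse: 1 });
}

// If your Meteor version exposes the accounts system's collection handle,
// point it at the new collection as well.
if (typeof Accounts !== "undefined" && Accounts.users) {
  Accounts.users = Meteor.users;
}
```
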
This makes sure both server- and client-side code will be using the updated users collection name. Hope this helps! And of course, if you have suggestions on a better way to do this, please share in the comments below.

DynamoDB, a relatively new arrival to the NoSQL party, celebrated its three-year anniversary earlier this year. We have now seen it deployed in mature products like the portfolio of online games at TinyCo and our own app store optimization solution at Gummicube. It’s pay-as-you-go and extremely scalable, with basically zero administration overhead. However, it does have some uncommon limitations in schema design.

I completed a series of migrations from MongoDB to DynamoDB earlier this year and encountered both roadblocks and successes. Here’s a postmortem on what went down; I hope you’ll find this write-up useful.

Hey guys! I’ve been meaning to start the remake of Oshiro in my free time for a long while. I got inspired to tackle it at the beginning of this year, after seeing a couple of friends having a lot of fun playing the old web version. But I’ve been stuck in a holding pattern because I didn’t know what framework/technology to use. LoomSDK looked interesting with live code reload and ease of deployment, and trying out new, bleeding-edge platforms has always excited me. But when I got real about what I’m doing with my project (getting something into the store eventually, not just a fun pet project), I decided that I shouldn’t take a risk on a young, unproven framework. So I went with Cocos2D-JS, which has been enhanced with some new tools (Cocos IDE and Cocos Studio). To be honest, I don’t really know what those tools can do exactly, but their existence tells me the framework is mature and still well supported. And I just need to get a move on things 🙂

To start, I threw a placeholder tileset together just so I’d have something to work with. My first goal is to get a single-player puzzle level prototype working, then get to the look and feel after that. Here’s the placeholder tileset I’m rolling with:

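Code-wise, getting a tileset like that on screen in Cocos2D-JS only takes a few lines. Here’s a minimal sketch of the kind of layer I’m starting from (the TMX file and layer names are placeholders):

```js
// Minimal Cocos2D-JS layer that displays a TMX map built from the
// placeholder tileset. "res/level1.tmx" and "walls" are placeholder names.
var PuzzleLayer = cc.Layer.extend({
    ctor: function () {
        this._super();

        // Load the tile map exported from Tiled.
        var map = new cc.TMXTiledMap("res/level1.tmx");
        this.addChild(map);

        // Look up a layer by the name it was given in Tiled.
        var walls = map.getLayer("walls");

        // Read a tile GID at a grid coordinate to drive the puzzle logic.
        var gid = walls.getTileGIDAt(cc.p(0, 0));
        cc.log("tile at (0,0): " + gid);

        return true;
    }
});
```
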
Don’t have a whole lot of specifics yet, but Rick and I dusted off a puzzle game prototype that had been sitting in his digital drawers for a few years, until a recent chain of events led to its rediscovery. And now we’re going to make it come alive on an iPad near you. We’re tentatively calling it: Take Us Home!

Perry and I used to joke about what would get released first: FableLab’s next game or Couchbase 2.0. And yes, he won 🙂 But that does mean I get the option to use the new version to power my next game. Besides key operational improvements, 2.0 also added several key features that were missing in a side-by-side comparison with other document-store choices like MongoDB. Back in 2011, it seemed like a no-brainer that we would upgrade. But after spending a year and a half with a live deployment of Membase 1.7/1.8, I am finding good reasons to use the new Amazon DynamoDB instead.

Reason 1: Growing memory usage

A Couchbase cluster needs to keep the metadata of every single key in memory, even if the values are not in the working set. Here’s how the memory usage broke down for our live cluster:

The metadata alone came out to 78GB for us, and it was larger than the actual data in the working set! All of this metadata must remain in cluster memory at all times. We ran a cluster of 8 m2.xlarge servers, and metadata ate up 60% of the 131GB of cluster memory.

We probably could have collated more user data into fewer keys to get below our average of 30 k/v pairs per player. But the point here is that as the game grew, so did the memory requirement, regardless of the working set, because we couldn’t just delete old, inactive players who hadn’t logged in for a year! Animal Party had a stabilized player base of about 300k monthly actives, but we still needed enough memory for 5m players’ metadata. Yikes.
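
Here’s the back-of-the-envelope version of that math. The per-item overhead below is backed out of our own cluster numbers rather than an official figure:

```python
# Back-of-the-envelope Couchbase 1.x metadata sizing.
# The per-item overhead is inferred from our own cluster
# (78GB across ~150M items), not an official number.
total_players = 5_000_000       # lifetime players, active or not
kv_pairs_per_player = 30        # our average
overhead_per_item = 520         # bytes of metadata + key, per item

items = total_players * kv_pairs_per_player   # 150M items
metadata_gb = items * overhead_per_item / 1e9
print("metadata RAM: %.0f GB" % metadata_gb)  # -> 78 GB
```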

If you want more detail on calculating Couchbase memory usage, check out: http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-bestpractice-sizing-ram.html

Reason 2: Recovery

The inability to perform online compaction in 1.x was a real issue for us, especially when servers had to be restarted. Without compaction, databases take increasingly long to warm up after a reboot, and at our size that meant several hours of downtime. The auto-compaction in 2.0 should reduce this problem, but warm-up time will still be an issue the next time AWS goes into a tizzy. Granted, no one yet knows how the DynamoDB service will hold up during a future AWS outage. But for a small team like ours, I’d rather put the onus on the Amazon engineers than on us in a recovery situation.

After doing a bit more research, it turns out that the two issues above are not exclusive to Couchbase. Several other NoSQL solutions have similar problems.

What about DynamoDB? Judging from its spec, it should allow a game to age gracefully past its peak. You can dial down the access rate as concurrent users drop, and what you pay for additional storage as data grows is very small compared to the additional memory needed to hold it all in a NoSQL database. Increasing or decreasing the DynamoDB access rate takes about 10 minutes, so it’s also possible to run a script that ramps capacity up and down to match the daily traffic cycle, which is challenging to set up with any other solution.
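
As an illustration, here’s a sketch of what such a script could look like using the boto3 client; the table name and capacity numbers are placeholders:

```python
# Sketch: ramp DynamoDB provisioned throughput up for peak hours and
# back down off-peak. Run from cron. Table name and numbers are placeholders.
# Note: AWS limits how many times per day you can scale throughput down.
import sys

import boto3

PEAK = {"ReadCapacityUnits": 400, "WriteCapacityUnits": 200}
OFF_PEAK = {"ReadCapacityUnits": 50, "WriteCapacityUnits": 25}


def set_capacity(table_name, capacity):
    dynamodb = boto3.client("dynamodb")
    dynamodb.update_table(
        TableName=table_name,
        ProvisionedThroughput=capacity,
    )


if __name__ == "__main__":
    level = PEAK if sys.argv[1] == "peak" else OFF_PEAK
    set_capacity("player_data", level)  # "player_data" is a placeholder
```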

There are some ways around the aging issue for Couchbase. Old data can be identified and pulled out of the cluster into a different storage system suited to archiving. When a user tries to retrieve old data, the system pulls it out of the archive and restores it into Couchbase. However, if we’re going down that route, we might as well consider in-memory databases like VoltDB that provide transactions and SQL support.
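
The read path for that scheme is essentially a read-through cache. A sketch, with hypothetical stand-ins for the real storage clients:

```python
# Restore-on-access pattern: serve from the live cluster when possible,
# otherwise pull from cold storage and re-insert for next time.
# LIVE/ARCHIVE and the helper functions below are hypothetical stand-ins
# for real Couchbase and archive-storage clients.
LIVE = {}
ARCHIVE = {"player:123": {"level": 42}}


def couchbase_get(key):
    return LIVE.get(key)


def couchbase_set(key, value):
    LIVE[key] = value


def archive_fetch(key):
    return ARCHIVE.get(key)


def get_player_data(key):
    value = couchbase_get(key)
    if value is None:
        value = archive_fetch(key)      # slow path: cold storage
        if value is not None:
            couchbase_set(key, value)   # warm the cluster for next time
    return value


print(get_player_data("player:123"))    # restored from the archive
```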

There have been a lot of success stories about scaling up quickly with NoSQL, like OMGPOP with Draw Something, but there isn’t as much discussion about managing data in the late stages of the product life cycle. Zynga certainly has a lot of knowledge and proprietary solutions on this front (they use Couchbase as well, as one of the earliest adopters and a contributor to the technology), but it is something indie studios will have to tackle as we go.

We at FableLabs have been doing mobile development using Adobe AIR and our AS3 codebase on my Windows machine. The new FB 4.7 Beta 2 and Project Monocle have been working out really well, exceeding our expectations, and I will do a write-up on them later when I finally get a breather. However, I did run into a problem today getting iOS devices recognized by FB. After burning a few hours on it, I finally figured it out… and I hope none of y’all will have to go through the same problem 🙂

When you do “Debug over USB” and FB tells you that it doesn’t see the device, AND you swear that AIR 3.4 is configured, the latest iTunes is running, the cable’s connected, the iPad’s powered up, and sanity pills have been taken at the proper dosage, execute this command-line tool to see what the underlying problem is:

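The tool is idb.exe, which ships inside the AIR SDK bundled with FB. On my machine the path looks roughly like this (the plugin version segment will differ per install, so search your FB directory for idb.exe if needed), and -devices asks it to list the devices it can see:

```
"C:\Program Files\Adobe\Adobe Flash Builder 4.7\eclipse\plugins\com.adobe.flash.compiler_<version>\AIRSDK\lib\aot\bin\iOSBin\idb.exe" -devices
```
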
However, when I ran it, I got an error saying “The procedure entry point sqlite3_wal_checkpoint could not be located in the dynamic link library SQLite3.dll” and telling me to check my iTunes installation. I’m guessing most people will not hit this issue, but if your iTunes installation is as broken as mine, download SQLite3.dll from here and drop it into the same directory as idb.exe. Once you are able to get the command to run correctly, FB should be able to recognize your precious iPad/iPhone.

I just completed an upgrade of our prod Membase cluster from 1.7.2 to Couchbase 1.8.1 community edition, which was made available recently. Since I was planning to do the upgrade by taking all the nodes down and updating all the server software, which would incur downtime anyway, I figured I would also try out the newly available “EBS with provisioned IOPS”. Things went well for the most part; however, there is one key step in the upgrade that was not covered in Couchbase’s documentation.

If you are deploying your Couchbase server on a cloud service like EC2, you have likely changed your server settings to use a DNS name rather than a self-reported IP address (see: http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-bestpractice-cloud-ip.html). If that is part of your setup, you also have to make the same change to the database upgrade script that converts your data to the 1.8.1 format. Here are the steps I used to do the upgrade on my boxes running Amazon Linux AMI:

Of course, back up the server first. You never know what’s going to happen in one of these major upgrades.

If it all works, run it without the -n option, and it should complete very quickly (it took no more than a few seconds for me).

I found myself needing to reboot the box after the upgrade. Simply starting Couchbase right after the upgrade didn’t bring the server up properly.

After that, it’s back to waiting for the server to finish warming up. I am getting much better warm-up times with the new EBS with provisioned IOPS. No surprise there. Knowing how often disk ends up being the bottleneck (as with just about any database under the sun), I wouldn’t set up any future Couchbase nodes without this puppy.

Okay, I admit it, I am a Tim Schafer fan. If you know what we do at FableLabs, it should be no surprise that I love to see good stories in a game. And Schafer has produced some of the most beautiful and story-rich graphic adventure games of the past. He now turns to Kickstarter to fund his next “modern age” point-and-click adventure game. Do yourself a favor and check it out!

On the KS page, they point out that “even something as ‘simple’ as an Xbox LIVE Arcade title can cost upwards of two or three million dollars. For disc-based games, it can be over ten times that amount.” This is something I mentioned in my other post about the rise of game clones in the social/freemium space. Traditional games are expensive to make, and developers have to finish all the content at the time of release, because players don’t continue to download updates to content and game mechanics each time they play.

Yes, downloadable content and things like Steam updates are slowly starting to change that, but it is not the same as freemium games for one major reason. If I paid $20 up front, I need to know that there will be $20 worth of content ready for me. But if I started a game for free, I wouldn’t mind if it only had three weeks’ worth of content; I’d just see how the game evolves as I continue to play. So instead of having no revenue stream until the entire game is finished, freemium games can start to receive revenue at a much earlier stage.

Crowd-sourcing, however, is giving game developers another viable way to raise funds through the dev cycle. There have been a few indie games that were funded and eventually released through KS (e.g. No Time To Explain), but Double Fine just proved (this morning!) that crowd-sourcing can do a lot more. Their original pledge goal of $400k is rather small for any studio-quality game, but they already hit $700k in just over 9 hours. Obviously, having Tim Schafer as the lead makes a night-and-day difference (to the point where they didn’t even need to reveal any info or screenshots of the game being made), but this reinforces two of my existing beliefs:

1. Story-driven, point-and-click adventure games are viable today

The recent successes of Machinarium and Sword & Sworcery EP and the wild funding success of Double Fine show that there is a demand for adventure games. Their audience is somewhat different from the popular FPS, RTS, or MMORPG crowds, but developers are finding new ways to reach those players. We’re also seeing fewer adventure games focused on challenging puzzles, and more focus on making sure puzzles do not impede players from progressing through the plot.

2. Studios are finding new paths to funding and revenue outside of the old developer-publisher relationship

Whether it’s freemium, crowd-sourcing, or episodic releases, developers are finding new ways to get it done without relying on a publisher. I think this bodes well for everybody, because it will allow more courageous and out-of-the-box ideas to see the light of day.

Can’t wait to see how much momentum Tim Schafer and Double Fine will generate from this KS project.

I have been looking for a memcache client that plays well with gevent, and I stumbled upon one today:

https://github.com/esnme/ultramemcache

It’s written and maintained by the good folks at ESN.me. They built Battlelog, the social network for Battlefield 3, using gevent and this memcache client. They also released a gevent-compatible MySQL driver and a few other interesting Python projects.
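
Basic usage looks something like this, going by the project’s README (the address is just a local default):

```python
# Minimal ultramemcache usage; the module installs as "umemcache".
import umemcache

# Point this at your memcached instance.
client = umemcache.Client("127.0.0.1:11211")
client.connect()

client.set("greeting", "hello")   # keys and values are plain strings
result = client.get("greeting")   # returns (value, flags), or None on miss
if result is not None:
    value, flags = result
    print(value)                  # -> hello
```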

I’ll be doing load/stress testing with various clients and setups for flask/gevent in the coming weeks, and I’ll post my findings here.

Having been in contact with several folks involved in the accusation and lawsuit, it’s been interesting to hear people’s take on the issue. There is a really fine line between inspiration and copying, and this problem has existed in every creative field for a long time. It has come up in the game industry before, but it has become more of a focus in the current social/freemium game landscape. Why? Because the effort involved in cloning a game has gone down while the financial reward has gone up.