
As owner/organizer of the SeaFall game for our group, I felt responsible for going into the game with as good a handle on the rules as possible, so I took some time to read through the rulebook a couple of times before we gathered, to make sure I understood the intricacies of the system that Rob Daviau (of Risk: Legacy and Pandemic: Legacy fame) had built for us. It’s a pretty beefy rulebook even before you start to unlock the unfolding “legacy” portion of the game — Daviau himself calls it a “medium-to-heavy” game system. One side effect of investing in the rules early on was that the tease of narrative and future revelations left me deeply curious and eager to get the group together and kick off the game, if for no other reason than to begin unlocking the mysteries behind it.

As a legacy game, part of the fun of the core mechanics is knowing that you don’t know all the rules at the beginning of the campaign; you expect things to go sideways halfway through. (In fact, having come to this game from midway through a Pandemic: Legacy campaign, I’m starting to learn just how much things can go sideways, but that’s another post.) What’s different about SeaFall is that this is the first legacy game that is built off of a fundamentally original core game system, rather than taking an existing game and layering a legacy campaign on top of it.

Board setup – the main board, side board, and province boards for four players.

Taking the time to read through the rules a couple of times and watch a tutorial video was well worth it; otherwise we probably would have been pretty lost when we first sat down to play. Since we came to the table forearmed, however, setup went fairly smoothly, especially considering all the components to prepare. The boards themselves provided good guidance on where the pieces needed to be placed. The little boxes called “province chests,” used to store the individual players’ chits that carry over from game to game, are genius. All in all, only two elements could have been improved, in my opinion: the way the “milestones” (a mechanic for progressing the game narrative) are provided to the players during setup for the first two sessions, and the placement of the event cards, which describe the state of the world for a portion of the year. Unlike most other cards, the event deck doesn’t have a dedicated place on the main or side board, leading to confusion about where it should go. The choice to use one of your enmity stickers, normally reserved for tracking when you’ve wronged another player or island, to tag your own portrait is also a little weird; a dedicated sticker would have been a less incongruous choice.

Preparing to set sail

After setup came the actual gameplay, and here is where I’m starting to grow concerned. The primary issue I’m facing is that the core mechanic of a game turn is…well…cumbersome, to put it politely. The design forces the players to choose one of four “guilds” to employ each round, which enables them to buy and sell goods, build enhancements to their province or ships, explore a site on an island, or conduct a violent raid. On its face, this seems reasonable, but in practice it leaves every player struggling to get ships where they need to be to execute their plan, leading to a very choppy, sluggish-feeling experience. Forward momentum is hard to come by.

One thing that is clear is that each game is going to be very different. The first two games we’ve played unlocked enough of the board that we won’t be able to do much more unlocking for a while — the low-hanging fruit on our board is taken — and instead we’ll need to change our strategy to leverage the fruits of our exploration from previous games. On one level that’s fine, and in fact part of the point of a legacy game. The flip side, though, is that I’m concerned we may never reach the point where the rounds proceed smoothly, because we will forever be re-evaluating how to get our guilds to do what we want in an ever-changing environment.

Finally, for me personally, coming from a cooperative game like Pandemic: Legacy, where the deck drove the uncertainty within some constraints, moving to a competitive game with unbounded, dice-driven uncertainty makes for a fundamentally less fun experience. Also, dice hate me. These two statements may be related.

I’m still curious to see how the game evolves as we “reset the board” each session in a semi-dynamic world, and of course there are strong hints that the whole game is going to go sideways sometime in the future. Fundamentally, though, I must admit I’m a lot less excited about the whole experience than I was before we started playing. Hopefully the rounds will smooth out as we get more practiced, but I’m not optimistic. We will see how things unfold. My people will struggle through. It is my destiny to become Emperor of the sea, after all.

Up until now, I’ve kept this blog strictly about technical discussions (and there hasn’t been too much interesting to discuss lately that hasn’t had extensive coverage elsewhere, sorry about that!). Today, however, I have some exciting news to share that isn’t strictly technical, though it will no doubt have a major impact on the things that I’m working on and may end up reflecting (positively) on the posts I make in the blog going forward.

I am thrilled to announce that this Monday, August 20th, is my first day at Rapid7, an awesome cybersecurity company that has recently opened a new innovation center in Kendall Square in Cambridge, MA. I’m sure you have heard the increasing reports of how vulnerable our current online services are to break-ins and theft, and as more of us put ever greater portions of our lives in the cloud, it becomes ever more critical that those spaces be secured. Rapid7 is on the side of the good guys, helping companies big and small get a handle on the security of their environments, and I’m extremely excited to do my part to help make the cloud a safer, more secure place. My role at the company will continue to involve a lot of front-end work, and my first goal is to build a JavaScript environment for our new tools that is as healthy and robust an application development environment as any on the backend, so watch this space for more tips and ideas on that front!

If I can get a little personal for a minute, this is such an exciting move for me. Not only will I have the opportunity to take what I have been learning about healthy JavaScript application craftsmanship and apply it to an industry that is doing really good work for society, but this will be the first time in my entire Boston career that I will finally be working in Cambridge rather than along the 128 tech circle. That means much more opportunity to cross-pollinate ideas with other developers working on awesome projects in Kendall Square, and it also means finally putting the Minuteman Bikeway, which runs alongside my house, to work. No more driving for me; the last couple of days have been spent, in part, getting my bike in shape for daily commuting. Oh! And Rapid7 was named the #1 best medium-sized business to work for by the Boston Business Journal this year. Jealous? We’re hiring!

Of course, a big move like this doesn’t come without ramifications. It is with a heavy heart that I must reduce my role in the company I co-founded, 10×10 Room, to that of advisor. I will be investing a lot of my brainspace going forward into rolling out the best possible front-end architecture I can for Rapid7, and it would not be good for either Rapid7 or 10×10 Room if I tried to ride both horses at once. I have made a ton of friends in the game development industry over the past few years, and I am genuinely sorry that I never got a chance to meet more of them in person. I will still be following everyone on the social interwebs, and I can’t wait to hear what amazing work y’all will be sharing with us in the future. No matter how old I get, I’m sure I will always identify as a gamer, and it was a true honor to consider myself a game developer for a short while as well. I’m also incredibly excited to see what’s coming down the pipeline for 10×10 Room and Conclave, which makes this an even more difficult departure. Needless to say, Rapid7 had a very, very high bar to clear to get me to leave my current company, which I guess just goes to show once again how excited I am about this new role.

Well, that’s enough nattering on personal life stuff on this blog. For the tl;dr crowd: I’m going to be doing a lot more JavaScript architectural work in the future for a new company, the new company is awesome, and both of those things put together will hopefully mean there will be some more fun JavaScript posts coming your way here soon. Watch this space, and thanks for reading!

Tetsuo reminds us that projects allowed to grow out of control often become difficult to manage.

When you build a webapp in JavaScript, you are using the language in a very different way than its designers originally intended. Case in point for this discussion: most languages designed for big projects provide some mechanism for each source file to declare the other source files it depends upon. C and C++ have the “#include” directive, Java and C# have a strict class design that lets the compiler determine where to find every source file, and Ruby has the “require” statement.

The JavaScript language has no such mechanism built in. Once you are writing code in a JavaScript file, there is no easy way to pull in the other JavaScript files it depends on. Fortunately, JavaScript developers have come up with several mechanisms to solve this problem.

In the old days before Visual Studio, C/C++ programmers used a file called a ‘Makefile’ to tell the build tool which source files were needed to build a program (kids, ask your parents). The Makefile was, as its name implies, a separate file the programmer needed to open and update every time a new source file was added to the project. It was a cumbersome process and prone to error, but it’s also the easiest pattern for a JavaScript programmer to follow. If you’re building a web-based application, you can use the index.html page as your “Makefile” and add every JS file you need to the <head> element of your index page so the browser loads all the needed files, like so:
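A sketch of what that looks like (the file names here are hypothetical, and order matters — each script must appear after the scripts it depends on):

```html
<head>
  <!-- Third-party libraries first, since everything else relies on them. -->
  <script src="js/lib/jquery.js"></script>
  <!-- Shared helpers must load before the views that use them. -->
  <script src="js/util/helpers.js"></script>
  <script src="js/views/mainView.js"></script>
  <!-- The application entry point loads last. -->
  <script src="js/app.js"></script>
</head>
```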

There are a few problems with this approach. As your project grows, this list of files becomes cumbersome and unwieldy. It will always be a pain for your developers to open index.html to register any new file in the architecture, and there’s no easy way to tell which files depend on which others; that becomes especially onerous when files must be loaded in a specific order so that each file loads after its dependencies. Finally, once your webapp is ready for production, you will probably want to run all these files through a minifier anyway, at which point you’ll need to edit index.html to load only the minified version of the app rather than all the source files separately.

Don't let this happen to your project.

Fortunately, there are tools like RequireJS that, with a minimal amount of restructuring, allow us to declare our JavaScript dependencies within each JavaScript file. This makes the dependency tree much clearer, which in turn makes it much easier to move a subset of code over to a new project. Developers no longer need to manage a resource list in an external file like index.html. And finally, when the project is ready to ship, RequireJS provides tools that assist in minifying the entire project for production.
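As a rough sketch of the idea, here is what an AMD-style module declaration looks like. The module names and the helper are hypothetical; the key point is that the file itself names its dependencies, and the loader figures out the ordering. (The factory is written as a plain function so it can also be used without a loader present.)

```javascript
// The module body: receives its dependencies as arguments instead of
// assuming they were loaded earlier by index.html.
function makeMainView(helpers) {
  return {
    render: function () {
      return "Hello, " + helpers.capitalize("world") + "!";
    }
  };
}

// Register with an AMD loader like RequireJS only if one is on the page.
// "views/mainView" and "util/helpers" are hypothetical module paths.
if (typeof define === "function" && define.amd) {
  define("views/mainView", ["util/helpers"], makeMainView);
}
```

Because each file declares what it needs, the loader can fetch dependencies in the right order automatically — no hand-maintained script list required.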

In the last post we discussed how digital multiplayer games have evolved, various strategies for getting different game clients to keep their games in sync, and why the core HTTP request/response architecture of the web was fundamentally flawed when it came to multiplayer games. In this post, we’ll dig into the HTML5 solution to this problem and how to use it in desktop or mobile web applications.

WebSockets are a new technology written in response to JavaScript developers using AJAX calls to keep checking back with the server to see if anything new had come in. It was clear that this design was not meeting the needs of modern Web 2.0 applications, which wanted to leave a connection to the server open for as long as the web page was up, so that any new information from the server could instantly be read by the JavaScript running on the client and displayed.

At the lowest levels, WebSockets do piggyback on top of the World Wide Web infrastructure (hence the name). They follow a URL scheme similar to the ubiquitous HTTP definition. Since the underlying protocol is different from HTTP, however, the URL begins instead with a new scheme: ‘ws’ (for unencrypted streams) or ‘wss’ (for encrypted streams). For example:

ws://yourserver.com/yourwebsocketresource

If you’re running your own web servers, you’ll need to have a program running on the backend to handle WebSocket requests such as these. Apache’s ActiveMQ project is one example of WebSocket-aware server software. A full discussion of the pros and cons of various server configurations and software packages to handle WebSockets is beyond the scope of this post.

WebSocket technology is new enough that browser support should not be assumed when running your client HTML5 application. To work around this, several JavaScript libraries wrap the WebSocket calls in their own API and can downgrade the connection to an HTTP polling connection when the client is running in a browser without WebSocket support; we’ll dig into one of those wrappers momentarily. In the HTML5 mobile space, WebSockets were added to Mobile Safari in iOS 4.2 (though the WebSocket spec supported there is out of date, so make sure your servers are compatible with the old spec if you want to support iOS devices), but as of this writing they are not supported in the default Android browser. WebSockets are supported in the newly announced Chrome for Android beta, which only runs on Android 4.0.

If you are running in an environment that supports WebSockets, interacting with one on the client side is delightfully simple.
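Here is a minimal sketch of the standard client-side WebSocket API. The URL and the message shapes are hypothetical; the point is how little ceremony is involved — construct the socket, assign a few handlers, and you have a persistent two-way channel:

```javascript
// Open a WebSocket and wire up its event handlers.
function connect(url) {
  var socket = new WebSocket(url);

  socket.onopen = function () {
    // The connection is live; we can push messages to the server at any time.
    socket.send(JSON.stringify({ type: "join", player: "newPlayer" }));
  };

  socket.onmessage = function (event) {
    // Messages arrive whenever the server has something to say -- no polling.
    var message = JSON.parse(event.data);
    console.log("server says:", message);
  };

  socket.onclose = function () {
    console.log("connection closed");
  };

  return socket;
}

// Only attempt the connection in a browser that supports WebSockets.
if (typeof window !== "undefined" && typeof WebSocket !== "undefined") {
  connect("ws://yourserver.com/yourwebsocketresource");
}
```

In a real application you would also want an onerror handler, or one of the wrapper libraries mentioned above so the connection can fall back to polling on older browsers.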

Along with the recent performance upgrades in the major browsers’ implementation of the HTML5 canvas, there is one other big piece of the puzzle falling into place that will provide HTML5 games with an environment that allows them to look and feel just as much like a “real game” as a game developed for Xbox Live Arcade or Steam, particularly where multiplayer is concerned. That piece is the WebSocket, which finally provides web applications with genuine realtime 2-way communication with game servers and/or other devices. But before we get to a detailed discussion of WebSockets, let us take a look at the history of networking as it relates to games and see how we got to where we are today.

A talks to B, B talks to A

In the beginning, there were two game clients who wanted to talk to each other. My personal first experience with this was a little first person shooter called Doom. The clients each ran their own copy of the game and in multiplayer mode, each copy of the game sent messages back and forth over the network so you could see where the other players’ little guys were running around and could try to blow them up before you got blown up. You know, the usual.

When Blizzard Entertainment first set up Battle.net to allow Diablo players to find each other over the Internet and play together, they followed a similar simple networking model. Each client ran a copy of the whole game and sent sync-up messages to the other players so you could keep track of what your friends were doing. As long as everyone followed the rules, this was fine. However, it quickly became clear that since each client was the final authority over the game running on that client, it was very hard to catch cheaters who hacked their local copy of the game to give their character an unfair advantage.

Clients talk to the server, the server talks to the clients

Enter the client/server model. In this system, the game is designed from the ground up not to trust the client. Since clients can be hacked, a server now runs the central game processing, and clients have only very limited control over their environment. The messages a client may send to the game world are severely restricted as well. Take, for example, a game where a player may move at most three squares a turn. In the old peer-to-peer environment, a client might broadcast to the other players, “My player is now in location 200, 200!” and the rest of the clients would simply accept this message; a hacked client that didn’t enforce the three-square rule could bypass it trivially. In a client/server environment, the server verifies every message the client sends. A hacked client that tries to bypass the three-square restriction gets caught when it claims its player is in a location that should be impossible to reach in one turn. At the very least, the illegal message can be rejected by the server and never passed on to the other clients playing the game.
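The three-square check above can be sketched in a few lines of server-side code. This is an illustration, not taken from any real game; the function names and the grid rules are assumptions:

```javascript
// On a square grid where diagonal steps are allowed, a single step still
// only covers one square, so the distance in squares is the larger of the
// two axis deltas.
function squaresMoved(from, to) {
  return Math.max(Math.abs(to.x - from.x), Math.abs(to.y - from.y));
}

// The server checks every claimed move against the rules before
// broadcasting it to the other clients.
function validateMove(currentPosition, claimedPosition, maxSquares) {
  return squaresMoved(currentPosition, claimedPosition) <= maxSquares;
}

// A legal three-square move is accepted...
validateMove({ x: 200, y: 200 }, { x: 203, y: 200 }, 3); // true
// ...but a hacked client claiming an impossible jump is rejected.
validateMove({ x: 200, y: 200 }, { x: 250, y: 250 }, 3); // false
```

The client never gets to be the authority: it proposes a move, and the server either applies it or throws it away.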

Battle.net took on just such an enforcer role for Diablo 2, which is why players were given the choice to play Battle.net-only characters, who could be considered trusted by other players to have never been hacked, or they could play “local” characters with whom Battle.net did not play a watchdog enforcer role, and thus the characters may or may not have been given unfair advantage through hacking the game. Even the Blizzard behemoth World of Warcraft, at its core, uses a similar model, though hundreds of clients are allowed to connect to a World of Warcraft server at once (actually several different servers make up a World of Warcraft game world, which is why players see a loading screen when they move from continent to continent in the game–the loading screen masks the hand-off of the character across different game servers).

On the HTML side of the world, client/server communication was around long before it became clear that client/server networking was required for trustworthy multiplayer gaming. The basic form of communication between a web page and its web server has always been something called “HTTP”, which is designed around the core idea that the web browser will ask for a single resource–a web page, an image file, a video stream, etc–then hang up the connection after the resource is delivered. In the course of loading this blog post, your web browser issued several HTTP requests: one for the post itself, and one for each image embedded in the post, as well as some extra requests invisible to the reader.

As it turns out? For games, this is a terrible system, even though the web hands you the same client/server architecture the multiplayer game world migrated to. The web already has a server in place, ready to act as the game enforcer protecting players from cheaters with hacked clients, but there is a fundamental problem in the “hang up after every request” part of the HTTP design. For multiplayer games, you don’t want to hang up. Ever. You want to keep that line open so you can be in a constant state of communication with the server: keeping it up to date with what your client is doing and getting constant updates on what the other clients are doing. In HTTP-land, that means a whole lot of wasted bandwidth as clients constantly set up new connections to send updates to the server and check for updates from other players. It also means web servers have to set up and tear down a lot of connections that may not contain any data at all if nothing has happened in the game since the last time the client checked.

TL;DR: HTTP was never intended for this kind of communication.

Which leads us, at last, to WebSockets: a new protocol that allows web pages to open a connection to a server and keep that connection open, both for sending and receiving data, for long periods of time. In my next post, we will explore WebSockets in detail and see what options HTML5 developers have today to leverage this new technology.

This is a fairly specialized topic, but it’s also an easy one, so let’s devote a quick blog post to it for the sake of thoroughness. Maybe it will help some of you HTML5 web searchers out there. (Welcome, web searchers!)

Some of you may be wondering whether an HTML5 app running on the iPhone has access to the device’s latitude and longitude, or “geolocation.” It does! One of the new HTML5 APIs is the Geolocation API, and it gives your JavaScript access to the device’s location information just as a native app has.

Fun fact #1: when you fetch the iPhone’s geolocation info, the little compass icon in the status bar appears, just as it does for a native app, to remind the user that your application is tracking where they are.

Fun fact #2: when you attempt to access the device’s geolocation info, the user will get a popup asking if they want to grant permission to your app to…you know…track where they are. They may say “no.” A good app will be prepared for rejection.

Fun fact #3: many modern desktop browsers also support the Geolocation API! In my experience, though, a desktop computer’s best guess at where you are is a joke.

Get on with it

Final fun fact: You know how when you’re using Google Maps on your phone to drop a pin on the map showing where you are and that pin moves back and forth a few times before it finally settles down? You’re going to get that, too. When you begin querying the iPhone for its location, it will take a little while before the iPhone can come up with a final value for you, so be prepared for the location to change on you even if the user is sitting still.

So how do you fetch the location of the device, assuming your user allows you access to this private information? The call is pretty straightforward:
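A minimal sketch of the standard Geolocation API call (the handler names and log messages here are illustrative):

```javascript
// Turn a Position object into a human-readable string.
function formatPosition(position) {
  return "lat: " + position.coords.latitude +
         ", lon: " + position.coords.longitude;
}

// Guarded so this only runs in a browser that exposes geolocation.
if (typeof navigator !== "undefined" && navigator.geolocation) {
  // One-shot query. Use navigator.geolocation.watchPosition() with the
  // same arguments if you want continuing updates as the fix improves
  // or the device moves.
  navigator.geolocation.getCurrentPosition(
    function (position) {
      console.log(formatPosition(position));
    },
    function (error) {
      // The user may have declined the permission popup -- be ready for it.
      console.log("no location for you: " + error.message);
    }
  );
}
```

Note that both callbacks are asynchronous: the page keeps running while the device works out where it is.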

The coordinates that are returned are (on the iPhone, at least) floating-point numbers that go down many, many decimal places. In practice, I’ve found those decimal places give you a pretty decent idea of which house you’re at, but not enough resolution to tell which room of the house you’re in. What you do with that information is up to you! There are several tools available to help you put a latitude/longitude value on a JavaScript map, for example, but that reaches into general JavaScript programming and beyond the scope of this blog, at least for now.

Or you could help poor Spirit find the nearest Starbucks. Goodness knows it has earned a latte.