Monthly Archives: August 2013

If you missed all of the buzz, Flash streamed himself laddering for almost 3 hours earlier today. You can find the recording on Twitch, but if you’re just interested in his build orders, I have them here. Let me know if you see any mistakes; I probably missed things (for example, I think I missed him getting Combat Shield most of the time), so let me know what I need to fill in.

Wow, PvZ feels like it has changed a lot over the past few months. Primarily, Naniwa’s 1 Gate Expand has changed how we think about the matchup, but the HotS meta-game has also settled since I first wrote up this guide. Here’s the new version you will see on the Protoss Strategy page. As usual, let me know if you see any mistakes or have any questions or feedback.

Protoss versus Zerg

Of the 3 matchups, PvZ is the most dynamic. Whereas PvP is build order chaos and PvT generally plays out similarly, PvZ has safe economic openings into a variety of styles. You should watch this Apollo video for a solid way to play PvZ.

The past few days have been very productive for me on Spawning Tool on several fronts. Not only did I work through several features, but I also made some important backend changes. Consequently, I have also thought about future development, product, and business concerns that I think might be of interest to the community.

Let’s start with the backend changes. First, Graylin updated sc2reader to 0.6.0, which brought 2 important changes for Spawning Tool: support for patch 2.0.10 replays and support for GameHeart replays. See my last post for what GameHeart parsing entailed. Importantly, this brought in ASUS ROG Summer 2013 replays and corrected data from various other replays. Second, I migrated from a MySQL to a PostgreSQL database tonight. This change won’t impact end-users much (other than some load times), but it makes me feel better about the data integrity.

Next up are the new features. First, you can now view counts of the abilities cast in a game, so you can see that Protoss players typically use Psionic Storm about 2.67 times a game, whereas they can expect 1.72 EMPs and 1.37 Snipes against them. The data is pretty rough when summarized from so many replays like this, but hopefully more specific queries will yield more interesting results.
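The idea behind these counts is simple aggregation over a replay’s command events. As a rough sketch (the event shapes here are made up for illustration; a real replay parser like sc2reader emits much richer objects), counting casts per ability might look like this:

```python
from collections import Counter

def ability_counts(events, player):
    """Count how many times each ability was cast by a player.

    `events` is a list of (player, ability_name) tuples -- a simplified
    stand-in for the command events a replay parser would emit.
    """
    return Counter(ability for p, ability in events if p == player)

# Toy event stream from one hypothetical game
events = [
    ("Protoss", "Psionic Storm"),
    ("Protoss", "Psionic Storm"),
    ("Terran", "EMP"),
    ("Terran", "Snipe"),
]

print(ability_counts(events, "Protoss"))  # Counter({'Psionic Storm': 2})
```

Averaging these per-game counters across many replays gives the kind of per-game figures quoted above.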

You can also specify times to capture particular counts, so you can see that on average, 12.32 SCVs are built in the first 5 minutes of a game. You can now also filter replays by the date played so you can be sure to only consider the latest and greatest replays from your favorite players.
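Capturing a count at a particular time boils down to filtering build events by timestamp before aggregating. A minimal sketch, again with a made-up event format rather than the real parser output:

```python
def count_in_window(build_events, unit, end_seconds):
    """Count units of a given type built before `end_seconds` of game time.

    `build_events` is a list of (seconds, unit_name) pairs, a simplified
    stand-in for parsed unit-born events.
    """
    return sum(1 for t, u in build_events if u == unit and t < end_seconds)

# Hypothetical early-game SCV production
builds = [(0, "SCV"), (17, "SCV"), (34, "SCV"), (301, "SCV")]
print(count_in_window(builds, "SCV", 300))  # 3
```

Averaging that per-replay count over a filtered set of replays yields figures like the 12.32 SCVs in 5 minutes mentioned above.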

So all of these changes have me looking forward to the direction of Spawning Tool. Over the past few months, I have been talking a lot with ChanmanV, who has provided product direction for Spawning Tool. The current feature set represents what we consider a Minimum Viable Product (MVP) for showing what basic statistics we can pull out of replays. It’s been a strange road, but Spawning Tool today is far more developed than I imagined when I first sat down pulling an all-nighter to just build something. I originally envisioned it as a proof of concept and technical achievement for the community to get excited about. Today, it’s a product with potential value, and I need to re-evaluate it in that context.

The primary change is that the Spawning Tool site is no longer open source. It was originally open source because it was supposed to be a demonstration for the community, and I hoped to pick up collaborators along the way. Well, I didn’t find many collaborators (though I’m still open to it; email me if you’re interested!), and between forums, code contributions, and the still-open-source spawningtool parser, I hope I’m doing enough. There are lots of great reasons to open source things, but today, the site doesn’t quite fit.

The other aspect is that I need to start looking forward towards a business plan. Before I scare you off, know this: I intend for Spawning Tool to always be free for public, individual use. The big picture goal of the site is to make quantitative thinking pervasive in the StarCraft community, and I don’t want to change that. I thankfully have a day job that I really like, and this is something that I’m doing for fun and for the community on the side. Given that, there are server and development costs to cover. They’re currently trivial (~$30/month), but they will increase with scale. I’m thinking about a lot of different ideas, so let me know if you have any thoughts on it. Hopefully we can get bigger, commercial entities interested and have them throw us some change, but nothing is locked down.

As usual, leave a comment or email me (kevin@kevinleung.com) if you have any thoughts. And if you’re here for my StarCraft strategy content, I’m working on updating my Protoss strategy guide right now. I had forgotten how much cheese you get from Zerg in real laddering, like this 8 Pool. I’ll try to cater the content to this type of play.

First, you can see units killed and units lost in the research tab. For example, you might be curious to know how many Marines a certain player killed during Dreamhack Valencia or how badly their workers are harassed on average. These stats, of course, are very dependent on how long a game goes and what the unit compositions are, but they’re in there and maybe fun to consider.
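Because raw kill counts grow with game length, one simple way to compare them across games (not necessarily what the site does; just an illustration of the caveat above) is to normalize per minute:

```python
def kills_per_minute(kills, game_seconds):
    """Normalize a raw kill count by game length for fairer comparison."""
    return kills / (game_seconds / 60.0)

# 45 kills in a 15-minute game
print(kills_per_minute(45, 900))  # 3.0
```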

Second, I split out the win rates data onto its own page from research, so for any of your favorite filters, you can see how the games go. The most obvious use is to see how a player does in different matchups. It may be unsurprising how good JvZ is (the other numbers may be a bit off because of GameHeart replays; more on that in the next paragraph), but you might be surprised to see that TLO wins 80% of his games from 21 to 25 minutes long, but only wins 16.7% (1-5) of games longer than that. How strange.
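Bucketing win rates by game length is just a group-by over (length, result) records. A minimal sketch with invented data (the real queries run against the database, not in-memory lists):

```python
def win_rate_by_bucket(games, buckets):
    """Group (game_seconds, won) records into length buckets.

    `buckets` is a list of (low, high) second ranges; returns the win
    rate per bucket, or None for an empty bucket.
    """
    rates = {}
    for low, high in buckets:
        results = [won for secs, won in games if low <= secs < high]
        rates[(low, high)] = sum(results) / len(results) if results else None
    return rates

# Hypothetical games: (length in seconds, 1 = win, 0 = loss)
games = [(1300, 1), (1350, 1), (1400, 1), (1450, 1), (1490, 0),
         (1600, 1), (1700, 0), (1800, 0)]
buckets = [(1260, 1500), (1500, 3600)]  # 21-25 min, and longer
print(win_rate_by_bucket(games, buckets))
```

With these made-up records, the 21-25 minute bucket comes out to 0.8 and the longer bucket to about 0.33, mirroring the shape of the TLO stat above.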

Third, GameHeart support is coming soon. GameHeart games are unusual for a few reasons. First, the players and teams aren’t reflective of the actual matchup because the actual players are only picked in the in-game lobby, so the observers are mixed into the data and races may not be accurate. Second, all events are mistimed because the actual game doesn’t start until the in-game lobby is set. Finally, because the teams are misaligned, the winner may not be marked correctly.

So the fix is coming soon, and I can explain the process. I mentioned before that Spawning Tool is built on top of an open source replay parser: sc2reader, an amazing library that literally turns 1s and 0s into comprehensible data structures. Since many people may be interested in having GameHeart replays normalized to work like standard replays, this functionality should be added to sc2reader (not spawningtool) since it will have better reach that way. sc2reader was cleverly built to allow for plugins for specific functionality. Since sc2reader is an open source project, I wrote up a plugin to normalize GameHeart replays and submitted it to the maintainers of sc2reader to be incorporated into the primary project. If they like it, they can accept the request, and it will be available for everyone (including me) to use. Anyways, get excited about that because there are a lot of Spawning Tool replays that aren’t quite working correctly.
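The core of that normalization is conceptually small: discard everything before the real game starts and re-zero the timestamps (the actual sc2reader plugin also has to remap players, observers, and the winner, and its interface looks nothing like this). A toy sketch of the timing fix, with a made-up event format:

```python
def normalize_gameheart(events, real_start):
    """Drop events before the real game start and re-zero timestamps.

    `events` is a list of (seconds, description) pairs; `real_start` is
    when the in-game lobby finished and play actually began.
    """
    return [(t - real_start, e) for t, e in events if t >= real_start]

events = [(5, "observer joins"), (120, "game starts"), (137, "first Probe")]
print(normalize_gameheart(events, 120))
# [(0, 'game starts'), (17, 'first Probe')]
```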

And on that note, if you’re interested in working with StarCraft data (whether replays, profiles, or anything else), I recommend you check out the new API and Data Analysis Forum that Blizzard made. As you might have guessed, the StarCraft developer community has some amazing people at the forefront, and I think it’s a supportive and generous group of people. If you’re interested in contributing or have an idea, chime in there or check out and contact someone else with a project in this thread. No matter where you’re coming from, we could always use a helping hand.