Wednesday, February 21, 2018

One of the things that I loved about my work on Eon Altar is that it taught me a lot of other skills and brought me many experiences I wouldn't have gotten at a larger company. On the other hand, those experiences weren't always positive.

Today I'm going to talk about forum moderation: how I attempted to build a community, and keep that community positive. I'm going to do that through the story of one time I had to bring down the banhammer with a vengeance. Despite having receipts, you won't find names in here. The purpose of this story isn't to shame, but to illustrate. And maybe a small amount of catharsis.

When you ban someone on the Steam forums, you get a private thread between the banned person and the moderator, where you as the moderator can explain why you banned them, and they can reply, possibly to explain themselves or beg your forgiveness. Of course, in this case the banned party got even more aggressive than they had been on the forums, and accused me, the dev and moderator, of being biased towards long-term community members.

Me, after dealing with someone aggressive on the forums.

Let Me Tell You a Secret

Surprise: yes, as a dev and moderator I am totally biased towards long-term community members in good standing. Now, caveat: if a long-term member had done what the banned person did, I'd have banned them too. But there are really good reasons you as a developer and moderator would actually want to be biased towards long-term community members.

First of all, part of the reason forums exist is to develop said community: a group of people who identify with and enjoy your game, want to help each other out, and communicate with the developers. It benefits us developers because it lightens the workload of helping customers out in a timely fashion, and it benefits the community members because they get the developers' ears for criticism and feedback.

That's not to say I ignored feedback from people who weren't long-standing community members. Fresh eyes always provide useful feedback. But at the same time, it was usually easier to understand where someone was coming from with feedback if I was already familiar with them. History breeds context, and context increases communication throughput in my experience.

The other interesting part about said bias is that when a prominent community member brought me feedback to disseminate among the team, even if it was hyper-critical, it was often done as a labour of love. They enjoy our work, and they understand that we're people too, so reading that hyper-critical feedback is easier on our feelings. That said, you can still tell--even with new community members--who's there to shit on your sandwich and who's there because they truly want to make the game better for themselves and others, by the language they use and their specific approach to criticism.

How Do You Get Banned From a Community?

I know the previous section was a little vague, so here's some meat and potatoes to give some precise examples of the things you can do to get bushwhacked from forums.

1) Accuse the Developers of lying about a specific feature. 7 times in 7 different threads.

One morning I woke up to check the forums, as I was wont to do, and lo, we had like 7 new thread replies. Checking them, they were all by the same person, and they were all variants of, "Your Kickstarter said there'd be an online mode, and now there's no online mode, y'all are liars."

Okay, first of all, that Kickstarter failed. It had failed 3 years before those posts, no less. Sorry, not sorry, but we can't be held accountable to promises made in a funding drive that didn't get us the funding. We had to radically alter the game after that failure because the game in its Kickstarter format wasn't viable, so anything in the Kickstarter could barely be applied anyway.

To top it off, after the ban, the poster accused us of being shady about it, keeping it behind the scenes. At which point I linked to about 4 different interviews of us talking about the failed Kickstarter and what it meant.

I mean, really, if you're going to accuse the developers of lying, make sure you've done your research, and make sure you're not making any unreasonable assumptions. And then on top of that, approaching it more diplomatically rather than aggressively spamming the forums would have left things more open to conversation. I don't have the time or lack of emotion to deal with someone who's going to argue with me in bad faith. Not going to happen. I've got video games to make.

2) Slag others on the forums with personal attacks.

After being given a warning about forum spam, and a warning (and post deletions) about unfounded attacks on the developer, attacking others on the forums is a Bad Idea™. Now, these attacks were on long-standing community members in great standing: they'd contributed significantly in a positive manner, including criticism. And to be fair, said community members didn't hold themselves perfectly in return.

However, as mentioned in the section above, I'm willing to give long-standing members some leeway. Yeah, it's blatant bias, but frankly I didn't give a damn. They were still a net positive to our forums, whereas this newcomer throwing shade and slagging people directly was a net negative. Actions don't get evaluated individually; it's all in aggregate. Eventually we decided it wasn't worth it.

3) Continue to be aggressive in the ban thread.

After we banned this person, they wrote an essay about why we were in the wrong for banning them, how precisely we were liars, and how this other forum member had wronged them and should be banned too. I responded to the first essay with the logic about the Kickstarter failing, the counter-evidence to us being "shady", and why the personal attacks were the last straw.

A few days later another essay came through on the ban thread, conceding some of the points and then getting aggressive again. At which point I switched off and stopped reading the thread, and didn't return until said poster added a 3rd essay months after the fact. Which I also didn't read. Again, I don't have the time or emotional capacity to deal with someone who clearly has a smear agenda against us. I never heard from them again after that, and it was blissful.

Consequences

At the end of the day, we as developers eventually decided this person's contributions were a net negative on our business.

From a community-building perspective, they were being extremely negative and extremely mistaken in the accusations they were making. Something like that could turn our community into a bloodbath. The good news is we had already done a decent job of cultivating the community: many of this person's posts were reported for moderation, quickly, even as we were discussing what to do about the situation.

From a morale perspective internally, it was a very stressful situation. It ate a lot of emotional energy dealing with this person, and time we could have been using to develop features. This is partly why bigger companies have community managers: that level of indirection is extremely helpful for reducing the negative effects of jerks on your development team. The cost of community managers is losing direct interaction with customers acting in good faith. Direct interaction is amazing for getting unfiltered, honest feedback, and being able to ask follow-up questions and have a legit discussion is incredibly useful. But sometimes, eventually, it's just not worth it.

From a monetary perspective, if they had requested a refund of the $15 they paid, I'd have happily given it to them--who knows, maybe they did, it's all automated. Not every customer is a blessing, even when you're desperate for customers. The customer isn't always right, and a customer holding your forums hostage isn't really a customer anymore. If you walked into the GAP and started yelling loudly about how you were promised cardigans because it's fall, and there are no cardigans this season, you would be escorted from the premises. Just because it's online doesn't make the scenario different.

However, our quick and decisive response paid dividends, quickly. The negativity was quarantined, and our community got to go back to talking about the game--the good, the bad, suggestions, interests, ideas, and so on--instead of having to babysit someone who was slagging them. The total negative effect was quite muted. The ban thread was draining, however, and honestly I regret putting so much time into it. While I wanted to make sure our butts were covered, it cast an unnecessary shadow over our development team.

There are real-world productivity and monetary consequences to trolls in your forums, and you need to deal with them quickly and decisively. Bias isn't always a bad thing when it comes to cultivating your community: there are very good business reasons to feed that bias. It was a lesson I had to learn that day, the hard way. #IndieDev, #EonAltar, #DeveloperTales

Monday, November 13, 2017

I'm starting a new job tomorrow, and am no longer the lead programmer for Eon Altar. For those who follow me on Twitter, this won't come as a surprise, but given I've been semi-documenting my indie dev journey, I figured I'd talk about it here.

As to why: we completed and shipped the entirety of Eon Altar Season 1. From start to finish it was an incredibly ambitious project, and the fact that we finished and shipped it is, frankly, massive. Add to that a 92% positive rating on Steam, and I can say without equivocation that I am proud of what we shipped. That said, it was 4 years of my life, and, among other reasons, I figured it was time to get some other industry experience.

Looking Back

If someone tells you that they want to build an RPG in a year, then says it's also going to have a completely unique game mechanic/control style, laugh at them. Seriously, just shake your head and laugh. Our original schedule was stupid, for lack of a better term. It left no room or time for iteration, and in the end we used up over 3.5 years total to get it out the door in its entirety. That said, I don't regret it--at all. What it did mean, though, was going from a team of 25+ people to a team of 4-5 people for the latter 2 years, and that's hard.

The extra time was necessary. It allowed us to polish up a lot of features, add missing bits and pieces, and actually iterate on the combat and gameplay formulas. You'd be impressed at the amount of work so few people can do. It also helps that the core team was mostly on the same page as to the direction of the game, which meant very little communication overhead. Sure, diversity of viewpoints is super useful (and when our team was bigger, it WAS useful), but I found it was more useful early in the process rather than late. Once you're in the polish phase, it's mostly mechanical work. User feedback, bug fixes, small adjustments because you can't afford rewrites.

But having so few people also meant we were stuck wearing a lot of hats. By the end, I was our tech support, forum moderator/community manager, lead programmer, project manager, QA lead, QA tester, copy writer, copy editor, IT administrator, and build and deployment engineer. The list goes on. It was a little overwhelming at times, though I got an immense amount of experience doing it.

But even just for the engineering, working on Eon was an amazing process of learning something entirely different every day and immediately applying it. One day I might be working on the combat engine. The next, code optimizations. On day 3, save games. Day 4, Steam integration. Day 5, animation bugs. I was never bored, because I was constantly learning. It was also exhausting, because I never got to revel in what I learned; I was always on the edge.

But damn, did we ever pull it off. Nobody--nobody--in the industry was doing what we did. Light role-playing meets local co-op video game was a unique mix, and the mobile controllers went far beyond what JackBox did (though JackBox is amazing and I love it to bits), so we didn't have anyone to copy from. I even got to do a talk at PAX DEV about the process, which was fantastic. Such a great experience, and people apparently got some useful information out of the talk, which was icing on the cake.

And that gameplay resonated with a fair number of people. Our reviews are generally extremely positive about the unique play that Eon Altar brought to the table. We even had an unintended but amazing audience segment: spouses. The number of reviews that talk about people playing with their significant others--many of them non-gamers, no less--and how much fun they had was mind-blowing. We had made an RPG-lite that was approachable and fun, with enough depth that even hardcore players could enjoy min-maxing.

So what's next for Eon Altar? I can't say. That's up to the remaining team, and I wish them all the luck in the future. Eon is a fantastic project, a fascinating method of gameplay, and a great property. Yeah, we made mistakes, and the game itself is far from perfect, but I don't regret the project whatsoever. It will always be my dream project.

Moving Forward

So what's next for me? I've been hired as a Senior Engineer at an independent studio here in Vancouver, BC. The studio is large enough that I'll be able to focus on engineering almost exclusively, and it's been around long enough to have a stable business plan supporting its employees. I can't talk about my next project within the studio, but I am excited to work in an office again, with other programmers.

While I can learn a lot about gamedev on my own, improving one's programming and software engineering is difficult to do in what's effectively a vacuum. Basically, when given a specific problem, it's relatively easy to learn or deduce how to solve it, but things like programming tools, style, and practices are difficult to learn alone because you don't necessarily need them to complete your work. Your work becomes better with said knowledge, but it's not strictly necessary.

I'll be keeping an eye out for my former coworkers, though, and seeing how Eon Altar fares moving forward. Eon Altar is a high bar to meet in terms of interesting engineering/design projects, and I'm not sure anything I work on in the near future will meet that bar, but that doesn't mean I won't be giving my new projects my all--I've far too much professional pride to do otherwise. Being able to focus primarily on engineering also frees up brain cycles for other things, like maybe blog posts? We'll see; I definitely miss blogging about game design stuff! But tomorrow, new job!

Tuesday, November 7, 2017

So, that happened. I'll be honest: I definitely wasn't expecting Blizzard to actually work on a Vanilla server, given the sheer engineering difficulties. I'm also still not convinced there's as much money in it as some folks posit, but some number cruncher at Blizzard must've decided it was worth it. Perhaps even just as a marketing tool for current WoW.

But that aside, it seems Blizzard is at least entertaining it seriously, given the BlizzCon announcement. In recent years the company has been gun-shy about announcing products that might not ship (Titan and Ghost, Warcraft Adventures), so I expect this effort to come to fruition eventually. As per my linked blog post above, there are a lot of engineering problems:

Old (possibly lost) Assets

Old (possibly lost) Codebase

Old Hardware Dependencies

OS Updates

Security Fixes for both Client and Service

Build Pipeline

Battle Net Integration

Authentication Updates

Customer Computer Updates (Graphics APIs/Cards)

Network Optimization

Server Crash Fixes

etc.

The list is extensive. But there are a few hints lying about Twitter and the engineering panels, and with a little shower thinking I came up with a potential solution that they may be working towards.

"This is a larger endeavor than you might imagine, but we are committed to making an authentic, Blizzard quality classic experience. We want to reproduce the game experience we all enjoyed from classic WoW, not the actual launch experience."

They're couching this in careful language to suggest that the experience won't be identical, that they're not just shipping the old client. They're also suggesting that it's a huge engineering/design effort.

With all that in mind, I think their solution is not to use an old client and re-engineer the old servers, but to get the old content working in the current server/client codebases.

New Codebase

The neat thing about that conclusion is it handily solves nearly every single engineering issue I brought up above. All the code for security, network protocols, database access, authentication, build pipelines, optimization, graphical display, server hardware-specific optimizations, and so on could be shared code. And what better way to handle shared code than a WoW shared infrastructure team?

Something exciting is happening: on Nov 20th I'm moving off the Server Team to the (newish) WoW Shared Technology team. (1/?)

Kurtis McCathern, a prominent WoW server dev, is moving to the "newish" WoW Shared Technology team. Why bother making a shared tech team unless you were planning on versioning or forking your product? And of course, this shared tech will need client and server developers.

In the first engineering panel at BlizzCon (link requires Virtual Ticket), the WoW engineer Omar Gonzalez talked about how much of a divide there is between the infrastructure code and the Lua scripting the designers do, and it's pretty stark how much feature work lives in scripting land. With that kind of divide, it's easier to envision a shared C++ codebase, with a much smaller subset of code for feature-specific work that's almost entirely Lua with maybe some C++--like reputations, weapon skills, etc.

Refactoring a current in-production codebase into smaller shared chunks is not a fast process, nor an easy one. I went through a similar process when I worked on Microsoft Office. It took 3-4 engineers over 3 years to get a bunch of the 30+ year old shared code into smaller shared libraries that could be built and modified independently. Now, Office is a (much) bigger, older codebase than WoW likely is (I can certainly guarantee older), so it's not quite the same. But the types of tasks are certainly parallel. I imagine that some of the work, especially around Battle.Net integration, has already been done, though.

Old Content

So what about the old content? Lost assets? Item drops? Boss fights?

The WoW client contains the assets for rendering the world, rendering enemies, quest text, music, etc. The WoW servers contain the information for where to spawn things, AI scripts, instancing, item drops, quest triggers, etc. The servers also host the databases that contain the data for running the game and running player characters.

Recreating the client assets is "simple". Assuming they don't just have a versioned set of assets lying around, they find the version they want in an old client, grab the MPQs, rewrite their MPQ-cracking algorithms (because the format has changed significantly over the years), and extract the data. They could then repackage it in current WoW formats such that the current client could actually read it. That assumes the current tech can even render that data; it's possible the data may need to be massaged to be renderable with today's technology.

Recreating the server assets is harder. Much harder. Private servers have generally had to reverse engineer that data from personal memories, Thottbot data, etc. We don't know what kind of data Blizzard has in the backend--assuming they have a copy at all--and it's quite possible that they'd have to manually re-enter that data into a current-style database anyhow. But AI logic, instancing, quest tech, etc. can be shared with current WoW, rather than recreated.

The other aspect to all this is recreating old features and deprecating current ones. Weapon skills, hit cap, old talent trees, resistances, MP5, spell ranks, and much, much more will need to be recreated. I imagine a lot of this will be designer feature work in Lua, but it'll still require some code support. How much of that will be from memory, and for how much will they be able to stare at old code?

"Responsibilities include building gameplay systems, transforming database data, building UI elements, repackaging binary distributions, and working closely with designers to revive the classic game elements."

Transforming database data suggests they DO have the old server data and need to transform it into the newer database format. Repackaging binary distributions also suggests my thoughts about extracting the old client data may be on the ball.

A Massive Effort

As Brack mentioned, this is a larger endeavor than you might imagine. I've probably missed things in my analysis. And I imagine the engineering teams are bigger than the 5 people I suggested for financial solvency in my original blog post, which laid out about $2M USD over a year. The cost and timeline are probably going to be bigger than that. Possibly significantly bigger.

But using current technology solves a lot of engineering hurdles and actually brings this into the realm of logistical possibility. However, it also means that Classic WoW will likely feel a lot more modern than Vanilla WoW did. I'm expecting Battle.Net integration, LFD, and phasing for overloaded servers, because those will "come for free" with a shared infrastructure team (it's not really free, but certainly a lot less work). We also may not see Vanilla-era bugs or system quirks like debuff limits. It depends on how faithful they want to be to the original game, how much time they're willing to invest to get those details correct, and how many systems they want to diverge versus building on the shared infrastructure.

It's going to be fascinating from an engineering perspective, and what I would do to be a fly on those walls right now. #WoWClassic, #Engineering

Leveling, among other advancement schemes, is at its most basic a reward for time. Play a little longer, grind a few mobs, finish a few quests, and ding! You get a level, and along with it things like new abilities, better stats, talent/skill points, or any number of other things. Developers of MMOs know intimately that if you want to keep players playing, you need to give them rewards.

RPGs definitely enjoy their progress bars. Progress bars are so good at player retention/engagement that pretty much every game genre has borrowed them. MMOs are arguably the royalty of progress bars--outside of clicker games, of course. But what happens when your playerbase balks?

I've Got The (Artifact) Power

In the latest expansion, World of Warcraft has created an alternate advancement scheme of sorts with Artifact Weapons. Starting at level 100, and continuing well past the level cap of 110, you gain Artifact Power to level up your Artifact Weapon and gain traits that increase your character's power. Once you've gotten 51 traits, every point after goes into an "infinite" trait with an exponential increase in the Artifact Power required for each point. To offset that, players automatically get more Artifact Power from each quest/boss kill/other activity as real-world time goes on.

"Concordance" is the infinite trait in the upper right. In this screen I have 7 levels of Concordance. It takes about 3.8 Billion points of Artifact Power to get an 8th level.

I'm glossing over the previous patches and focusing on the current implementation, as it's more in line with their original vision based on their interviews. It's also important to note that WoW is far from the first game to have an alternate advancement system. Diablo III of course has Paragon levels, but much earlier there was the original Everquest with what it called "Alternate Advancement" (dun dun duuun! That's where the term was pretty well coined).

What this does is ensure that players who don't play for a while can catch up, while preventing players who play a lot from getting too far ahead of the rest of the player base. It also ensures that even if you play casually, you're never really too far behind. I pretty much only log on for raids and the occasional quest run once every couple of weeks, and I'm within ~5 levels of more hardcore players.

Which kind of almost feels like it defeats the purpose of the alternate advancement. We're basically just moving forward on Blizzard's very defined schedule. Not that that's a bad thing necessarily, given most designers will make spreadsheets trying to figure out advancement timing and schedule. Nor is it that different from gear drops from a raid, given those fall within a specific power level. It just feels really naked now.

But by creating this alternate advancement, the designers make it easy to parcel out Artifact Power as partial rewards. Instead of dropping a big piece of gear, or giving a bump to a reputation that has no impact on your character's power level, they can give you something in smaller, bite-sized pieces that allows you to continue progressing your character regardless of the activity you're doing. Filling that bar, which is industry-proven.

The Gear Treadmill

While Artifact Power is new to WoW, the gear treadmill is not. Most MMOs with an endgame beyond leveling use gear acquisition as a way to increase character power without bumping levels. WoW extended this treadmill by allowing pieces of gear to randomly roll higher stats (Titanforging) or different bonuses (gem sockets, bonus tertiary stats).

What this means is that there's no perfect set of gear anymore. Or at least, it's not attainable within a human's lifetime, let alone a raid tier's lifetime. Previously, raiders could make a list of the gear they wanted (called "Best in Slot", or BiS) and aim to get it. For players who were never going to get the best gear in the game anyway (at least 95% of the player base), it means you occasionally get a nice surprise. Like my Paladin's bracers that should've been 910, but rolled 940 the other day (yay!).

You're Never Finished

But interestingly, the more hardcore contingent really dislikes these alternate advancements. If you're running for World First, you need every advantage you can get, which means busting your butt to stay on the forefront of that Artifact Power wave in the previous tier. An impressive amount of work, really, given the minute advantages it brings--especially given that if they were to just wait a couple of weeks, they'd be in the same place numerically. Similarly for the gear: because gear can proc better variants at random, there's no end to the gear farming if you're aiming for the perfect (or at least best-ish) loadouts.

Really, the issue is that there's no endpoint. No finish. A large part of it indeed has to do with wanting to have a life outside the game and still be on the forefront; I don't doubt that. Logically it follows. But I wonder if that's really the only reason, especially since that reason only applies to a very small minority of the playerbase, and the complaints seem to be coming from far more than that minority. Maybe there's another reason that isn't recognized by players or the devs?

I've been playing another game, Alliance: Heroes of the Spire, where the developers made a similar design decision.
At some point, nearly every hardcore player had a good chunk of the good heroes, and people were pulling duplicate heroes that were a disappointment--a waste if you had already "finished" that hero. The hardcore players were effectively done, so to ensure further engagement and to make dupes feel like a good thing, the developers created a Rank II where you could power up an existing hero by merging them with a maxed duplicate hero.

The hardcore portion of the player base reacted very negatively. Part of it was the timing of the rollout and the communication around it, but a lot of it was ostensibly based on the fact that there was suddenly more vertical power creep to grind through when players thought they were done. More work.

On the face of it, as a game dev myself, the extra vertical advancement complaint felt farcical, given A:HotS is still relatively young and every other game in its genre has a similar mechanic for vertical advancement. But the backlash felt similar to what I've seen in the hardcore WoW community.

Changing the Rules

What both WoW and A:HotS have in common is that the devs changed the rules of advancement.

For a decade--literally a decade--WoW had a level cap, you got raid gear, then the next expansion dropped, putting everyone on an even playing field, and the cycle repeated. Now, with the gear changes and the introduction of Artifact Power, the rules of how to advance between patches have changed completely, and the playerbase hasn't yet figured out where the line sits between meaningful advancement for each individual player and cookie-clicker-esque busywork.

For A:HotS, the maximum a hero could be was 6*60 (6 star, level 60). That changed after people had already burned dupes and had a stable of heroes at maximum. Similar to the flying/no-flying argument in WoW, many players in A:HotS felt those changes somehow invalidated their previous work and rewards. The system worked in a specific manner, the system changed, and that can feel like a betrayal.

As I mentioned earlier, I wouldn't attribute fear of change as the only issue.
Each advancement system mentioned so far has cons (and I've explored those cons), and those cons may well be a bigger reason for certain players than a fear of change. The gear BiS problem in WoW, for example, only affects Mythic raiders who can finish the current tier in a reasonable amount of time before the next tier drops. For the rest of the playerbase, who'd never have gotten Best in Slot anyhow, it's not an issue that should ever crop up outside of theory. But what makes that scenario different from an Artifact Power bar that can never be completely filled? At an abstract level, they really aren't different, but I've noticed Artifact Power affecting casual players while the gear-proc thing doesn't even register, which is really interesting from a player psychology perspective. Again, maybe it's because the mechanics of Artifact Power are far more naked than gear's, or maybe it's because it's new and so subject to more scrutiny?

And as I mentioned in my original post on advancement:

But WoW and other MMOs have the problem that they're really two games rather than one: a leveling game, and an end-game. And what system is good for one of those games isn't really good for the other, as Blizzard's experiments have proven.

Except now I can identify a third game: the leveling game, the end-game, and the top 5% of players who can actually finish the hardest content. What's good for the end-game doesn't seem to be that good for the top 5%, and vice versa. Can they be reconciled? And then there's the potentially blasphemous question: does it matter if they aren't? #WoW, #AllianceHotS, #GameDesign

Saturday, July 22, 2017

One of the things that makes Alliance: Heroes of the Spire so deep is the serious amount of gear customization you can do: six gear slots, with set bonuses for matching pieces that can seriously change how you'd deploy a hero, plus gems to slot in as well. Gear set bonuses come when you equip either 2 or 4 pieces of a given set. The 2-piece sets are generally raw stat bonuses, whereas the 4-piece sets have the interesting effects:

That is seriously a lot of options. Today's post is going to look mostly at the 4-piece sets, though I may talk a bit about the 2-piece sets in relation. How much does each set help? When might you use each 4-piece set? Let's start with some of the easier-to-discuss sets.

Lifesilk
+30% Healing Done, HoTs can Critical Strike

Lifesilk is pretty straightforward: you want this unit to do more healing. The details: the +30% is multiplicative (so if your Pyrus heals for 30% health, with Lifesilk he'd heal for 39% health), and critical HoTs heal for 15% of maximum health instead of the basic 10%.

Since critical HoTs require critical strike, you may want to pair this set with things that increase your critical strike, but if the unit doesn't have any HoTs, it'll go well with literally any 2P stat increase. On the other hand, for high-level arena, you may end up using a more defensive 2P so your healers survive. My Anat, for example, runs Lifesilk and Sunshield so she can have the extra block.
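As a quick illustration of the multiplicative math, here's a tiny sketch; the function names are mine, and the numbers come from the set description above.

```python
# Sketch of Lifesilk's healing math. Function names are my own; the 30%,
# 10%, and 15% figures are from the set description.

def lifesilk_heal(base_heal_pct: float) -> float:
    """+30% Healing Done, applied multiplicatively to the base heal."""
    return base_heal_pct * 1.30

def hot_tick_pct(is_crit: bool) -> float:
    """HoT ticks heal 10% of max health, or 15% on a critical tick."""
    return 0.15 if is_crit else 0.10

# Pyrus's 30%-of-health heal becomes 39% with Lifesilk:
print(lifesilk_heal(0.30))  # ~0.39
```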
Witchstone: Buffs/Debuffs can Critical Strike

Witchstone is possibly the most complex of the sets, because what does a buff or debuff critting even mean? You can find an exhaustive list on the Alliance website here, but here are the rules of thumb to remember:

If the buff/debuff has a number, crits increase it (ie: Armor Break goes from -50% armor to -75% armor)

If the buff/debuff doesn't have a number, crits make them unpurgeable (Stuns, Sleep, Silence, Debuff Immunity, etc.)

Bombs are the odd one out, on a crit they stun when they explode

Bar drains and Bar fills are affected multiplicatively

Witchstone cannot cause HoTs to crit

Witchstone is a good choice if you have buffs or debuffs you want to supercharge. An unpurgeable Silence or Stun can do wonders in Path of the Ancients or some Lost Dungeons where one unit constantly cleanses their team. If the unit you're bringing is less about damage and more about control or support, Witchstone may be a good fit.

An example here is Sunslash, the Order Sabretooth. He doesn't do much damage, but making his AoE Mark a Critical Mark effectively increases your team's direct damage output by 15.4% (marked targets take 150% damage from a Critical Mark instead of 130% from a basic Mark, and 1.5 / 1.3 ≈ 1.154).

Titanguard: 15% Less Direct Damage Taken, Redirects 30% of Party Damage to Self

Titanguard is a fun one, and in high-level arena you'll often see it appear in what seem at first glance like weird places. The first benefit is a straight-up 15% damage reduction on Direct Damage. The damage transfer effect is not Direct Damage, nor are DoTs or damage reflect, so those are not affected by this reduction.

The other benefit is redirecting 30% of the direct damage taken by other units to the Titanguard unit. Again, direct damage only: DoTs, damage reflect, and other Titanguard transfers don't count. This will often allow some of your squishier units to survive that much longer, even if you don't have a Taunt up, or let them survive against AoEs.

This does create a weakness in Titanguard units: it's often easier to kill them indirectly by piling damage onto other units, especially if said other unit has a lot of HP but not a lot of Armor--Petra comes to mind here. Titanguard units may also be susceptible to teams with a lot of high-power AoE attacks for a similar reason: 3 units' worth of damage redirecting at once can drain the Titanguard unit's HP bar very quickly.

Finally, multiple Titanguard units work interestingly: the 30% damage redirect is calculated first, then split among all available Titanguard units. So if you have 2 Titanguard units on your team, each one takes half (15%) of the redirected damage. You also cannot redirect damage to yourself, so if a Titanguard unit is the one being hit, it's not considered an available Titanguard unit in the damage transfer calculation.

Often you'll want Titanguard on a unit that has a lot of health to begin with, since the damage transfer cannot be mitigated: Shields will still absorb it, but nothing else reduces it. Units like Petra, Valorborn, or even the Mechanics are good choices for Titanguard.
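The redirect-splitting rules above can be sketched in a few lines of Python. This is a simplification (the function name is mine, and the unit names are just examples): 30% of a direct hit is peeled off and split evenly among Titanguard wearers other than the unit being hit.

```python
def titanguard_redirect(hit_damage, titanguard_units, target):
    """Split the 30% Titanguard redirect among available Titanguard units.
    The unit actually being hit can't redirect damage to itself."""
    receivers = [u for u in titanguard_units if u != target]
    if not receivers:
        return hit_damage, {}
    redirected = hit_damage * 0.30
    shares = {u: redirected / len(receivers) for u in receivers}
    return hit_damage - redirected, shares

# Two Titanguard wearers: a 100-damage hit on a squishy ally leaves roughly
# 70 on the target and sends roughly 15 to each Titanguard unit.
remaining, shares = titanguard_redirect(100, ["Petra", "Valorborn"], "Anat")
```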
Ironclaw: 35% Chance of A1 Counterattack

Ironclaw and Swiftsteel are relatively similar mechanically: both give you more (automatic) uses of your first ability. The difference lies largely in the trigger. Ironclaw requires you to be attacked in the first place, which makes it a good fit for units that get attacked often: taunters, provokers, and guarders.

Ironclaw is effectively a DPS increase, but it can also be useful if the Hero has a debuff on their A1 you want to apply as often as possible. Gaius' A1 stuns, for example. Otto's A1 hits like a truck. Both are good reasons to bring Ironclaw to the table. In the case where you want more debuffs, you'll likely want to pair it with a surplus of Aim. Especially for tanks, going mostly Aim instead of Block feels weird, but if the enemy team is mostly stunned anyhow, it's not a big deal.

Ironclaw on farmer units that have self-healing (e.g. Otto, Petra) is ridiculously effective, since they're always getting attacked. Note that you cannot Counter more than once per attack, so if your Hero already counters as part of their kit, Ironclaw is not likely a great choice.

It's a pretty straightforward set, but look lower in the post for an analysis of Ironclaw vs. Swiftsteel, because that's where things start to get muddy.

Swiftsteel: 40% Chance of a Bonus A1 Followup, +50% Critical Strike for Damage

Swiftsteel is a bit more complex than Ironclaw. Every time you use an ability, any ability, you have a 40% chance of following up with a Free Attack: a usage of your A1 against the target. If your target is friendly, or the ability has no target, the Free Attack chooses a random enemy, ignoring Taunt or Provoke.

Rallies and Counters count as ability usage for Swiftsteel procs, so you can get bonus A1 attacks off those. However, a Free Attack cannot directly proc another Free Attack. A Free Attack proccing a Counter on the opponent, which procs a Counter on you, which procs a Free Attack off that Counter, can occur, though. It's rare, but when it happens you basically watch the two units hit each other over and over until one of them dies or a proc fails. It behaves like a bug, but it's permissible under the rules of Counters and Free Attacks.

Swiftsteel also provides +50% Critical Strike, but for damage only; heals do not benefit. This means you'll often pair a Swiftsteel set with a +Power or +CritMult weapon rather than the typical +Crit weapon many go with.

The reasons for going Swiftsteel are much the same as for Ironclaw: it's a DPS increase, and if your A1 has a great effect, you may want more of it. Midorimaru (or any Samurai Cat) is a great case for Swiftsteel to spread more DoTs, for example. Look below for an analysis of Swiftsteel vs. Ironclaw, and Swiftsteel vs. Wartech.

Wartech: Half of Critical Strikes are Supercrits (+200% CritMult)

Supercrits. The name sounds awesome, but what is a Supercrit? It's a Critical Strike with an extra 200% CritMult applied. For example, if you normally have 50% CritMult, a Supercrit does 250% extra damage instead of 50% extra.

Unless you're rocking a surplus of Crit gem slots, you'll almost always want to pair this with a +Crit weapon. It's ridiculously straightforward: if you need burst damage, Wartech is pretty much the way to go. But you can't rely on it. On average it's really only increasing your CritMult by 100%, since only half your Crits have it applied. So: big, bursty, swingy damage.

Often, units with big AoEs get Wartech. If you're not terribly enamored of your Hero's A1, Supercrits may be the way to go for a damage increase. But how does it compare against just 2 Dragonfury sets (+50% Power)?

For the sake of simplicity, let's pretend everything else is the same: stat allocations, etc. 100% Crit, no CritMult beyond the base 50%.

If we do 100 base damage, at 100% Crit with 50% CritMult our damage is 150. With +50% Power, our base damage changes to 150, which after a crit is 225. With Supercrits, our base damage is still 100, but a Supercrit is +250% damage, which is 350; the floor half the time is still the normal crit, 150. So an average of 250 instead.

So you can see that Wartech increases the average damage dealt a bit, even over +50% Power, but it's swingy: sometimes you'll do way less, sometimes way more. If you have less than 100% Crit, the benefits of Wartech also go down. In this particular instance, you need roughly 67% Crit for Wartech to match 2 Dragonfury sets on average (at Crit chance p, Wartech averages 100 + 150p against Dragonfury's 150 + 75p, and the two cross at p = 2/3). Even where the average damage is lower on Wartech, it still has the higher maximum, 350; you'll just see it less often.

The above only holds for units that scale 1x with Power. If their abilities scale better with Power, that Crit inflection point rises further; if they scale worse with Power, the threshold goes down.
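The worked example above can be sanity-checked with a few lines of Python. This is a sketch under the same simplifying assumptions (100 base damage, 50% base CritMult; the function name is mine, and the game's real formulas may differ):

```python
def avg_damage(base, crit_chance, crit_mult, wartech=False):
    """Expected damage of one hit. With Wartech, half of all crits get an
    extra +200% CritMult (a Supercrit)."""
    normal_crit = base * (1 + crit_mult)
    supercrit = base * (1 + crit_mult + 2.0)
    crit = (normal_crit + supercrit) / 2 if wartech else normal_crit
    return base * (1 - crit_chance) + crit * crit_chance

# The post's numbers: 100 base damage, 50% CritMult, 100% Crit.
dragonfury = avg_damage(150, 1.0, 0.5)     # +50% Power: averages 225
wartech = avg_damage(100, 1.0, 0.5, True)  # Supercrits: averages 250
```

Evaluating both at lower Crit chances shows where the curves cross; under these assumptions the break-even Crit chance works out to 2/3.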

Enemy teams that have healers--Magitek Bards and Unicorns especially--basically have an HP reset button every few rounds, so burst becomes very important when fighting those teams, making Wartech attractive.

Analysis

Ironclaw vs. Swiftsteel

Because they're so similar, Ironclaw and Swiftsteel make for an interesting question of which to use. To ensure the same number of potential A1 procs a round, an Ironclaw unit would have to be attacked 40% / 35% ≈ 1.14 times a round. However, Ironclaw always hits the attacking unit, whereas Swiftsteel hits the unit being attacked (usually), or a random unit if none is targeted.

Swiftsteel, if you get really lucky on buff procs, can wreck backline units, as it ignores Taunts. It's not an effect you can count on, however: you'd have to proc it at 40%, and then randomly select the backline unit (25% chance if nothing else is dead), so if you use a buff with Swiftsteel, you have a 10% chance to hit any given enemy unit. It can be deadly to the opposing team, but not something you can build around.

Every unit in the game has an A1 that will end up having to target a Taunter, so if you have an Ironclaw Taunter up, eventually they'll attack your Taunter, and you have a little better than a 1/3 chance to hit them back. So basically, Swiftsteel is great for focus fire, and Ironclaw is great for wrecking backline units.

What makes this complicated is Swiftsteel's +50% Critical Strike for damage. It means you can run a +Power or +CritMult weapon instead of a +Crit weapon: an extra +51% Power or +67% CritMult for a maxed-out 5* weapon (which, depending on your ability scalars and your stat allocations before that, could be a 50%+ damage increase or even more, but is more likely in the range of +25%ish). What that means is that the actual "must be attacked this often" value for Ironclaw to match Swiftsteel is higher:

(40% / 35%) × (1 + 50%) ≈ 1.71 attacks per round

So taking the upper bound of Swiftsteel's extra damage output, the Ironclaw unit needs to be attacked an average of 1.71 times a round to keep pace, which most tanks easily hit, even when they aren't tanking (given the prevalence of AoEs). So even with Swiftsteel's +Crit, Ironclaw is still fairly powerful in the tank niche. However, if you have a Counter already built in (e.g. Valorborn, or one granted by Diana), or Rallies, you may be better off going Swiftsteel: you'll have more than one chance to proc it per round, and you'll quickly outstrip Ironclaw.

Swiftsteel vs. Wartech

Here's where comparisons get complicated. The two act so differently that direct comparisons don't quite work. I'll make some assumptions and shortcuts so they're easier to compare as DPS increases, but what a Hero's A1 is, and what you're aiming for, really dictate this decision. For DPS purposes, though, Swiftsteel is effectively 40% of an extra A1 plus 50% bonus Crit, and Wartech is effectively +100% CritMult.

Let's make another assumption: 100% Crit regardless of set, which frees up Swiftsteel for a +Power or +CritMult weapon. Under that assumption, a +CritMult weapon gives you 2/3rds of the CritMult that Wartech gives, which, combined with the 40% extra A1 DPS on average, means that if you're only using your A1, you'll do more damage with Swiftsteel. On average.

"Average" is a dangerous word here, however, because most arena fights are over in a couple of rounds (or drag on forever). You might go +Power for more consistent results instead, but similar to the calculations we did for Wartech alone, in both cases Wartech will still burst higher than Swiftsteel.

The other thing that makes "average" dangerous is that Wartech heavily favours AoEs. A single AoE can only proc Swiftsteel once, but you get the potential benefit of Wartech on every hit of your AoE.

So once again: if you want consistent output, Swiftsteel may be better, but Wartech gives you better burst capability. If your A1 is an attack you want going off a lot, Swiftsteel is probably the way to go. If most of your damage is AoE, you probably still want Wartech. And anything that gives you extra potential Swiftsteel procs (Counters, Rallies) favours Swiftsteel as well.
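The 1.14 and 1.71 attacks-per-round figures in the analysis fall out of a one-line calculation, sketched here with the proc rates from the set descriptions (the function name is mine):

```python
SWIFTSTEEL_PROC = 0.40  # chance of a Free Attack per ability used
IRONCLAW_PROC = 0.35    # chance of a counterattack per attack received

def ironclaw_breakeven(swiftsteel_damage_bonus):
    """Attacks per round an Ironclaw unit must receive for its expected
    counterattacks to match Swiftsteel's expected follow-ups, scaled by the
    extra weapon damage Swiftsteel's +50% Crit frees up."""
    return (SWIFTSTEEL_PROC / IRONCLAW_PROC) * (1 + swiftsteel_damage_bonus)

equal_procs = ironclaw_breakeven(0.0)  # ~1.14 attacks/round for equal procs
upper_bound = ironclaw_breakeven(0.5)  # ~1.71 with a +50% damage weapon swap
```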

The Future of Swiftsteel

The Swiftsteel changes effectively made Wartech niche (where before, Wartech was the "best" and Swiftsteel was niche). There's a rumour that Swiftsteel's proc rate will be reduced, which would bring it more in line with Ironclaw and Wartech so it's not quite so overwhelmingly powerful. Honestly, though, it's not the proc rate so much as the +50% Crit bonus that makes it such a great DPS tool: +51% Power on your weapon is potentially a massive DPS bonus, while +67% CritMult is potentially less of one, unless your Hero doesn't scale well with Power or you've already got a lot of +Power, since the two scale off each other.

As long as that Crit bonus exists at that level, Swiftsteel will likely be the go-to 4P for DPS. To balance it via proc rate alone, you'd have to reduce it to the point where you'd rarely see it proc, defeating the original purpose of the set. +30% Crit would have been more reasonable, because then you'd face real questions: do I run Elderthorn for my 2P? Do I take a +35% Crit weapon? Both? Can I get Jewel slots to make up the deficit of one or the other? Right now it's basically: Swiftsteel, 7 Crit Jewels, go. Or Swiftsteel, Elderthorn, 3 Crit Jewels, go. Swiftsteel makes it way too easy to hit the Crit cap.

With that in mind, I foresee a nerf to Swiftsteel's Crit bonus one day (after the designers try the proc reduction), or a buff to Ironclaw/Wartech, although in the right situation Ironclaw already crushes Swiftsteel's output today, so I'm not sure about buffing it too much. Similarly, Wartech's upper bound already hits so hard that buffing it the wrong way could be dangerous to game balance. That's part of why buffs-only doesn't really work as a balancing tool, despite people constantly suggesting it: the math breaks down eventually. Sometimes you just have to nerf. #Theorycraft, #AllianceHotS

Wednesday, July 5, 2017

Last week I chatted about the start of Eon Altar's save system, why it didn't work, and how we fixed it. This week I'll go in-depth on Eon Altar's checkpoint save system. Fast forward nearly a year from our new save system implementation, to Aug/Sept 2015, when we finally entered Early Access. The game was probably about 80% functionally complete and 60% content complete. As I like to say, the last 20% of your game will take about 80% of your time, and Eon Altar was no different. We spent 10 months in Early Access, and initially the biggest piece of feedback we got was, "How can I save my game mid-session?" Our sessions ran about 30 minutes to 4 hours depending on the players, and in 2015, shipping an RPG without the ability to save mid-session was, well, pretty bad. So began the process of creating a checkpoint save system and retrofitting our levels to save data correctly.

Checkpoint Saves: Less Complex?

Why checkpoint saves, though? Why not let the player save anywhere they wanted? The answer is largely to reduce potential complexity. If a player can save anywhere and anytime they want, you have effectively an infinite number of states, and good luck testing that. A specific example of this is Myrth's Court in Episode 1: The Prelude.

Myrth's Court

That "moment" as a whole had the following:

A check to see which player characters were available.

A dialogue based on that to posit a vote.

A vote to decide which character's solution to use.

The actual moment where the party implements aforementioned solution.

Potentially a combat as a result of the solution.

If players could save at any point in that process, it would significantly increase the testing complexity around that moment. What happens if you reload with different characters mid-moment? What happens if you have fewer characters? More characters? By only allowing saves at specific points in the level, we avoided having to test those mid-moment saves.

By using checkpoint saves, we could tie them to an existing checkpoint mechanic we already had in the game: Destiny Markers/Stones. Again, not having to worry about partial encounters is a huge complexity save, but so is not worrying about how to turn saving on and off in certain locations. What if we had a bug that prevented saving from being turned back on? Or a bug that allowed saving in the midst of a complex moment? And how would we communicate to players whether they could save, and what would the save UI look like? By tying saves to an existing checkpoint mechanic, it was easy to communicate and easy for players to grok. No special rules or explanations necessary.

So while checkpoint saves aren't as convenient for players, the reduced complexity was enough to make checkpoint saves doable with our small team and budget.

How to Train Your Save System

We already had a method to quickly save data to disk, and the checkpoint save system would continue using it. The questions then became: where do we store that information at runtime so designers can access it, and how do we design it so it requires as little designer input and time as possible? First we had to determine what we would have to save:

Enemy spawner state: were they dead or alive?

Game object state: was it enabled or disabled?

"Usable" object state: was it waiting or already used?

Finite State Machine (FSM) state: what state was it left in?

Specialized game object state: what is the game object's transform (position, rotation)?

Specialized spawner state: what is the enemy's transform (position, rotation)? What is the enemy's AI settings (aggressive, passive, patrolling; allied to players, or enemies; patrol state)

With those 6 items, we could literally save anything and everything in our levels.
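As a rough illustration of the reporting approach (not the actual Eon Altar code, which was C# components in Unity; class and field names here are invented), components push state changes to the save subsystem as they happen, so a checkpoint save is just a snapshot of already-collected state:

```python
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ObjectState:
    """One tracked record, covering the first four items in the list above."""
    save_id: str                      # unique ID (GUID + object name in ours)
    alive: bool = True                # enemy spawner state
    enabled: bool = True              # game object state
    used: bool = False                # "usable" object state
    fsm_state: Optional[str] = None   # last FSM state, if long-lived

class SaveSubsystem:
    """Components report state changes as they happen, so saving never
    has to trawl through level data at checkpoint time."""
    def __init__(self) -> None:
        self._states: Dict[str, ObjectState] = {}

    def report(self, state: ObjectState) -> None:
        self._states[state.save_id] = state

    def snapshot(self) -> Dict[str, ObjectState]:
        return dict(self._states)

subsystem = SaveSubsystem()
subsystem.report(ObjectState("a1b2c3-DestinyMarker01", fsm_state="Activated"))
```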

I created specialized game components that could track those states and report them to the save subsystem as they changed, so we wouldn't have to trawl through level data to extract information--remember, we wanted the save system to be fast. All the designers had to do was add them to an object whose state they wanted saved, and give it a unique ID (well, my code autopopulated the ID based on a random GUID and the name of the object in the hierarchy, but the designers could override that if they chose).

This worked extremely well, and design quickly retrofitted our existing levels. The vast majority of our save data is items 1, 2, and 3. FSM save data is rarely used unless the FSM is long-lived (our Destiny Markers are the primary users of this tech); most FSMs trigger and finish in one go, or at least within one encounter, so we didn't have to worry about partial FSM execution by the time we hit a save point (yay checkpoint saves!). Item 5 was almost never used outside of redirecting patrol nodes for NPCs, and item 6 was generally only used on super special NPCs: ones that changed their AI based on designer scripts, or NPCs used for escort quests.

Wild Checkpoint Data draws near!

The code took about a week to create, test, and deploy for design. The lion's share of the time (and bugs) was designers retrofitting levels. I think it was easily a full man-month of time to get the levels up to snuff, and the amount of testing required was still absolutely immense, despite the reduced complexity of checkpoints.

The Bugs

A pitfall of this approach--and I'm not sure there's an easy way to solve it; I don't believe it's specific to this solution--is when designers forgot to put save components in levels, or chained components in a way that created a problem on game load.

A specific example of this is a door in Episode 2, Session 1. Level design logic had the door with the following states: unopenable, locked, unlocked, open. Depending on the quests you did in the level, it could become locked, unlocked, or open. However, if you saved and quit and reloaded later, then the door would be unopenable because the door wasn't actually saving its state out, and players would become blocked.

Now, when we ran into those issues, we would add the save component to the level data, and then use code that ran on save-data load to modify the data before it was applied to the level. Basically, we could determine, based on which other quests were complete and which object states were saved, whether the door should be locked, unlocked, or open, and set that state in the upgrade code.
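A hedged sketch of what one such upgrade step could look like, in Python rather than the game's C#; the quest and object IDs here are invented for illustration, not taken from the game:

```python
def upgrade_e2s1_door(save_data):
    """Hypothetical save-file upgrade: old saves never recorded the door's
    state, so infer it from other saved quest state before the save data
    is applied to the level."""
    if "door-e2s1" in save_data:
        return save_data  # save already records the door; nothing to upgrade
    if save_data.get("quest-open-gates", {}).get("complete"):
        state = "Open"
    elif save_data.get("quest-find-key", {}).get("complete"):
        state = "Unlocked"
    else:
        state = "Locked"
    save_data["door-e2s1"] = {"state": state}
    return save_data

# An old save where the player had found the key but not opened the gates:
upgraded = upgrade_e2s1_door({"quest-find-key": {"complete": True}})
```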

Today we have 10 such save file upgrades that potentially run on a save file, to give you an idea of how often we've had to use this, and the lion's share of them are for Episode 2, Session 1. It's enough to make me glad we implemented the mechanism, and just how different E2S1 was from the rest of the levels shows how easy it is to screw up save state if you're not thinking about it holistically.

The Future: SPARK: Resistance

SPARK won't need checkpoint saves, as sessions won't last more than 10-15 minutes at a maximum. Rather, any save data will be related to your "character": unlocks, experience, statistics, etc. Thankfully, I'll be able to take our save system nearly wholesale from Eon Altar and apply it here, minus the checkpoint stuff.

A Randomly Generated Map and Associated Data

The in-level checkpoint mechanism wouldn't work in SPARK anyhow, as the level structures are quite different to begin with, thanks both to the procedural nature of the levels (as opposed to hand-crafted) and to the fact that the levels are networked right from the start, which is very different from a local multiplayer game.

Conclusion

The current save system in Eon Altar is robust, extremely fast, legible, easy to modify, and minimal in its data requirements; XML is verbose, but the actual data we output is all essential. It requires as little designer input as I could possibly get away with (even most checkpoint save data is attached to prefabs and autopopulates all IDs in the scene at the click of a single button).

Yes, it took a fair amount of engineering work altogether, but that's because you just cannot skimp on engineering for a system like this. You get what you pay for: if you're not willing to put the engineering time in, you're not going to get a great system on the other end. And as mentioned at the beginning, persistence is extremely important to games; a game can't afford to skimp on its persistence systems, in my personal opinion. #IndieDev, #EonAltar

Wednesday, June 28, 2017

Persistence is possibly one of the largest drivers of repeated and extended interaction a game can have. RPGs persist campaign data between sessions; puzzle games persist how far you've been in the game and how well you beat each puzzle; even ye olde arcade games persisted high scores for all to see (until someone rebooted the arcade machine, anyhow). With that in mind, creating a robust save system is one of the most important tasks you could have when developing a video game. For us developing Eon Altar and now SPARK: Resistance, this is no different.

However, even the task of gathering some data, throwing it on disk, and then loading it later comes with a bunch of potential issues, caveats, and work. I'll talk today about our initial attempts at a save system in Eon Altar, why we went that route, why it didn't work, and what the eventual solution came to be.

A Rough Start

When we first created our save system, the primary goal of it was to save character data and what session the players were on. We had a secondary goal of utilizing the same system as our controller reconnect technology, as the character data was originally mirrored on the controllers as it was on the main game: down to character model and everything. We weren't originally planning on having mid-session saves (those came later), so really we only had to worry about saving between levels.

The "easiest" way of doing this, without having to think of any special logic is to copy/paste the state from the main game into the save file, as well as the controllers over the network. In programmer terms, serialize the state, and deserialize it on the other end. Given our time/budget constraints, we thought this was a pretty good idea. Turned out in practice this had some pretty gnarly problems:

The first issue was simply time. The time it took to save out a file or load one up was on the order of tens of seconds. Serializing character objects and transferring them across the network was measured in minutes, if it succeeded at all.

The second was coupling to code. Since we were serializing objects directly, any change to the code could break a save file. If we changed how the object hierarchy worked, or if some fields were deleted and others created, existing save files would potentially be broken.

The third issue was complexity. The resulting save file was an illegible, uneditable mess. Debugging a broken or corrupt save file was a near-impossible task, and editing one was also quite difficult, if not impossible. Because of this, we couldn't (easily) write save upgrade code to mitigate issue 2. We'd have been locked into some code structures forever.

The fourth was just far too much extraneous data. Because we were performing raw serialization, we were also getting data about textures, character models, what were supposed to be ephemeral objects, hierarchy maintenance objects, and so on.

While we had a save system that, on the tin, did what we wanted, it was untenable. Shipping it would've condemned our small engineering department to an immense amount of time fixing or working around those issues. So while this approach was "simple" and "cheap" in up-front engineering cost, it was the wrong solution. We went back to the drawing board.

The Reimagining

About a year after we started development, the team shrank pretty substantially. We'd lost 1/3rd of our engineering team and my time became even more contested as I became the new Lead Programmer. I had to contend with the responsibilities that came with that title, as well as continuing to deliver features and fixes.

However, I had already been noodling on the save and reconnect systems, and had a new plan. The first step was to fix the controller reconnect, which you can read more about here.

Given reconnect was taking 8 minutes each time we had to reconnect a controller, it didn't take upper management much convincing that something needed to be done. And since reconnect and save were intimately connected at the time, making a convincing argument to fix save shortly after also wasn't a hard sell. So even though I had to disappear for 2 weeks to fix reconnect, and then another 2 weeks later to fix save files, I think everyone involved believes it was the correct decision.

To fix our 4 issues, it wasn't sufficient to just be able to save and load character data in some manner or other. It needed to be quick, it needed to be decoupled from the code, it needed to be easy to read, edit, and maintain, and it needed to be deliberate about what it saved out.

The Reimplementing

For humans, text is easier to read over binary, and a semantic hierarchy is more legible than raw object data. So I knew pretty early on that my save file data was going to be in XML and in plaintext.

Plaintext was important. We often get asked why we don't encrypt our save files, and it comes down to maintenance. Human-legible files are easier to read and easier to fix. As an indie studio with extremely limited resources, that was a higher priority for us than preventing people from cheating in their save files in a local multiplayer game. If your friend gives themselves infinite resources and you catch them, you can dump your coke in their lap.

Plaintext has saved our bacon multiple times: when a bug was blocking our playerbase, more enterprising players have been able to repair their own save files with careful instructions from us (and a lot of WARNING caveats) until we could get around to fixing it. It also lets us quickly pull information from broken save files without having to decrypt anything.

The benefit of using XML is that we could serialize and deserialize programmatically without any extra work on our part: tools to do so already existed. In fact, we were already using those tools for the old save files. The difference was that instead of serializing the character object instances directly, I created an intermediate set of data classes, decoupled from the objects that made up the character data in-game, and organized according to gameplay semantics rather than the raw object hierarchy.

An example of a simple data class, and the resultant XML.

Having actual data classes meant we could lean on the compiler to ensure data types matched up, and that we could just use existing serialization tools to spit out the save data. It did mean a fair bit of manual work to determine what goes into the save file and where, but the benefits of that work more than made up for the upfront time. Adding new fields to save data is trivial, and populating new fields via upgrade code isn't terribly difficult. Editing existing save files became super easy because the save file format was now extremely legible. Legible enough that we've had users edit their own save files easily. And good news, because the data was decoupled we could actually write save upgrade code!
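To make the idea concrete, here's a minimal Python analogue of serializing a decoupled data class to XML. The real system was C# with existing .NET serialization tools; the class name, fields, and values below are invented for illustration:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass, fields

@dataclass
class CharacterSaveData:
    """Intermediate data class: plain value types only, organized by
    gameplay semantics rather than the runtime object hierarchy."""
    name: str = "ExampleHero"
    level: int = 3
    experience: int = 1250

def to_xml(obj) -> str:
    """Generic serializer: one child element per dataclass field."""
    root = ET.Element(type(obj).__name__)
    for f in fields(obj):
        ET.SubElement(root, f.name).text = str(getattr(obj, f.name))
    return ET.tostring(root, encoding="unicode")

xml_text = to_xml(CharacterSaveData())
```

Because the data classes hold only simple values, the resulting XML stays small and human-legible, which is exactly the property that made hand-editing and upgrade code practical.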

Collating the data into the data classes at runtime is a super speedy process. Less than 1ms on even the slowest machines. We're only serializing the simplest of objects--data classes are generally only made up of value types, other data classes, or generic Lists of other data classes or value types. And since we weren't serializing a ton of extraneous objects that only were supposed to exist at runtime, the amount of data we'd save out was significantly reduced: 29KB for a file with 2 characters, instead of multiple MBs. We put the actual writing of the save file to disk on a background thread; once we had the data collated, there was no reason to stall the main thread any longer, and disk writes are notoriously slow.

The difficult part was going from the data classes back to instanced data. Previously, everything got hydrated automatically, because that's what deserializing does; now that we were hydrating data classes instead, I had to write a bunch of code that recreated the instanced runtime character data from those data classes. This required a lot of combing over how we normally generated those object instances, and basically trying to "edit" a base character by programmatically adding abilities, inventory, etc. based on the save data. It wasn't particularly hard, but it was time consuming, and it was potentially where most of our bugs were going to lie. But using the same methods we call when adding these things normally at runtime allowed me to reuse a lot of existing code.

Part 2: Checkpoint Saves

We had our new save system, and it was pretty awesome. The original save system was done in approximately a week, if my memory serves, maybe a little longer. The new system took a month to implement, counting research, programming, and testing. Basically, you get what you invest in. Skimping on engineering time on this feature was a bad decision in my 20/20 hindsight, but we fixed it, so all is well today!
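The rehydration approach described above--replaying the normal runtime "add" calls against a base character instead of deserializing objects directly--might look like this minimal Python sketch (all class, method, and data names are invented):

```python
class RuntimeCharacter:
    """Stand-in for the runtime character object (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.abilities = []
        self.inventory = []

    # The same methods gameplay code calls when granting things normally,
    # so hydration reuses all their existing validation and side effects.
    def add_ability(self, ability):
        self.abilities.append(ability)

    def add_item(self, item):
        self.inventory.append(item)

def hydrate(data):
    """Rebuild a runtime character by replaying normal 'add' calls
    against a base character."""
    character = RuntimeCharacter(data["name"])
    for ability in data.get("abilities", []):
        character.add_ability(ability)
    for item in data.get("inventory", []):
        character.add_item(item)
    return character

saved = {"name": "ExampleHero", "abilities": ["Shield Bash"], "inventory": ["Potion"]}
character = hydrate(saved)
```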

Next blog post I'll discuss the next step we took for Eon Altar: checkpoint saves. Why checkpoints? What did we need to do to retrofit the game to handle them? What did the implementation look like? What pitfalls did we run into? And then, what can we reuse for SPARK: Resistance? #IndieDev, #EonAltar