Issues with estimating, particularly for something like this, mostly come down to meta-ignorance: you don't know what you don't know.

Given a particular problem of a similar sort to something they've worked on, an experienced engineer will be able to list 10 different issues that might crop up.

An inexperienced engineer won't know of those potential problems and will just see the most optimistic path possible.

When you're moving massively outside your familiar zone, many unexpected problems will crop up. The best solution is a phased approach. Pick the bits you judge as most risky (you should hopefully have some idea) and implement prototypes in those areas, in the hope that most of the scaries will surface early. You've then got a better idea of the terrain as you move on.

I've written this in the context of doing engineering, but it's equally true in all fields - like trying to sell a product into a new and unfamiliar market.

I like that term 'meta-ignorance'. It is important to realize that ignorance isn't "bad"; it can be fixed by learning. But it's good to know that you have it, so you can start fixing it.

The other aspect of this that I see, even in seasoned engineers, is the 'presumption of simplicity'. Some things look really simple, but inside they are actually quite complicated. A smartphone is just a screen, a computer, a battery and a radio, right? Too many times people see something, think "that doesn't look too hard, I bet I could do that in a weekend"[1], but don't actually follow through on that thought and spend a weekend building whatever it was that looked so easy. Sometimes the more experience you have, the more dangerous this is. I can recall times in my own life where I've grossly underestimated the difficulty of stuff; it's led me to be a lot more conscious about the things I know and the things I only think I know something about, especially when predicting schedules and work effort.

I think you mix the meanings of simple and easy here. Simplicity is an absolute metric and describes the number of dependencies a thing has, while ease is a relative metric describing your understanding of said thing.

For example, a singleton is easy to learn and easy to use, but since every function using it adds a hidden dependency, it quickly grows in complexity to the point where it's impossible to reason about it without forgetting something.

On the other hand, a Promise is simple as it depends on nothing but a producer and a consumer, no matter how much you compose them. Yet I've seen many experienced developers struggle to learn how to use them as they're not easy to understand at first.
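To make the contrast concrete, here's a minimal JavaScript sketch (the names are made up purely for illustration): the singleton-style global is easy to call but smuggles a hidden dependency into every function that reads it, while the promise-based version keeps its dependencies explicit.

```javascript
// Easy but not simple: every caller silently depends on this shared mutable state.
const Config = { timeoutMs: 1000 }; // hypothetical singleton

function fetchUserEasy(id) {
  // Hidden dependency: behavior changes if anything mutates Config elsewhere.
  return { id, timeout: Config.timeoutMs };
}

// Simple but (at first) less easy: all dependencies are explicit values.
function fetchUserSimple(id, timeoutMs) {
  // A Promise only couples a producer to a consumer, no matter how composed.
  return Promise.resolve({ id, timeout: timeoutMs });
}

fetchUserSimple(42, 500)
  .then((user) => console.log(user.timeout)); // prints 500
```

Composing `fetchUserSimple` stays tractable because each step depends only on the value handed to it, whereas every new caller of `fetchUserEasy` quietly widens the blast radius of `Config`.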

This is somewhat related to meta-ignorance. From my own experience I've seen a tendency in novice programmers to stick with things which are both easy to learn and easy to use. Their projects go well initially, but they grow less and less productive over time as complexity creeps in from the composition of all these easy-to-use things.

I've always said experience in our industry is knowing what not to use in order to stay productive in the long run.

Speaking of meta, I absolutely loathe how the basic distinction between simplicity and ease of use has since become a meme so persistently associated with Rich Hickey. There is nothing I can really do about it, but it nonetheless annoys me to no end.

I myself learned it from Rich in the very talk I linked to a few years ago and I'm the first to admit I didn't make that distinction beforehand. I've met more developers unaware of the distinction than otherwise, which is why I'm curious as to why you think it has become a meme?

Also note that English isn't my first language (I'm French Canadian), and even here in French the distinction is seldom made.

The distinction between two main types of simplicity, those of parsimony and elegance, has been a long-standing philosophical topic [1].

In engineering, the so-called KISS principle (coined as such around 1960) has always had the implication of minimalism and implementation simplicity, in contrast to mere ease of use.

Fred Brooks's famous 1986 paper "No Silver Bullet" [2] perfectly describes the difference between accidental and essential complexity, and the semantics of complexity management in software projects.

Hickey has said absolutely nothing spectacular, but his name comes up every time from the typing fingers of the historically illiterate whenever simplicity and ease of use are brought up.

"historically illiterate" are pretty strong words. Actually everyone is historically illiterate by these standards because the ideas that any one person is familiar with is a vanishingly small percentage of all the ideas the human race has ever had. Furthermore, the origins of ideas are impossible to trace with any great precision. Is the most famous person the person with the best ideas? Was the person with access to the printing press the person with the best ideas? Frankly it strikes me as a form of intellectual hipsterism to be bothered so much by this.

Rich Hickey gained fame for this because he stated an idea very clearly and compellingly. That is non-trivial and should not be so flippantly dismissed as just recycling old ideas; all your ideas are recycled too.

I dunno, people use the same kind of argument to say that nobody's really done anything new in philosophy since Kant or even Aristotle. The KISS principle is not the same as a distinction between simplicity and ease. Accidental vs. essential complexity is orthogonal to simplicity vs. ease. And parsimony and elegance are both about simplicity rather than ease. Some people can be a little bit too historically literate for their own good.

Except that Rumsfeld had a different problem: he had people who didn't want him to know what he didn't know, because it was a tactical advantage for them.

This, like most engineering tasks, is more straightforward. You have to know what it's going to take to build each piece and assemble them into the final product. So if you are making a AAA indie game, it helps if you can talk with the project manager at a company that made AAA games, go through their list of deliverables, and then think about how you're going to deliver the same thing. Sometimes you'll be tempted to say "we won't need to do that", but like Chesterton's fence, you need to understand why they did that before you can really say you won't need to do that thing.

It is always ok to start with "how hard could it be?", but that starting point requires that you educate yourself on exactly how hard it could be. For game development I think pretty much all the questions are knowable (except for consumer reception, of course).

Yeah, it's descriptive. I think I re-purposed a term from an academic paper that referred to meta-stupidity: stupid people are too stupid to know that they're stupid.

They did an experiment where they had people taking an exam guess what mark they were going to get. In general, the lower-scoring people overestimated their marks more than the smart people did.

Meta-stupidity actually has another name: the Dunning-Kruger effect - https://en.wikipedia.org/wiki/Dunning–Kruger_effect - people who are inexperienced in a topic don't know what they don't know (as you put it, meta-stupidity) and so think they're smarter than they actually are, while people who are more knowledgeable realize how vast the topic is, and so rate their own knowledge as smaller relative to the whole, even though they're better than average.

The Dunning-Kruger effect is useless. Dumb people think they are smart; smart people think they are smart. Nothing to be gleaned from that. Only at the very fringe of the intelligent side do the lines of actual intelligence and perceived intelligence cross.

Actually, the tendency is that the more you know, the more aware you become of the things you don't know. So while dumb people may overestimate their aptitude, smarter people tend to underestimate themselves. The most extreme manifestation would probably be impostor syndrome. https://en.wikipedia.org/wiki/Impostor_syndrome

What I take from Dunning-Kruger is that unskilled people have misconceptions about themselves, not being aware of what they don't know. Likewise, skilled people have misconceptions about others, assuming they know as much as they do.

It's very easy to forget all the effort invested in learning a skill when you've been using that skill for many years. When it becomes second nature, you assume it's also that way for others.

I have and it's very insightful. The people in the studies who were least competent were also, relatively, the most deluded about their inadequacies. This principle explains why you get people with little coding experience applying to developer jobs and failing the simple FizzBuzz exercise.
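For anyone unfamiliar, FizzBuzz is about as small as screening exercises get; a minimal JavaScript version looks like this:

```javascript
// Classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz".
function fizzbuzz(n) {
  if (n % 15 === 0) return "FizzBuzz";
  if (n % 3 === 0) return "Fizz";
  if (n % 5 === 0) return "Buzz";
  return String(n);
}

for (let i = 1; i <= 15; i++) console.log(fizzbuzz(i));
```

That this trips up real applicants is exactly the Dunning-Kruger point being made above.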

I think that the longer I've worked as a developer, the more inclined I am to be a pessimist. Sometimes projects look so freaking easy but then you realize you've just spent 3 weeks working on one feature out of a dozen and you're about to hit your first deadline.

I definitely do the "manager doubling" when it comes to estimates. If I think it'll take a month, I estimate 2 months to my boss. If I have to report several levels up, I multiply by 2 for every level.
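As a toy illustration (not a serious estimation model), the doubling rule amounts to:

```javascript
// Pad a raw estimate by doubling once per reporting level, as described above.
function paddedEstimate(months, levelsUp) {
  return months * 2 ** levelsUp;
}

console.log(paddedEstimate(1, 1)); // 1 month -> 2 months for your boss
console.log(paddedEstimate(1, 3)); // 1 month -> 8 months three levels up
```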

What's frightening is the variability of daily productivity. Some days everything comes out the first time, you're using the right framework and you finish 20% of the project with 10 lines of code. Other days, you lose hours chasing down a bug from a single typo, or debugging an obscure corner case incompatibility between two libraries. And there's no sure way of telling what kind of day you'll get next.

> I definitely do the "manager doubling" when it comes to estimates. If I think it'll take a month, I estimate 2 months to my boss. If I have to report several levels up, I multiply by 2 for every level.

Hah, I like this. Every level up that sees it is also a potential for more scope creep, as each person will have their own vision of what's being produced.

One of my favorite estimating guidelines from a mentor was that development tasks are either 2, 4, or 8 hours. If it's more than 8 hours it can be broken up into smaller tasks.

The most common time the 8-hours policy breaks down (insert rage face here) is when going hunting for non-straightforward bugs. I just got out of a 5-day bug hunt where the fix was literally changing a 10 to a 40 (so, a 1-character change). The debugging process to realise this was painstaking elimination of possible causes. I eventually stumbled on the root cause by accident when, out of frustration, I switched an unrelated boolean flag just to see if I could get a laugh out of what happened next. These tasks can really screw up an entire deadline. I have a two-step policy for them:

1) Daily report in the morning SCRUM style of what was tried, and what's going to be tried next.
2) After a full working week, reassess everything and try and understand where you are going with this.

Rinse and repeat. Basically the estimate is set at 5 full working days whenever you work on an obscure problem. But it's not a hard deadline. It's just a hope that it'll work out by then and a mild mental aid of giving something to work towards.

The issue that I've struggled with as I've picked up experience and learned how to estimate better (read: higher) is becoming a wet blanket. People will rave about this awesome thing we could do, and I'm the one saying "no, we can't, and this is why". That's why Steve Jobs said to stay foolish: it lets you chase after ideas that a rational person would consider too risky or just plain impossible.

I don't know what the opposite is of a "wet blanket" (perhaps "ray of sunshine"?), but is it possible to express almost the same hesitation from the opposite direction? For instance, say, "Yes, we can do that, but it will mean hiring four new devs and investing in XYZ server." Or does that just come across as transparently disingenuous?

I ask this sincerely, because I am trying to find a way to approach office politics as just part of my job and play it well. But as a developer, I'm also trying to avoid the version of office politics where you just promise impossible things and find scapegoats later.

The flip side of meta ignorance is the curse of experience. Inexperienced people are more likely to take big risks because "they don't know any better." Most of the time, these risks predictably don't pan out, but sometimes they do!

"Hofstadter's Law" at work: any unfamiliar task will take longer than planned, no matter how much extra time there is the plan. You can't account for your ignorance with padding, you need some kind of actual knowledge.

The result, then, is that there are only a few ways to actually make a AAA game. One is to hire some quorum of people who've already made one, so that they can bring you up to speed. Another is to do enough proofs of concept that you can see the problems coming.

It looks like the Woolfe team went from some mobile, educational games to a AAA PC title. That, it seems, was too big a gap for them to predict and prepare for the hurdles they would face.

Even some of the smartest people in the world fall victim to this. For example, Elon Musk originally estimated that SpaceX would launch its first flight in November 2003. The first launch, which still failed, didn't take place until March 2006 and the next two launches also failed. The first successful launch didn't take place until September 2008, a delay of almost five years.

In his biography, Elon Musk acknowledges that his original estimates for SpaceX were completely ridiculous and based on his experience with writing software and ignorance of the realities of building spacecraft.

This is actually exactly what I heard on an NPR report about a consultant (yes, cries from the peanut gallery) who helps companies learn to manage projects better.

After doing all the normal project management estimates, etc, the very first thing he does is sit down everyone who will actually be implementing the project (or a representative sample).

He then asks them a simple question: "Come up with one scenario where this project would fail."

The implication being that this is lateral to the train of thought people use to make initial estimates. In other words, in all possible estimation scenarios, the project is assumed to have been a success! It's not unreasonable to expect that implicit optimism to filter down through sub-estimates.

His point was that giving people permission to imagine scenarios for total and abject failure identified important risks that could be back-fed to improve the initial optimistic estimate.

Whenever I'm about to build something big, I first start thinking about the things that would be hard to do, or that I perhaps don't even know how to do yet. Then I build prototypes of those to make sure I can do it, and do it efficiently.

Once all the pieces are ready, it's just a matter of putting them together, which is straightforward, and you then have some chance of actually estimating the time.

Another great approach is to take the outside view. Instead of figuring out what the path to success looks like, find a reference class which your project belongs to, and assume it'll be about the same.

Want to do better or find an edge? Figure out a way to be part of a better reference class.

We just ported an AAA game with a team of 6 programmers and barely made it through with massive delays. The game was already feature complete and bug free when we got the source, all we needed to do was port it to mobile devices.

It's so easy to underestimate the work to be done: graphics easily break when you change the underlying hardware and renderers, gameplay had to be adapted to touch controls, etc. It didn't help that our estimate multiplier was only 3.

All of this gave me a much greater appreciation of the work you guys did at Naughty Dog.

The game's prototype was funded and built in 2013; by mid-2014 the entire game was already funded and close to completion; by September they had finished the Kickstarter and aimed to wrap it up by February. Consider that they'd been a game dev studio since 2002 and had other projects and income, too. So bridging September to February on $70k, for a small team of about 5-6 people working on the project part-time, isn't crazy.

I think they failed to earmark the funds for the rewards (they did ship the game; the issue is that they didn't ship the art books, wallpapers, etc., and went bankrupt), and mostly just failed to build an awesome enough game. They probably expected strong sales, as they'd been hyped quite a bit: greenlit in a few days, featured at E3, etc. But when the sales didn't happen and the loans had to be paid back, they went bankrupt and couldn't ship the merchandise rewards.

I won't back games unless they either ask for a few hundred thousand, or are vastly over target already. The reason is that I'd rather back someone with a realistic, workable (but maybe a bit too high) budget than someone optimistic about a probably-too-low budget. Doubly so if they state that they want to hire someone.

I see a lot of Kickstarter projects where a team of 5 is looking for $20k or some such. While some teams can and do pull it off, that just seems crazy low and too much risk (especially when you factor in Kickstarter's fee and the cost of rewards).

EDIT: Removed the "as a rule" from the first sentence (it was "I, as a rule, won't back..."). After reading andallas' reply, I realised that there are some very rare occasions where I do back projects I otherwise wouldn't, but they have to have really proven themselves.

I thought it was interesting to be heavily invested in one Kickstarter as it made it to the finish line with just a few hours to go, while watching a similar one fail a little while after.

Descent:Underground [0] and Starfighter Inc. [1] both had well-seasoned teams of industry veterans, including founders who had written popular space games (X-Wing, Wing Commander).

One of the big differences I identified between the two campaigns was that D:U asked for an amount of money that seemed like enough to run a small game studio for a year ($600,000) while SI asked for an amount of money that seemed like it wouldn't get them anywhere close to done ($250,000 for a bigger team). D:U was also really clear in their messaging -- if we don't raise the funds, that's a market signal that we shouldn't make this game. SI communicated several times that if they didn't reach their KS goal they'd just do it another way. What that indicated to me is that D:U had a realistic crowdfunding plan that could get them to release, while SI seemed not to know what they were trying to accomplish with kickstarter.

To be sure, there were a lot of other factors at play. With D:U, Eric Peterson pulled a lot of support from Star Citizen backers and they released an excellent trailer in the last 48 hours of the kickstarter. SI dodged questions about what engine they were using and had a number of other customer interaction problems. But I think one of the big issues was that they didn't ask for a realistic amount of money, which made it seem like they either didn't know what they were doing or were relying on additional funding which would increase the risk for kickstarter backers.

Keep in mind that there are also experienced studios that don't get all their funding from Kickstarter. I don't know whether that was the case with this Kickstarter project (and I doubt it).

$50,000 could have afforded them an artist for 6 months, or helped them to secure some essential software, or just supplemented the incomes of the existing team, while they continued doing 'side-jobs' to generally pay the bills.

Now ideally, they would get all their funding from one source (so we know what they received) and it would be enough to pay for everything. But I think a lot of people who pledge on Kickstarter have a gross misunderstanding of how much money a game takes, considering that a salary of $50k to $100k per team member is not unreasonable at all.

It was probably enough for a loan, however. Which is probably why the company folded when it did: they ran out of funding and couldn't make it.

With that said, I've been following Patreon and the financial models at Patreon just make more sense than Kickstarter.

Various game development groups have repeatedly slipped their schedules, and backers typically respond by keeping the donations steady. As long as the game developer releases a demo demonstrating progress, backers are more than eager to continue their $5/month or so of funding.

The main issue is the slow pace of early Patreon funding. A lot of games have to bootstrap with $500 / month or $1000/month for two or three months. Even "successful" Patreons only get $3000/month to $5000/month, and those are few and far between. (Patreon doesn't seem to have the reach of Kickstarter or Indiegogo quite yet. Perhaps if Patreon expanded and grew it'd be better?).

The Patreon model is also better for "subscription" stuff. Podcasts, YouTube videos and the like are more popular there. I can't find many games on Patreon by searching, but I've seen them advertised on various forums.

In any case, even with $5000/month, it'd be hard to run a team of 6 on such meager money. I've been following a team of 3 and they're barely making it on that.

But Patreon does strike me as more plausible than Kickstarter.

-----------

On the other hand, Kickstarter is extremely good at measuring "hype". So perhaps the Kickstarter model is better if you're trying to prove to outside funding groups that your team is worth investing into.

I don't think Destin (Smarter Every Day) is a single-person team, but he's making $6.8k/video, with two videos per month.

Patreon seems to work out for smaller-scale projects, like YouTube videos or podcasts. It's a bit harder to map it to a video game, but I still think the Patreon model for video games is superior to Kickstarter.

Everything is crap though, some real innovation should happen in this space. Unfortunately, I'm out of ideas...

It was way too little money ... but I guess what happened is that they expected a third-party investor to complete the development budget, given the success of their KS campaign. Well, it looks like that didn't happen. In my opinion, if one chooses this strategy, one should make sure some investors are actually interested; even better, sign an agreement with potential "angels" before launching a campaign.

$72k can keep you alive in sub-standard living conditions, but it will not keep a team of 6-7 talented programmers and artists alive for very long. Not even at way-below-market pay, working in a garage.

And a single person just cannot turn out something close to a AAA title.

Definitely not. I'm getting at the issue of employees versus founders. I'm responding to the commenters less than the OP. A lot of people are implying that the 72k would be spent on wages. If that was the expectation, then that's part of the failure.

I have friends who've launched XBLA games on much, much less working full time out of a basement. During development, they could barely pay rent. Afterwards, they buy new houses.

I feel like people don't take real risks any more, and seem to misunderstand what actual ownership of the outcome looks like.

Sign of the times?

And the issue of AAA ... well ... get a simple game or playable demo going and build off of that, rather than shooting for the stars. Or did we lose sight of what an MVP is, too? Sounds like the OP did.

Being overwhelmed by rewards seems to be a common failure mode on KS. The list he rattles off in this case is quite extensive, and the postage cost is all that's preventing them from being sent. I wouldn't be surprised if they have $10k in rewards alone, especially when you account for the time spent developing them.

Congrats to you and your friends, but that's nowhere near the average (50th percentile) case.

Out of my friends that graduated with me, among the set of {Google, Microsoft, Amazon, General Dynamics, AMD, Intel} employers, no one, and I mean no one, made above $90k starting. These people all graduated Summa Cum Laude (>3.96 GPA), with >3 years of experience on average through internships.

No one should ever assume their counterparts are making anywhere close to them.

Holy crap quite a world of difference there then. Wow. A lot of my friends have never even been in internships and make a good amount more than me (I've been in one). My GPA was acceptable, not spectacular. I have some shitty personal projects, many of them have none.

It's very odd to think about how software development in one place costs a different amount than in another place, when the work doesn't really concern the place at all. All supply and demand, I suppose, but it's just weird to think about how much the wage differs as a result.

It's not just software development though. Salaries in different cities and states are all higher or lower depending on the cost of living there. For example, the average 2 bedroom apartment rents here for around $900-$1000 a month. Same sort of apartment costs my friend in Seattle about twice that much. Food and goods seem to be about the same here as there for most items, but that's a pretty big increase in rent.

Going out to lunch here at a decent sit-down restaurant is around 8-10 dollars with tip (assuming you're drinking water). Somewhere like SF or NYC will be much higher.

Same sort of development one would do anywhere from my experience. I'm not sure what you imagine when you think of the Midwest, but I live in a city with over a million people (not Chicago). I have friends that work for companies on the East and West Coast and we trade stories that seem fairly similar about our work life. Startups and large companies exist here too and aren't much different. Startups may not be as numerous, but I've worked for some and moved on past that phase to want more stability.

I'm the tech lead for my team at a medium sized company that builds high demand, critical backend services. Customers include small businesses, some well known Silicon Valley companies and some established Fortune 500s. I like what I do and get to see just how well software I work on scales with customers that push it to the limit. I mostly use C# with a bit of C++ right now, but previous jobs I used Python, Java, JavaScript and PHP. VCSs have ranged from SVN, TFS to Git. The one used mostly depends on when the company was founded. Early 00s probably means SVN while the last 5 to 10 years probably means Git or TFS.

Does company culture differ here? I don't think so really. Like anywhere, it depends on where and whom you work for. I work 40 hours a week on average, have a workstation that rivals my own gaming PC at home (we also had the choice of a high end laptop, but I prefer a desktop) and wear jeans/shorts/t-shirts to work (assuming I'm not working at home that day). My brother works for a company with a similar culture to mine and they have their own personal chef that serves them lunch every day. My employer and his are only about 10 to 15 years old, so that probably contributes somewhat to the cultural similarities.

Interesting; I had the impression the Midwest was mostly cornfields. What city do you live in with over a million people? Even Seattle (proper) has a population of 600k.

Feels like there's a real dearth of opportunity here in Seattle compared to the Bay Area. There's mainly Amazon and Microsoft, and nobody I personally know who work in either place like it much. Google and Facebook have some small offices here, but most of the interesting and exciting work they do appears to be done in the Bay Area.

Do you feel like you can find another job you'd enjoy in the city where you live pretty easily?

> What city do you live in with over a million people? Even Seattle (proper) has a population of 600k.

Columbus, OH. Around 800k-850k if you just include the city limits and well over a million with the metro area. Third largest city in the Midwest. Lots of job growth and probably more tech jobs here than anywhere else in state. Factor that with cost of living being pretty cheap, I don't have much of a reason to leave right now.

> Do you feel like you can find another job you'd enjoy in the city where you live pretty easily?

I'm sure I could. I didn't used to think so, until I actually started looking for work after doing mostly consulting and contract work. I think companies here have more trouble finding versatile developers that can wear "many hats" versus developers finding interesting work. Most only know one language well enough to use it (mostly Java or C#) and are either bound into knowing Linux or Windows Platform development and rarely both. That's just my opinion on it though. I'm comfortable with doing most types of development in most languages, so that helps with finding jobs I like.

No interest really in working for large companies. From my experience, that's generally the reason people move West (or East) that live here. I prefer companies where people actually know who I am.

As an ex C++ programmer who did embedded telecom (amongst other things), I've transitioned to web development. It's not really that different. In fact people with embedded skills can do very well because things like memory management and performance optimisation are very important in web development. Often people with actual experience in that area are in short supply.

The main difference on the downside is a plethora of frameworks that often constrain your design badly. Of course, on any large project there is always a way to do things, and it is not always pleasant ;-) So it's not really so different. I also had to learn quite a lot about databases, something that I shied away from in my earlier days. Again, many frameworks try to shield you from database details, but they bugger up the object modelling so badly that you are much better off being quite aware of what is happening under the hood.

On the plus side, code bases are generally very small. We're talking about maintaining low tens of KLOC (and very often less than 10k lines of code) as opposed to hundreds of KLOC. I wouldn't say they are toy problems, but they are definitely on the small to medium side. You can pretty much understand all of how an app works. Frameworks often bring the total code size up above 100 KLOC (yes, I have spent some time debugging Rails ;-)), but again, it's not something that would scare a seasoned dev.

My main pleasures are being able to work with 100% free software tools from back to front. It's truly awesome to be able to debug and tweak anything I want. For frameworks and libraries, no longer do I have to depend on marginal documentation to see how something works -- I can just read the code. This is a massive plus.

For me, the acceptance of unit testing as a normal development procedure has been wonderful. Not all web devs subscribe to it, and very few do it well, but it is at least a mainstream concept. Also the tools available for TDD are really great. Rspec style tools (including Jasmine) are worth their weight in gold.

I think the biggest surprise for me making the transition was that there is a huge amount of complexity in web development. Yes, there are lots of people who do web development as a kind of paint by numbers, but honestly I've seen those kind of devs everywhere I've gone.

You know, I would. My partner is also a programmer, works in the same city, is the same age as me, and makes $15k more than I do. But then again...I absolutely love what I'm doing. I work on a AAA project that is going to be one of the major releases of 2015/2016 and that millions of people are going to play. The work is great, the people are fantastic...I love it. It's only the pay that is shit. But do I really want to be doing something that I won't enjoy just to have more money? Probably not.

For example, I work in France and keep approx $36k after all taxes (still not taking VAT into account, which is 20% on most products). This salary costs the company I work for around $78k. Approx breakdown: $25k in employer taxes, $12k in employee taxes (the distinction is rather arbitrary, but sometimes the government changes the rate of one or the other...), and then $5k in household income taxes (if you have some capital gains they also count as income, but actual work is taxed over and over and over... :p)
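Spelling out the arithmetic with the approximate figures from that breakdown (all in $k, and all rough estimates from the comment, not official rates):

```javascript
// Approximate French salary breakdown from the figures above, in $k.
const employerCost = 78;   // what the job costs the company
const employerTaxes = 25;  // employer-side payroll taxes
const employeeTaxes = 12;  // employee-side payroll taxes
const incomeTaxes = 5;     // household income taxes

const takeHome = employerCost - employerTaxes - employeeTaxes - incomeTaxes;
console.log(takeHome); // 36, i.e. roughly $36k kept (before the 20% VAT on spending)
```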

Also, I work full time: 218 days a year (with no fixed number of hours per week, but it is very reasonable). That's more than 6 weeks of paid vacation + a dozen holidays here and there. And when I was sick for 3 weeks last year, it was not deducted from my vacation days (though the sickness indemnity during those 3 weeks was less than my usual rate, maybe 60%, but I'm not sure).

I also sometimes work on the side as an independent and on the monetary side the end result is roughly the same: if a client pays me X, at the end I keep approx X/2 and the other X/2 goes to various taxes (but it is better to be salaried, because those X/2 in tax provides you virtually no security as an independent, whereas they do when salaried)

Now to compare anything you'll also have to at least convert to PPP, and also consider what is provided to you (and even to others!) for "free" with your taxes. This is arguably not reducible to a single figure; I would rather make a "little" less if that lets other people have better health care and if that means that nobody will ever ask me a fucking ridiculous amount of money if I need a really expensive medical treatment (not that the French health care system is the best, but I guess it is not that bad compared to some other countries...). Also, a lot of social and related services are provided completely free to the users. And even in areas where housing is expensive, you would find it insanely cheap compared to NYC or some places in Silicon Valley.

And to finish, remember that the value of the Euro has decreased a lot over the last few months compared to the USD. That makes all the amounts even more difficult to compare: one year ago I would have told you that I keep maybe $47k after all taxes (from an amount in € that was actually a little lower than what I have now...)

Well, after taxes I take home ~1300 GBP (~2000 USD)/month.
The cost of living is higher, but then - I'm in the North East of England, which is comparatively cheap. I do get 25 days of paid holidays + unlimited sick leave. Other companies over here pay a lot more, but then...I love what I'm doing. It's a hard decision, man.

I just Googled the city and name, found a Quora post, and people that actually live there were saying it's either $24K, or about $20K plus a car/housing or other type of bonus that companies apparently use to avoid getting taxed hard. I did my research, not like you... Lazy.

These teams should move to a low cost developing country after a successful Kickstarter. They could live very well for 1k per person. You already have the clients, you already have the investment and hopefully, you already have the team. It doesn't make sense to stay in a high cost area.

The biggest problem with making games is the fact that there's no end goal of "Done." It's just a vague point in time, really. Is it done when you've made 10 levels or 11? Is it done when you've added 10 power-ups, or 11, or 15? In a movie, you can't make the thing 50 hours long; no one would watch it all. But for a game, 50 hours is now considered medium sized, compared to MMOs and MOBAs that offer hundreds of hours of play time.

Scope creep is the biggest danger in the games industry. Reading this update, where they talk about going from 2D to 3D, my heart broke. That's such a major change, and as he writes, such a major uptick in difficulty, it was the beginning of the end.

Terry Cavanagh [http://distractionware.com/blog/ ] gets scope right. Valve gets scope right. You can tell when it's done right because the pacing of the game feels damn near perfect, like Half-Life 2: there's little down time, and you're always moving forward, even if you're not always shooting.

I feel bad for the Woolfe team, but they clearly had rose colored glasses on. There's a very good reason modern console games cost at least 8 figures. Hell, even with 8 figures of budget, many big name titles aren't even that good. The games industry is a harsh mistress. Like judging what's funny for a comedian, judging what's fun for a game developer is the toughest task they have.

What does "AAA Game" mean anyway? I'm a bit lost on the definition. Wikipedia says:

> In the video game industry, AAA (pronounced "triple A") is a classification term used for games with the highest development budgets and levels of promotion. A title considered to be AAA is therefore expected to be a high quality game and to be among the year's bestsellers.

By definition, a budget of $75k + 6-10 people won't make an AAA game.

> When we made Crash Bandicoot (with a team of 7), it was already virtually impossible to make a AAA game with 6-10 people, and that was 20 years ago.

I'm biased/invested in this area, but I think nowadays tools are really helping strip away the technical challenges of making a game. Increasingly, game development is less about technical challenges and more about the creative challenge. Individuals are willing to pour thousands of hours of work into artistic and creative projects, and I am seeing far fewer barriers for these sorts of people to produce AAA-quality games, when previously game creation was extremely inaccessible to them.

Also, thank you for Crash Bandicoot. Amazing game :) One I loved playing growing up!

It's easy to underestimate the scale of AAA titles today. Even a few million dollars with a team of twenty isn't nearly enough to cut it. And if that wasn't enough, they also go for the top talent and expertise the industry has to offer in order to manage the massive complexity of these projects. It's also easy to underestimate how much of a difference it makes to have experienced developers for projects of such scale. This has to be by far the most important factor in determining the success of these projects.

This is both good and bad news for indie developers. Bad in the sense they'll never reach that scale and level of graphics without serious investments. Good in the sense the triple-A giants are anything but flexible and indie developers can therefore differentiate themselves through innovative gameplay and storytelling, which is very much lacking in the AAA scene nowadays.

That's a pretty bad definition of a AAA game. I'd say it's a game that is up to the current standard in video gaming, which is rapidly evolving. This means that the graphics aren't archaic, and the game content has appropriate magnitude (the game doesn't feel short, or lacking in items, levels/areas etc. as applicable for each genre). It's the level of "polish" the game has. Now that implies the team sizes and budgets, but not that the game will be a bestseller. It can be a total flop, although the high cost of producing such a game implies that it has significant marketing, which should bring at least some sales.

Anything below that is either a classic game remastered (for example, the recent Homeworld remaster), or an indie title.

Frequently a AAA game (like Diablo III) will have a AA game come out at the same time to compete with it (Torchlight II). In a lot of cases, like Torchlight II and Call of Juarez: Gunslinger, the AA game is actually a far superior game to the AAA game it's competing with. ;)

Sometimes the sequel to a AAA game is a AA game, I'd say that's the case with Wolfenstein: The Old Blood.

Some games, like Portal and Portal 2, straddle the line. But I'd slot the Portal games as AA.

I'd argue that The Sims 1 was an AAA game (in that it became the best selling PC game of all time) BECAUSE it ran on last year's hardware, used software rendering, and didn't require a 3D graphics accelerator, so a lot more people were able to play it (like little brothers and sisters who inherited their older sibling's computer when they got upgraded).

Very true. The simple graphics also made it possible for many players to create their own content with less complicated tools like Photoshop + Transmogrifier instead of advanced tools like Maya, which also contributed to its success.

Minecraft was also a great achievement in terms of game design and gameplay, yet had simple graphics, and consequently was easily moddable, which greatly contributed to its popularity and success. But was it considered "AAA" when it was released, and is it considered "AAA" now that Microsoft has bought it (even though it's essentially the same game)?

So how important do you think fancy graphics are to the definition of an "AAA" title, versus accessibility (in terms of how many people can play it because of its lower hardware requirements, and how many people can mod it because of its graphical simplicity)? And how important are game design, gameplay, popularity, making money and other issues like moddability to the definition of an "AAA" title?

Another way for a game to achieve easy (but more limited) moddability without limiting its graphical complexity is to support advanced built-in tools for user created content (like Spore for example, which has advanced built-in specialized tools as opposed to supporting simple generic third party tools).

Subsequent versions of The Sims had much fancier graphics, and much more advanced built-in content creation tools (like create-a-sim), but that made it harder to create content outside of the game (because objects were 3D meshes instead of 2.5D sprites, and texture maps for character meshes were not nearly as simple).

The original Sims 1 team (which I worked on) only had four core programmers, but it was developed over many years before the point that EA bought Maxis and put more people on shipping it. So The Sims 2, 3 and 4 were much more typically AAA-ish, and had vastly larger teams working on them. (The Sims Studio became one of four major sub-divisions of EA.)

I think small indie teams would be better off focusing on game design and gameplay instead of fancy graphics.

PC-only AAA games are incredibly rare these days. I'd say most AAA games are on as many platforms as possible to have as many sources of revenue as possible to recoup dev (and marketing) costs. The exceptions are games that get a large amount of funding to be single platform by the platform's owner and there's no real "owner" of the PC platform (arguably Microsoft, but their focus is on Xbox) to do that.

Pick up The Orange Box (https://en.wikipedia.org/wiki/The_Orange_Box). It includes Half-Life 2 (plus the two sequel episodes) and Portal. Portal is a fantastic game, with a surprising amount of story given that it appears at first to be a fairly simple puzzle game. Half-Life 2 is a great first-person shooter, again with a great story.

What I like about these games is that they're immersive, without being overly long or tedious. You can play through Portal in a few evenings, without feeling like you're missing out on hundreds of side quests. The gameplay is tight, fun and innovative.

The Orange Box is a fantastic collection of games. I can also recommend Portal 2.

However, if you have a friend who likes first person shooters, you could play through one or more of the Gears of War games in cooperative split-screen mode. I hate, HATE, HATE playing FPSs on consoles, but I had a blast playing Gears in coop. It was the first console FPS I'd ever played where

* the gunplay felt good

and

* I didn't find myself longing for a keyboard and mouse after two minutes of play.

Indeed, the combat works really, really well with a controller.

I don't know how the game has held up over the years, but it was lots of fun back in the day.

I think you're correct... saying that you created a AAA title means that you can compete with major game studios... I think.

However, in my opinion AAA doesn't apply to scope or budget but quality of experience. Super Meat Boy is one of the best games I've played in years and it doesn't come anywhere near the scope of say GTA 5 (which is also awesome).

So then, I think the real issue here is aiming to make a game that competes with major studios who have budgets of $50m and employ hundreds of specialized team members.

These devs claim it was a passion project, but then they throw in the towel on the entire concept of running an indie studio after the negative reviews come in. I understand the bankruptcy move, but if I were in the founder's shoes, I would move somewhere cheap, take a side job and keep making games on the side. If I still loved the Woolfe project, I would fix it for free and I would find a way to send my KS backers whatever I owe them. I would read every negative review of my game, try to bucket the feedback (something like: legit, semi-legit, ignore) and then fix the game. I would also print my favorite positive reviews and hang them up around the house for motivation.
If I no longer believed in the game, I would build something new but smaller. 13 years is a long time to wash down the drain due to one failure.

Currently I'm trying to make indie games along with my girlfriend. She has no formal 3D art training but has been teaching herself via the internet. I learned Unity and have been teaching myself C# / CG and all the other shit that goes with 3D game development (I come from a Python / JS background). We will most likely fail hard, but I don't understand the concept of quitting when you got into something for fun in the first place.

edit: it also sounds like these guys tried to build their own engine... or something. If they did, that was a bad idea. If they didn't, I don't understand why they had so many issues with collision detection. I haven't played the game though... it's just odd to hear that they had major collision detection issues.

Collision detection is a ball ache even when you use an off-the-shelf solution. There are still loads of problems and edge cases to handle. There's the bullet-through-paper problem, where fast-moving bodies will skip through thin objects. There are scale issues when dealing with large bodies colliding with small bodies. There's floating point precision if you're too far from the origin, and narrow things that can get stuck in geometry. Writing a robust character controller is difficult too. Handling jumping and crouching are both full of collision nightmares.
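To illustrate the bullet-through-paper problem mentioned above: with discrete stepping, a fast body can be entirely on one side of a thin wall in one frame and entirely past it in the next, so a per-frame overlap test never fires. A swept (continuous) test over the whole movement segment catches it. A minimal 1D sketch (all names hypothetical):

```python
# "Bullet through paper" in one dimension: a fast body stepped discretely
# can jump across a thin wall, while a swept test of the whole movement
# segment catches the crossing.

def discrete_overlaps(pos, wall_x, thickness):
    """Naive test: is the body inside the wall *at this instant*?"""
    return wall_x <= pos <= wall_x + thickness

def swept_hit(start, end, wall_x, thickness):
    """Swept test: does the segment start->end cross the wall at all?"""
    lo, hi = min(start, end), max(start, end)
    return lo <= wall_x + thickness and hi >= wall_x

wall_x, thickness = 10.0, 0.1    # a thin wall at x = 10
pos, velocity, dt = 0.0, 120.0, 0.1

new_pos = pos + velocity * dt    # one physics step moves us to x = 12
print(discrete_overlaps(new_pos, wall_x, thickness))  # False: tunneled through!
print(swept_hit(pos, new_pos, wall_x, thickness))     # True: the sweep sees it
```

Real engines solve this with continuous collision detection (in Unity, for instance, by setting a Rigidbody's collision detection mode to continuous), but the trade-off is extra cost per moving body.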

Suffice it to say, it is entirely possible to create incredible games. Ori and the Blind Forest, Hearthstone, and plenty of AA and indie games use Unity, as do lots of Kickstarter projects.

To the chagrin of gamers, Steam has greenlit several terrible Unity-based games that only use free/cheap Unity store assets, namely Airplane (?) and DayZ (sic), which are quite infamously reviewed by Jim Sterling on YouTube. Enough to cloud the market for games created by any future indie dev.

Advice is plentiful, though there is a lack of great sources to ask questions of.

IMO, devs starting with Unity are almost drowned in advanced features just getting from concept to prototype. Easy to learn, hard to master; it's easier to edit store-bought prefabs than to DIY. They can become a crutch very quickly.

The Unity store can be both a boon and a roadblock to progress, because it doesn't always help. Starting out, it won't give you the best direction or teach you how to code, but it will take you forward to a playable prototype, sometimes within hours of starting a new project.

If you have a team, Unity is going to be a source of frequent anguish, but it is still more flexible than it has ever been.

The biggest hassle will more likely be assets, and far down the road, dealing with rigid bodies, collision, RaycastHit, and the PhysX implementation of mesh and mass collision.

For most devs, the initial Unity workload will be around 70-80% asset creation to 20% game coding, and about 290% of the budgeted time debugging.

A good idea is to build a prototype and get the mechanics working while the art is being developed. Once the code is in place, debug/playtesting is critical to developing assets that can then be designed, built as meshes, and imported into the prototype in a few seconds.

Character controllers are generally easy; the more dynamic Mecanim is able to avoid some pitfalls of the capsule collision system, and there are ways to add crouch/jump use cases with pathfinding and AI (which is a good bit of work).

Unity does excel at getting a walking mesh going, using Mecanim to blend animation and mesh collision, and 3D art assets designed in Maya or 3ds Max, etc.

When I started with Unity in 2012, it took forever to even think about how to build a menu system, until NGUI. It's improved drastically since the store was added.

Yeah, the term AAA can be a bit slippery. Thankfully, 2015 gave us a game that feels like the proverbial "spherical AAA game in a vacuum": The Order: 1886. Seriously, it might not be the best game ever, but it feels like it was made to become the definition of the "AAA look" for the next year at least. And yes, making a game of such visual quality with only 10 people is flat out impossible with current technology, and it's hard for me to even imagine the technology that would make it possible.

The only way to make a "AAA" game on a low budget now, competing with the massive asset libraries AAA games build up, is to create some kind of easily modded game, where players can build their own assets.

Minecraft does this, in a way. Instead of a village-sized Barbie / GI Joe playset (with everything perfectly designed), you get a giant bucket of Lego bricks.

Not that Minecraft / df style games have done much better on Kickstarter.

Calling the number 5 selling game of 2014 "not AAA" is pretty much just semantics.

I agree, it's not AAA in terms of assets, but I'm saying a "box of Lego bricks" (or something like that - copying Minecraft won't work now) is the only way to compete with AAA on assets. PCG looks like a holy grail, but I doubt it (unless you can get it good enough to sell to AAA, but then why wouldn't you just license the tool?).

Minecraft is an AAA success. Your big-box retailer probably has Minecraft guidebooks for sale. There's nothing that touches it. Papers, Please is an "indie game done good", and it sold maybe 5% of what Minecraft did.

"AAA" doesn't mean "very successful." It is a category for the games with the very highest development and advertising budgets. Minecraft was built and released with near-zero budget and is, therefore, not a AAA game.

This, I think, is fundamentally the equation driving modern game design and the "indie revolution".

For AAA games, modern, high-end systems increase the difficulty of providing a product. Modern development tools help, but graphics demands and expectations are inexorably growing. In a world where animating reflections might be a full-time task, 6-10 people can't even count on developing a trailer for a AAA product.

In response, then, indie games. Why the rebirth of pixel art and simple, geometric visuals? Not because it's 'retro' or 'clean', but because it's easy. Procedural generation, low-poly graphics, and user generated content aren't popular because they produce the best products, they're popular because they're all decisions that slash the number of employees you need.

Woolfe's mistake was setting out to make their game as good as it could possibly be. For an indie team today that's simply not possible.

(Fun anecdote: I'm from Russia and I didn't know English when I was a child. I had no idea what "save game" meant, so I started Crash Bandicoot 3 from scratch every time. Imagine my surprise when I realized how saving works.)

Believe it or not I'm star struck. I remember reading an interview about how you guys dealt with fitting such a large game into such a small space with a paging system. Amazing and inspiring! Thanks for the childhood memories :)

>> .. or, as Mark Cerny (our producer on Crash) used to tell us, "add one and increase the unit"

Reading this made my day, it's the exact same rule I try to apply to my own effort estimate, to the letter. I was 100% convinced I came up with it myself, but now I'm starting to think I may have (unconsciously) picked it up from someone else at some point ;-)
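The rule quoted above is mechanical enough to write down in a few lines (a hypothetical sketch; the hours → days → weeks → months ladder is my assumption about the intended units):

```python
# Sketch of the "add one and increase the unit" estimation rule quoted
# above: an estimate of 2 days becomes 3 weeks, 3 weeks becomes 4 months.

NEXT_UNIT = {"hours": "days", "days": "weeks", "weeks": "months", "months": "years"}

def cerny_estimate(amount, unit):
    """Apply the rule: add one to the amount, bump the unit up a notch."""
    return amount + 1, NEXT_UNIT[unit]

print(cerny_estimate(2, "days"))    # (3, 'weeks')
print(cerny_estimate(3, "weeks"))   # (4, 'months')
```

Note how non-linear this is compared to a flat multiplier: 2 days → 3 weeks is roughly a 10x correction, while 3 weeks → 4 months is closer to 6x, which matches the intuition that small estimates are the most wrong.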

Offtopic, but you worked on Crash Bandicoot!? That's so awesome. I loved each version of that game and it made our childhoods much more fun (in countries all over the world - just as an example, I'm from Colombia).

I tried and beat Crash Bandicoot and Crash Cart for the first time this year. (I'm 25, and neither I nor my friends had a PS1 in our childhood.) While playing, I was always on the bleeding edge between difficulty, wanting to give up, and encouragement... I mean, I just couldn't resist - I couldn't turn off the console, despite the fact that I had to re-run some levels about 50-80 times to beat them. It was an amazing, very challenging journey; I always felt that arcade spirit while playing, which is such a rarity nowadays.

I think that's part of what made the adventure Crash games such a success. They were challenging, but not impossible; death was penalized fairly; there were always more things to discover and collect. In addition, the games were easy enough to learn but not overly simplistic. The games weren't ground-breaking, but the balance and polish were fantastic and made the games great.

I feel like CTR was slightly easier, but still a great arcade racing game.

Agreed. I've been gaming for about 30 years, and Crash Bandicoot is one of the games that stands out as best-all-around for its time. It's really hard to convey how good the game looked in '96, and the gameplay was just fun.

The combination of ahead-of-its-time graphics with really fun gameplay is rare, particularly after the 80s.

Properly estimating is probably one of the hardest things to do in our field, especially so if you are starting a product or company where you have to go with your gut for a lot of the decisions. I think the key to surviving this is not so much learning to get estimates right, but being prepared for them to be wrong; that's probably why the "multiplicative factor" gets thrown around so often. The good thing is that the more experience you have at estimating things wrong, the better you'll become at guessing what the "multiplicative factor" should be. I started with 10 a few years ago; nowadays it's down to 4 :)

I usually attribute this to the idea of reuse: by definition, every software problem we are solving is "new", because if it wasn't new we'd just use existing solutions and there'd be no team. (Now, there are people that insist on re-solving problems, but they put themselves into the "new" category by choice.)

I wasn't really into video games as a kid but my father bought me a Playstation (the first one) used from some place and it came with a copy of Crash Bandicoot. It is the only video game I can say I've played. It was my favorite thing growing up. Such good times. Thank you!

Also, there is a parallel between your "add one and increase the unit" and estimating what kind of manpower you need to take a defended objective. The wisdom here is basically the same: if you are facing an objective defended by a squad, you will need a platoon to take it. If you are looking at a platoon dug in on the objective, you will need a company to take it. An entrenched company needs to be uprooted by a battalion, et cetera.

It's from this (The Mythical Man Month, as you mention, and the estimations the infantry taught me) that I have built a kind of mental model for estimating things that ends up being sort of universal, like the 80-20 rule.

It's a fidelity problem. If you restrict yourself to very simple interactions and behaviors, you can make a Twine game and get the effect you're looking for without touching code. But then your scope gets more ambitious and you want characters to be fully realized, a mobile camera, voice acting, combat simulation, and so on... and then the asset pipeline blows up into something way bigger, every interaction requires additional scripting steps, you start needing to customize the rendering code...

No one of those features stops you, but you get a "death by paper cuts" effect, because you eventually hit a mine that requires original technology to be written - most commonly, something to do with collision code. A good team led by a competent producer is able to figure out how to get the right effect within the budget, but it's never an easy process, and game productions have a habit of "pitching the moon and shipping with swiss cheese".

There are lots of frameworks for game development, but the problem is exactly in what you've described: rails. You can only get so far within the constraints of those frameworks before you need something it just can't do, and have to develop yourself. Depending on the framework, this may not even be possible (which is what leads to creation of new frameworks).

With that said, a lot of great (albeit typically somewhat simple) games are made with frameworks. When I went to a few game jams back in college, we typically started with something so that we could have a "complete" game by the end.

Also, using an existing engine often means giving up the possibility of having unique features or capabilities. Having the only game with multidimensional quad-resolution bumpmap tensors (in realtime!) can be a major selling point. Of course, in most cases, you'll get a better overall game by focusing on the art, story, gameplay, etc., but that doesn't always translate to sales.

Yes http://www.unity3d.com/ is essentially "Game on Rails". But, just as you can quickly and easily build a failed startup with Rails, Unity doesn't really save you from building a game that no one really wants.

Hey, I'm sure people following you around telling you this probably gets old, but I started playing Crash when it came out; I was 8. I'm 27 now and working in gaming. I just wanted to say Crash was really a huge part of my life and I would be doing us both a disservice not to take this opportunity to say thanks for such a great work of art!!!

Completely agreed. I work in a large games studio, and on our project the team that does nothing but make sure the game is compliant (so that it is accepted by Sony and Microsoft for release) consists of 7 people - 5 programmers, a designer and a QC tester. I imagine doing the work we are doing for an indie game would be extremely time consuming.

We were not a typical sports game studio (Treyarch), and the other sports project being done at the same time was for EA (some baseball game), while the one I was talking about was for SEGA. It was a bit of a problem really, so people were not allowed to get mixed between teams. After that game, Treyarch moved on to do mainly Spiderman, and then Call of Duty. The latest COD also has quite a spectacular UI system, but far fewer coders are involved in bringing it up. And today's standards are even much higher - cool animations, links, socially enabled (Twitch, Facebook, etc.).

Some studios have made a success of flash-compatible renderers, and we even thought of using one for NHL2K2, but at first it was only compatible with the Windows version of the Dreamcast (okay, I barely remember now, as we didn't use it, but there was a choice whether to ship your game with some barebones NT system on the Dreamcast - but you had to lose 2 MB to the OS). So that middleware (I think) required it, and it wanted somewhere from 2-4 MB more on top.

One big problem with UI, back in the day, is that you can have tons of memory in your main menu, but you barely have anything while the game is playing. So your "PAUSE" menu might use a completely different code path, and look quite different. The alternative is to reload things after PAUSE, but this decreases the quality of the product, is also much harder to test, and you would hear the disc "screaming" so much :)

Nonetheless, the Dreamcast was fun to work on, apart from the compiler and the debugger :)

After reading the Mythical Man Month, I derived an estimation scheme that was something like this:

1. I know this, I have done it before in this codebase. The estimate is accurate; multiply by 2 to account for watercooler talk, documentation, and other non-coding work.
2. I know this codebase, I have a pretty good idea of what to do, how, and where. I have walked through the code and can't identify any major roadblocks. Multiply by 5.
3. I don't have an honest idea. Multiply by 20, and make it clear that it's a guesstimate at best. Usually a better estimate can be given after working on it for some time.
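The three tiers above could be sketched as a lookup table (a hypothetical illustration of the scheme, not anyone's real process; the tier names are mine):

```python
# Three-tier estimation scheme: pick a multiplier by how familiar the
# work is, and flag pure guesstimates so nobody treats them as promises.

MULTIPLIERS = {
    "done_before": 2,    # 1. known work in a known codebase
    "know_codebase": 5,  # 2. know the codebase, walked through the code
    "no_idea": 20,       # 3. honest guesstimate at best
}

def estimate(raw_days, familiarity):
    """Scale a raw gut estimate (in days) by the familiarity tier."""
    return raw_days * MULTIPLIERS[familiarity]

print(estimate(3, "done_before"))   # 6
print(estimate(3, "no_idea"))       # 60
```

The useful part of the scheme isn't the exact numbers, it's that the multiplier is chosen *before* you're emotionally attached to the optimistic figure.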

From reading the reviews of the game on Steam, the main complaints are that it's buggy (especially with regard to enemy AI), and it's incredibly short (not even 2 hours' worth of content).

There are a lot of $10 games that will entertain for 4-5 or more hours (even excluding the outliers like Terraria). I'm surprised that the developer was surprised at the perceived value of their product.

That said, he's right that the market for indie games is ridiculously saturated right now.

I was in gaming for 4 years or so. It was horrifying to see how much code is not reused (excluding math/physics libraries). IMHO, the art pipeline is the real time suck and can make or break a game deadline. Just think of the staff alone needed to pull off an art pipeline: producer -> gameplay expert -> artist -> tools engineer -> graphics/engine guys -> game designers/AI guys. Just that chain alone from a tech perspective: Lua, databases, OpenGL, Maya, C++ tools, vector art tools, physics engines, and AI engines, which are mostly proprietary. Budgets were like movies - millions and millions. Oh yeah, and most games are canceled after years of hard work.

I think if you are making a 3D game, you haven't thought things through.

I'm surprised that there are people who actually have a sense of this. I try to tell others all the time that in software, with an inexperienced developer you need to multiply estimates by 3x in the best case and by 10x in the common case; even experienced devs sometimes end up between 2x and 5x.

People often don't believe me. One reason is that some developers really do deliver on time and sell it as finished, but then the rest of the team ends up fixing bugs and working around strange APIs for months. Often it's just that they haven't experienced the pain of developing something of reasonable scale and stability.

I lived with this Mark Cerny rule for a few decades, and it was mostly true. But finally, now that I'm over 50, I've come to make proper and good estimates. Mostly because I'm much more experienced now.
2d == 1.5d, 1w == 5d and so on.

But game programming is like architecture. There's so much more to do.