Blog of Rob Galanakis (@techartistsorg)

When people are discussing what language/framework/library to use for something, the criterion people generally talk about is “what best solves the business problem.”

This criterion is used to justify rewriting backend services in Go, rather than sticking with Python. Or not.

It’s used to explain why you wrote a new CRUD app in node, even though you’re already using Ruby. Or not.

It’s used to choose between frameworks like Watir or Capybara, even though they’re basically the same thing. Or not.

It’s used to introduce superior programming patterns into legacy codebases. Or not.

It’s used to introduce new unit test runners or libraries. Or not.

“Best solution for the business problem” is used to justify all manner of decisions that are risky and without worthwhile benefits. Likewise, it’s used to justify all manner of decisions that are restrictive and regressive.

I’m sorry to rat on my fellow developers, but choosing by “what best solves the business problem” is a load of bullshit.

It’s much more honest to just admit that technology choices are made from a desire to work with a new technology, or because a technology is familiar.

“We have approval to rewrite this service. I’m tired of working in dynamic languages, and I’d like to try Go.”

“This is a small internal app, and I wanted to try node.”

“I am familiar with Capybara, and will be writing most of the tests, so I’d like to use that.”

“I am uncomfortable introducing a new programming pattern that I am unfamiliar with, even though it’s better.”

“I don’t like using this library, and the one I do like can live side-by-side, so I’d like to add it.”

I actually don’t have a problem with deciding this way. In fact, I think it’s a good thing! I want to keep employees happy. But it’s important to be clear about how you expect a codebase and culture to evolve (encourage or discourage change). I want to understand why and how we actually make decisions, so we can get better. Honest discussion leaves fewer loose ends, and less surface area for future criticism.

We are focusing on A and B, and in a month or so we’ll start focusing on C, while also keeping focus on A and B.

Sound familiar?

When we do prioritization at work, I insist we have a single column of priorities or coarse features. In other words, “what do you want delivered next?”*

If a team or person isn’t working on one of the top two or three priorities, they’re doing unimportant, and possibly counter-productive, work. You’d be amazed how many people are working on things someone arbitrarily said was important, which aren’t in line with actual priorities.

You’d be even more amazed how unimportant most “high priority” work is when it needs to be stacked along with everything else. A feature can easily sit at the number 4 spot for months. Just be careful work doesn’t move up the queue just because it’s been in the queue. I don’t think this is a problem, though, because when you tell a product person “we are only executing on the next 2 things to deliver” they are going to have to make hard decisions.

I’ve worked on projects from 10 to 500 people, and generally the times we were humming along were when we had one or two priorities. We ended up producing crap when we had n priorities (where n is often the number of people or teams). Big teams don’t mean more priorities. It is just the granularity of the priorities that changes.

This sort of rigid, columnar prioritization communicates to product people that work only gets done when it’s at the top of the column. I’ve run across countless people, both managers and developers, who just sort of, well, expect that stuff just sort of, well, gets done, somehow. And generally it appears as if things are getting done, until everyone finds out they weren’t really. Are there significant bugs in some old system? It’s not fixed until it’s a priority. Is that new system still unpolished? It’s not improved until it’s a priority. Want to build something that requires some serious infrastructure? Well, that infrastructure stays at the top of the column until it’s done, to the exclusion of other work. Do you want good tools? Well, it means you aren’t going to get features.

It’s an extremely simple and powerful technique, and I highly recommend it if you are having trouble coordinating a product group.

* This doesn’t include ongoing product support, small fixes, and improvements. I think you need a way to handle this outside of normal feature development teams, with some sort of “live support” that can react quickly. A topic for another post.

BPS Research Digest is a great site, highly recommended for anyone interested in why people behave the way that they do. A little while ago, they reported on a study where anxious participants were more likely to cheat and excuse their own unethical behavior than the control group.

When we’re stressed out and feeling threatened, our priority becomes self-preservation. According to new research, this defensive mode even affects our morality, making us more likely to cheat and excuse our own unethical behaviour.

What’s striking is the cause of the anxiety: they listened to Bernard Herrmann’s Psycho score. Compare this to the stress of micromanagement, yearly review season, project bonuses and deadlines, or even general water cooler politics, and it’s no surprise what goes on in most corporate offices.

It’s also a good example of why it’s a company’s job to remove anxiety-causing policies. The less anxiety you cause your employees, the healthier they are and the healthier your culture and company is (we want people working together, not behaving selfishly). These policies include:

Annual performance reviews. Much has been written about this.

Individual performance-based bonuses. They have been proven to be counter-productive, without a single shred of evidence supporting their utility.

Limiting career and salary growth based on positions. People should not compete for a single “senior” spot.

Limiting PTO and not having separate sick days. Being sick is not a vacation.

Not forcing/encouraging people to take a vacation. This causes paranoia, burnout, and envy.

Limiting the flow of information. People will worry if they don’t know what they need to know.

The list goes on and on. And the lesson is very simple:

When you reduce anxiety, you get better work.

Someone once brought up to me a plan about enabling employees to choose their own manager. The idea has stuck with me for a while, and being in my current position of authority I’ve pondered it more actively. I’ll use this post to collect my thoughts, and maybe present some ideas for discussion. I’m not going to evaluate the benefits or if this is a good idea, but only think about the practicalities.

First, let’s define the role of “manager”. There are many ways the role of manager can be split up or changed or redefined, but I’m specifically going to talk about the extremely popular and stubborn setup of Dev->Manager->Director->VP->CO, or whatever similar hierarchy. I do believe a better structure exists (or that the lack of structure is better!), but I have not seen it, and this arrangement is certainly the most popular, so let’s work from that.

Second, let’s define the manager’s responsibilities. There is the leadership aspect (setting direction for a team/group), and there is the procedural aspect (hiring, firing, raises). These can be found in the same person, or separate. If we operate in a strict hierarchy, where everyone in a team reports to a team’s lead, leadership and procedure must be handled by the same person. If people report to a “department manager” or someone else who is not a team, leadership and procedure are handled by different people.

That established, how would “choosing your manager” work?

“Choosing your manager” would mean individually choosing only the procedural person. The leadership person must be chosen collectively: a team needs a single direction, so its leader can’t differ from person to person, while reporting relationships can. They could be the same person, though.

Collectively choosing the leader but having an assigned procedural manager will not work. The person doing the hiring, firing, and appraising ends up with the power anyway. It’s not fair to the leader and results in terrible politics when things are not in perfect alignment.

Choosing managers would basically prohibit outside hiring at management level. Good managers want to continue to be managers, and while I do believe they need to be talented programmers/designers/whatever, many would not want a 100% full time role with only the hope/possibility of doing management later. So you have cut down your pool of experienced managers significantly. But maybe that’s ok (most managers aren’t very good anyway).

The procedural stuff involves significant confidentiality in a traditional company. Generally, a manager is first and foremost vetted by management, and only secondarily by her reports. To flip this on its head would require radical transparency. Anyone at any time could become a “manager” and have access to extra confidential information. Salaries, at the very least, would need to be common knowledge (which would imply many other things are common knowledge).

Transparency is the big issue. Employees choosing their own managers would require a radically non-traditional company. At this extreme, you may as well get rid of the notion of managers altogether. Unfortunately it’s traditional companies that would benefit most from structural management changes. But at this point, I’m skeptical of “employees choosing their own manager” being a good idea. The thinking is in the right place! But I don’t see the way there. I’d love to hear of companies where this is done, though, as some concrete examples would help validate or disprove these thoughts (Spotify is the closest I can think of).

In the tech-artists.org G+ community page there was a comment on a thread about unit testing:

A key factor in TA tools is the speed at which we need to deliver them, and our audience is considerably smaller than, say, engine tools code. Therefor it becomes somewhat hard to justify the time spent on writing the unit tests, and then maintaining them as the tools change or are ported or updated to match new APIs.

In other words: Testing is great, but we don’t have time for it. Or the common alternative: Testing is great, but it’s not feasible to test what we’re doing.

Codebases without tests manifest themselves in teams that are stressed and overworked due to an ever-increasing workload and firefighting. Velocity goes down over time. Meanwhile, I’ve never known a team with thorough test coverage that delivered slower than a team without automated tests. In fact I’ve observed teams that had no tests and crunched constantly, added tests and became predictable and successful, then removed the tests after idiotic leadership decisions to artificially increase velocity, and watched their velocity drop way down once the testing infrastructure, and especially culture, fell into disrepair.

Companies that do not require automated tests do not respect their employees, and do not care about the long-term health of the company. It’s that simple (or they are incompetent, which is equally likely). We know that no testing results in stress, overwork, and reduced quality. We know that more testing results in more predictability, higher quality, and happier teams. I would love to blame management, but I see this nonchalant attitude about testing just as often among developers.

The “do it fast without tests, or do it slow with tests” attitude is not just wrong, but poisonous. You are going to be the one dealing with your technical debt. You are the bottleneck on call because your stuff breaks. You are the one who doesn’t get to work on new stuff because you spend all your time maintaining your old crap. You are the one who is crunching to tread water on velocity.

I have a simple rule: I will not work at a job that doesn’t have automated testing (or that would in any way inhibit instituting it as the first order of business).

I have this rule because I love myself and my family. There are enough unavoidable opportunities to interrupt evenings and weekends for work reasons. It is irresponsible to add more ways for things to break.

I have this rule because I care for the people I work with. I want them to have the same option for work-life balance, and work with me for a long time.

I have this rule because I want the company I’ve decided to invest in (employment is the most profound investment!) to be successful in the long term. Not until the end of the quarter, or even until I leave, but for a long, long time.

Have you heard about #noestimates? No? Well I’m sure you can guess what it is anyway. But reading the debates reminded me of a story.

While at the Game Developers Conference a few years ago, I was arguing about estimation with a certain project manager who, despite having no actual development experience, was in charge of development (Icelandic society is notoriously nepotistic).

“So, maybe no estimation works for your small projects, but when you have to do big projects, and you need to ask for budget, and coordinate many departments and offices, and you need to plan all this in advance, what do you do? How would you plan Incarna?”

Incarna was CCP’s expansion that introduced avatar/character-based “gameplay” into EVE Online. What shipped was the ability for your avatar to walk (not run!) around a room. It was massively over budget, behind schedule, and under-delivered. A few months later, 20% of the company was laid off. There’s been no active development on Incarna since 2011, and World of Darkness- which continued to use Incarna’s core technology- was cancelled and the team laid off earlier this year. It was, quite simply, the biggest disaster I’ve seen or heard of in my career.*

A character-based game is also something CCP had never done before. They are massively- MASSIVELY- more technologically complex than the “marbles in viscous fluid” EVE flight simulator. CCP did not have the in-house experience, especially in Iceland, where most of the (very smart) engineering team had never worked on character-based games.

So it was pretty hi-larious that a project manager was using Incarna as an example of why estimation is necessary. But cognitive dissonance is nothing new. Anyway, my response was:

“You don’t plan Incarna. You greenlight a month of development. At the end of a month, you see where things are. Do you keep going for another month? If you are happy with the spend and progress, keep going. If not, pull the plug. Once you can make a prediction at the start of a month, and it holds true for that month, and you do this two times in a row, maybe make a prediction for two months and see how it plays out.
You may pass a year this way. Well, a year isn’t a long time for developing a character-based MMO and game engine from scratch. But at the end of the year, you at least have some experience. But you keep going. If your velocity is consistently predictable, you estimate further out. Eventually, if you can get your velocity stable at the same time you’re growing and developing, you have a fighting chance.**
When your velocity isn’t stable, you rein things in and figure out why. If you go through a year of missed month-long predictions, you need to change things drastically (or reboot entirely) if you hope to get something predictable.”

Nothing really insightful there of course- I’m just parroting what has worked for me and many, many others, from Lean-inspired methodologies (and this one in particular says traditional yearly budget cycles are responsible for many terrible business decisions).

A couple months ago I was asked if a significant new feature could get done by June. It would build on several months of foundation and other features. I responded that I was pretty confident that if we aim for June we would have it by September. My rationale, simply, was that previous similar projects shipped 3 or more months late, and I didn’t have enough experience with the team to give a more accurate estimate.

The best predictor of future behavior is past behavior. You need to create historical data before you can extrapolate and plan.
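That extrapolation can be sketched in a few lines. All numbers here are hypothetical, just to show the arithmetic of scaling a raw estimate by a historical slip factor:

```python
# A toy sketch of estimating from historical slip (all numbers hypothetical).
# Past similar projects: (estimated months, actual months).
history = [(4, 7), (6, 9), (3, 6)]

# Average slip factor: how long things actually took relative to the estimate.
slip = sum(actual / estimated for estimated, actual in history) / len(history)

def predict(estimated_months):
    """Scale a raw estimate by the historical slip factor."""
    return estimated_months * slip

print(round(predict(4), 1))  # prints 7.0: a "4 month" feature, adjusted by history
```

With no history at all, `slip` is undefined, which is exactly the point: you can’t extrapolate until you’ve recorded some estimates and outcomes.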

The historical data also needs to be “meaningful.” That is a much more nuanced topic, though.

* It should go without saying that disasters the scale of Incarna are 100% at the hands of management.

** On Star Wars: The Old Republic, management took an interesting strategy of driving velocity into the ground so that while it was terrible individually, it was at least stable. They could then increase the number of people and predict, pretty reliably, when the game could ship. The game ended up costing about $200 million (I suspect much more, actually), but it wouldn’t have shipped otherwise.

…great testers understand one of the cardinal rules of software engineering— change is the enemy of quality.

This is not a cardinal rule. It is an outdated and obsolete mode of thinking. Change is how you discover great UX. Change is how you refactor and reduce technical debt. Change is how you incrementally improve both your product and code quality.

Maybe that’s too obvious, and clearly Sinofsky isn’t arguing for static software. More nuanced (and the rest of the piece provides that nuance) would be “change inevitably introduces bugs, and bugs reduce quality.”

This too I take issue with. Your codebase should be verifiably better after you fix a bug: you’ve found a shortcoming in your automated tests, so you add a test, and maybe refactor some stuff as well. Or, you’ve identified a bad experience, and can change it to be better in a controlled manner. A bug is an opportunity for improvement. Without bugs, it can be very difficult to improve.*
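As a toy illustration of that cycle (the function and the bug are hypothetical, not from any real codebase): the fix to a reported bug lands together with a regression test that pins down the corrected behavior.

```python
# Hypothetical example: a reported bug becomes a permanent regression test.

def chunk(items, size):
    """Split items into lists of at most `size` elements.

    Bug report: chunk([1, 2, 3], 2) used to drop the trailing [3]
    because the old loop stopped at the last full chunk. The fix
    below covers every start index, including the partial tail.
    """
    return [items[i:i + size] for i in range(0, len(items), size)]

def test_chunk_keeps_partial_tail():
    # The regression test added alongside the fix.
    assert chunk([1, 2, 3], 2) == [[1, 2], [3]]
    assert chunk([], 2) == []

test_chunk_keeps_partial_tail()
```

If the old, buggy behavior ever creeps back, the test fails immediately instead of waiting for another bug report.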

It can be difficult for anyone who hasn’t worked in a codebase with extensive testing to understand this. In most cases, fixing bugs is playing whack-a-mole. Whack-a-mole is unacceptable to me. Every change we make at Cozy is making the code clearer, simpler, better tested. It’s making the product smoother, faster, and more intuitive.

Change is necessary; it is up to you to determine if it is a friend or foe.

* If you’re practicing disciplined development and automated testing and not creating many bugs, good job! This post isn’t for you :)

In my previous post about technical debt, I explained how modern definitions of technical debt are harmful. Now I turn my attention to equally harmful metaphors.

Viktoras Makauskas made the following metaphor in a comment on my last post. This is a pretty perfect stand-in for metaphors I’ve read in other articles that harmfully define technical debt.

Imagine your car gets a strange rattle. You go to your mechanic and he says, “it’s your exhaust pipe holder, you need to replace it, but it’s gonna take a while to order a part and ship it, so just park your car here and come back in a week”. You say “no, I have this weekend trip planned, is there something we can do now?”. They say “yeah, we’ll put a strap on it meanwhile, just drive a little more careful and it should hold, but make sure to come back and do a proper fix”. Mechanic charges you now, and then a bit later.

This seems sensible on first read. But upon closer inspection, it’s quite clear the roles here are totally wrong*:

The mechanic is the programmer (the role of the “expert”). Well, a mechanic may or may not see your car ever again. They do not have a vested interest in your choice. A mechanic’s relationship to a car is totally different from a programmer’s relationship to code.

“You” are the “business” (the role of the “stakeholder”). The metaphor assumes that if you are irresponsible, it only impacts you (it’s your car, your money, your time). This is a problem. A programmer is impacted by business decisions in a way a mechanic is not impacted by whether you fix your car now or later.

This isn’t a simple language problem. It is a fundamental misunderstanding of roles that is naive to the way software development works. Programmers will be the primary sufferers of technical debt. Eventually the business will suffer with a slower pace of innovation and development and higher turnover. But well before that, programmers will be fixing (and refixing) obscure bugs, will bristle under management that tells them to go faster, will be working extra hours to try to improve things, and will eventually burn out. The business will only suffer once real damage has been done to a programming team, and many have given up.

This is why control of technical debt must be in the hands of programmers. Definitions or metaphors that urge otherwise are actively harmful.

Let me close by pointing out I’m just repeating what Ward Cunningham has already written about the original technical debt metaphor. The article ends with:

A lot of bloggers at least have explained the debt metaphor and confused it, I think, with the idea that you could write code poorly with the intention of doing a good job later and thinking that that was the primary source of debt.
I’m never in favor of writing code poorly, but I am in favor of writing code to reflect your current understanding of a problem even if that understanding is partial.

Thanks Ward.

* There are also a couple other problems with this metaphor. First, what if “you” and the mechanic are the same person, responsible for both business and implementation? In that case, there’s no need for a metaphor at all. Second, what happens if the exhaust fails? Do you become stranded? Does the car catch fire? What’s presented here is a false choice between a “correct” solution (replacement) or a “sloppy” solution (strapping it on). Why not rent a car? If there’s no responsible-but-relatively-cheap decision (there almost always is!), it’s still never acceptable to make an irresponsible decision.

Some Twitter friends were discussing how to get Sphinx to work with mayapy to build documentation for code that runs in Autodesk Maya. I’ve had to do this sort of thing extensively, for both Maya and editor/game code, and have even run an in-house Read The Docs server to host everything. I’ve learned a number of important lessons, but most relevant here is: don’t build your docs under mayapy. Instead, mock out the Maya-only modules in your Sphinx conf.py, so your code can be imported by a plain Python interpreter.

(I do not have the code in front of me so this may be slightly wrong. Perhaps an ex-colleague from CCP can check what used to be in our conf.py.)
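The post’s original snippet didn’t survive the feed; below is a minimal sketch of the kind of module mocking it describes. The exact module list is an assumption; adjust it to whichever Maya-only modules your code imports. It uses `unittest.mock` (the standalone `mock` package on Python 2):

```python
# conf.py (sketch) -- replace Maya-only modules with mocks so Sphinx's
# autodoc can import your code under a plain interpreter, not mayapy.
import sys
from unittest import mock  # on Python 2: `import mock`

# Assumed list: add whichever modules only import successfully inside Maya.
MOCK_MODULES = ["maya", "maya.cmds", "pymel", "pymel.core"]

for mod_name in MOCK_MODULES:
    sys.modules[mod_name] = mock.MagicMock()
```

With the mocks in place, any attribute access or call on a mocked module just returns another mock, so import-time references keep working, and fixing a broken doc build is a matter of adding one more string to the list. (Newer Sphinx releases also offer the `autodoc_mock_imports` setting for the same purpose.)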

Now when Sphinx tries to import your module that has import pymel.core as pmc, it will work fine. That is, assuming your modules do not have nasty side effects or import-time logic that requires correctly functioning Maya modules; you should definitely avoid that anyway, as it is never necessary.

When your documentation generation breaks, it’s now a simple matter of adding a string in one place, rather than a several hour debugging session.

Don’t say I didn’t warn you!

* If anything, I’m philosophically more inclined to use mayapy. So that should tell you what sort of bogeymen await!

For me, technical debt is defined pretty loosely as stuff you don’t like in the code and need to change to keep up velocity. However, I’ve seen lots of articles lately discussing a precise definition of “technical debt.” I would sum them up as:

Technical debt is incurred intentionally. Sloppy code or bad architecture is not debt.

It is a business decision to incur technical debt.

It is a business decision to pay down technical debt.

I hate this characterization of technical debt. I hate it because it’s damaging. It assumes a conversation like this happens:

Manager: “How long to do this feature?”
Programmer: “We can do that feature in 4 weeks properly, or 2 weeks if we take shortcuts that will hurt our velocity in the future.”
Manager: “OK, take the shortcut and get it done ASAP.”
… 2 weeks later …
Manager: “How long to do this feature?”
Programmer: “We must spend 2 weeks paying down our technical debt, then another 2 weeks to do the feature.”
Manager: “That sounds fine.”

Every muscle in my body twinges when I think about this. Quality is not something you can put off to later. The idea that a team would do a sloppy job but have the rigor to repay it later is unbelievable. The closest I’ve seen is rewriting a system after years of shortcuts, which often does not end well. This mentality goes along with “how many bugs you have should be a business decision”. This isn’t OK. Do not write something you do not plan on living with. Do not place the responsibility of doing a good job on the business. I find it sad that a programmer would think such behavior acceptable. This is your life. This is your code. Take some responsibility. Take pride in your work.

(I just want to take a moment to give credit to the team at Cozy. We recently had a couple weeks of crunch. The team delivered fully tested code the entire time).

This was an interesting holiday season, work-wise, for three reasons.

First: My work was closed down from Dec 20th to Jan 4th (except for Customer Support and whichever developer was on firefighting duty, though that is all remote). We shipped two large products on December 17th, which was a bit too close for comfort, but things went OK and it gave us a few days to fix issues.

Second: I was working a couple hours a day while my son napped. I have quite a backlog of pull requests waiting to get in.

Third: On December 31st at about 5pm, we realized our emails hadn’t been going out. Our email service decided to ship 43,000 lines of code the day before, which resulted in a partial outage for some customers (they sent us success responses but things then broke internally).

What lessons did I learn?

First, if you’re going to ship two days before vacation, make sure your work is solid. We had one deployment on Sunday the 21st for some bugs we didn’t want to live with for 2 weeks, but other than that no new work has gone out. We shipped some solid code, thankfully.

Second, if you’re going to work over a holiday, don’t generate work for others. I really want to get the work I’ve been doing out to production, which would require 1) a code review and 2) a deploy of new code. Even if I skipped code review and deployed myself, if shit hit the fan or I introduced some new bug, I’d be making work for others. It took a lot of discipline but I’m proud to say that I have fifteen open pull requests and not a single one is reviewed yet. It’ll be a busy Monday and Tuesday but that’s better than messing with people’s vacation.

Third, two weeks is a really long time to shut down. In some ways, shutting down is great, as I’ve written about before. But it sucks not having a good way to get fixes and improvements out to customers. There are a lot of considerations here. I’m not sure what we’ll do next year. It’ll largely be up to the team.

Fourth, you should never, ever ship something directly before a holiday or before you go on vacation. It’s immature and unacceptable. You not only screw over your team when something goes wrong, you screw over everyone depending on your product. They need to jump into action and figure out what’s going on, how to mitigate things, respond to customer complaints, etc. I cannot believe I need to tell anyone this. Don’t ship directly before a holiday.

Anyway, just some thoughts. Happy New Year!

David Smith over at baleful.net makes some interesting points about the length of most interviews:

So mathematically, you will most likely get the highest confidence interval with: 1) Resume screen, 2) Phone interview, 3) In-person interviews 1-3. From the above, this should represent about 50% of the total causes, but should produce 91% of the total effect. Adding additional interview steps after that 91% brings only incremental improvement at best and backslide at worst.

He makes an extremely compelling argument, and I encourage you to read the entire piece. That said, I still prefer a full day of interviews as both the interviewer and interviewee.

The interviewee angle is easy. I enjoy interviews. I like to dig into my potential employer. I want to grill your second-string players. I want to hear how junior people feel treated. I want as much information as possible before making my choice. But I know this is just me, and people who are less comfortable with interviews probably prefer shorter ones. I also admit I don’t think I’ve learned anything in the second half of a day of interviewing that would have made me turn down a job. But I have learned things that helped me in my job once hired.

The benefits of full-day interviews for the interviewers are much more complex. There are several factors:

We have diverse backgrounds and expertise, and each group brings a unique perspective. Candidate postmortems are not dominated by the same couple interviewers.

I want to give as many people experience interviewing as possible. I consider it an important skill. Limiting things to three in-person interviews means the interviewers are all “musts” and I don’t get to experiment at the periphery with groups or combinations.

People want to be a part of the process. I’ve personally felt frustrated when left out of the process, and I know I’ve frustrated others when I’ve left them out.

For a developer role, I want them to meet with at least: founders, ops, lead developer, two developers, myself. We’re at an absolute minimum of 7. That is with a narrow set of views, without inexperienced interviewers, and leaving good people out. What am I supposed to do?

For starters, the interview process should be more transparent and collaborative. Ask the interviewer if they want a full day, two half days, morning or afternoon, etc.

No group lunches. I’ve never gotten useful feedback from a group lunch. Keep it down to one or two people. A candidate just doesn’t want to embarrass themselves, so they shut up, and side conversations dominate.

Avoid solo interviews. I used to hope to solo interview everyone. But over time, I’ve found that pairing on interviews enhances the benefits listed above. There are still times I will want a solo interview, but in general I will pair.

Cut the crap. Interviewers should state their name and role. Don’t bother with your history unless asked. Don’t ask questions that are answered by a resume. Instead of “tell us about yourself” how about “tell us what you’re looking for”.

Keep a schedule. Some people are very bad at managing time. If someone isn’t done, too bad, keep things moving. They will eventually learn how to keep interviews to their allotted time.

Thanks to David for the insightful post. I’ll continue to keep full-day interviews, but we’ll definitely change some things up.

I don’t say it with a hint of sarcasm. I’ve put together a lot of furniture lately, and IKEA instructions are the only instructions that are consistently correct and unambiguous. In dozens of units, I’ve confirmed one case of an ambiguous step. But even in that case, I was able to read ahead and eliminate the ambiguity.

Compare this to almost every other piece of furniture I’ve put together. The drawings are often ambiguous, and even worse, the furniture can be constructed in multiple ways. This is rare with IKEA furniture. You may get to the end and find out you messed up, but things won’t really fit together. With my son’s crib, though, I had moulding pointing the wrong way with no structural effects. Unacceptable.

Assembling furniture from basic components is necessarily complicated. IKEA does a great job embracing this complexity by supplying extremely concise-yet-precise instructions and products where the construction process is considered in the design. My guess is that most people who have problems with IKEA construction jump in without understanding what they are doing. Fiberboard planks and screws are deceptively simple.

I think of this lesson often with the design of complex systems. Anything that deals with ACH (credits and debits in the US) is necessarily complex. You can only abstract to a certain level. Did you know that an ACH payment can transition from Succeeded to Failed? Attempting to “hide” the complexity of ACH, like we successfully hide the complexity of a file system, is a fool’s errand. Instead of making a payments API that’s simple to use, it’d be much better to make one that’s precisely defined, thoroughly tested, and well documented. There are still some problems that require a little bit of RTFM. It’s better to make this complexity front and center in a design like IKEA furniture, than to gloss over it and end up with client code that is built like second-rate DIY furniture.
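To make that concrete, here is a minimal sketch of what “precisely defined” can mean in practice: an explicit state table that documents every legal transition, including the surprising Succeeded-to-Failed one. The state names and transitions here are illustrative, not any real payment API.

```python
# Hypothetical ACH payment state machine. The states and transitions are
# illustrative only -- not taken from any real payments API.
VALID_TRANSITIONS = {
    "pending": {"submitted", "failed"},
    "submitted": {"succeeded", "failed"},
    # An ACH return can arrive days after a payment appears to have
    # settled, so "succeeded" is NOT a terminal state.
    "succeeded": {"failed"},
    "failed": set(),
}

def transition(current, new):
    """Move a payment to a new state, or raise if the move is illegal."""
    if new not in VALID_TRANSITIONS[current]:
        raise ValueError(f"cannot move payment from {current} to {new}")
    return new
```

The table is the documentation: a client reading it cannot miss that success is provisional, which is exactly the complexity a “simple” API would gloss over.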

The bad news is, they won’t make it free. The good news is, my editor said that Packt often runs free eBook campaigns, and would make the book part of the free campaign whenever they come up. I will blog here when they do (and also please tweet me @techartistsorg if I miss it).

If you can acquire a pirated copy of my book, I encourage you to do so. Packt does not use DRM as far as I know, so just ask a friend who has the book.

Sorry I can’t make it totally free right now, as much as I want to. It sucks to not have full control over something you have personally invested so much in, but I don’t have the energy to fight my publisher on this one (and the fact that they’re DRM-free makes this much less of an issue).

Ben Sandofsky wrote a post about why QA departments are still necessary, specifically with regards to mobile app development. He makes a good point: mobile apps create a distribution bottleneck that makes very rapid iteration impossible. I agree, and this is a good angle to think about. I would have been happy with an article focused on this.

Ben is clearly a talented guy but this post was insane. In a literal sense. It is a rant for anti-Agile curmudgeons at best, and would leave me questioning the experiences of anyone that thinks this way at worst.

Websites ship embarrassing bugs all the time. They get away with it because they didn’t ship it to all users. You roll everything out to 1% of users, and watch your graphs. If things look good, slowly roll out to 100%.

The idea that this sort of incremental rollout is ubiquitous amongst web developers is crazy. It requires infrastructure, code designed to support split testing, experienced operations engineers, robust monitoring, a disciplined process, and more. The institutions with this sort of sophistication all have strong automated testing environments. Which brings me to my next issue:
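To be clear, the bucketing logic at the center of a percentage rollout is the easy part; it’s everything around it (monitoring, split-test plumbing, operational discipline) that demands the sophistication I’m describing. A hypothetical sketch of the deterministic bucketing piece, with made-up function and feature names:

```python
import hashlib

def in_rollout(user_id, feature, percent):
    """Deterministically bucket a user into a 0-100 rollout percentage.

    Hashing user_id together with the feature name gives each feature an
    independent, stable bucket per user, so a user doesn't flap in and
    out of the rollout between requests.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < percent
```

Because buckets are stable, raising `percent` from 1 to 10 only adds users; it never removes anyone already in. That property is what lets you “watch your graphs” meaningfully — and note that nothing in this snippet gives you the graphs.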

I think automated testing accelerates development, but I haven’t seen a direct correlation between testing and quality. On projects with massive, high quality test coverage, I’ve seen just as many bugs slip through as projects with zero coverage.

This is the software equivalent to climate change denial. Where does this experience come from? I am not sure I’d be able to find a single developer who would corroborate this. Oh, right:

Tell a game developer you don’t need [QA], they’ll tell you you’re nuts.

The game industry is full of these folks who believe what they are doing is such an untestable snowflake. Unsurprisingly, games have historically been the buggiest software around. Never, ever look at game development as an example of how to do QA right. Not just automated testing, but manual QA too.

…a great QA team is far from a bunch of monkeys clicking buttons all day.

Game development has a hallmark technique of hiring masses of QA people and having massive layoffs at the end of projects. There is an entire website dedicated to tales of horror from QA people. It makes The Daily WTF look like paradise.

Take the unicorn of “two week release cycles.” As you build infrastructure for faster releases, simple code becomes unwieldy. Tasks that should take hours take weeks.

What does this even mean? There are endless apps on two week release cycles. I am confused how building infrastructure for faster iterations ends up adding complexity to simple code or tasks.

Disciplined development is a lost art.

You could make this argument when we moved away from punch cards. But the idea that success in mobile apps is achieved through discipline, but success on the web can be achieved by recklessness, is beyond baseless. It’s downright insulting.

I consider it a tragedy that, when faced with the reality of App Store distribution bottlenecks, Ben’s answer is to go back to the process of yesteryear and throw out the lessons we’ve learned. Why not invent new ways of building in quality? New ways of iterating on apps faster? There are so many interesting problems to solve.

Finally, Ben cautions:

Today, any web developer who wants to stay employed has learned to build apps. If web companies want to remain relevant, they’ll have to do the same.

I have a better warning. Don’t throw away the incredible advances we’ve made over the last decade. Don’t downplay the success and rate of innovation in web development as something that doesn’t apply. Don’t throw away the universal “good idea-edness” of automated testing. Don’t rely on a separate department to enforce quality. Don’t stop looking for ways to make development better.

Uncle Bob, who I consider my favorite programming writer, had a post a few weeks ago titled “Thorns around the Gold”. In it he describes how writing tests for your core functionality first can be harmful. Instead, Uncle Bob prefers to probe for “thorns” around the “gold” first.

I shy away from any tests that are close to the core functionality until I have completely surrounded the problem with passing tests that describe everything but the core functionality. Then, and only then, do I go get The Gold.

I haven’t been doing TDD for nearly as long as Uncle Bob but I was shocked to read this. I’ve always learned and taught that you should create positive tests first, and only need as many negative tests as you feel are warranted. While you may not grab the gold immediately, you at least step towards the gold. How many thorns you expose is a judgement call. In Python, most people don’t even bother validating for None inputs, and instead just let things raise (or not). Of course, this depends on your users. For libraries limited to one internal application, I wouldn’t “probe many hedges.” For open source libraries, I validate pretty aggressively.

Of particular interest was this:

I often find that if all the ancillary behaviors are in place before I approach the core functionality, then the core functionality is much easier to implement.

I always thought you should only program what you need and no more. It seems very strange to assume the ancillary behaviors will be needed. It seems like a violation of YAGNI.

I have been trying to reconcile Uncle Bob’s advice here, and the TDD best practices I’ve learned and developed. But I cannot. Either I’ve been receiving and giving years of bad advice, or Uncle Bob has made a rare mistake.

Last week, iker j. de los mozos posted a Qt tutorial on his blog. The post was retweeted a number of times, so I figure people liked it.

The post exemplifies what is wrong with the Qt Designer, and also how a little more investment in learning can pay dividends for your career.

I know it’s unfair to give people code reviews on things they just put out for free, but I consider it even worse to allow people to continue to use the Qt Designer with a clear conscience. I thank Ike for his post, and for syndicating his feed on Planet Tech Art, and hope that no one takes my writeup below personally. It’s not an attack on a person; it’s trying to point out that there is a much better way to do things.

There are 117 lines of Python code in Ike’s tool for locking and unlocking transformation attributes. This sounds like a small amount, but to an experienced Pythonista it indicates huge waste. For comparison, the entire ZMQ-based request/reply client and server I built for Practical Maya Programming with Python is the same size or smaller (depending on the version). If we take a closer look at his code (see link above), we can see a ton of copy and pasted functionality. This is technically a separate concern from the use of the Designer, but in my experience the two go hand-in-hand. The duplication inherent in GUI tools carries over to the way you program.

Let’s look at some refactored code where we build the GUI in code (warning, I haven’t tried this code since I don’t have Maya on this machine):

Why is this code better? Well, for starters, it’s less than a third of the size (37 lines) and there’s less duplication. These are very good things. When we want to change behavior, such as auto-updating the checkboxes when our selection changes, we can put it in one place, not nine or more.

So the code is better, but what other benefits are there to not using the Designer?
– We pull common primitives, like a “row” (QWidget with HBoxLayout) and “table” into a qthelpers module, so we can use this across all GUIs. This saves huge amounts of boilerplate over the long run, especially since we can customize what parameters we pass to it (like onClick being a callback).
– The GUI is clear from the code because the UI is built declaratively. I do not even need to load the UI into the Designer or run the code to understand it. I can just read the bottom few lines of the file and know what this looks like.
– You learn new things. We use functools.partial for currying, instead of explicit callbacks. This is more complicated to someone that only knows simple tools, but becomes an indispensable tool as you get more advanced. We are not programming in Turtle Graphics. We are using the wonderful language of Python. We should take advantage of that.

Again, I thank Ike for his blog post, and hope I haven’t upset anyone. Ike’s code is pretty consistent with the type of code I’ve seen from Technical Artists. It’s time to do better. Start by ditching the Designer and focusing on clean code.

My concern is that the absence of QA is the absence of a champion for aspects of software development that everyone agrees are important, but often no one is willing to own. Unit tests, automation, test plans, bug tracking, and quality metrics. The results of which give QA a unique perspective. Traditionally, they are known as the folks who break things, who find bugs, but QA’s role is far more important. It’s not that QA can discover what is wrong, they intimately understand what is right and they unfailingly strive to push the product in that direction.

I believe these are humans you want in the building.

At my current job, we don’t have a QA department either. And like Rands, I wasn’t comfortable at first. I’ve worked on teams without QA, but an entire company without a QA Department? I’ve certainly had questions about the use of a QA department, but does that mean they are a bad idea?

I am a staunch believer of “building quality in.” Every bug that slips out is a failure of your development process. The way to higher quality is not to find, or fix, more bugs. It’s to avoid them in the first place.

If you rely on QA to champion unit testing, automation, bug tracking, and quality metrics, your development process is lacking its most important tools and measures to improving quality. Quality can’t be imposed by QA, it must grow out of enabled and engaged development teams.

I have a saying: “Don’t hire to fix a problem.” If you have a quality problem, hiring a QA department isn’t going to fix it. You instead hide the systematic problems that cause quality issues in the first place.

This is not to say “the QA mindset” isn’t valuable. It is. One of my best hires was Bjorgvin Reynisson, who was a Test Engineer at Nokia and I hired as a QA Engineer at CCP. He was embedded with the graphics engine team and he helped them develop extensive automated correctness and performance testing systems. He worked with them to recognize holes in their process and test coverage. He helped with tracking issues and increasing quality. This is the “QA Mindset” I treasure, and this type of person is invaluable to development teams. Bjorgvin unlocked a latent “culture of quality” in the team he was a part of.

I contrast this “QA Mindset” with the “QA Department Mindset”. The QA Department Mindset has two damning characteristics. First, it is innately adversarial, as Rands notes.

Yes, there is often professional conflict between the teams. Yes, I often had to gather conflicting parties together and explain[…]

Second, it is by definition a separate department, which creates obstacles to better integrating engineering and QA.

Bjorgvin should be spending time with his teammates and the rest of the developers figuring out how to improve the entire development process. He should not be spending time with other QA personnel focused on QA functions. When I was Technical Director for EVE Online, I made sure there were minimal discussions gated by job title. Talk of a trade went on in Communities of Practice, which were open to all. Sometimes this didn’t happen, and those times were mistakes.

Like Rands says:

Yes, we actually have the same goal: rigorously confirming whether or not the product is great.

If that’s the case, QA should not be broken out into a separate department. QA should be working side by side, reporting into the same people, measured by the same success metrics, contributing to the holistic success of an entire product.

I love the QA Mindset. It’s tragic that having a QA Mindset gets confused with having a QA Department.

We use Slack for team communication at Cozy. I struggled with the transition. When I reflected on my struggles, it made me better understand what a destructive format email is for workplace communication.

A quick disclaimer. This is only about work communication and not personal communication. I love email. I think email will be around for a long time and I will lament if and when it goes away. I just don’t think we should be using email for work.

Oration is the highest form of feeding an ego. You craft your message carefully. You research, write, and rehearse. Finally, you take the stage. You command everyone’s attention. And once you’re done, an important topic has been thoroughly addressed and everyone can go on with their lives, better off after hearing what you said.

Email is oratory without the speaking* (or skill). My problems with email stem from when it is used for one-way communication. I suspect that most emails I’ve ever received from anyone in management have been one-way. Generally these emails are meant to, first and foremost, communicate the sender/manager’s self-importance. Often the email contains a nugget of actual information which should be hosted elsewhere. Sometimes the email is an announcement no one understands. And as a rule, you can’t rely on people reading the email you send anyway.

When you craft a long email, like an orator crafts a speech, it is an ego boost. Each one is a masterpiece. You are proud of your fine writing. When you craft a long chat message, on the other hand, you look like a dramatic asshole. It puts in stark perspective how awful the written format is for important or high-bandwidth communication. I’ve never seen someone post a 300-word message to chat. How many 300-word emails do you have in your inbox?

Removing email also levels the playing field for communication. You don’t need to be a manager or orator. Everything you write has a visibility you can’t change. You choose your audience based on topic. Is there a question about a product’s design? Well, it goes into the product or design channel, whether you are Executive Emperor or QA Associate II. Also, no one really wants to read your dramatic flair so please keep it short and to the point.

I used to get frustrated when I’d write an excellent email, send it out, and within a few minutes someone would reply with a message like “Yeah, just to build on what Rob said, it’d be a good idea to do X.” You idiot! You are an Ice Cream Truck driving through the State of the Union. But of course, the problem was mine, playing a manipulative game, focusing too much on this amazing message I’d created. Sometimes these emails would be about the manipulative games people were playing and how we weren’t focused on the employees and customers and things that were actually important.

Email in the workplace is a systematic problem. We take it for granted. We use it constantly. We don’t question it. But email has a cost. It feeds into the already inflated ego of managers. It encourages one-way communication. It is wonderful for grandstanding. We spend a lot of time crafting museum-quality correspondence no one wants to read. And in the end, there are better ways to accomplish what we use it for.

* One of the greatest “speeches” of all time, Pro Milone by Cicero, was written, not spoken. We know great orators by their writing, not their speaking.

From a wonderful post by Matt Williams about the type of business he is looking for:

A Business Manifesto

We are uncovering better ways of running a business and helping others do it. Through this work we have come to value:
- People and interactions over profits and prestige
- Quality service over quantity of service
- Customer relationships over contract negotiation
- Flexibility over following a plan

That is, while there is value in the items on the right, we value the items on the left more.

In a nutshell I want to work for a company which values people — both inside and out of the company. I want to work where people strive to do things right.

When I go home, I want to be able to look in the face of my daughter and not have to make excuses for the work that I do and the effect it has on others.

Sums things up nicely (and definitely what we aspire to at Cozy; by the way, we’re hiring).

It is a reason I left the video games industry. I wanted to use my skills to do something I felt was more constructive.

But more than that, I was amazed and frustrated with how the industry was run (almost as bad as films). Mass layoffs even on successful projects. Over-managed projects that go on for 4, 5, 6 years and are cancelled. Creating an exploitative product in order to milk a customer base. Huge budgets, huge marketing, appeals to lowest common denominators (often sexual). There are good companies but the business models are so insane that you can be around for 10 years and fold tomorrow.