Hiring developers is hard

What’s the secret to hiring a good developer? What are the best interviewing techniques? Before you start rating your recruitment prowess, take a look at how hiring can work outside the interview room.

Hiring criteria for a good developer are never a cut-and-paste affair. Depending on the technology, language, experience and work environment you have, the requirements for a new hire can vary drastically from company to company.

While many companies claim to understand this variability, they still apply the same techniques when trying to hire someone. And according to Thomas Ptacek, Principal at Starfighter, those techniques are wrong.

Ptacek is very clear about his views on the subject and claims that “the software developer job interview doesn’t work. Companies should stop relying on them”. Ivan Kirigin from YesGraph shares this sentiment, saying that traditional coding interviews “don’t measure what you hope they do”.

Despite the plethora of Google search results showcasing the ‘Top 5 essential developer/Java/C++/company-you-wanna-work-for questions’, more and more hiring practices are evolving to measure something other than performing well in an interview. Let’s explore their reasoning.

Interviews suck

Ptacek is unwavering in his opinion about software developer job interviews:

Years from now, we’ll look back at the 2015 developer interview as an anachronism, akin to hiring an orchestra cellist with a personality test and a quiz about music theory rather than a blind audition.

The interview-based hiring process doesn’t weed out the people who can speak expertly about programming but can’t effectively code. “The majority of people who can code can’t do it well in an interview”. In other words, the process fails to properly assess candidates, and so fails to find the right people for the job.

Kirigin understands this dilemma, saying that “your assessment better represent the real work of a job well, or else you will filter out people that might thrive and hire people who are a poor fit”. So does that mean devising better questions?

No freaking way, says Ptacek. There is no ‘perfect’ interview question because, to him, interviews are “incredibly hostile experiences for candidates”. They don’t work because people aren’t hired for their ability to perform under unnatural stress. For Kirigin, coding questions in particular suffer from much the same problem:

The issue here is confidence bias. While it helps to reveal candidates who are good at being interviewed, it doesn’t take into account genuinely competent and effective developers who physically shake at the prospect of being interviewed.

Take Andrew C. Oliver’s article on three make-or-break interview questions for developers. His rundown of the ‘usual’ process includes an HR interview and the “typical” three-pass process of a technical phone chat, a face-to-face interview, and the submission of code samples or a sample project. That this process is considered the standard for most companies is, for the likes of Ptacek and Kirigin, a failure.

Collecting data and machine learning

In order to find the right candidate, both Ptacek and Kirigin say that collecting data to make an informed decision is the best possible path to follow. At YesGraph, they’re building a take-home problem focused on a real machine learning challenge, while Ptacek is part of a team creating Capture The Flag games (CTFs) that you play by programming.

Producing realistic problems to solve can tell a company a lot more about a candidate than their resume can. Kirigin admits that while their method of a take-home problem isn’t perfect, it’s still better than the alternative:

Candidates will produce significant code that actually looks like the code someone would write in production. It tests high level knowledge of learning. If you have no experience, you probably won’t know where to even start. But it also allows for differentiation and personality, so that great candidates can shine… It also allows people to work in their familiar development environment.

At Ptacek’s previous company Matasano, work-sample tests were the most important factor in hiring decisions. By collecting the right data, and as much of it as they could, they effectively multiplied the size of their team and retained every single person they hired. They also minimised interviewer bias as much as possible, ran standardised meetings that followed a script, and adhered to a scoring rubric that assessed every candidate the same way every time.
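Matasano’s actual rubric isn’t described here, but the idea of scoring every candidate against the same fixed criteria can be sketched as follows. The criteria names, weights and 0–5 scale below are purely illustrative assumptions:

```python
# Hypothetical scoring rubric sketch. The criteria, weights and scale
# are illustrative assumptions, not Matasano's real rubric.

RUBRIC = {
    "work_sample_correctness": 3,  # criterion -> weight
    "code_clarity": 2,
    "communication": 1,
}

def score(ratings):
    """Weighted total from per-criterion ratings (each 0-5).

    Every candidate is rated on the same criteria with the same
    weights, so scores are comparable across interviews."""
    missing = set(RUBRIC) - set(ratings)
    if missing:
        raise ValueError(f"unrated criteria: {sorted(missing)}")
    return sum(weight * ratings[c] for c, weight in RUBRIC.items())

print(score({"work_sample_correctness": 5,
             "code_clarity": 4,
             "communication": 3}))  # 3*5 + 2*4 + 1*3 = 26
```

The point of the fixed structure is that an interviewer can’t quietly substitute “confidence” or “passion” for the agreed criteria: an incomplete rating sheet is rejected rather than averaged over.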

Ptacek asks us to look at the following questions when assessing the hiring process:

Is it consistent? Does every candidate get the same interview?

Does it correct for hostility and pressure? Does it factor in “confidence” and “passion”? If so, are you sure you want to do that?

Does it generate data beyond hire/no-hire? How are you collecting and comparing it? Are you learning both from your successes and failures?

Does it make candidates demonstrate the work you do? Imagine your candidate is a “natural” who has never done the work, but has a preternatural aptitude for it. Could you hire that person, or would you miss them because they have the wrong kind of GitHub profile?

Are candidates prepped fully? Does the onus for showing the candidate in their best light fall on you, or the candidate?

The onus for showing the candidate in their best light is an interesting take on the hiring framework. To get the most out of someone, the potential employer would be expected to make them as comfortable as possible so that they perform at their peak. This again comes back to the interview phase and the hostile atmosphere it can generate.

The ‘usual’ method of hiring à la Oliver’s example needs a facelift if you want to score the right talent. Ptacek calls this traditional method a moral problem and a market failure. Taking his advice might just allow companies to profit from an overhaul of the system.