This post describes how to deploy a BIND 9 DNS server to Amazon AWS using bosh-init, a command-line BOSH tool that enables the deployment of VMs without requiring an additional VM (in the case of MicroBOSH) or several VMs (in the case of BOSH). [1]

This blog post is the second of a series; it picks up where the previous one, How to Create a BOSH Release of a DNS Server, left off. Previously we described how to create a BOSH release (i.e. a BOSH software package) of the BIND 9 DNS server and deploy the server to VirtualBox via BOSH Lite.

3. Deploy

Let’s deploy (if you see Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension while installing, you may need to run xcode-select --install and accept the license in order to install the necessary header files):

4. Test

We test our newly deployed DNS server to ensure it refuses to resolve queries for which it is not authoritative (e.g. google.com) but answers queries for which it is authoritative (nono.com). We assume that no change has been made to the manifest’s jobs/properties/config_file section; i.e. we assume the deployed server is a slave server for the zone nono.com.

BOSH does not install ‘packages’ (e.g. .deb, .rpm); instead, one must build a custom BOSH release or take advantage of community-built releases.

Appendix A. The Importance of Disallowing Recursion

We disable recursive queries on our DNS servers that have been deployed to the Internet because recursion allows our server to be used in a DNS Amplification Attack. DNS Amplification Attacks are doubly damning in that we also pay the attack’s bandwidth charges (pay in the literal sense: Amazon bills us for the outbound traffic).

The good news is that since version 9.4 BIND has defaulted to non-recursive (our BOSH release’s version is 9.10). If you truly need to allow recursion, add the following stanza to the deployment manifest’s jobs → properties → config_file stanza; it will configure the BIND server to be an Open Resolver (a DNS server that allows recursive queries is known as an Open Resolver). Don’t do this unless your server is behind a firewall:

options {
    recursion yes;
    // DO NOT put the following line on an Internet-accessible DNS server
    allow-recursion { any; };
};
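If you must enable recursion, a safer middle ground is to restrict it to an ACL rather than `any`. A sketch (the address ranges below are the RFC 1918 private networks, standing in for whatever your internal networks actually are):

```
// Allow recursion only from internal addresses, so Internet hosts
// cannot use this server as an open resolver
acl "internal" { 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };
options {
    recursion yes;
    allow-recursion { "internal"; };
};
```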

An easy way to test if your server is an Open Resolver is to run the following dig command (substitute 52.6.149.97 with the address of your deployed DNS server):

dig freeinfosys.com ANY @52.6.149.97
...
;; MSG SIZE rcvd: 33

The “MSG SIZE rcvd: 33” means that recursion was denied (i.e. our server is properly configured). If instead you see “MSG SIZE rcvd: 3185”, then you need to edit your deployment’s manifest and re-deploy.

Probed within 3 Hours, Exploited within 3 Days

Our server was probed within 3 hours of deployment (logs from /var/log/daemon.log):

Acknowledgements

Dmitriy Kalinin’s assistance was invaluable when creating the sample manifest.

Footnotes

1 We use bosh-init rather than MicroBOSH or BOSH primarily for financial reasons: with bosh-init, we need only spin up the DNS server VM (a t2.micro instance, $114/year [2]). Using MicroBOSH requires us to spin up an additional VM (an m3.medium instance, $614/year), ballooning our costs 538%. Full BOSH, which requires several VMs, would increase our costs even more.

2 Amazon EC2 prices are current as of the writing of this document. A t2.micro instance costs $0.013 per hour. Assuming 365.2425 24-hour days/year, this works out to $113.96/year. An m3.medium instance costs $0.070 per hour, $613.61/year. Our calculations do not take into account Spot Instances or Reserved Instances.

Admittedly there are mechanisms to reduce the cost of the MicroBOSH (or BOSH) VM(s): for example, we could suspend the MicroBOSH instance after it has deployed the DNS server.

Whatever you call it—feature testing, acceptance testing, end-to-end testing—top-level behavioral test coverage is absolutely necessary for any web application. We’ve noticed that there aren’t that many resources on the web to help you get started with this, so we’ve extracted some notes from a recent application.

Note that in this case we’re talking about a conventional web application, and we aren’t using Angular. If you’re using AngularJS, you should probably look into Protractor. If you’re only exposing an API, you should probably look into tools like request or supertest.

Many ‘Getting Started’ guides will tell you to immediately call `app.listen()` at this point.
However, for testing purposes, it can be helpful to extract that responsibility to a separate file called engine.js:

To start this application easily when we aren’t in test mode, we’ll need an index.js to start our engine for us:

// ./index.js
require('./engine').start();

Note that in engine.js we made the port an environment variable. This allows us to boot the application on a different port for testing without conflicting with a possibly-running development instance. We can make using this environment variable trivial for development and test by updating our “scripts” block in package.json:
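The “scripts” block might look something like this (a sketch; the test port and runner command are assumptions):

```json
{
  "scripts": {
    "start": "node index.js",
    "test": "PORT=3001 jasmine"
  }
}
```

With this in place, `npm start` boots the app on the default port while `npm test` runs the suite against its own port.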

Before we can start writing tests, we’ll need a couple of helpers to configure Zombie and start the application in test mode. By default, Jasmine will execute any files in ./spec/helpers before your test run begins.

So, you’ve been programming for a while now, but you’ve been using one stack. Let’s say you’ve been building web applications with Java, Javascript and HTML. New languages or frameworks seem like fun, but you are worried that you won’t be able to convince your manager that it will be worthwhile to try something new.

How to quickly scratch that itch

Find someone who does know those languages and frameworks. Have them teach you! At Pivotal, we do pair programming nearly 100% of the time. I’ve found that pairing is a great way to ramp up on something new. The time gap between when I have a question and when I can get it answered is far smaller than when using Google and Stack Overflow. I’ve had a lot of fun learning new languages and frameworks this way. I’ve picked up Java and Spring, Node.js, Django and Python, Angular.js, and the list goes on. Each time, I’ve been sitting next to someone who can guide me quickly through the learning process. My pair can often use my existing knowledge as a bridge into the new situation.

Don’t be afraid. Go seek out new knowledge, and learn it fast. Ask a bunch of questions. Spread your new knowledge to others. You’ll be able to bring some of the things you learn back to where you started and improve there too.

Even better, help someone else through the learning process via pairing. You’ll learn a bunch.

At this year’s RailsConf I am going to be teaching the workshop: Get started with Component-Based Rails applications! It is a 90 minute session that gets you from 0 to 10 components in 90 minutes. The session is in the Labs track and will be held on day 1, Tuesday, April 21 at 3:50pm. If you are attending RailsConf, you can sign up for the workshop here.

This short post outlines the steps to get your machine set up for the topics we will be covering in the workshop.

Workshop Preparation

https://github.com/shageman/sportsball contains the sample application that we will be working with. The short of it is that if you can check out and build this app successfully, you are all set for the workshop on Tuesday.

You will need to install Ruby 2.2.2 and bundler to perform the above steps successfully.

#cbra Book

I am in the process of writing a book on Component-Based Rails applications. The book is in progress; nonetheless, I have started to publish it at leanpub.com/cbra. It is not necessary to have read the existing parts of the book for the workshop. However, the book will be a great way to recap details after the session.

To make the decision easier, I will be giving out a 50% off coupon to participants of the workshop!

“If I was forced to only use the product on my iPad, I would want to put a bullet in my head.”

The user interview was conclusive. The feedback couldn’t be any clearer, and we had heard this from more than one user that day – iPad-only was not the way to go. Problem was, we were already three days into developing the product for iOS. We needed to change course – but at what cost?

Conclusively arriving at the wrong decision

The second Wednesday of the engagement we brought together 7 stakeholders, laid out the facts as we saw them, and near-unanimously agreed to ultimately go down the wrong track. The Wednesday after that we had the aforementioned user interview. How were we so misguided about our decision only a week before?

In retrospect we were making the classic mistake of building for a persona that doesn’t exist. Our key deciding factor was “mobility,” the idea that a user would want to use the product anywhere, regardless of the cell phone service. We envisioned giving users iPads (we had a captive user group, so iOS vs. Android and iPad market penetration were not concerns) and these users would use the application in the back of taxicabs, on the subway between client meetings, in abandoned industrial lots with no electricity, you name it.

At the point of our decision, we had only conducted one user interview, and for whatever reason had neglected to ask anything that would address our platform concern. The “on the go” user belief still stood tall. We knew we lacked the necessary information, and we knew this was a big concern, so why decide to decide?

Design was split between web and iPad. Without putting a stake in the ground, we couldn’t progress in our designs. And while having arbitrary deadlines is sometimes helpful to keep the team on track, oftentimes it can lead to rash decisions like the one we made.

But, with 7 people in a room all agreeing together, it seemed like we had made a defensible choice.

The truth dawns on us

At Pivotal Labs, we try not to work past 6 pm. We believe in sustainability, that willpower is a finite, depleting resource (http://en.wikipedia.org/wiki/Ego_depletion). For developers, coding late into the night makes for a higher error rate and less robust code that needs rewriting in the morning. For product managers and designers, making decisions after 6pm can lead to emotionally-charged, desperate conversations about the state of the project.

Turns out adults and toddlers aren’t so different. We both become better people after naptime.

The Tuesday after our decision, we had one of those desperate early-evening conversations about the iPad decision. The client team comprised 3 people, only one of whom was in the initial platform conversation. That one client kept repeating to the others “if you were only in the room, you would be on our side,” but as the talk continued, that line became less of a declarative statement and more of a doubtful platitude. Nobody was saying iOS-only was wrong, but everybody saw pitfalls in a more limited platform than the web.

The next morning, bright-eyed and with two user interviews scheduled for that day, we decided to mitigate our fears by attacking the platform question head on.

Our fears were confirmed instead.

Donald Rumsfeld has a much-maligned quote that sums up, surprisingly succinctly, the types of knowledge you get out of user interviews and need when building a product:

“…There are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”

The user interviews that day had illuminated an unknown unknown, that our users were not necessarily individuals, they were firms. And firms, employing multiple people, could not run their entire businesses using an iPad – the medium was far too limiting for small businesses.

Our “on the go” user was an illusion; the “work at your desk alongside coworkers” user was reality.

It… wasn’t that bad of a change

We gathered the entire client team in a room (lesson: always have all decision-makers present for less-reversible decisions), laid the facts out as we now saw them, and decided to pivot. But what of all our iOS-focused research and wireframes?

Turns out the impact was actually pretty minimal. Like good Agile practitioners, we plan our development backlog in user stories, which are small, concrete tasks that users want to accomplish. User stories are naturally user-focused, describing functionality that should be available to the user, and specifically refrain from addressing implementation details.

So with great trepidation we laid out our prioritized set of user stories (on index cards) to assess the damage, and… we had to add a login page. That was it. Changing the platform, one of the biggest technical decisions we had to make, had practically no effect on our product backlog.

By using a business value focused backlog tool such as user stories, we were able to separate product decisions from technical decisions almost entirely. We could have been using a TI-83 calculator as a platform and it wouldn’t matter – because the business logic did not impose on the technical decisions, we were able to pivot immediately and start work on a web version without trashing any of the user research and product work done to date.

What being Agile really means (minimizing future opportunity cost is the mantra)

“The biggest source of waste in a startup is building something that nobody wants.” – Eric Ries

Sure, we lost a few days of development work, but from an Agile mindset that’s just the cost of reducing product risk.

Where companies struggle with Agile methodologies most is in recognizing and truly internalizing the concepts of opportunity costs and sunk costs. Humans hate recognizing small losses (http://en.wikipedia.org/wiki/Prospect_theory); this bias contributes to gamblers playing “just a few more hands” so they don’t end up in the red, to people investing in steady low-risk long-term government bonds over more lucrative long-term stocks, and to continued usage of less-than-optimal software practices.

Turns out that people actually prefer one gigantic loss over many small losses, even if the small losses don’t come close to adding up to the one large one. We’re all guilty of not purchasing that useful app for a dollar while shelling out way too much for rent (here in NYC at least); every day people purchase thousand-dollar extras with their new car, then take the long route home to avoid a $3 toll.

Traditional organizations bury their heads in the sand and refuse to acknowledge that something might go wrong with product development until problems inevitably surface at the end of the process, culminating in a gigantic loss they must recover from. Agile/Lean organizations recognize that things will inevitably go wrong and assumptions will ultimately change, and so they actually seek out small losses that can be addressed as soon as they appear. Traditional software building only recognizes losses bundled together; Agile/Lean methodologies are specifically built to counteract our natural biases and tackle the small losses.

At the end of the second decision meeting, nobody was upset. In fact, we were elated – we had just successfully avoided a problem that would have sunk the business. The consensus was that losing some development time was a bummer, but not even close to the bummer that would exist had we released an iPad app that everybody hated.

It’s easy to lie to yourself, to tell yourself that you know what people want and that all decisions made are the right ones. It takes true courage to admit that your assumptions are just that – assumptions – and that at any moment your view of the customer and product can (and should) be challenged, reassessed, and updated by the latest evidence. It’s not weakness to admit you know nothing; Silicon Valley is littered with failed companies whose founders “knew” the right path to take and never deviated from their beliefs.

I’m proud of the team for being able to turn on a dime and move in a different direction. Lean methodologies gave us the tools to recognize our own mistakes, while Agile processes ensured that pivoting was as painless as it could be. Just goes to show that with the right mindset, any mistake – no matter how fundamental – can be rectified, as long as you’re willing to tell yourself the truth.

BOSH is a tool that (among other things) deploys VMs. In this blog post we cover how to create a BOSH release for a DNS server, customize the release with a manifest, and deploy the customized release to a VirtualBox VM.

BOSH is frequently used to deploy applications, but rarely to deploy infrastructure services (e.g. NTP, DHCP, LDAP). When our local IT staff queried us about using BOSH to deploy services, we felt it would be both instructive and helpful to map out the procedure using DNS as an example.

Note: if you’re interested in using BOSH to deploy a BIND 9 server (i.e. you are not interested in learning how to create a BOSH release), you should not follow these steps. Instead, you should follow the instructions on our BOSH DNS Server Release repository’s page.

We acknowledge that creating a BOSH release is a non-trivial task and there are tools available to make it simpler, tools such as bosh-gen. Although we haven’t used bosh-gen, we have nothing but the highest respect for its author, and we encourage you to explore it.

0. Install BOSH Lite

BOSH runs in a special VM. We will install that VM using BOSH Lite, an easy-to-use tool for running BOSH under VirtualBox.

BOSH expects us to place source files within the BOSH package; however, we deviate from that model: we don’t place the source files within our release; instead, we configure our package to download the source from the ISC. But we need at least one source file to placate BOSH, hence the placeholder file.

Configure a blobstore

We skip this section because we’re not using the blobstore—we’re downloading the source and building from it.

Create Job Properties

We edit jobs/named/templates/named.conf.erb. This will be used to create named’s configuration file, named.conf. Note that we don’t populate this template; instead, we tell BOSH to populate it from the config_file section of the properties section of the deployment manifest:

<%= p('config_file') %>

We edit the spec file jobs/named/spec. Note that properties → config_file from the deployment manifest is used to create the contents of named.conf:
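A job spec that wires this together might look roughly like the following (a sketch: only the config_file property name comes from this post; the template target, package name, and description are assumptions):

```yaml
# jobs/named/spec (sketch)
---
name: named
templates:
  named.conf.erb: etc/named.conf
packages:
- bind-9
properties:
  config_file:
    description: The entire contents of named.conf
```

The templates mapping is what places the rendered file at /var/vcap/jobs/…/etc/named.conf on the deployed VM.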

We have successfully created a BOSH release including one job. We have also successfully created a deployment manifest customizing the release, and deployed the release using our manifest. Finally we tested that our deployment succeeded.

Addendum: BOSH directory differs from BIND’s

The BOSH directory structure differs from BIND’s, and systems administrators may find the BOSH structure unfamiliar.

Here are some examples:

| File type | BOSH location | Ubuntu 13.04 location |
|---|---|---|
| executable | /var/vcap/packages/bind-9-9.10.2/sbin/named | /usr/sbin/named |
| configuration | /var/vcap/jobs/bind/etc/named.conf | /etc/bind/named.conf |
| pid | /var/vcap/data/sys/run/named/named.pid | /var/run/named/named.pid |
| logs | /var/log/daemon.log | (same) |

That is not to say the BOSH layout does not have its advantages. For example, the BOSH layout allows multiple instances (jobs) of the same package, each with its own configuration.

That advantage, however, is lost on BIND: running multiple instances of BIND was not a primary consideration—only one program can bind [4] to DNS’s assigned port 53 [5], making it difficult to run more than one BIND job on a given VM.

Footnotes

1 We chose BIND 9 and not BIND 10 (nor the open-source variant Bundy) because BIND 10 had been orphaned by the ISC (read about it here).

There are alternatives to the BIND 9 DNS server. One of my peers, Michael Sierchio, is a strong proponent of djbdns, which was written with a focus on security.

2 Although it is convenient to think of BIND and named as synonyms, they are different, though the differences are subtle.

For example, the software is named BIND, so when creating our BOSH release, we use the term BIND (e.g. bind-9 is the name of the BOSH release).

The daemon that runs is named named. We use the term named where we deem appropriate (e.g. named is the name of the BOSH job). Also, many of the job-related directories and files are named named (a systems administrator would expect the configuration file to be named named.conf, not bind.conf, for that’s what it’s named in RedHat, FreeBSD, Ubuntu, et al.)

Even polished distributions struggle with the BIND vs. named dichotomy, and the result is evident in the placement of configuration files. For example, the default location for named.conf in Ubuntu is /etc/bind/named.conf but in FreeBSD is /etc/namedb/named.conf (it’s even more complicated in that FreeBSD’s directory /etc/namedb is actually a symbolic link to /var/named/etc/namedb, for FreeBSD prefers to run named in a chroot environment whose root is /var/named. This symbolic link has the advantage that named‘s configuration file has the same location both from within the chroot and without).

3 The number “9” in BIND 9 appears to be a version number, but it isn’t: BIND 9 is a distinct codebase from BIND 4, BIND 8, and BIND 10. It’s different software.

This is an important distinction because version numbers, by convention, are not used in BOSH release names. For example, the version number of BIND 9 that we are downloading is 9.10.2, but we don’t name our release bind-9-9.10.2-release; instead we name it bind-9-release.

4 We refer to the UNIX system call bind (e.g. “binding to port 53”) and not the DNS nameserver BIND.

5 One could argue that a multi-homed host could bind [4] different instances of BIND to distinct IP addresses. It’s technically feasible though not common practice. And multi-homing is infrequently used in BOSH.

In an interesting side note, the aforementioned nameserver djbdns makes use of multi-homed hosts, for it runs several instances of its nameservers to accommodate different purposes, e.g. one server (dnscache) to handle general DNS queries, another server (tinydns) to handle authoritative queries, another server (axfrdns) to handle zone transfers.

One might be tempted to think that djbdns would be a better fit to BOSH’s structure than BIND, but one would be mistaken: djbdns makes very specific decisions about the placement of its files and the manner in which the nameserver is started and stopped, decisions which don’t quite dovetail with BOSH’s decisions (e.g. BOSH uses monit to supervise processes; djbdns assumes the use of daemontools).

The buzz around the microservice way of architecting software systems is taking hold across the Internet. Many people are trying to figure out what this means for their deployment strategy and their DevOps folks. Cloud Foundry provides great support for deploying these types of applications, and this will be the first in a series of posts that show how to effectively deploy a microservices architecture to Cloud Foundry. We will start with a single service and build up to a web of multiple services that talk to each other.

This series of posts is not going to cover why you would (or wouldn’t) want to use this method of software construction in your next, or current, project. There are plenty of good posts across the web about the pros and cons of a microservice architecture. These posts will assume that you’ve decided to use microservices and are planning to deploy them to a Cloud Foundry installation, either Pivotal Web Services (PWS) or a private installation of Pivotal Cloud Foundry (PCF). To simplify local development, we will be using Lattice as a stand-in for a full installation of Cloud Foundry. We will also assume some level of familiarity with Java and Spring Boot.

This first installment will show you how to deploy a Spring Boot-based Java 8 microservice to Lattice, using Docker as the packaging mechanism. You could very easily substitute a Go- or Ruby-based microservice packaged in a Docker container into this flow. That’s one of the benefits of using a microservice architecture.

Before we start, please follow these instructions for getting Lattice set up on your laptop. We will wait here for you to return so there is no rush.

Now that we have that out of the way, let’s get started. The first thing we will need is a running Spring Boot application. You can get the sample code locally by cloning this repository to your computer and following along as we explore each file in the repository.

The README.md file contains some information about building the Docker images. Take some time to read through it and try to build the Docker images by running

./gradlew build buildDocker

in a terminal window.

Now that you have built the Docker container, it is time to deploy it to your locally running Lattice installation. Check out the bin/deploy-to-lattice.sh file for all of the details. The script first removes any currently running Docker container for this service and then creates an application with the newly minted Docker container from above. (Yes, this may cause some downtime for the service; we will address that concern in a future blog post.) Yes, it is that simple.

Yep, it was that easy. For more details on how the Docker container gets built, check out the build.gradle file. There you will see that it uses the gradle-docker plugin to help simplify the creation. There is a little magic at the end of the buildDocker task that takes the jar file created by the build task and makes it available to be added to the Docker container.
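That “little magic” typically looks something like the following with the gradle-docker plugin (a sketch based on that plugin’s conventions, not the repository’s actual file; the plugin version and Dockerfile path are assumptions):

```groovy
// build.gradle excerpt (sketch): package the Spring Boot jar into a Docker image
buildscript {
    repositories { mavenCentral() }
    dependencies {
        classpath 'se.transmode.gradle:gradle-docker:1.2'
    }
}

apply plugin: 'docker'

task buildDocker(type: Docker, dependsOn: build) {
    push = false
    applicationName = jar.baseName
    dockerfile = file('src/main/docker/Dockerfile')
    doFirst {
        // the "magic": copy the jar produced by the build task into the
        // Docker staging directory so the Dockerfile can ADD it
        copy {
            from jar
            into stageDir
        }
    }
}
```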

The rest of the files live in the src directory and are the actual code for the microservice. Take a look around and if you have any questions, post them here and I will address them in a future post.

Thanks for reading and stay tuned as we deploy multiple services to our Lattice environment and tie them together using a service discovery service for easy access.

Lean Startup fails when it becomes routine or ritualized. This will be a frank discussion of where we most often go horribly wrong in the dogma of the MVP.

We’ll also go over a simple way of organizing your toolbox of lean startup and user research techniques to quickly get at the heart of the issue and design a useful test of our theories. We’ll go over techniques like Wizard of Oz, Pocket Test, & Picnic in the Graveyard.

About Tristan Kromer

Tristan helps product teams go fast.

As a lean startup coach, Tristan works with product teams on three continents to apply lean startup principles to teams and innovation ecosystems. He has worked with companies ranging from early stage startups with zero revenue to enterprise companies with >$1B USD revenue (LinkedIn, Fujitsu, Swisscom, Pitney Bowes).

With his remaining hours, Tristan volunteers his time with Lean Startup Circle, a non-profit grassroots organization helping to develop innovation ecosystems with meetups in over 80 cities around the world. He blogs at GrasshopperHerder.com

Interestings

ActiveRecord doesn't query with Sets

We encountered a strange bug when we were eager loading some user-specific data and weren't seeing the optimizations actually being used. It turns out that ActiveRecord doesn't know that Set is enumerable, and the query to the database ended up being users.id = NULL instead of users.id IN (…)

As a tech consultant, I help companies optimize how they build software. I talk to a lot of people about implementing new practices and processes that will help them put user value at the center of the development practice. But with many of the companies I work with, full assimilation of these concepts is a distant goal. Until yesterday, I’d never seen an organization learn about a process and start implementing it full-force the next day. That, my friends, is catching sight of a unicorn in the wild.

They had done a lot of work towards their research plan. They had recruited 60 people for a group of super-user participants. They had two focus groups scheduled in their office. They were preparing to conduct some pretty huge global surveys. They wanted the feedback that they got to drive their next stage of development and they came to me to figure out some ways to help them do that. How could they best decide which usability issues to work on and where they should focus enhancements?

I took a deep breath and broke the news that if I would recommend anything, it would be to cancel the focus groups and redo their whole research strategy.

Instead of getting mad, OpenSignal got curious. Here are some of the problems we talked about.

Problem 1: Focus Groups are Dumb

Focus groups are a highly risky way of gathering information. If people are shy, they won’t speak up. If they feel like others know more, they won’t say that they don’t understand. One aggressive person can take over the session and run it off the rails.

Essentially, people pollute people.

Additionally, a focus group format would not allow OpenSignal to watch their users play with the app, only hear about how people felt about it after the fact. Getting late feedback means that they were going to miss out on 99% of the valuable information that they would have caught if they had seen the app used for the first time in front of their eyeballs.

Solution: 1-on-1’s

Scrap the focus group and reschedule each participant for an individual user interview. This will allow OpenSignal to go in-depth with each person, digging into their personal experiences and challenges, and hearing how they react to the tool as they use it.

The transcripts of these interviews will contain nuggets of pure gold and uncut diamonds. The added bonus is that user interviews have a relatively small overhead. Generally, five 1-hour interviews are enough to help surface patterns in user behavior and usability issues strongly enough to help set priorities. This leads us to the second major problem that OpenSignal had…

Problem 2: No synthesis means you’re losing stuff

OpenSignal had no plan for how to use the feedback and incorporate it into their development pipeline. They fed bugs directly to their development team, but no one was really prioritizing them. Suggestions, usability complaints, and opportunities were tackled on an ad hoc, scattershot basis, and the work wasn’t well connected to overall feature goals or metrics.

When you don’t take the time to synthesize and plan, all of the value of user research is lost.

Solution: Download and find the themes

OpenSignal needed a few baseline synthesis activities to help them shake the nuggets of wisdom out of their interviews, as well as their general feedback pipeline. A really simple one: the whole team goes through the raw feedback, writes one take-away per sticky note, talks them through, and then buckets them by similarity. Then you pull themes, challenges, and opportunities out of the buckets. These can then be mapped to specific business goals.

The main point of having the synthesis session is to clearly identify the patterns that emerged from your conversation and map them to your overall business goals. This helps the team understand where user value maps to goals and also sets priorities. BUT in order to achieve this you need your whole team to participate, which leads us to their final problem.

Problem 3: Not Enough Org Participation

The three OpenSignal ladies who joined me for Product Office Hours were collaborating on the user research stuff, but were finding it difficult to get other parts of the organization consistently involved. They hadn’t found a way to involve the developers, and so in turn devs didn’t see user feedback as part of their own workflow. This has made it doubly difficult for them to incorporate the feedback organically into the development backlog.

It can be difficult for small start-ups when there is so much going on, but I tend to say:

Not making the time to participate in user research is like ignoring a chainsaw as you try to cut down a tree with a butter-knife.

Solution: Make it really available

I encouraged these ladies to start having conversations with OpenSignal leadership about user interviews and ask them to try to listen to at least 1–2 of the user interview sessions. With OpenSignal, an invite is probably all that is needed, but if there is resistance, sometimes a knuckle cracking is in order to get higher-ups to tune in. At a small organization like OpenSignal, making sure that leadership is as close to direct user feedback as possible is critical to keeping priorities straight.

With developers, I take a really soft approach. I book space for them to listen to the interviews. I invite them all to both the interviews and the synthesis, and then announce to the whole team whenever the interviews are happening: the day before, the morning of, and five minutes before the interview starts. This helps get it in front of them. The same goes for synthesis. Leave the door wide open, and bang a drum that it’s going on. More often than not, curiosity will get the better of them.

The Full Monty

The next afternoon I went down to the OpenSignal office and was floored by how quickly they had implemented some of my suggestions.

They had cancelled the focus groups and scheduled several user interviews.

They had taken all of the existing customer feedback that was in their ZenDesk and put it through a synthesis process on a wall in the middle of the office.

They spoke about the new user research process with their CEO and invited him and other leadership folks to participate in the upcoming user interviews.

Finally, they sat me down and conducted their first user interview with me as their guinea pig, with an amazing script that was copied from a script template. They nailed the exercise and delivered it like true pros. It was this process that offered the ah-ha moment for them as to why this stuff is the absolute Poo.

“So much of what you said is what I’ve been saying, but can’t get anyone to take action on.”

“We had so many conversations around whether or not people knew what to do there, but couldn’t decide how to proceed.”

This is what this type of direct user research helps organizations do: understand what’s important vs. what can be put off.

There is very little ambiguity when you’re sitting next to a user and they never notice that a button is clickable, or that there is another page of information they can access.

A few days later I got an update from Ellie saying that they had conducted several interviews, videoed them, and shared the live feed with the whole organization, and a ton of people listened. They also did a synthesis exercise and presented the results to the whole organization. In the words of Ellie herself: “It was fascinating seeing the users go through our app, and at points frustrating because we were thinking ‘why can’t you see the button? It’s right there!’” I have never been so impressed by a company’s commitment to putting their customer at the center of their process. OpenSignal, kudos for just doing it. You guys rock.

You don’t have to be a genius to build a successful product! You do, however, need a solid feedback loop. At Labs, they don’t presume to know all the answers, but they do know how to ask questions and how to turn findings into action.