Sunday, December 18, 2016

Sam Clemens is co-founder and Chief Product Officer of InsightSquared, a Boston-based startup. Sam has been a frequent guest instructor in the Product Management 101 elective at Harvard Business School. My students and I have learned a lot about product leadership over the years from Sam. His insights on product management processes are nicely captured in this interview with Mike Fishbein for the podcast, "This is Product Management." Mike and Sam have graciously allowed me to republish this lightly edited transcript of their original interview.

* * *

“This is Product Management” host Mike Fishbein: On this episode, I’ll be speaking with
Samuel Clemens, co-founder and chief product officer at InsightSquared. A few weeks
ago we had Drift founder/CEO David Cancel on the show to talk about customer-driven
product teams and implementing a fluid product management environment. David previously
was the head of product at HubSpot, but before him, Samuel Clemens was HubSpot’s
product lead. The two share many beliefs, but have different views regarding
the purpose and implementation of process in product management.

Clemens, 1:19: My
topic is “Active Process is Good Product Management.” I’m picking something
potentially controversial here, since I know a lot of your previous guests have
been advocates against heavy process in product management. And truth be told,
I’m a process-light guy. Most of the time, I believe that process should be
introduced only after a bunch of smart, driven people have figured out your first-ever
iterations. Indeed, during those early iterations you should stay out of their way and
avoid imposing any kind of process. So the question becomes: What do you do after
those early iterations?

If you think back over the years of building things at your
company, how many customer visits have you done? How many mock-ups? How many
times have you pushed out a release? How many times have you triaged a bug? The
numbers are probably in the hundreds, if not thousands.

If you do something that many times, you will develop a process. The question is
whether you’re an active or passive participant in developing that process. There
are some advantages to passive — reasons you’d want to let process develop
organically. But if you choose to be an active participant, you get the
opportunity to reinforce attributes that you’re looking for.

2:35, Host: How did you
get into tech in the first place and what has shaped your perspectives?

I’m the child of two photographers. One was an artistic,
creative photographer; the other was very mathematical and logical. I think
having two ways of thinking about a problem—creative and logical; ideation and
testing—is useful to any product manager. Cycling back and forth is often key
to problem solving.

As an undergrad, I was an applied math major, training to
become what we now call a data scientist. This is really just a fancy label for
‘here’s a toolbox for problem-solving.’ I chose to apply the tools to product
management.

After college, I got into management consulting, but found
it unsatisfying. I’m a builder. So right away, I went into the first of five
start-ups — a freelance marketplace for services, which eventually merged with
oDesk. My second startup was a word-of-mouth media firm that was acquired by a
British retailer. The third was a 3D modeling company called Models for Mars. At
my fourth startup, I ran product at HubSpot for a couple of years, and then
left to co-found a company called InsightSquared with two very good friends. InsightSquared
is now five-and-a-half years old.

We’re a B2B software company that does sales analytics and
business intelligence for the non-Fortune 500. We are middle-growth stage with
about 160 people. On the R&D side, including engineering, product and design,
we probably have 40 or 45 employees. Besides leading product at InsightSquared,
I’m an entrepreneur-in-residence at Harvard Business School, and I do frequent
speaking engagements at HBS and MIT on product management, design, and product
marketing.

5:10, Host: What have
been the key lessons you learned about creating process?

I often speak with entrepreneurs and managers about how to
implement agile development and related iterative product management practices.
They don’t get hung up on the theory—they’re bought into that—but rather, they
struggle with how to implement the theory to get a smooth-running machine.

Over time I’ve identified seven actions that help teams
implement good product management processes.

The first action is at the core of good product management:
You have to know your customer very well and in particular, gain this knowledge
through on-premises customer visits.

I stress the on-premises part. You need to be at the
customer location to see the animal in its native habitat. If you’re on a
phone call with them showing a new mock-up via a screenshare, there are many
things that you’re not seeing. Most obviously, you’re not seeing their facial
expressions — whether they’re confused about something. If you’re careful, you may
be able to pick up hesitation in their vocal tone over the phone. You’re also
not seeing the white board in their office, covered with their brainstorming
notes, which might give you a sense for what really bothers them. You’re not
seeing the way that they relate to their peers. You’re missing all of that
context, and when it comes to really building an insightful product, that
context matters.

You can’t just do surveys. You can’t just do phone calls.
You have to get out of the building and visit customers. So, one of the
processes for my product team is mandatory, once-a-month, on-premises customer
visits for all of my PMs.

The second item on my list is tuning whatever processes you’ve chosen for your engineering teams.
Common choices include kanban and scrum, but whatever you select, the key is to
really tune the process so that iterations flow easily and the process becomes
an enabler — and not a burden for your teams.

A common question with scrum is cycle frequency: Should it
be seven days, two weeks, or four weeks? The answer matters. We ran one-week
cycles for the first year at InsightSquared. This eventually resulted in a
reactive approach and burnt-out PMs.

One-week cycles resulted in a rushed product development experience.
We often didn’t have enough time to plan out more complex features — the type
you might build in your second year. So that’s why, starting in year two and
continuing today, we’ve been running two-week cycles. They still yield a sense
of urgency, but with less burnout. Four-week cycles are an option, too, but I
find them better suited to a more mature company where projects are
longer and require more planning.

Other choices include how you set up estimation so that it’s
easy, not painful. How do you prioritize and triage bugs? This makes a big
difference to your engineering team and to your customer success team.

How do you demo software that you’ve built? We do a
once-a-month internal demo to the entire company. We make sure we describe the
business value of stuff that we’ve built. The audience includes our customer
success teams and our sales teams; they’ve taken an hour from their busy days,
so we had better show them something impressive.

To summarize, whatever process you choose will have many
levers. It’s important to study those levers and really tune them, so when
you’re doing two hundred reps of the process, it’s smooth and it's an enabler
rather than a burden for your teams.

10:10 – The third thing on my list is specifications. Frequently, entrepreneurs will ask me: How do you
spec out what you will build? I actually have a “kill on sight” order on specs
in my company. I won’t allow them. I think the most dangerous thing about a
spec is that someone might actually build it. The problem is that a spec
represents a point-in-time view of what someone once believed about the product
— maybe three, six, or nine months ago. Today, that view is almost certainly
out of date.

You’ve met a bunch of customers since then. You’ve had a
bunch of customer support cases. None of that is reflected in the spec. Also,
the product has progressed. New features have been built, introducing lots of
new interdependencies that the spec can’t possibly consider. And finally, no
matter how detailed the spec is, it can never anticipate all the questions that
might come up. I’ve seen such problems with specs over and over. My first two
startups used waterfall processes, with 14-page Word docs and all that.

A spec can give you an illusion of completeness. Rather
than harbor that illusion, I say, ‘Kill the spec.’ Replace it with a set of
conversations between the product manager and the engineers. When an engineer
starts a story, I prefer that they turn to the product manager, and they say,
“Hey, what are we doing with this feature?” The PM says “Ah, glad you asked. So
here’s why we’re doing this, here’s what we’re trying to do, here’s the
problem.” Now the engineer has context. Then the engineer says, “So, there are a
couple of ways we can do this – approach A or B,” and the PM says “Considering the
pros and cons of each, I don’t think the pros for approach B would be relevant
for our customer, so let’s go with A.”

The engineer comes back the next day, and says, “Hey, you’ll
never guess what happened – when the count for variable X is zero, the whole
thing breaks.” And the PM says, “Wow, I never thought of that. Of the ways you
say we could handle that edge case, here’s the one that will pose the fewest
problems for our customers.” That kind
of exchange would never appear in a traditional PRD spec, yet it’s critical. An
ongoing conversation between the PM and engineer is the most reliable way I
know of to build quality product.

12:40 – The fourth thing on my list is release frequency. How often do you release software into
production? Four or five years ago, the answer might have been once a month, or maybe
every two weeks at the end of a sprint. Essentially, this means you are in
batch mode.

Instead of batching, you can set up your engineering process
as a flow in which you’re releasing new
features and product improvements as they are ready, multiple times per day —
dozens or even hundreds of times per day. The benefits from shifting from batch
to flow are immense. Boosting your release frequency enables a more iterative
approach to development. You can develop a module, have it gated and hidden,
push it out quickly, get customer feedback on it, and go back and keep
iterating to develop the next module.

For the engineering team, there are other benefits. Your
most senior engineers don’t have to stay up until 2:00 am every two weeks,
trying to push out a mass of software. For your customer success team, not
pushing out a big chunk of code means that fewer bugs will crop up due to
interdependencies. Fixing such bugs usually requires roll-backs and other complicated
maneuvers. Instead, when each release involves a much smaller chunk of code, if
a bug happens, it’s easier to identify where that bug came from.
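The gated, incremental rollout described above is often implemented with a percentage-based feature gate. Here is a minimal sketch in Python; the function and feature names are hypothetical, not taken from InsightSquared's actual system:

```python
import hashlib

def is_feature_enabled(feature: str, customer_id: str, rollout_pct: int) -> bool:
    """Deterministically gate a feature to a percentage of customers.

    Hashing (feature, customer_id) assigns each customer a stable bucket
    in [0, 100), so the same customer always gets the same answer, and the
    rollout widens simply by raising rollout_pct.
    """
    digest = hashlib.sha256(f"{feature}:{customer_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100
    return bucket < rollout_pct

# A gated release to roughly one-tenth of a hypothetical customer base:
enabled = [c for c in (f"cust-{i}" for i in range(1000))
           if is_feature_enabled("new-dashboard", c, 10)]
```

Because the gate is deterministic, a customer who sees the hidden module today still sees it tomorrow, which keeps feedback from early users coherent while the rest of the base is unaffected.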

14:37 – The fifth item on my list is to have everyone on the product team coding, including both product
managers and designers. There are some steps that an engineering team needs to take
to enable PMs and their designers to code, so it takes some effort to do this.
I believe that this effort is worthwhile, because one of the highest impact
things you can change in a product is customer-facing copy. A typical process
for tweaking product copy might involve a product manager saying, “Every time I’m
with a customer, they find this page confusing. I believe that once they read the
headline, they have the wrong frame of mind for everything else on the screen.
I want to change that headline.”

They might go to an engineer on their team, and say, “Could
you change that headline?” The engineer will say, “Sure, no problem!” Then the
PM hits reload, and the new, longer headline is wrapping in a weird way. She
says, “Could you change it again, with this shorter headline?” The engineer
changes it, but now it’s not clear again, and so they go back and forth. After
a couple of rounds, the engineer feels like a typist, and the PM feels like an
ass for making the engineer feel that way. Next time, the PM chooses to avoid
that situation — to preserve team harmony, she’ll choose to live with an
inferior product.

The friction to go from good to great is just too high. You
can decrease that friction if you enable your PMs and your designers to code, giving
them direct control over the last layer of detail. You get a productivity gain,
a motivational gain, and a quality gain. For designers, this can be even more
impactful. When you have designers who can code instead of handing off mocks,
they are less likely to suggest UX that is not implementable, and more likely
to suggest UX that is informed by the kind of cool things that you can do in
code these days.

17:55 – The sixth thing on my list is testing. This overlaps with the question of how to run betas. I
view testing as a many-layered onion. When you move out from the onion’s
center, you get tests with increasing fidelity, but also with increasing cost. At
the onion’s center, you have the cheapest possible test: simply asking a nearby
engineering team member: “Hey, can you look at this screen and tell me what you
think?” The cost for that is a few minutes. The fidelity may be okay, but it is
potentially questionable if your colleague doesn't have a true customer
perspective on the item in question.

Slightly more expensive is going to your customer success
people, who are a proxy for the customer. This takes maybe 5 minutes, and gets
you higher fidelity. So then you can go outside to beta customers with whom you
have close relationships, increasing cost and fidelity. The cost is, say, 2
hours. The benefit is that these are real customers, but a drawback is they are
beta customers, so they’re not truly
representative of the entire set. They’re too bug-tolerant and they like new
software. So, you can go out another layer, and do a gated release to a subset
of one-tenth of your entire customer base. The cost to set up now is several
hours, because you have to manage a release. So on and on you go, until the improvement is released
to your entire base. Even then, you are testing. You’re watching the
improvement being used and then tweaking and iterating it. Whenever you’re
releasing something, you should ask yourself: What uncertainty do we face with
this software? And for that type of uncertainty, what is the appropriate level
of testing?

20:56 – The last item on my list is creating a roadmap. I get lots of questions about
this from entrepreneurs and product managers at other companies. At
InsightSquared, we use a four-quarter roadmap, showing what we plan to build in
each of the next four quarters. Once a month, I revise the roadmap. It’s fully
transparent internally – available to the entire company. I ask salespeople to
tape it on their desks. I do regular lunch conversations open to anyone in the
company about items on the roadmap, and explain the trade-offs behind it.

Levels of certainty decline as the quarters progress. For
the current quarter, maybe there’s an 80% chance that nothing will change by
the time we finish the quarter. But if you go four quarters out, there might be
a 20% chance. We may change how we’re selling things, how we’re marketing, our
understanding of customer needs, and so forth. We make sure that consumers of
the roadmap within the company know that it’s not all set in stone. They get a
feel for which pieces are more fluid, and which are more reliable.

22:42, Host: In Samuel’s
list of product management processes, he repeatedly emphasizes autonomous
product teams and a customer-centric perspective. It seems like he and product
leaders like David Cancel see eye-to-eye on this. Is that the case?

23:00 – I’m not against customer-driven as a concept; I’m
against customer-driven as it’s interpreted and implemented by many product
management organizations. Many product leaders say, ‘Let’s be customer driven;
let’s have a team dedicated to improving engagement, to reduce the number of
customer support tickets. We will look at all the tickets that have come over
the last X months, then group them and force rank them according to which
customers have found most urgent. Then we’ll burn down that list. Oh, and let’s
put a metric on the team, so that they have to reduce the list from 20 down to
5. That’s our customer-driven approach.’

I have a major problem with that. When you interpret
customer-driven that way, you often miss the bigger picture. You end up with
incremental levels of product improvement. Fixing issues that your customers
are surfacing for you will indeed give you an improvement on each individual
issue, if you’re doing it right. But each individual issue is often part of a
bigger problem with flow in the product, so really you shouldn’t be fixing
the 5 individual things. You should be rewriting the entire flow.

Or, imagine you have five different and very big problems with that
product; it’s a mess. In a ‘customer-driven’ process, you will never hear a
customer say, “That product, you really need to kill it.” And yet that’s
something that you may need to do as a product manager: decide to not support a
product, so that you can focus more resources on something new.

Instead, I advocate for product managers taking a much more
active role in guiding the development of their product. Customer needs are
indeed an input, in fact, they are probably the most important input, but they
are not the only driver of what you’re developing. There are other inputs, like
competition and corporate strategy, the cost of things and sequencing. All of
these things need to get mixed into an active process of road mapping and planning,
instead of just saying ‘Hey, we’re customer driven. We’re gonna let the
customer drive this bus’. As a product manager, you need to drive the bus.

I do think the foundation is having a good understanding of
the customer. I like hiring product managers from the customer success team,
because they already understand customer needs at a very deep level.

Sunday, December 11, 2016

This guest post from my Harvard Business School student, MBA candidate Jenny Jao, describes a learning-by-doing project she completed for academic credit this semester. Jenny helped produce a podcast and hosted an episode.

I hijacked Tom’s blog to share with you a cool side project
I worked on this semester.

The whole concept of podcasting fascinates me. At its core,
it’s a low-touch forum to preserve and distribute content. It’s a way for
amateurs or the less technically savvy of us to chronicle interesting stories
and share them with friends, family and peers and contribute some small facet
to our collective learnings.

A unique opportunity surfaced this semester when I got to work
on Traction,
a podcast produced by the seed stage venture capital firm NextView Ventures. The show invites early-stage
entrepreneurs to discuss challenges they faced in the early innings of their
business and lessons they’ve learned along the way.

In short, I had a blast hosting a podcast. Listening to
myself afterward made the hair on my arms stick up straight, but the whole
process was a fresh learning experience that complemented and extended what I’d
learned in business school classrooms.

I credit much of my learning to Jay Acunzo, with whom I worked
closely. Jay is host of Traction and is not only a veteran podcaster, but also a
media and content expert.

So what did I learn about podcasting?

1. The most important thing is to keep the
listener from pressing "Pause." Great podcasts keep the listener engaged –
but it’s certainly easier said than done. My approach was to keep things simple
and try to create something that I would want to listen to. This meant treating
the interview like a story, forgetting the microphone was there, and asking a
series of questions that made the guests more comfortable and eager to dive
into their tales. Stories are interesting and engaging when they are replete
with details. Out of these details, we tease out lessons that stay with us.

2. How I
phrase a question affects the response I get, and I have 3 shots at getting what
I want. A question such as “What was it like raising your seed round?” can elicit
an elevated response, meaning guests may speak in broad, sweeping
generalities. You risk getting a cookie cutter answer that they’ve probably given
in a past interview. What is interesting about that? Not much, so if I get a
generic answer, I ask for an example in hopes of eliciting more vivid details.
I’m searching for the story behind a particular pitch to a VC firm or the ride
back to the airport after a day of terrible meetings. If that fails, I would follow
up with a hypothetical sequence of events and ask the interviewee how they might react in that
particular situation. Luckily, I haven’t had to try this last tactic yet.

3. Find your
angle. There are dozens of tech podcasts, so how do I make this podcast or
this episode stand out? The goal is to not necessarily be better than
competitors, but to do things differently. We’ve been giving that advice to
entrepreneurs for years, and it works with podcasts, too. For example, Traction focuses on early stage challenges and attempts
to provide entrepreneurs with practical tactics. It’s not a heavily produced
show: an episode consists mostly of an unedited interview with little voiceover,
aside from the beginning and end. This approach preserves the conversational
nature of an interview.

There you have it: some lessons I’ve taken away from
my stint in podcasting. I can’t say that I’m now an expert, but I hope this sheds
some light from a host’s perspective, debunks some misperceptions of podcasting,
and encourages some of you to give it a go.

Check out my first
podcast, in which I interview Jay Acunzo about how he differentiates
Traction from other podcasts and how he manages guest interviews.

Sunday, June 5, 2016

What I’ve said that turned out to be right will be considered obvious, and what was wrong will be humorous.

—Bill Gates,
The Road Ahead, 1995

In 2001, I wrote a book explaining why accelerated growth strategies created value for some Internet companies and destroyed value for others. The book, Speed Trap, was poised for publication as I came up for promotion that year at Harvard Business School. However, in a reversal of the familiar prescription for scholars, my mentors told me, “If you publish, you might perish.” They were concerned that Speed Trap had been written in a hurry—a cruel irony, given its title and topic. I had confidence in the quality of my work, but not enough to bet my job. I canceled publication and forfeited my advance. It was painful to scuttle Speed Trap, but I don’t second-guess my decision: I got promoted.

I recently read Speed Trap for the first time in many years, curious to see if its ideas had stood the test of time. I was surprised by the book’s sober tone. It has a morning-after hangover vibe: “I guess last night was amazing, but I can’t remember it all; I must have blacked out. Now it hurts just to blink. Let’s never do that again.” Speed Trap shows how we thought about online opportunities one year after the dot com bubble burst. In retrospect, many of my thoughts about the Internet’s future evolution missed the mark. If you think history repeats itself, then these forecasting errors might be germane, since startup valuations and VC investments are once again declining. Reasoning that today’s tech entrepreneurs and investors might value a history lesson, I’ve published Speed Trap as an ebook, which is downloadable for free in ePub, Mobi and PDF formats, and available in the iBooks Store for free and in the Kindle Store for $0.99. With the benefit of fifteen years’ worth of hindsight, it is evident that Speed Trap, when looking ahead, had a profound status quo bias. The book did anticipate that broadband and wireless Internet access technologies would spread. Otherwise, Speed Trap didn’t foresee major changes in how consumers and businesses would use the Internet. As a result, Speed Trap’s errors of omission are alarming. Here’s a sample of what I didn’t see coming as I wrote the book in 2001:

Google’s dominance. Speed Trap devotes just two paragraphs to Google, which, by 2001, already was the 15th largest U.S. website. Despite this traffic, Google’s ecosystem impact was still modest at the time. The company was still a year away from adopting the paid search model that would revolutionize digital marketing.

Apple’s iPod. As I finished writing Speed Trap, the music industry was celebrating the shutdown of Napster, and the major labels were busy organizing their own download services. A few months later, like a bolt out of the blue, Apple launched the iPod and changed everything.

Amazon’s Kindle. Speed Trap speculates about whether Microsoft or Adobe would be better positioned to establish the dominant ebook standard. The possibility that the world’s biggest bookstore might eventually win that race hadn’t dawned on me.

Social networks. We’d had hints of strong demand for social networking services, including AOL’s chat rooms and SixDegrees, a social networking site that peaked at 3.5 million members before failing in 2001. Despite these developments, I didn’t foresee the rapid rise of Friendster (founded in 2002), MySpace (2003), and Facebook (2004).

User-generated content. Although GeoCities had demonstrated the appeal of user-generated content during the late 1990s, Speed Trap failed to anticipate that blogging and user-generated video would become mainstream phenomena within a few years.

Because it turned a blind eye to these black swans and big trends, Speed Trap assumed that market shares would remain stable in key online markets. For example, the book asserted that by 2001 online recruiting was a mature category, and predicted that Monster.com was unlikely to be usurped as its leader. By 2012, according to Reuters, Monster’s 23% U.S. market share lagged CareerBuilder’s 34%. LinkedIn, propelled by the social networking wave, had captured 16% of the market in 2012 and was poised for explosive growth.

Readers who are too young to recall the dot com crash might reasonably ask whether my interpretations in 2001 were idiosyncratic—perhaps because I was, as an academic, either excessively cautious or simply clueless. With respect to my understanding of Internet businesses, I’ll let Speed Trap’s content speak for itself. With respect to the book’s conservative tone, I do think I was reflecting the Zeitgeist. If you are skeptical, read Michael Lewis’s 2002 New York Times Magazine article, “In Defense of the Boom.” Lewis writes, “The markets, having tasted skepticism, are beginning to overdose. The bust likes to think of itself as a radical departure from the boom, but it has in common with it one big thing: a mob mentality.” Likewise, the economist Charles Kindleberger, in his seminal 1978 book on the history of stock market bubbles, Manias, Panics and Crashes, explains that after a bubble bursts—during what he calls the “revulsion” phase—most investors lose their appetite for risk taking. Our collective conservatism in the immediate wake of the dot com crash might be seen as what management scholars call a threat-rigidity response. Individuals and organizations, when confronted with a severe threat, tend to constrict their information processing, focusing “tunnel-vision” attention on dominant rather than peripheral environmental cues. Threatened parties then tend to rely, in a rigid manner, on familiar responses to those cues—often with bad results.
If this premise seems plausible, then we should ask: Have recent declines in startup valuations and VC investments been big enough to elicit another threat-rigidity response? Probably not—at least not yet. But when bright shiny objects—akin to today’s virtual reality or Internet of Things—finally fail to attract investor interest, then we might posit that the cycle is reaching its trough. So if, in the wake of a future sector crash, entrepreneurs and investors anticipate a sector-level threat-rigidity response, what should they do? In a severe downturn, entrepreneurs must, of course, cut costs and conserve capital. Dozens of VC blogs over the past few months have already offered this advice. VCs face a more difficult decision if they expect an industry-wide threat-rigidity response: When most peers are cautious and skeptical of radical new venture concepts, maybe it’s time to be a contrarian? Likewise, during a bubble’s revulsion phase, established corporations face interesting opportunities to acquire valuable assets at fire-sale prices. One surprise from the early 2000s was how few corporations exploited this opportunity, despite having strong balance sheets. Only a handful of Internet startups were acquired by big incumbents during 2001: HotJobs by Yahoo, MP3.com by Universal, and Peapod by Ahold. The pace of acquisitions didn’t really pick up until 2004.

If you read Speed Trap, I hope you will find value. For Internet veterans, the book should rekindle nostalgic memories of startups that bring to mind my favorite line from the movie Blade Runner: “The light that burns twice as bright burns half as long.” Remember Boo.com, Webvan, eToys, Pseudo.com, Pets.com, and Kozmo? Disney’s $790 million Go.com debacle? For tech entrepreneurs who may have been in middle school during the late 1990s, Speed Trap may provide some historical perspective on the origins of the business models upon which you are building, and some lessons for navigating a boom-bust cycle.