
“Managing products of the future” came up when I was thinking of a suitable title for a piece about products that look and feel very different to most products that we see today. Products such as driverless cars and voice assistants popped into my head as examples of products that are likely to dominate our daily lives before we know it.

However, these products are here already, and I’m keen to explore if and how they affect the role and focus of product management.

Will we manage products differently when the user interface of these products changes? Do we need to think differently about our products when data becomes the main output? Will customer needs and expectations evolve? If so, how? I’ll start thinking about these and other questions, considering the nature of machine learning, different product scenarios and their impact on the role of the product manager.

Taken from: https://robertmerrill.wordpress.com/2009/04/15/the-future-is-already-here/

It’s easy to get swept up by the hype surrounding AI and products based on machine learning, and to start feeling pretty dystopian about the future. But how much will actually change from a product management point of view? People will continue to have specific needs and problems. As product managers, we’ll continue to look at the best ways of solving these problems. Granted, the nature of people’s needs and problems will evolve, as it has always done, but this won’t alter the problem solving and people centric nature of product management.

To illustrate this, let’s look at some AI-based products and the customer needs and problems that they’re aiming to solve: Google Photos, Sonos One and Eigen Technologies.

Google Photos’ strap-line is “One home for all your photos – organised and easy to find”. Over the coming months, Google Photos will roll out the following features:

Using facial recognition, Google Photos will know who’s in a picture and will offer a one-tap option to share it with the person in question, provided that this person is in your phone’s contact list and Google Photos has learned their face. If that person appears in multiple images, Google Photos will even suggest sharing all of them in one go.

Automated image editing suggestions – Google Photos will suggest different corrections based on the look and quality of the image. For example, if there are issues with the brightness of the image, Google Photos will automatically display a “Fix brightness” suggestion.

With these new features, Google Photos aims to address customer needs around sharing pictures and improving image quality respectively. These needs aren’t new per se, but the ‘intelligent’ aspect of Google Photos’ approach is.

The Sonos One is entirely controlled by voice. The speaker works fully with Amazon Alexa, which means that if you’ve got an Amazon Alexa compatible device, you can control your Sonos sound system through Amazon Alexa. Because Alexa is a native app within the Sonos platform, you don’t even need an external Amazon device – i.e. an Echo or a Dot – to control your Sonos One speaker. Installing the Alexa mobile app is enough.

The integration with Amazon’s Alexa voice assistant is a logical next step within Sonos’ mission to “empower everyone to listen better” and makes it easier for people to control the music they listen to. Granted, the user interface of the Sonos One is different to other products; it doesn’t have buttons, for example. However, it’s still a product like any other in the sense that it delivers tangible value to customers by solving their music listening needs.

“Turn your documents into data” is London and New York based Eigen Technologies’ mission statement. The company enables the mining of documents for specific data. For example, if you work for a mortgage lender and are looking to make a decision about the credit worthiness of a home, Eigen’s data extraction technology helps to quickly pull out key ‘decision inputs’ from a number of – often very lengthy – property documents.

The way in which Eigen Technologies uses machine learning algorithms is ultimately about improving the speed and quality of decision making. Even though the underlying technology is based on machine learning, the outcome is very much like that of any other product: a clear user interface which shows the relevant document data that a user is interested in and needs to make decisions.

Main learning point: AI and machine learning based products will no doubt change the ways in which we interact with products and what we expect of them. However, existing examples such as Google Photos and Sonos One already show that the core of the product manager’s role will remain unchanged: building the right product for the right people and building it right!

These smart glasses connect to a feed which taps into China’s state database to detect potential criminals using facial recognition. Officers can identify suspects in a crowd by snapping their photo and matching it against their internal database.

Wrong360 is a Chinese peer-to-peer lending app which aims to make obtaining a loan as simple as possible. When users of the Wrong360 app enter the loan amount, period and purpose, the platform automatically does the matching and outputs a list of banks or credit agencies corresponding to the users’ requests. On the list, users can find the institution names, products, interest rates, gross interest, monthly payments and the available periods. Applying for a loan can be done fully online, and the app uses facial recognition as part of the loan application process.

Product 3 — Security camera

Security cameras in public places help police officers and shopkeepers through improved ways of face matching. Traditionally, face matching is based on a trait description of someone’s facial features and the spatial distance between these features. Now, by extracting geometric descriptions of parts of the eyes, nose, mouth, chin, etc. and the structural relationship between them, search matching is performed against the feature templates stored in the database. When the similarity exceeds the set threshold, the matching results are shared.
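
To make the “similarity exceeds the set threshold” step concrete, here’s a minimal sketch in Python. The feature vectors, names and the 0.8 threshold are all made up for illustration – real systems use high-dimensional learned embeddings and carefully tuned thresholds:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two facial feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def match_face(probe, database, threshold=0.8):
    """Return identities whose stored feature template exceeds the similarity threshold."""
    return [name for name, template in database.items()
            if cosine_similarity(probe, template) >= threshold]

# Toy templates – purely illustrative.
database = {"person_a": [1.0, 0.0, 0.0], "person_b": [0.0, 1.0, 0.0]}
matches = match_face([0.9, 0.1, 0.0], database)  # ["person_a"]
```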

Whether it’s “SenseTotem” — which is being used for surveillance purposes — or “SensePhoto” — which uses facial recognition technology for messaging apps and mobile cameras — it all comes from the same company: SenseTime.

The company has made a lot of progress in a relatively short space of time with respect to artificial intelligence based (facial) recognition. The Chinese government has been investing heavily in creating an ecosystem for AI startups, with Megvii as another well known exponent of China’s AI drive.

A project with the code name “Viper” is the latest in the range of products that SenseTime is involved in. I’m intrigued and slightly scared by this project, which is said to focus on processing thousands of live camera feeds (from CCTV to traffic cameras to ATM cameras), processing and tagging people and objects. SenseTime is rumoured to want to sell the Viper surveillance service internationally, but I can imagine that local regulations and data protection rules might prevent this kind of ‘big brother is watching you’ approach from being rolled out anytime soon.

Main learning point: It seems that SenseTime is very advanced with respect to facial recognition, using artificial intelligence to combine thousands of (live) data sources. You could argue that SenseTime isn’t the only company building this kind of technology, but their rapid growth and technological as well as financial firepower makes them a force to be reckoned with. That, in my mind, makes SenseTime very special indeed.

Normally when I talk to other product managers about product pricing, I get slightly frightened looks in return. “Does that mean I need to set the price!?” or “am I now responsible for the commercial side of things too!?” are just some of the questions I’ve had thrown at me in the past.

“No” is the answer. I strongly believe that as product managers we run the risk of being all things to all people — see my previous post about “Product Janitors” — and I therefore believe that product people shouldn’t set prices. However, I do believe it’s critical for product people to think about pricing right from the beginning:

Do people want the product?

Why do they want it?

How much are they willing to pay for it?

Answers to these questions will not only affect what product is built and how it’s built, but also how it will be launched and positioned within the market. I’ve made the mistake before of not getting involved in pricing at all or too late. As a result, I felt that I was playing catchup to fully understand the product’s value proposition and customers’ appetite for it.

Fortunately, there are two tools I’ve come across which I’ve found very helpful for understanding the value a product is looking to deliver – both from a business and a customer perspective: the Van Westendorp Price Sensitivity Meter and Conjoint Analysis respectively.

At what price would you consider the product to be so expensive that you would not consider buying it? (Too expensive)

At what price would you consider the product to be priced so low that you would feel the quality couldn’t be very good? (Too cheap)

At what price would you consider the product starting to get expensive, so that it is not out of the question, but you would have to give some thought to buying it? (Expensive/High Side)

At what price would you consider the product to be a bargain — a great buy for the money? (Cheap/Good Value)

The aforementioned Van Westendorp questions are a good example of a so-called “direct pricing technique”, where the pricing research is underpinned by the assumption that people have a basic understanding of what a product is worth. In essence, this line of questioning comes down to asking “how much would you pay for this (product or service)?” Whilst this isn’t necessarily the best question to ask in a customer interview, it’s a nice and direct way to learn about how customers feel about pricing.

The insights from applying these direct questions will help in better understanding price points. The Van Westendorp method identifies four price points and a resulting price range:

Point of marginal cheapness (‘PMC’) — Below this price point, more sales volume would be lost from customers doubting the product’s quality than gained from customers perceiving it as a bargain.

Point of marginal expensiveness (‘PME’) — This is a price point above which the product is deemed too expensive for the perceived value customers get from it.

Optimum price point (‘OPP’) — The price point at which the number of potential customers who view the product as either too expensive or too cheap is at a minimum. At this point, the number of persons who would possibly consider purchasing the product is at a maximum.

Indifference price point (‘IPP’) — Point at which the same percentage of customers feel that the product is getting too expensive as feel it is at a bargain price. This is the point at which most customers are indifferent to the price of a product.

Range of acceptable pricing (‘RAP’) — This range sits between the aforementioned points of marginal cheapness and marginal expensiveness. In other words, consumers are considered likely to pay a price within this range.
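
As a rough sketch of how these points can be located in practice, the snippet below finds the OPP and IPP as the crossings of the cumulative answer curves from the four Van Westendorp questions. The survey answers are invented; PMC and PME can be found the same way using the complementary pairs of curves:

```python
def share_at_or_below(answers, price):
    """Fraction of respondents whose answer is at or below this price."""
    return sum(a <= price for a in answers) / len(answers)

def share_at_or_above(answers, price):
    """Fraction of respondents whose answer is at or above this price."""
    return sum(a >= price for a in answers) / len(answers)

def crossing(candidate_prices, falling_curve, rising_curve):
    """Price at which a falling cumulative curve meets a rising one."""
    return min(candidate_prices, key=lambda p: abs(falling_curve(p) - rising_curve(p)))

# One (made-up) answer per respondent to each of the four questions, in £.
too_cheap = [8, 10, 12, 15, 18]
cheap = [10, 12, 15, 18, 20]
expensive = [15, 18, 20, 22, 25]
too_expensive = [12, 15, 18, 20, 25]

candidates = range(8, 26)

# OPP: where the "too cheap" and "too expensive" curves cross.
opp = crossing(candidates,
               lambda p: share_at_or_above(too_cheap, p),
               lambda p: share_at_or_below(too_expensive, p))

# IPP: where the "cheap" and "expensive" curves cross.
ipp = crossing(candidates,
               lambda p: share_at_or_above(cheap, p),
               lambda p: share_at_or_below(expensive, p))
```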

In addition to the Van Westendorp Price Sensitivity Meter, I’ve also used Conjoint Analysis to understand more about pricing. Unlike the Van Westendorp approach, the conjoint analysis is an indirect pricing technique which means that price is combined with other attributes such as size or brand. Consumers’ price sensitivity is then derived from the results of the analysis.

When designing a conjoint analysis study, the first step is to take a product and break it down into its individual parts. For example, we could take a car and create combinations of its different parts to learn which combinations customers prefer:

Which of these cars would you prefer?

Option: 1

Brand: Volvo

Seats: 5

Price: £65,000

Option: 2

Brand: SsangYong

Seats: 5

Price: £20,000

Option: 3

Brand: Toyota

Seats: 7

Price: £45,000

This is an overly simplified and totally fictitious example, but it hopefully gives you a better idea of how a conjoint analysis takes multiple factors into account and will give you insight into how much consumers are willing to pay for a certain combination of features.
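
To give a flavour of how such choice data can be analysed, here’s a deliberately simplified sketch that scores each attribute level by how often it was chosen when it was shown. Real conjoint studies estimate part-worth utilities with logit or hierarchical Bayes models; the trials below are fictitious:

```python
from collections import defaultdict

# Each trial pairs the options shown with the index of the option chosen.
trials = [
    ([{"brand": "Volvo", "seats": 5, "price": 65000},
      {"brand": "SsangYong", "seats": 5, "price": 20000},
      {"brand": "Toyota", "seats": 7, "price": 45000}], 2),
    ([{"brand": "Toyota", "seats": 7, "price": 45000},
      {"brand": "SsangYong", "seats": 5, "price": 20000}], 0),
]

def level_choice_shares(trials):
    """Share of times each attribute level was chosen when it was shown."""
    shown = defaultdict(int)
    chosen = defaultdict(int)
    for options, pick in trials:
        for i, option in enumerate(options):
            for attribute, level in option.items():
                shown[(attribute, level)] += 1
                if i == pick:
                    chosen[(attribute, level)] += 1
    return {key: chosen[key] / shown[key] for key in shown}

shares = level_choice_shares(trials)
```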

Main learning point: I personally don’t expect product managers to set prices for their products or design price research. However, I do think we as product managers benefit from a better understanding of the pricing model for our products and of what constitutes ‘value for money’ for our customers. The Van Westendorp Price Sensitivity Meter and Conjoint Analysis are just two ways of testing price sensitivity, but they are in my view two good places to get started if you wish to get a better handle on pricing.

As a product manager it’s important to understand the unit economics of your product, irrespective of whether you’re managing a physical or a digital product. Unit economics are the direct revenues and costs related to a specific business model, expressed on a per unit basis. These revenues and costs are the levers that impact the overall financial success of a product. In my view, there are a number of reasons why it’s important for product managers to have a good grasp of the unit economics of their products:

Helps quantify the value of what we do – Ultimately, product success can be measured in hard metrics such as revenue and profit. Even in cases where our products don’t directly contribute to revenue, they will at least have an impact on operational cost.

Customer Value = Business Value – In an ideal world, there’s a perfect equilibrium between customer value and business value. If the customer is happy with your product, buys and uses it, this should result in tangible business value.

P&L accountability for product people (1) – Perhaps it’s to do with the fact that product management still is a relatively young discipline, but I’m nevertheless surprised by the limited number of product people I know who’ve got full P&L responsibility. I believe that having ownership over the profit & loss account helps product decision making and accountability, not just for product managers but for the product teams that we’re part of.

P&L accountability for product people (2) – Understandably, this can be a scary prospect and might impact the ways in which we manage products. However, owning the P&L will (1) make product managers fully accountable for product performance, (2) provide clarity and accountability for product decisions, (3) help investments in the product and product marketing, and (4) steep product management in data, moving to a more data informed approach to product management.

Assessing opportunities based on economics – Let’s move away from assessing new business or product opportunities purely based on “gut feel”. I appreciate that at some point we have to take a leap, especially with new products or problems that haven’t been solved before. At the same time, I do believe it’s critical to use data to help inform your opportunity assessments. Tools like Ash Maurya’s Lean Canvas help to think through and communicate the economics of certain opportunities (see Fig. 1 below). In the “cost structure” part of the lean canvas, for example, you can outline the expected acquisition or distribution cost of a new product.

Speaking the same language – It definitely helps the collaboration with stakeholders, the board and investors if you can speak about the unit economics of your product. I know from experience that being able to talk sensibly about unit economics and gross profit really helps the conversation.

Now that we’ve established the importance of understanding unit economics, let’s look at some of the key components of unit economics in more detail:

Naturally the exact cost per unit will be dependent on things such as (1) product type (2) point of sale (3) delivery fees and (4) any other ‘cost inputs’.

In a digital context, the user is often the unit. For example, the Lifetime Value (‘LTV’) and Customer Acquisition Cost (‘CAC’) are core metrics for most direct to consumer (B2C) digital products and services. I learned from David Skok and Dave Kellogg about the importance of the ‘CAC to LTV’ ratio.

Granted, Skok and Kellogg apply this ratio to SaaS, but I believe customer acquisition cost (‘CAC’) and customer lifetime value (‘LTV’) are core metrics when you treat the user as a unit; you’ve got a sustainable business model if LTV (significantly) exceeds CAC. In an ideal world, for every £1 it costs to acquire a customer you want to get £3 back in terms of customer lifetime value. Consequently, the LTV:CAC ratio = 3:1.
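
As a quick sketch of that arithmetic – the ARPU, gross margin, churn and CAC figures below are invented for illustration:

```python
def lifetime_value(monthly_arpu, gross_margin, monthly_churn):
    """LTV: margin-adjusted monthly revenue times average customer lifetime (1 / churn)."""
    return monthly_arpu * gross_margin / monthly_churn

# Illustrative figures: £10 ARPU, 75% gross margin, 2.5% monthly churn, £100 CAC.
ltv = lifetime_value(10.0, 0.75, 0.025)  # £300
cac = 100.0
ratio = ltv / cac  # 3.0 – the healthy 3:1 benchmark mentioned above
```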

I’ve seen companies start with high CAC in order to build scale and then lower the CAC as the business matures and relies more on word of mouth as well as higher LTV. Also, companies like Salesforce are well known for carefully designing additions (“editions”) to increase customer lifetime value (see Fig. 2 below).

Netflix are another good example in this respect, taking a long-term LTV view of their customers. Netflix take into account their subscription model and its position as a viable replacement for another subscription model: cable. The average LTV of Netflix customers is 25 months. As a result, Netflix are happy to initially ‘lose’ money on acquiring customers, through a 1-month free trial, as these costs will be recouped soon after acquiring the customer.

Main learning point: You don’t need to be a financial expert to understand the unit economics of your products. Just knowing what the ‘levers’ are that impact your product will stand you in good stead when it comes to making product decisions and collaborating with stakeholders.

Data aware — In the book, King, Churchill and Tan distinguish between three different ways to think about data: data driven; data informed and data aware (see Fig. 1 below). The third way listed, being ‘data aware’, is introduced by the authors: “In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions.” If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.

How much data to collect? — The authors make an important distinction between “small sample research” and “large sample research”. Small sample research tends to be good for identifying usability problems, because “you don’t need to quantify exactly how many in the population will share that confusion to know it’s a problem with your design.” It reminded me of Jakob Nielsen’s point about how the best results come from testing with no more than five people. In contrast, collecting data from a large group of participants, i.e. large sample research, can give you more precise quantity and frequency information: how many people feel a certain way, what percentage of users will take this action, etc. A/B tests are one way of collecting data at scale, with the data being “statistically significant” and not just anecdotal. Statistical significance is the likelihood that the difference in conversion rates between a given variation and the baseline is not due to random chance.
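
For completeness, this is roughly how the “not due to random chance” check works for a conversion-rate A/B test – a standard two-proportion z-test, with made-up conversion numbers:

```python
import math

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Z statistic for the difference in conversion rate between baseline (a) and variation (b)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / standard_error

# Illustrative: baseline converted 200 of 10,000 visitors; variation 260 of 10,000.
z = two_proportion_z(200, 10_000, 260, 10_000)
significant = abs(z) > 1.96  # two-sided test at the 95% confidence level
```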

Running A/B tests: online experiments — The book does a great job of explaining what is required to successfully run A/B tests online, providing tips on how to sample users online and key metrics to measure (Fig. 2).

Minimum Detectable Effect — There’s an important distinction between statistical significance — which measures whether there’s a difference — and “effect”, which quantifies how big that difference is. The book explains about determining “Minimum Detectable Effect” when planning online A/B tests. The Minimum Detectable Effect is the minimum effect we want to observe between our test condition and control condition in order to call the A/B test a success. It can be positive or negative, but you want to see a clear difference in order to be able to call the test a success or a failure.
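
The Minimum Detectable Effect also drives how many users you need in each variant. A rough sample-size sketch, using the standard approximation at 95% confidence and 80% power – the baseline rate and MDE below are purely illustrative:

```python
import math

def sample_size_per_variant(baseline_rate, minimum_detectable_effect,
                            z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant (95% confidence, 80% power, two-sided)."""
    p1 = baseline_rate
    p2 = baseline_rate + minimum_detectable_effect  # MDE as an absolute change
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Illustrative: 2% baseline conversion, looking for at least +0.5 percentage points.
n = sample_size_per_variant(0.02, 0.005)
```

Note how quickly the required sample grows as the MDE shrinks – halving the detectable effect roughly quadruples the visitors needed.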

Know what you need to learn — The book covers hypotheses as an important way to figure out what it is that you want to learn through the A/B test, and to identify what success will look like. In addition, you can look at learnings beyond the outcomes of your A/B test (see Fig. 3 below).

Experimentation framework — For me, the most useful section of the book was Chapter 3, in which the authors introduce an experimentation framework that helps planning your A/B test in a more structured fashion (see Fig. 4 below). They describe three main phases — Definition, Execution and Analysis — which feed into the experimentation framework. The ‘Definition’ phase covers the definition of a goal, articulation of a problem / opportunity and the drafting of a testable hypothesis. The ‘Execution’ phase is all about designing and building the A/B test, “designing to learn” in other words. In the final ‘Analysis’ phase you’re getting answers from your experiments. These results can be either “positive” and expected or “negative” and unexpected (see Fig. 5–6 below).

Main learning point: “Designing with Data” made me realise again how much thinking and designing needs to happen before running a successful online A/B test. “Successful” in this context means achieving clear learning outcomes. The book provides a comprehensive overview of the key considerations to take into account in order to optimise your learning.

Data driven — With a purely data driven approach, it’s data that determines the fate of a product; based solely on data outcomes, businesses can optimise continuously for the biggest impact on their key metric. You can be data driven if you’ve done the work of knowing exactly what your goal is, and you have a very precise and unambiguous question that you want to understand.

Data informed — With a data informed approach, you weigh up data alongside a variety of other variables such as strategic considerations, user experience, intuition, resources, regulation and competition. So adopting a data-informed perspective means that you may not be as targeted and directed in what you’re trying to understand. Instead, what you’re trying to do is inform the way you think about the problem and the problem space.

Data aware — In a data-aware mindset, you are aware of the fact that there are many types of data to answer many questions. If you are aware there are many kinds of problem solving to answer your bigger goals, then you are also aware of all the different kinds of data that might be available to you.

Cohorts and segments — A cohort is a group of users who have a shared experience. Alternatively, you can also segment your user base into different groups based on more stable characteristics such as demographic factors (e.g. gender, age, country of residence), or you may want to group them by their behaviour (e.g. new user, power user).
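
A tiny sketch of the difference between the two groupings – the user records are made up:

```python
from collections import defaultdict
from datetime import date

# Made-up user records.
users = [
    {"id": 1, "signed_up": date(2018, 1, 3), "country": "UK", "sessions": 42},
    {"id": 2, "signed_up": date(2018, 1, 20), "country": "DE", "sessions": 3},
    {"id": 3, "signed_up": date(2018, 2, 11), "country": "UK", "sessions": 17},
]

def cohort_by_signup_month(users):
    """Cohort: users who share an experience window – here, the month they signed up."""
    cohorts = defaultdict(list)
    for user in users:
        cohorts[user["signed_up"].strftime("%Y-%m")].append(user["id"])
    return dict(cohorts)

def segment_by_behaviour(users, power_threshold=20):
    """Segment: a stable grouping by behaviour – here, power users vs casual users."""
    return {
        "power": [u["id"] for u in users if u["sessions"] >= power_threshold],
        "casual": [u["id"] for u in users if u["sessions"] < power_threshold],
    }
```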

New users versus existing users — Data can help you learn more about both your existing and prospective future users, and determining whether you want to sample from new or existing users is an important consideration in A/B testing. Existing users are people who have prior experience with your product or service. Because of this, they come into the experience with a preconceived notion of how your product or service works. Thus, it’s important to be careful about whether your test is with new or existing users, as these learned habits and behaviours about how your product used to be can bias your A/B test.

Goal — First you define the goal that you want to achieve; usually this is something that is directly tied to the success of your business. Note that you might also articulate this goal as an ideal user experience that you want to provide, often because you believe that delivering that ideal experience will ultimately lead to business success.

Problem/opportunity area — You’ll then identify an area of focus for achieving that goal, either by addressing a problem that you want to solve for your users or by finding an opportunity area to offer your users something that didn’t exist before or is a new way of satisfying their needs.

Hypothesis — After that, you’ll create a hypothesis statement which is a structured way of describing the belief about your users and product that you want to test. You may pursue one hypothesis or many concurrently.

Test — Next, you’ll create your test by designing the actual experience that represents your idea. You’ll run your test by launching the experience to a subset of your users.

Results — Finally, you’ll end by getting the reaction to your test from your users and doing analysis on the results that you get. You’ll take these results and make decisions about what to do next.

How large an effect will your changes have on users? Will this new experience require any new training or support? Will the new experience slow down the workflow for anyone who has become accustomed to your current experience?

How much work will it take to maintain?

Did you take any “shortcuts” in the process of running the test that you need to go back and address before you roll it out to a larger audience (e.g. edge cases or fine-tuning details)?

Are you planning on doing additional testing and if so, what is the time frame you’ve established for that? If you have other large changes that are planned for the future, then you may not want to roll your first positive test out to users right away.

A few weeks ago, I listened to a podcast interview in which Christophe Gillet, VP of Product Management at Vimeo, gave some great pointers on how to best assess market viability. Christophe shared his thoughts on things to explore when considering market viability. I’ve added my sample questions related to some of the points that Christophe made:

Is there a market? – This should be the first validation in my opinion; is there a demand for my product or service? Which market void will our product help to fill and why? What are the characteristics of my target market?

Is there viability within that market? – Once you’ve established that there’s a potential market for your product, this doesn’t automatically mean that the market is viable. For example, regulatory constraints can make it hard to launch or properly establish your product in a market.

Total addressable market – The total addressable market – or total available market – is all about the revenue opportunity available for a particular product or service (see Fig. 1 below). A way to work out the total addressable market is to first define the total market space and then look at the percentage of the market which has already been served.
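
That calculation is simple enough to sketch directly – the customer counts and revenue figure below are purely illustrative:

```python
def addressable_market(potential_customers, annual_revenue_per_customer, served_share):
    """Split the total addressable revenue into already-served and still-available portions."""
    tam = potential_customers * annual_revenue_per_customer
    return {
        "tam": tam,
        "served": tam * served_share,
        "unserved": tam * (1 - served_share),
    }

# Illustrative: 2m potential customers at £50/year, with 30% of the market already served.
market = addressable_market(2_000_000, 50.0, 0.3)
```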

Understand prior failures (by competitors) – I’ve found that looking at previous competitor attempts can be an easy thing to overlook. However, understanding who already tried to conquer your market of choice and whether they’ve been successful can help you avoid some pitfalls that others encountered before you.

Strong mission statement and objectives of what you’re looking to achieve – In my experience, having a clear mission statement helps to articulate and communicate what it is that you’re looking to achieve and why. These mission statements are typically quite aspirational but should offer a good insight into your aspirations for a particular market (see the example of outdoor clothing company Patagonia in Fig. 2 below).

Business goals – Having clear, measurable objectives in place to achieve in relation to a new market that you’re considering is absolutely critical. In my view, there’s nothing worse than looking at new markets without a clear definition of what market success looks like and why.

How to get people to use your product – I really liked how Christophe spoke about the need to think about a promotion and an adoption strategy. Too often, I encounter a ‘build it and they will come’ kind of mentality, which I believe can be deadly if you’re looking to enter new markets. Having a clear go-to-market strategy is almost just as important as developing a great product or service. What’s the point of an awesome product that no one knows about or knows where to get!?

Main learning point: Listening to the interview with Christophe Gillet reinforced for me the importance of being able to assess market viability. Being able to ask and explore some critical questions when considering new markets will help avoid failed launches or at least gain a shared understanding of what market success will look like.

Bond Street lends to small businesses that might typically struggle to get a loan from traditional banks. In a recent talk on an MIT Fintech course that I was doing, David Haber – Bond Street’s CEO/Founder – mentioned how Bond Street saw a clear niche in the market for small business loans and acted on it. Haber encountered a problem that seemed pretty common for early stage, online small businesses: banks or other financial services offering small loans for short durations at high rates. To resolve this problem, Bond Street offers loans ranging between $50k-$500k, for terms as long as 1-3 years and with rates starting at 6% (see Fig. 1 below).

Fig. 1 – Loan size, rate and terms comparison between Bond Street and other small business lenders – Taken from: https://bondstreet.com/

In the MIT talk, Haber mentioned that OnDeck – a direct competitor of Bond Street – offers small business loans for an average amount of $35k, over 10 months’ duration and at a 40% Annual Percentage Rate (‘APR’). Bond Street competes on rate and speed, but as Haber explained, the business is very focused on “offering more value beyond the economics of a loan, since capital is essentially a commodity.”

Haber then explained that technology allows Bond Street to not just innovate on the loan transaction itself, but to provide a great customer experience on either side of the transaction. For example, by offering a borrower data about similar size businesses, the borrower can then make a better informed decision about taking up a loan.

Haber mentioned one other thing which really resonated with me: “building an ecosystem around your business.” By, for example, leveraging data on an entrepreneur across a network of (similar) entrepreneurs, Bond Street and others can really help people grow their businesses. This doesn’t mean committing data violations, but using data to build an ongoing relationship with one’s customers, and being able to warn them about potential risks or suggest new market opportunities.

A great example is how easy Bond Street makes it for its customers to link to their accounting packages (see Fig. 4 below). I see this as a simple but good example of creating an ecosystem where data is combined in such a way that people and businesses can derive tangible benefits from it. Through linking to your accounting package as part of the loan application process, businesses save a lot of precious time and effort, since they no longer have to manually input all kinds of financial data.

Main learning point: Even though lending isn’t a new proposition, I really like what Bond Street are doing when it comes to offering loans to small businesses. It has carved out a specific market niche – small, early stage businesses – that it targets with a compelling proposition and an intuitive customer experience to match.