My notes on the Net Gain presentation by Bernie Malinoff and Norman Chang on Facial Biometrics in Advertising Research (without asking a single question). There has been limited editing, and hence there will be typos.

Over 90% of human behaviour is driven by emotions.

Emotions in consumer research.

Real Eyes Facial Coding: using a webcam, it can code emotions based on your face – it can calibrate to within 1%.

Works across cultures, whether the subject has a beard or not, or even if they wear glasses.

People never express anger in response to advertising – most often they express confusion.
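To make the idea concrete, here is a toy sketch of aggregating frame-by-frame emotion scores from a facial coding tool into a per-clip summary. The frame data, emotion labels and helper function are all invented for illustration – this is not Real Eyes' actual output format.

```python
# Hypothetical sketch: averaging per-frame emotion scores from a facial
# coding tool into one summary score per emotion for a clip.
# All data below is made up for illustration.

def aggregate_emotions(frames):
    """Average each emotion's score across all frames of a clip."""
    totals = {}
    for frame in frames:
        for emotion, score in frame.items():
            totals[emotion] = totals.get(emotion, 0.0) + score
    return {e: total / len(frames) for e, total in totals.items()}

frames = [
    {"confusion": 0.5, "joy": 0.25},
    {"confusion": 0.25, "joy": 0.25},
]
summary = aggregate_emotions(frames)  # {'confusion': 0.375, 'joy': 0.25}
```

A real pipeline would produce a trace over time rather than a single average, so that spikes of confusion can be matched to moments in the ad.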


The following are notes from Margot Acton and Vanessa Killeen’s Net Gain 2016 presentation: “Social Insight in Action: The Power of Combining Social Media Analysis and Traditional Survey Research”. This post has had limited editing and hence there will be typos.

A lot of noise, and challenges for brands to cut through the clutter. P&G Chief Brand Officer pulled back on advertising because: “as the world was getting louder and more complex, we were simply adding to that noise”.

The bar of acceptability for brands is higher than it was 20 years ago, and it’s getting higher every day. Harder for brands to break through – best in class advertising is going up.

Not every touchpoint matters – consumers filter and manage their relationships with brands in new and powerful ways.

20% of touchpoints can deliver – and this varies from one brand to another

Social media has allowed consumers to move from passive relationships with brands to being co-creators or destroyers.

Q: If a research project is delivered and no one remembered it, did it happen?

Often brand tracking falls into this.

Tracking – often long surveys, with questions put in by many different people.

Declining response rates on panels.

Flat-line metrics – what do you do if nothing is changing? What is the insight?

Problem – data can take weeks or months to be received

Social media

full range of opinion

real-time

low cost?

But

Data is a bit spiky – the volume of tweets can go up and down day to day.

Too much noise – one study looked at 100k pieces of content for a brand and found only 380 were relevant.

Automated sentiment coding is poor

It’s not representative – though the real issue is not representativeness, it’s predictability.
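The noise problem above (380 relevant posts out of 100k) comes down to relevance filtering. A minimal sketch of the idea, with invented keyword rules – real pipelines use much richer trained classifiers:

```python
# Toy relevance filter for brand mentions. The term lists are invented
# for illustration; production systems use trained classifiers.

RELEVANT_TERMS = {"taste", "price", "ad", "bought"}   # assumed topic terms
NOISE_TERMS = {"giveaway", "follow me"}               # assumed spam markers

def is_relevant(post: str) -> bool:
    """Keep a post only if it mentions a topic term and no spam marker."""
    text = post.lower()
    if any(noise in text for noise in NOISE_TERMS):
        return False
    return any(term in text for term in RELEVANT_TERMS)

posts = [
    "Loved the taste of the new flavour!",
    "RT giveaway follow me for a free phone",
    "good morning everyone",
]
relevant = [p for p in posts if is_relevant(p)]  # keeps only the first post
```

Even this crude two-list approach shows why volumes collapse so dramatically once you insist on relevance.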

Challenge – can we model survey KPIs for social and search data sources?

TNS study – reproduced survey-based KPIs for 80 brands in six categories, building models that can predict brand equity four weeks in advance.

Can predict sales accurately for categories such as cars and toilet paper.
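The modelling idea described above – mapping a social signal onto a survey KPI some weeks later – can be sketched at its simplest as a lagged linear fit. The data and single-predictor form are invented; the actual TNS models are certainly richer.

```python
# Minimal sketch: fit y = a + b*x by ordinary least squares, where x is a
# social-buzz signal measured four weeks before the KPI reading y.
# All numbers are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b  # (intercept, slope)

buzz = [100, 120, 90, 150]  # buzz 4 weeks before each KPI reading
kpi = [40, 44, 38, 50]      # corresponding survey KPI values

a, b = fit_line(buzz, kpi)
forecast = a + b * 130  # KPI forecast from a new buzz reading of 130
```

The point of the study is exactly this lead time: because the predictor is observed four weeks ahead of the KPI, the fitted model gives a forecast before the survey wave lands.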

Learning:

A lot of the traditional metrics are focusing on the wrong things and are not predictive.

Critical assets and what the brand is generating are not being captured by traditional KPIs.

Result: Turning off some metrics while leveraging predictive power of social.

Obtain much more dynamic insights from social media, as things change more often.

The following are notes from Alex Hunt’s Net Gain 2016 presentation: “When Behavioral Science Turns the Classical Marketing Model on its Head”. This post has had limited editing and hence there will be typos.

Problem with today’s marketing – we are advertising by hitting people over the head with product benefits.

However, we think much less than we like to admit – we are more like Homer Simpson than we would like to acknowledge.

Alex provided a quote (can’t recall from whom) saying that humans think the way cats swim – they can, but would rather not.

Reference to System 1 thinking (Kahneman): Hunt says that 95% of our decisions and actions are based on System 1 thinking, with System 2 only occasionally coming into play.

Fame – if a brand comes readily to mind, it must be a good choice – predicts current brand share. Doesn’t really speak to product benefits though.

Kahneman: “Nothing in life is as important as you think it is, while you are thinking about it.”

Example – Hunt has moved to the suburbs, so his amount of driving has increased, which means he is more likely to get in a car accident. However, he is not concerned about it – instead he spends $50 per month on home security, even though the chances of a traffic accident are much higher.

Why make this decision? A: Fear of home invasion is top of mind.

Example: a mobile company in the UK that was in fifth place (called 3) gave a brief to their ad agency to “make the brand famous” – the result was an ad with a dancing pony, which went viral and helped improve the brand.

Feeling – if I feel good about a brand, it must be a good choice – predicts future brand share.

Example – he did a presentation in front of a room of accountants and asked how many had done a cost-benefit analysis before buying their last car. Only one had – a clear example that System 2 hadn’t been used in a very important choice.

Commercial example:

It was the most effective ad out of 500 in 2014 in how it made people feel. One year after the campaign launch, sales had doubled in France. It put emotion front and centre, was not product-focused at all, and by making people laugh it became the most successful ad. Take-away: better to ask what people think about a brand than to ask product detail questions.

Fluency – if I recognize a brand quickly, it must be a good choice – gives you the toolkit to build brand share.

Example – British Airways in the 80s stood for trust and used the Union Jack as their symbol – a shortcut for trust. They ended up removing it – Richard Branson bought the rights, and it worked well for Virgin. The power of distinctive assets.

Fluency examples – P&G logo, Gatorade’s stylised G.

Commercial example:

While it is common to test attributes and only look at logos at the very end of the questionnaire, it is important to put the focus on distinctive assets like logos.

Bringing in behavioural science means we should be able to simplify marketing.

The following are my notes from the 2016 Net Gain presentation of “Emotion Analytics Can Predict What People Do and Explain Why They Do It”, by Lana Novikova of Heartbeat Ai Technologies Inc. This has been posted shortly after the end of the presentation, so there has been little editing and hence there will be typos.

She had the idea 15 years ago, while at another job when internet data collection was just starting, to create a product to analyse open ends. In March 2016 she built a prototype of the product, which analyses phrases and maps their meaning onto different emotions.

First project was based on how women would feel toward chocolate – feelings of anger, joy and sadness.
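In the spirit of the prototype described above, here is a toy sketch of mapping open-ended phrases onto emotion categories with a small lexicon. The lexicon and matching rule are invented; the real product's emotion taxonomy and phrase handling are far richer.

```python
# Toy emotion coder for open-ended survey responses. The lexicon is
# invented for illustration; real tools use large curated taxonomies.

EMOTION_LEXICON = {            # assumed word -> emotion mapping
    "love": "joy", "happy": "joy",
    "hate": "anger", "furious": "anger",
    "miss": "sadness",
}

def code_emotions(phrase: str):
    """Return the emotions triggered by words in a phrase, in order."""
    found = []
    for word in phrase.lower().replace("!", "").split():
        if word in EMOTION_LEXICON:
            found.append(EMOTION_LEXICON[word])
    return found

print(code_emotions("I love chocolate but hate the price"))  # ['joy', 'anger']
```

Even this crude version shows how a single open end can carry mixed emotions (joy toward the chocolate, anger toward the price) – the kind of signal the chocolate study was after.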

The following are my notes from the 2016 Net Gain presentation of “The Internet of Things”, by Greg Dashwood of Microsoft. This has been posted shortly after the end of the presentation, so there has been little editing and hence there will be typos.


The following are my notes from the MRA 2016 CRC presentation “Stats 2 Story” by Dave Decelle (Netflix) and Ted Frank (Backstories Studio). As there has been limited editing there will be typos.

Presentation opens with the lights off and clips from “Moneyball” — good start!

Most people loved the movie — it was one big pitch for the use of data in sports. “Once again, nerds rule!”

But, before Moneyball, Bill James spent 20 years trying to get people to use what he had come up with — and many people ignored him.

Dave is here as Ted’s case study on storytelling.

The movie had storytelling on its side — an advantage Bill James did not have.

Executives often say that seeing chart after chart of stats in a presentation is like drinking from a firehose.

So, rules:

Keep it simple – like how movies cut out about half of the book.

Highlight what really matters – three or four things. Find out what they need – to reformulate a product, for example – then hit what is important.

Cut out everything else

Parse it into chunks the brain can handle

Example – the annual Netflix meeting called QVR.

Dave had 30 minutes to present, and he used these principles:

First he used a not-simple example – a MaxDiff methodology showing differences in the use of the “Netflix Original” logo. It took 1 minute 33 seconds.

Then he gave the audience a quiz to see how much information people had remembered.

Then, when he used just visualization, focused less on the details, and took a bit less time, it was much easier to understand.

When he did the same thing at TMRE, he split the session into two groups and asked one to view the first method, and the second the other. Those who had seen the more detailed one generally knew the methodology but not the result.
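For readers unfamiliar with the MaxDiff methodology mentioned above, its simplest analysis is a counting one: score each item by how often it is picked best minus how often it is picked worst, relative to how often it is shown. The tasks and item names below are invented.

```python
# Count-based MaxDiff scoring: score = (best picks - worst picks) / times shown.
# Tasks and item names are invented for illustration.
from collections import defaultdict

def maxdiff_counts(tasks):
    """tasks: list of (shown_items, best_pick, worst_pick) tuples."""
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for items, b, w in tasks:
        for item in items:
            shown[item] += 1
        best[b] += 1
        worst[w] += 1
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

tasks = [
    (("logo A", "logo B", "logo C"), "logo A", "logo C"),
    (("logo A", "logo B", "logo C"), "logo B", "logo C"),
]
scores = maxdiff_counts(tasks)  # logo C scores -1.0, A and B score 0.5 each
```

Serious MaxDiff work uses logit or hierarchical Bayesian models rather than raw counts, but counts convey the core idea in one slide – which is exactly the presenters' point about simplicity.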

Make it real (like movies)

setting

characters

action

Other example: 78% of Netflix members have heard of House of Cards (great), but only 38% know you can watch it on Netflix (problem) and only 30% know that it is exclusive to Netflix.

He showed this by starting with a picture from the show, then showing the decreases beside it for each situation.

Make it Powerful & Emotional

works by 1) deepening clarity and empathy/compassion and 2) inspiring people to get off their butts

But – we usually speak to rational side.

Difference b/n Netflix & HBO Content Promotion

NF – helpful, informative, convenient, relevant – used bar charts to show that Netflix performs better than HBO on these scores, then used clips of customers with similar opinions

Make sure to create tension, play music, use framing, pacing and anticipation

Business generally doesn’t use anticipation – which is why everybody falls asleep in meetings; you do not give them a reason to stay engaged.

Example of using Tension in Presentation

David mentioned how social media showed buzz around OITNB skyrocketing, while the Netflix name did not.

Tension: how do you bring up the Netflix name at the same time?

Then he mentioned a study that showed the “Bill Burr” effect – adding the “A Netflix Original” logo helped increase awareness of Netflix’s tie with the show.

And make sure to add the other elements: pacing, music, tension etc.

The presentation was extremely well received, and he was asked to show it to many different internal stakeholders – the power of a memorable story.

The following are my notes from the presentation given by Simon Callan (Foursquare) at MRA’s 2016 Corporate Researcher Conference. Due to limited editing there will be typos.

Original premise of Foursquare – use your mobile phone to find cool places and compete with your friends. It still has two consumer apps.

Very much now about personalized city guides. After a week of using Foursquare, it will start pinging you and suggesting places to go.

Swarm — a game that allows you to check in and compete with friends over who checks in most at a certain place.

FSQ: These apps generate a lot of data – 500 million photos, 87 million tips, etc. From these they can start mining tastes.

Swarm: 10 billion+ global check-ins, 8 million check-ins/day, 85 million public places and 105 million total places.

What do they do with it?

FSQ powers the geolocation of Waze, Uber, Twitter and WhatsApp.

Last year they made a prediction of how many new iPhones were sold based on foot traffic to stores. They predicted 13-15 million, and according to Tech Insider the prediction was “right on the nose” – closer than the analysts.

A lot of hedge and quant funds use it for stock predictions. Retail and CPG use it for customer analysis.

Predicted Chipotle’s sales dropped 30%; actually dropped 29.7%.
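The foot-traffic predictions above rest on a simple intuition: the percent change in panel visits is a proxy for the percent change in sales. A back-of-envelope sketch with invented numbers (Foursquare's actual models are certainly more sophisticated, with weighting and seasonality):

```python
# Back-of-envelope sketch: use the percent change in observed panel visits
# as a proxy for the percent change in sales. Numbers are invented.

def estimated_sales_change(visits_before: float, visits_after: float) -> float:
    """Percent change in visits, read as an estimate of sales change."""
    return (visits_after - visits_before) / visits_before * 100

change = estimated_sales_change(1_000_000, 700_000)  # a drop of about 30%
```

The direction and rough magnitude come straight from the panel; the hard part, per the talk, is the location detection and normalization underneath those visit counts.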

Location is hard to do, which is why it has taken FSQ several years to do.

FSQ can tell with confidence when someone is in an Apple Store and stays there for a few minutes. They are not looking at users being near places – users need to be in the location.

Q: How do they turn the signals into data?

They have 55 million monthly active users globally (25% in the US). From this base they carve out a panel of about 12 million that they link to census data. This generates a total of 300 million visits a month.

They apply normalization to the panel – weighting to census on gender, for example. They also have to account for changes in the app that might impact visit counts.
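The weighting step just described can be sketched as simple post-stratification: each group's weight is its census share divided by its panel share. The shares below are invented; real weighting covers more variables than gender.

```python
# Post-stratification sketch: weight panel groups back to census shares.
# The panel and census proportions are invented for illustration.

def poststrat_weights(panel_share, census_share):
    """Weight for each group = census share / panel share."""
    return {g: census_share[g] / panel_share[g] for g in panel_share}

panel = {"female": 0.40, "male": 0.60}    # observed in the panel
census = {"female": 0.51, "male": 0.49}   # target population

weights = poststrat_weights(panel, census)
# each female respondent counts ~1.275 times, each male ~0.817 times
```

Weighted visit counts then better reflect the population, which is what lets a 12-million-person panel stand in for overall foot traffic.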

What does Foursquare provide:

Store-level data – anyone can buy data for any single store – foot traffic, demographics, where visitors go before and after, and tastes.

Chain level data – trends across chains

Market share reports

Turning this into insights

Can use to look at competitive reports of foot traffic counts between different stores within the same category

Can do this at a city level, to see which are under-performing or over-performing

The following are my notes from the MRA 2016 CRC panel discussion with the following participants: Brett Townsend (Pepsico), Mark Kershisnik (Transform Strategy Partners), Rob Stone (Market Strategies International), Pratiti Raychoudhury (Facebook) and Jill Donahue (Nestle Purina North America). There has been little editing on these notes, and hence there will be typos. Responses are from individual panel members and have not been attributed.

Recent survey

1/3 say their department does a poor job of proving ROI

1/2 say acting on insights is often/always a challenge

Key techniques to transform research into driving ROI?

Problem: Often get wrapped up in the “interesting”, which isn’t sales. “We don’t have the luxury of interesting”. Need to have the mindset of everything must lead to sales.

How I measure research: have I changed their mind?

A lot of it is pre-work and post-work — what are you trying to learn, what is the objective, and what answer are you looking for? This helps determine how to design the methodology. The post-project role is to always be the voice of the consumer.

Important to make the human beings who are customers as tangible as possible, so teams can find out what customers are feeling — this helps with inputs into which candidates they are choosing.

Have allowed researchers to say “no” if a project doesn’t fit into the year’s strategic goals — you can think of 100 things that need to be done, but that doesn’t mean they should all be worked on at once.

Recent study – 65% corporate researchers say they have too many projects for staff, 50% have too many projects for budget.

Q: How frequently do you allow researchers to say no?

Q: What are the other things you are doing to make sure you are delivering enough insights?

Have people look at what they already know before taking on another project — if, for example, a previous study already answers 85% of what they need to know.

Also looked at researchers’ time to see how their day was spent. About 1/3 was spent managing logistics. They ended up setting up a system that reduced it to 5% – creating more capacity.

Put more of the burden on the agencies they work with. Said: we are busy, you have to do more for us, and you need to be more consultative in nature, not just give us an 80-page deck.

What Have You Done to Help Your Agencies to Do This?

Tell them “give me the trade-offs and let me choose”

We expect them to be better storytellers. If you can’t tell the story in 10 slides, you don’t have a story, you just have data.

Kick-off calls at the beginning of a project are important. Sometimes the supplier wants to provide everything about the industry when only a tiny piece might be necessary.

Staff are becoming busier – how are you driving the level of stakeholder engagement outside of projects so it’s not an order-taking function?

Insights people are embedded within teams – brand insights people within the brand team, for example – so that never occurs.

As a result, researchers aren’t playing catch-up; they are aware of issues as things happen.

One panelist indicated that his company set up a lab where people could come into a room where they could watch research taking place remotely so they are part of the conversation.

Are you doing anything different with technology to engage stakeholders?

Not a tech example, but one panelist created a quiz and had stakeholders figure out the answers.

Had video clips – asked execs questions, and then played consumers’ video responses to the same questions.

Try to have a very diverse pool of employees, split between academia and industry. Spent a lot of time understanding what students are learning in schools, then honed in on certain programs.

So many different skills exist now in research that it is impossible for one person to do everything. It is important to get specialists in different areas and set each up as the go-to person; at the same time, it is necessary to have generalists.

Need to look beyond the market research function at people from different industries – for example, data scientists from financial industries, and people with backgrounds in psychiatry, consumer behavior, etc.

The following are notes from the presentation by Wayne Hwang (Twitter) and John Mitchell (Applied Marketing Science) on “What is a Good Experience Really Worth? – Using Conjoint Analysis to Quantify the Value of Customer Service”.

Wayne told a story of United Airlines losing his suit one day before his wedding. He went through the standard customer service channels and nothing worked. Late at night he reached out over Twitter and complained about it. The suit was found within 10 minutes.

Got him thinking about the relationship between tweets and customer service.

Presentation based on airline industry.

85% of companies think they give good customer service, 8% of customers agree.

Airlines had record number of complaints in 2015.

Disconnect: publications like HBR talk about how important customer service is.

80% of social customer service requests come from Twitter. Not all of them are happy.

People generally say they don’t get responses from companies on Twitter, but are happy when they do.

Research questions:

1. Do customers remember good or bad experiences?

2. Are they willing to pay more after a good experience?

Problem with asking people what they want – they want everything, but they want to pay less.

Have to be cautious – it could be argued that even asking a question changes results. For example, roughly 18% of Republicans claim they would be very upset if their child dated a Democrat. However, roughly the same proportion of Red Sox fans say the same thing about their children dating a Yankees fan – so maybe asking the question sets up a response.

The Study.

Group: Twitter users who in the past 6 months received a response from an airline via Twitter (“Test Group”), as well as a group that hadn’t (“Control Group”).

Summary stats

tweets included top 5 major US airlines

median time to response was 21 minutes

in 7,217 out of 273,359 tweets (Top 3), the response was less than one minute

in 59,514, the response was less than 5 minutes

Longest was 2,298 hours
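Summary stats like these are straightforward to compute from the raw response times. A quick sketch (the sample values in minutes are invented, not the study's data):

```python
# Computing a median and threshold counts from raw response times
# (in minutes). The sample values are invented for illustration.

def median(values):
    """Median of a list: middle value, or mean of the two middle values."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

response_minutes = [0.5, 3, 21, 45, 90]
med = median(response_minutes)                       # 21
under_5 = sum(1 for m in response_minutes if m < 5)  # 2 of the 5 are under 5 min
```

Note how skewed such data is in practice – a median of 21 minutes can coexist with a maximum in the thousands of hours, which is why the study reports counts under thresholds rather than a mean.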

Conjoint Basics

What is it – survey technique and model used to measure preference for products and services

Underlying assumption – a consumer’s overall value or utility for a product is a weighted sum of the value of each of its parts.

Used it as follows:

Assigned people to cells given observed behavior and known experiences

Varied the product attributes in a defined way in choice tasks

Analyzed how people chose

Examined deltas in utilities across cells to back out brand value in dollars
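The last step – backing out dollar value from utility deltas – works because price has its own estimated utility slope: divide a utility gap between cells by the utility cost of one dollar. The utility numbers and price coefficient below are invented for illustration, not the study's estimates.

```python
# Converting a conjoint utility gap between two cells into dollars.
# All utility values here are invented for illustration.

def utility_gap_in_dollars(u_test, u_control, price_utils_per_dollar):
    """Dollar value of a utility gap = gap / utils-per-dollar of price."""
    return (u_test - u_control) / price_utils_per_dollar

# e.g. if the responded-to cell's brand utility is 10 utils higher and each
# dollar of price is worth 0.5 utils, the gap is worth $20
gap = utility_gap_in_dollars(26.0, 16.0, 0.5)  # 20.0
```

This is the mechanism behind the headline result: cells that got fast responses show higher brand utilities, and the price coefficient translates that gap into a willingness-to-pay figure.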

The survey showed up in users’ Twitter feeds.

Choice tasks were based on airline, seat location, % on-time arrival and price. Seat location and % on-time were dummy variables; they were really only trying to see if people would pay more if their issue had been resolved.

Also asked willingness to recommend and a few other questions afterward.

Ran a hierarchical Bayesian regression.

Challenges

Hard to administer because the user experience had to be consistent with Twitter’s brand values – short, concise, clean and mobile.

Analysis – control for the halo around certain brands, ensure enough sample for pairwise comparisons among cells, and build all final analyses by hand.

Results

Responding quickly drives value

Customers were willing to pay slightly more even if they were responded to in over 67 minutes, but over $20 more if the response came within a few minutes.