Low level task-based AI gets commoditized quickly, and more general AI is decades off. In the meantime, will new AI startups succeed, or will the value accrue to Google, Facebook, and Amazon?

While most of the machine learning talent works in big tech companies, massive and timely problems are lurking in every major industry outside tech.

What is Vertical AI?

In a recent talk at AI by the bay, I laid out a four-factor definition of what I consider to be a vertical AI startup.

1. Full stack products

Provide a full-stack fully-integrated solution to the end customer problem from the interface that solves for the need all the way down the stack to the functionality, models, and data that power the interface. This ecosystem is much more defensible over time than just proprietary data or models. Designing the right product interface requires subject matter expertise, and owning the interface allows you to instrument it and gather proprietary data. Then you’re able to build models that drive high-value functionality in a virtuous cycle between the interface and the data. You control the ‘data value chain’ and have pricing power and defensibility over time.

Example: Blue River builds agriculture equipment that reduces chemicals and saves costs. They ‘personalize’ treatment of each individual plant, applying herbicides only to the weeds and not to the crop or soil. They use computer vision to identify each individual plant, machine learning to decide how to treat each plant, and robotics to take precise corresponding action for each plant. Blue River is defensible because it’s incredibly hard to replicate such a complex full-stack product, from gathering the training data for the various models, to incorporating the models alongside robotics into the machines, to integrating these machines into existing farm equipment and distribution channels.

2. Subject matter expertise

Product and sales at vertical AI startups benefit from bringing in key leaders from the industry early on in the business. Building full-stack products requires deep subject matter expertise. Selling these products requires trust, respect, and relationships within the industry. Teams that manage to combine the subject matter and technical expertise are able to model the domain richly and drive innovation that comes from thinking outside the box by understanding what the box is. Teams that come with a domain-first approach tend to get stuck inside the box, and teams that come with a tech-first approach tend to get stuck out in left field. There is also a major issue with team evolution -- if you’re unable to set the joint domain-tech DNA early, then one side dominates, and it becomes a real challenge to bring in world-class folks from the other side, as they will never have the same level of authority and respect within the company.

Example: the Zymergen leadership team is a great mix of strong capabilities targeted at industrial biology: commercial (CEO Joshua Hoffman), scientific (CSO Zach Serber), and data (CTO Aaron Kimball). The harder it is to assemble the mixed team and set the company joint-DNA early on, the more defensible the business.

3. Proprietary data

The technology market is hyper competitive. As soon as you demonstrate good results, many people will copy you almost instantly if they can. Defensible AI businesses are built on proprietary data that is difficult to replicate. This happens in two phases: bootstrapping and compounding. In the bootstrap stage, you are building a unique set of training data by aggregating publicly available data and enriching it in some challenging way, running simulations to generate synthetic data, or doing BD deals to gather a set of internal company data. Once you have bootstrapped, you are building a ‘data flywheel’ into your products, so that you are capturing totally unique data over time from how your product is used, and that data capture is designed precisely to serve the needs of your models, which are designed to serve the needs of the product functionality, which is designed to meet the needs of the customer. This data value chain ensures that the customer’s motivation is aligned with your motivation to compound the value of your proprietary dataset.

Example: Merlon Intelligence gathers training data from compliance analyst interactions with a financial crimes investigation dashboard. Gathering the data requires a full stack product where the interface is designed and instrumented to gather data that feeds into the models. It’s a learning to rank setup -- learning to rank for risk just like the Facebook newsfeed learns to rank for engagement. Banks have a great deal of operational risk in deploying new financial crimes compliance software, so it’s a challenge to penetrate the market. The harder it is to gather your data, and the more it’s intertwined with the product and go-to-market strategy, the more defensible the business.
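To make the learning-to-rank framing concrete, here is a minimal pairwise learning-to-rank sketch on synthetic data. Everything here is an illustrative assumption -- the features, labels, and scikit-learn setup are invented for the example, not Merlon’s actual system:

```python
# Hypothetical sketch: pairwise learning-to-rank over analyst feedback.
# All features and labels below are synthetic, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a candidate alert's features; label 1 = analyst escalated it.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Build pairwise training examples: for each (escalated, dismissed) pair,
# the model learns that the escalated alert should rank higher.
pos, neg = X[y == 1], X[y == 0]
pairs = np.array([p - n for p in pos[:50] for n in neg[:50]])
pair_labels = np.ones(len(pairs))
# Add reversed pairs so both classes are represented.
pairs = np.vstack([pairs, -pairs])
pair_labels = np.concatenate([pair_labels, np.zeros(len(pair_labels))])

# No intercept: a pairwise ranking model scores difference vectors symmetrically.
ranker = LogisticRegression(fit_intercept=False).fit(pairs, pair_labels)

# Score new alerts: higher score means review first.
scores = X @ ranker.coef_.ravel()
ranked = np.argsort(-scores)  # indices of alerts, riskiest first
```

The same shape of model applies whether the signal being ranked is engagement (newsfeed) or risk (compliance alerts); what changes is where the labels come from, which is exactly why instrumenting the analyst interface matters.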

4. AI delivers core value

Amazon, Netflix, and Facebook are all companies that use AI to drive very high percentage lift in revenue and engagement. That’s valid and awesome, but AI is not the core value of their products -- Amazon is an ecommerce store, Netflix is a video entertainment company, and Facebook is a social media company. Back when we first started Data Collective, we called this scenario the ‘data sidecar’ -- like those really cool old motorcycles with an attached sidecar. AI is not the core value, but an attachment that optimizes the core value. By contrast, Vertical AI solutions are about AI unlocking entirely new opportunities rather than just optimizing existing opportunities.

Example: Opendoor’s entire business model for making a more liquid market in real estate is predicated upon the notion that they can use models to price a home so accurately that they can make an offer immediately. The more AI delivers the product's core value by unlocking a totally new opportunity through rich domain modeling within the vertical and models built on top of proprietary data gathered via the product itself, the more defensible the business.

Why Go Vertical?

1. Don’t get ripped off

Solve the business problem directly for the end customer and put yourself in a position of leverage to capture value from the full-stack solution. Avoid being disintermediated from the end customer and getting into a position of weakness. Otherwise you will wind up solving the hardest technology problems down the stack while subject to the strength of the solution designers up the stack, who will constantly negotiate you down and erode your slice of the pie.

2. Tasks get commoditized

You might think you have a special market position due to a novel deep net architecture, or because you have invested massive amounts of time in building a named entity recognizer or image tagger. The reality is that these low level tasks are commoditized very quickly. Today’s novelty is tomorrow’s open source, and that’s happening faster and faster each year. Look at low level tasks as building blocks that you compose into higher level solutions rather than as the critical IP of your business.

3. Software is eating the world

Every company in every industry needs to be a tech company, but most industries are struggling to deploy tech effectively, let alone AI. Carefully analyze the markets you are considering, and determine whether the incumbents have a protected market position (e.g. through regulation), in which case you should sell them picks and shovels, or whether they lack strong barriers to entry, in which case you may want to go for a disruptive challenger model.

4. Enterprise exits come in cohorts

Over 90% of AI startups are enterprise. Enterprise exits come in cohorts, and many are cohorts within specific industry verticals like financial services or healthcare. Rather than being a singular outlier, you want to be part of a wave of investment focused on a particular cohort of startups going after a niche. Focus your energy on analyzing verticals where both the customer segments and the venture capital community are keen to see solutions, and it will make it much easier for you to sell your products to customers and your company to investors.

Understanding Enterprise Cohorts

The folks at Sapphire Ventures had a couple of good posts on why enterprise funds may return more capital than consumer funds, and how enterprise exits come in cohorts, whereas consumer exits are dominated by outliers like Facebook, Snapchat, and WhatsApp.

Compared with consumer startups since 1995, enterprise startups have returned 40% more capital overall. Enterprise and consumer startups have generated equivalent IPO value, but enterprise has generated 2.5X the M&A value.

There are three major advantages to focusing on enterprise:

1. You are aiming at a 40% larger pool of value creation at the time of exit; $825B total exits for enterprise versus $582B total exits for consumer.

2. A broader distribution of value means that you’re probably more likely to create a $B+ company in enterprise than in consumer. The top five enterprise companies account for 11% of total value creation, whereas the top five consumer companies account for over 3X that amount, or 36% of total value creation.

3. The greater value created by M&A means that you probably have greater optionality for large M&A exits ahead of an IPO. Enterprise M&A accounted for $410B of exits, which is 2.5X the $168B of consumer M&A exits.

According to a CBInsights report on AI startups that have raised more than $30M, there are nearly 10X the number of enterprise startups as compared with consumer startups.

Selecting Vertical AI Cohorts

Market

First, look for big addressable markets (TAM) with healthy margins. Be scientific when evaluating TAM. Don’t fall into the trap of confirmation bias, seeking out only information that validates your opinions. Rather, think like a scientist and objectively seek out all available data, especially data that challenges your views. Avoid the 1% fallacy, also called the large market fallacy: we’ve all heard the one where ‘all we need to do is get 1% of market X, and we’re golden.’ A proper evaluation of TAM takes significant time and research, but it’s way cheaper and easier than wasting two years of your life chasing a market that is orders of magnitude smaller than you thought, or even worse, nonexistent.
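One antidote to the 1% fallacy is a bottom-up TAM estimate: multiply counts you can actually defend, rather than slicing a headline market number. A toy sketch, where every figure is a made-up placeholder:

```python
# Bottom-up TAM: count reachable customers and multiply by a realistic price.
# Every number below is an invented assumption for illustration.
target_accounts = 6000          # e.g. firms in the segment you could plausibly serve
reachable_share = 0.25          # fraction addressable by your channel/geography
annual_contract_value = 50_000  # defensible ACV in dollars

tam = int(target_accounts * reachable_share * annual_contract_value)
print(f"Bottom-up TAM: ${tam:,}")  # prints "Bottom-up TAM: $75,000,000"
```

If the bottom-up figure and the top-down sector number differ by orders of magnitude, the top-down estimate is probably hiding the 1% fallacy.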

If we’re looking top down at sectors in the US stock market, Finance and Healthcare are the biggest markets with the highest margins.

The next most attractive sectors are energy, utilities, basic industry, and transportation. Since energy and industrials tend to have higher margins, and utilities the lowest, you might consider focusing on energy and industrials.

Digging further into CBInsights data on both unicorn startups and AI startups, both have strong vertical representation from fintech and healthcare.

This is a good example where the data are all aligned -- fintech and healthcare are the largest markets with the highest margins and the most representation among both unicorns and AI startups. So these are solid markets to aim at.

Whitespace

Are there already a lot of smart people working on this who probably form the winning cohort? Given the massive investment in autonomous vehicles lately, and the fact that the size of that market is a bit smaller, you might instead consider focusing on a market like energy.

If we look at fintech unicorn cohorts, we see that most of the action has been in lending and payments, which have historically fallen mostly under the traditional banking industry. Insurance is about ⅓ the size of banking in the public markets, but only ⅕ the aggregate valuation and number of startups on the unicorn list.

Total US Market Cap by Industry in the Finance Sector

As another example, consider the pharma R&D process. Many AI pharma startups focus on finding new candidate compounds that they can sell to pharma companies. This is a sane strategy, because it avoids the $2.9B, 10+ year, <10% success rate process of bringing a new drug to market. It also surely feels motivating to work on finding new compounds that may help to treat something like cancer, but it leaves open whitespace downstream in the process, where the big money and the big bottlenecks are. Though arguably more of a ‘shallow tech’ than a ‘deep tech’ company, Science 37 is an example of a clinical trials venture that is really innovating on the fundamental model for running trials.

Timing

The right idea with the right team at the wrong time == the wrong idea.

Remember that the non-consumer stuff is likely to come in a big cohort of exits rather than a single outlier. Ask yourself if you are the only one who sees this opportunity in the market right now. If so, that may not be a good thing. You want the customers within your target vertical to have immediate unmet needs and VCs scouting that vertical ready to invest.

Nobody cares about your idea, they care about their needs. Even when it comes to their own needs, they can only focus on a few needs at a time. So they really only care about the few most timely needs this year. Are you focusing on an issue that is one of the top few needs of the year within your target industry?

One of my favorite descriptions of the importance of timing is laid out in a TED talk by Bill Gross.

He describes how, of the five factors he explored across 100 Idealab startups and 100 non-Idealab startups, timing was the dominant factor driving success. He gives a couple of great examples: Uber and Airbnb were both perfectly timed during a recession, when people needed the extra money. Idealab started z.com in the 1999-2000 period, when broadband penetration was too low and streaming video in the browser was janky. Two years later, broadband was over 50% penetration and Adobe Flash fixed the browser issue, and YouTube was perfectly timed.

Look at the market and be really honest with yourself about whether the consumers/businesses you are targeting are really ready for what you have to offer them.

Defensibility

My claim is that Vertical AI startups are inherently defensible. According to the four-factor definition above, Vertical AI startups build full stack products, have subject matter expertise in their vertical, gather proprietary data, and use AI to deliver the core value of their product. Each of the four core components of a Vertical AI business makes it more defensible.

Full stack products: The more complex it is to create the experience, the more defensible the business.

Subject matter expertise: The harder it is to assemble the mixed team and set the company joint-DNA early on, the more defensible the business.

Proprietary data: The harder it is to gather your data, and the more it’s intertwined with the product and go-to-market strategy, the more defensible the business.

AI delivers core value: The more AI delivers the product's core value by unlocking a totally new opportunity through rich domain modeling within the vertical and models built on top of proprietary data gathered via the product itself, the more defensible the business.

Have fun exploring

If you’re interested in vertical AI startups, I encourage you to follow the process outlined above for selecting opportunities based on market, whitespace, timing, and defensibility. In a recent talk at mlprague, I laid out a number of different examples that I find interesting.

Vertical AI has been the exclusive focus of my career: in financial services since 2002, as a startup founder since 2009, and as a founding partner of DCVC since 2011. I started FlightCaster in 2009, which seems to be the first AI startup in Y Combinator; Prismatic in 2012, which LinkedIn acquired in 2016; and Merlon in 2016, which grew to $Ms in revenue in its first year powering financial crimes compliance for global banks. We may be in an AI startup hype cycle now, but I’ve been doing this stuff for 15 years and will continue doing it long after the current hype cycle has subsided.

With AI in a full-fledged mania, 2017 will be the year of reckoning. Pure hype trends will reveal themselves to have no fundamentals behind them. Paradoxically, 2017 will also be the year of breakout successes from a handful of vertically-oriented AI startups solving full-stack industry problems that require subject matter expertise, unique data, and a product that uses AI to deliver its core value proposition.

Bots go bust

Over the past year a mania has risen up around ‘bots.’

In the technical community, when we talk about bots, we usually mean software agents which tend to be defined by “four key notions that distinguish agents from arbitrary programs; reaction to the environment, autonomy, goal-orientation and persistence.”
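The four notions can be illustrated with a toy agent loop. This is a hypothetical thermostat sketch, not any particular bot framework:

```python
# Toy illustration of the four agent properties: reaction, autonomy,
# goal-orientation, and persistence. The environment is a made-up thermostat.

class ThermostatAgent:
    """Goal-orientation: drives temperature toward a target setpoint."""

    def __init__(self, setpoint: float):
        self.setpoint = setpoint
        self.history = []  # persistence: state carried across steps

    def act(self, temperature: float) -> str:
        # Reaction: the action depends on the observed environment.
        action = "heat" if temperature < self.setpoint else "idle"
        self.history.append((temperature, action))
        return action

def run(agent: ThermostatAgent, temperature: float, steps: int) -> float:
    # Autonomy: the agent runs its own sense-act loop without outside input.
    for _ in range(steps):
        action = agent.act(temperature)
        temperature += 1.0 if action == "heat" else -0.1  # toy physics
    return temperature

agent = ThermostatAgent(setpoint=21.0)
final = run(agent, temperature=15.0, steps=20)
```

An arbitrary program, by contrast, would be a one-shot function call: no loop, no carried state, no goal it keeps pursuing as the environment changes.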

Enterprises have decided to usurp the term ‘bot’ to mean ‘any form of business process automation’ and created the term ‘RPA’, robotic process automation.

While business process automation will of course continue to play out for decades to come, the current mania around ‘bots’ defined as conversational interfaces over voice and chat will begin its collapse in 2017. Here’s why:

1. The social vs. personalization wars in consumer internet provide a good guiding light. Ultimately the winning personalization platform was Facebook, which was the winning social platform. People still like to interact with other people for most things, and I suspect that many of the chatbots will go the same way as the non-social media platforms that tried to bet on personalization without social curation. A lot of the thinking around bots is naively utilitarian and lacks the social intelligence to recognize the range of human needs being met by person-to-person interaction. For this reason, most bots will fail to retain users even if they can attract them initially.

2. There are a lot of misguided signals being drawn from the global messaging app boom, the rise of Slack, and the success of certain interactions on platforms in China like Weibo. A lot of folks have extrapolated from these trends to bet on platforms like AI-powered digital personal assistants. Per #1 above, these social platforms are solving for both utilitarian and emotional needs, and it’s not clear that we can extrapolate from this setting and apply it to pure utility AI-driven chatbots.

3. Conversational interfaces are often very inefficient for accomplishing tasks as compared to other, more visual solutions. Conversational interfaces are interesting and have been around in the HCI community for decades. There are certain applications where conversational interfaces are awesome, but in reality I think we’ll see that for the vast majority of applications, there are far more efficient interfaces to get things done.

Note that none of my reasons for the bot bust state that ‘the AI isn’t good enough yet.’ The issue with most systems like Siri is more that they’re poorly implemented. We can build many interesting bot interfaces using modern techniques; the bigger issue in my mind is that it’s not clear humans want to use them.

Deep learning goes commodity

Deep learning is in full mania right now. For those without much of a sense of what various AI terms mean, deep learning is part of machine learning, which is part of AI. Deep learning is not a different thing; it’s just a cool body of work that’s yielding state of the art results for lots of important problems, and so people are rightly availing themselves of it. If you want to understand the longitudinal picture here and how deep learning fits into the ever-evolving AI landscape, I wrote about this last fall.

Deep learning startup acquihires have replaced the iOS mobile app startups of 5 years ago. A bunch of companies were blindsided by the ability of deep learning, especially for computer vision, to generate superior results and tackle new problems. As a result, we’ve witnessed a major wave of M&A, with Google, Facebook, Twitter, Uber, Microsoft, and Salesforce running aggressive acquisition strategies to fill the gaps.

So if this is so important and highly sought after, why do I think it’s going commodity this year? Because of NIPS 2016 and the overall 2016 conference circuit. It’s very clear that deep learning is everywhere now. There are so many grad students coming out with these skills. Four years ago the story was dramatically different. The market has adjusted to create more supply.

Now, all this being said, I need to make a clarifying statement. I am suggesting that deep learning will become more commodity among machine learning people this year, but I am not suggesting that machine learning itself will become commodity. The premiums on machine learning talent will still be incredibly high. The premiums on deep learning startup acquihires that we’ve seen in the past few years will collapse after the second tier of tech companies and those outside tech (like the folks in Detroit) finish their current wave of acquisitions. I expect a steady flow of late adopters this year coming in with dumb money, but later in the year we may see this wave of M&A deals start to slow.

1. Cleantech isn’t a market, it’s a cross-cutting concern. Issues of climate change and sustainability are very serious and incredibly worthy ones to think about, both as causes and as for-profit businesses. A cross-cutting concern isn’t a business though; a business sells a product or service that customers want to buy. Tesla and SolarCity are arguably success stories for cleantech, but note that they are both ‘full stack businesses’ -- a car company and a solar energy company respectively. So when cleantech is an element of a full stack company selling a real product into a real market, it works, but cleantech for cleantech’s sake doesn’t work, because it doesn’t start from the premise of a customer need. Great businesses start with customer need. Great missionary businesses start with a vision defined by customer need, and incorporate a mission that aligns to satiating the need. An organization with a societal mission but without a customer-centered vision is at best a moderately effective philanthropic organization. Great businesses put customer needs first, not a cross-cutting technology trend, even if it’s a missionary one.

2. Green energy isn’t a market; energy is. Solar is king and growing fast -- because now it works economically. When Warren Buffett and Elon Musk are competing over a market, that’s likely a sign that it makes good business sense. Both view sustainability as an important mission, but also understand that it has to make sense as a business and for the customer first, and the mission must be achieved in service of the needs of the business’s customers and employees. Nothing is more ironic than an unsustainable business with a mission of sustainability.

3. Self-important save-the-world mentality. In cleantech, there was a lot of the hubristic knight-in-shining-armor attitude that is characteristic of tech manias. In AI over the past couple of years, we’ve started to see self-aggrandizing AI ethics committees and the like, people talking about what to do when the robots take all the jobs, and so on. It’s the attitude that those working in and around AI are now responsible for shepherding all human progress just because we’re working on something that matters. This haze of hubris blinds people to the fact that they are stuck in an echo chamber where everyone is talking about the tech trend rather than the customer needs and the economics of the businesses. This toxic reality distortion field is what allows the mania to draw large numbers of smart but self-important people into the impending web of doom.

4. Cleantech and AI are both deeply technical problems, and a startup and VC community increasingly trained up on consumer internet and trivial SaaS services is increasingly incapable of adequately evaluating investment opportunities in deeply technical domains. Driven by the state of hubris outlined in #3, people dive in after reading a few blog posts and hearing a few pitches. LinkedIn profiles are duly updated, and an era of ephemeral experts is born.

So how does this play out?

I have a theory that the information era of the economy fundamentally changed the mania-panic cycles we’ve experienced throughout human history. As a former hedge fund guy, I have read all the great books on financial history and market psychology. It’s been interesting to track how things have evolved differently since the mid-90’s.

I think that the rapid increase in social interaction and spread of information online created a self-heisenberging effect that pulls manias up to the front of a business cycle before it even really begins. Consumer internet is a great example, where the ’90s pre-mania led to the 2000 crash just as the actual business cycle was getting started. Two years later, in 2002, Google, which had started in 1998, was hiring up all the talent at the bottom of the bust and defining the real business cycle for consumer internet.

Four years after cleantech was pronounced dead by Wired, solar is the cleanest and cheapest source of energy, and Elon and Warren are all over it. Tesla and SolarCity are becoming a full stack cleantech empire.

So I think we are in this pre-mania for AI startups right now. Most of what I see out there right now is going to fail in the same ways that AI startups have been failing for 10 years. There is a very tiny community of folks that have been doing AI startups for 10 years or more, and the batch that is diving in at the top of this pre-mania is making the same mistake the cleantech founders did -- they are diving into AI instead of diving into a customer need.

AI startups right now are mostly hammers looking for nails. As this becomes more evident over the next 12-24 months, and the bigcos exhaust and ramp down their appetite for AI acquihires just as they did for mobile app dev shops, I suspect that we’ll start to see potential founders and VCs realize that something is off. At that point, I will get fewer AI startup pitches on LinkedIn from people who have decided to get into AI in the past 12 months.

MLaaS dies a second death

Machine Learning as a Service is an idea we’ve been seeing for nearly 10 years, and it’s been failing the whole time.

The bottom line on why it doesn’t work: the people that know what they’re doing just use open source, and the people that don’t will not get anything to work, ever, even with APIs.

Many very smart friends have fallen into this tarpit. Those who’ve been gobbled up by bigcos as a way to beef up ML teams include AlchemyAPI (IBM), Saffron (Intel), and MetaMind (Salesforce). Nevertheless, the allure of easy money from sticking an ML model up behind an API continues to attract lost souls.

Amazon, Google, and Microsoft are all trying to sell an MLaaS layer as a component of their cloud strategy. I’ve yet to see startups or bigcos using these APIs in the wild, and I see a lot of AI usage in the wild, so it’s doubtful that this is due to the small sample size of my observations.

Whether the services come from the big cloud providers or from startups, the end will be the same: they go sideways this year. Cloud providers will leave the services on, but they won’t be big money makers, and the MLaaS startups will start meeting their demise this year as growth goes sideways and the appetite to double down on them dries up.

The problem here is a very practical matter: the MLaaS solutions have no customer segment -- they serve neither the competent nor the incompetent customer segment.

The competent segment: you need machine learning people to build real production machine learning models, because it is hard to train and debug these things properly, and it requires a mix of understanding both theory and practice. These machine learning people tend to just use the same open source tools that the MLaaS services offer. So this knocks out the competent customer segment.

The incompetent segment: the incompetent segment isn’t going to get machine learning to work by using APIs. They are going to buy applications that solve much higher level problems, and machine learning will just be part of how those applications solve the problems. It’s hard enough to bring in the technical competence to do machine learning internally, and it’s much, much harder to bring in the ‘data product’ talent that can help you identify the right problems and the means to productize machine learning solutions. The incompetent segment includes everyone outside of the tech companies with established, strong machine learning and data product teams. Yes, that means the entire global business world across every industry. It’s quite a large segment. If you buy into the “software is eating the world” thesis, then you think that every company in every industry more or less has to become a tech company at some level. The same will be true for becoming a data company. There’s already a very wide gap in technical competence between top tech companies like Google and Facebook and the top companies in each industry outside tech. This gap is dramatically wider when it comes to data competence.

Full stack vertical AI startups actually work

I have been working with AI for nearly 20 years, and building silicon valley AI startups for nearly 10. I’m a cofounding partner of DCVC, a leading AI and data focused VC. My experience makes me both broadly excited and soberly focused on full stack vertical AI applications.

I’m broadly excited because I think that every industry will be transformed by AI. I’m soberly focused because low level task-based AI gets commoditized quickly. I think that if you’re not solving a full stack problem that’s high level enough, then you will be stuck in a commoditized world of lower level AI services, and you are going to have to be acquired or wind down due to lack of traction.

While most of the machine learning talent works in consumer internet giants and related general tech companies, massive and timely problems are lurking in every major industry outside tech. If you believe the ‘software is eating the world’ hypothesis, then every company in every industry will need to become a tech company.

When you focus on a vertical, you can find high level customer needs that can be met better with AI, or new needs that can’t be met without AI. These are terrific business opportunities, but they require much more business savvy and subject matter expertise. The generally more technical crowd starting AI startups tends to have neither, and tends neither to realize the need for, nor to have the humility to bring in, the business and subject matter expertise required to ‘move up the stack’, or ‘go full stack’ as I like to call it.

New full stack vertical AI startups are popping up in financial services, life sciences and healthcare, energy, transportation, heavy industry, agriculture, and materials. These startups will solve high level domain problems powered by proprietary data and machine learning models. Some of them will hit $100M in ARR in 2017-2018. These full stack AI startups will be to AI what Tesla and SolarCity were to cleantech.

This guide aims to present you with an easy way to understand and apply empathy better. It will hopefully be useful to everyone, and is written especially for leaders.

We start with an overview of emotional and social intelligence, and end with an audiobook workout routine.

Emotional and Social Intelligence

Much has been written about emotional and social intelligence. As the phrases become increasingly popular, they are often used interchangeably. Let’s start by establishing the distinction.

Emotional intelligence is the ability to accurately observe one’s emotions and the emotions of others, and social intelligence is the ability to apply these observations to navigate complex social situations.

People with higher emotional and social intelligence may exhibit better mental health, better work performance, and better leadership effectiveness.

Big Empathy and Little Empathy

It’s easier to empathize with others when you are sitting on a meditation cushion alone next to a babbling brook in a Zen garden. It’s harder when interactions with others stir up emotions in you during daily life. This is why I like to think of big empathy and little empathy.

Big empathy is a deep sense of compassionate interconnectedness such as that cultivated through Buddhist meditation practices. Buddhists see our perceptions of an independent and permanent self as an illusion constructed from constantly changing mental activity. As we resolve these illusions, subject and object become one and we experience a primordial awareness and loving interconnectedness. Big empathy is deep in its construction, and simple in its realization.

Little empathy is the everyday application of thoughtful interaction with others focused on how we can meet our own needs and the needs of others. Little empathy is easy to understand, and difficult to apply consistently. In the case of big empathy, we need only reckon with our own mind; in the case of little empathy, we have three things to deal with: 1) our mind with its thoughts and feelings, 2) the minds of others with their thoughts and feelings, and 3) the thoughts and feelings that arise in both our minds as the result of the thoughts and feelings expressed by the other.

Little empathy is especially tricky for leaders and those engaged in conflict resolution, because they play professional roles with an increased volume of sensitive interactions. We must be able to be with and communicate our own feelings and needs clearly without sacrificing them. We must hear the needs of others, even when they cannot yet see or express their needs themselves. Often the thoughts and feelings of others provoke emotional responses in us. These emotional responses can carry valuable information, but are challenging to interpret accurately and translate into fruitful actions — especially in real time. It’s also often quite easy to react to these emotions in a way that does not meet our needs or the needs of others. We tend to be easily frightened and defensive, so we often misread the actions of others — for example, we read resistance when really someone is scared or there’s a lack of clarity between us, or we read exhaustion as laziness.

To make matters more complicated, people exhibit myriad defense mechanisms. As an example of a particularly tricky defense mechanism to perceive correctly, reaction formation leads people into exaggerated behaviors and claims that are in direct opposition to their true thoughts and feelings. If someone working with you wants to have an impact but doesn’t know how, and isn’t performing well, it’s not unusual for them to shut down, reduce effort, and claim that it is because they do not care. If you are this person’s manager, it can be frustrating — they are performing poorly, not putting in the effort, and telling you they do not care. Cracking down on this situation will reinforce the pattern, whereas a sensitive but firm stance will have the best chances for resolving the issue and turning things around.

To be most effective, a leader needs to develop some of the skills that a great therapist has to see through surface level manifestations to the underlying issues. The rest of the guide focuses exclusively on how to build a foundation for this practice.

I’d listen to them back to back and loop that sequence a few times in total. You’re aiming to develop a second-nature understanding of the basics. These books are pretty simplistic popsci*, so you may be tempted to eschew them or jump ahead. It’s worth taking the time to repeat them and allow it all to fully sink in. Remember, with little empathy, the ideas are easy, but training yourself to execute the ideas is hard!

As you’re learning the fundamentals, start trying to apply them in small ways each day as you notice your own feelings and begin to notice what others are communicating indirectly.

The goal with these two books is to understand how to be smart about daily micro interactions and longer term macro interactions by developing a sense of pattern matching against all the examples shared in the books. Notice how the same ideas apply at the level of a fleeting interaction or long term statecraft.

As you’re reading, try to reflect on how the examples map to the concepts from emotional and social intelligence that you picked up during the basics stage so that you get a sense of not only what to do, but why.

Of all the books I’ve read over the years, this is one of the most impactful on my daily life. Marshall’s approach to micro interactions is steeped in a deep understanding of macro-level empathy, so it really brings it all together. He shares a unique approach to getting in touch with your own needs, hearing the needs of others, and focusing on how everyone can get their needs met. His fundamental hypothesis is that everyone can get all their needs met, and that our conflicts arise from not being in touch with our own needs, not being able to communicate our needs, not being able to hear the needs of others, and not looking for solutions where everyone’s needs can be met.

Marshall’s book may bring a lot together for you if you’ve read the other books. Putting it all into practice daily in any small ways that you can will help it sink in. Have some self-empathy while practicing: you will often wish you’d done a better job with interactions during the day, and may regret it in the evening, or even in real time. That’s OK; give yourself permission to just reflect and notice what you might try differently next time. Don’t be afraid to keep looping any books you find most helpful, or finding new books.

Expect three months of looping these audiobooks and practicing in your daily interactions to develop a sense of pattern matching and feel more confident in your interactions with others. Expect six months for it to become your new normal.

*academic results for emotional and social intelligence

Daniel Goleman provides a compelling narrative and anecdotes about how emotional and social intelligence impacts leadership effectiveness, but his work isn’t exactly replete with scientific rigor.

Ongoing academic debate centers on being precise about both how we measure emotional and social intelligence, and exactly how much of work performance and leadership effectiveness they predict above other established factors like IQ and personality tests like the 16PF questionnaire and the Big Five personality traits.

…while they still offer justification for using the quite “broad” Goleman model, which includes almost every individual-difference variable that is not IQ (for an interesting critique of this model see Sternberg, 1999), they now recognize the value of the focused definition of EI as proposed by Peter Salovey and associates (cf. Mathews, et al., 2002).

…correlation effect size values are considered small if less than or equal to .10, medium if equal to .25, and large if greater than or equal to .40. This meta-analysis yielded a combined effect of r = .380, which can be interpreted as a moderately strong relationship between emotional intelligence and leadership effectiveness. Although claims of the paramount or essential value of emotional intelligence as a component of leadership may be overstated, it would appear that emotional intelligence is at least an important element in the exercise of effective leadership.

I think that debates here are largely about how we define and measure different traits that are predictors of leadership. If you look at individual traits, or higher level factors driven by several traits, a lot of the same kinds of ideas show predictive power for leadership effectiveness whether you call them social intelligence or personality traits. So let’s just assume that personality traits and emotional and social intelligence are all inputs to exercising applied empathy, and that exercising applied empathy increases leadership effectiveness.

As quantitative finance has matured and the importance of computation has exploded, it's time to use machine learning to harvest the new low-hanging fruit. Traditionally, quants might work alongside engineers and computer scientists -- the quants provide the statistical expertise, and the computer scientists and engineers provide the computational expertise. Machine learning folks combine statistics and computation in one brain to build models that leverage new levels of scale and richness to generalize better to unseen data and tackle new problems.

Quants

Finance and statistics have overlapped for over 100 years, at least since Louis Bachelier published The Theory of Speculation in 1900. Modern quantitative finance folks, called quants, have strong statistical backgrounds and come from a broad set of fields like finance, economics, physics, statistics, actuarial science, and so on. Pure quants are stronger in statistics and less so in computation. As the importance of computation has exploded, the traditional pure quant skill set has become outmoded. This has been changing over the past decade, so the non-computational characterization of quants is by no means universal. Many statisticians argue that computational statistics should be part of the primary education of a statistician.

Quants write equations

Scale

Machine learning folks have both strong statistical and computational backgrounds. That said, there are those focused more on machine learning theory, especially in academia, who are incredibly deep at the statistics end of the spectrum. For those focused more on applying machine learning in industry, there is no need to be as statistically deep as a stats PhD. Rather, there is a need to combine statistics and optimization with distributed systems to tackle large scale problems that present complicated engineering challenges. Leveraging state of the art systems engineering and big data allows applied machine learning folks to tackle data problems at a new scale (size of data) and complexity (richness of data -- for example working with text or images). Much of this industry work was pioneered at Google during the early 2000s -- for example the now widely used MapReduce model for distributed computing was first developed to meet the needs of training machine learning models on web scale data sets.
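To make the MapReduce model concrete, here is a toy single-process sketch of its classic word-count example. This is purely illustrative -- real MapReduce runs the map and reduce phases across many machines with a distributed shuffle in between -- and all function names here are my own, not part of any framework.

```python
from collections import defaultdict
from itertools import chain

# Map phase: each "worker" processes its shard of documents independently,
# emitting (word, 1) key-value pairs.
def map_phase(shard):
    return [(word, 1) for doc in shard for word in doc.split()]

# Shuffle: group all intermediate pairs by key, as the framework would
# when routing pairs to reduce workers.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: combine the list of values for each key into a final result.
def reduce_phase(grouped):
    return {key: sum(values) for key, values in grouped.items()}

# Two "shards" of documents, as if split across two map workers.
shards = [["the cat sat", "the dog"], ["the cat ran"]]
pairs = list(chain.from_iterable(map_phase(s) for s in shards))
counts = reduce_phase(shuffle(pairs))
# counts maps each word to its total occurrences across all shards
```

The key design point is that map and reduce are pure functions over independent chunks of data, which is what lets the framework parallelize them across thousands of machines.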

Machine learning engineers write code

Richness

A good way to understand the difference between the quant and machine learning toolbox is to consider how each would throw a linear model at a problem. A quant will tend to use a data sample, an out of the box model from a software package like R, and fit it in a standard way. A machine learning person might train it on a larger dataset over a larger parameter space with a loss function and optimization algorithm tailored to the specific problem. This combination of model, features, large data, loss function, and optimization algorithm allows machine learning folks to use a richer toolbox to build models that generalize better to unseen data.
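As a sketch of this contrast, the snippet below fits the same linear model two ways on synthetic data with outliers: once with a standard out-of-the-box least-squares fit, and once with a loss tailored to the problem (a robust Huber-style loss) minimized by gradient descent. The data, loss choice, and hyperparameters are all illustrative assumptions, not from the original text.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=n)
y[::50] += 10.0  # inject heavy-tailed outliers into the target

# Quant-style: standard ordinary least squares, straight out of the box.
w_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# ML-style: same linear model, but with a Huber loss (quadratic near zero,
# linear in the tails) fit by gradient descent, so outliers are down-weighted.
def huber_grad(residuals, delta=1.0):
    # derivative of the Huber loss with respect to the residual
    return np.where(np.abs(residuals) <= delta,
                    residuals,
                    delta * np.sign(residuals))

w_huber = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    residuals = X @ w_huber - y
    w_huber -= learning_rate * X.T @ huber_grad(residuals) / n

# w_huber sits closer to w_true than w_ols, because the tailored loss
# is less distorted by the injected outliers.
```

The point is not that the robust fit is always better; it's that controlling the loss function and optimizer gives you a richer toolbox when the standard fit's assumptions don't hold.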

Newness

Many unexploited opportunities have been evident to quants for decades; the solutions may be clear statistically at some level, but the limiting factor has been computation. A simple example is incorporating unstructured data like online content, or semi-structured data like company reports and transaction data, into predictive models. Feature engineering is the process machine learning folks use to generate inputs to statistical models from raw input data. There are approaches for automated feature learning with techniques like deep learning -- recently, this has allowed us to unlock the potential of understanding and labeling images. Then there are approaches that require collaboration with subject matter experts. Effective feature engineering requires an understanding of how to feed the right kind of information into an optimization algorithm that allows the model to learn what it needs to in order to perform well at its task. The approaches that machine learning folks take to feature engineering are computationally sophisticated in a way that harnesses much more information from the raw data than traditional quants. There are also techniques like unsupervised learning and distant supervision that enable building models that perform well on tasks for which there is either little or no training data to learn from. Lastly, fields like natural language processing and deep learning allow us to capture information contained in raw text and images, and use this information to solve totally new problems.
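As a minimal illustration of feature engineering on unstructured text, here is a toy bag-of-words transform that turns raw documents into count vectors a model can consume. Real pipelines would use far richer representations (TF-IDF weighting, n-grams, learned embeddings); the function name and example documents are my own.

```python
import re
from collections import Counter

def bow_features(docs):
    """Toy bag-of-words: turn raw text into numeric count vectors."""
    # tokenize: lowercase and keep alphabetic runs only
    tokenized = [re.findall(r"[a-z]+", d.lower()) for d in docs]
    # build a fixed vocabulary across the corpus
    vocab = sorted({t for tokens in tokenized for t in tokens})
    # one count vector per document, aligned to the vocabulary
    vectors = []
    for tokens in tokenized:
        counts = Counter(tokens)
        vectors.append([counts.get(t, 0) for t in vocab])
    return vocab, vectors

docs = ["Revenue grew sharply", "Revenue fell", "Costs grew"]
vocab, vecs = bow_features(docs)
# vocab: ['costs', 'fell', 'grew', 'revenue', 'sharply']
# vecs[0]: [0, 0, 1, 1, 1]  -- counts for "Revenue grew sharply"
```

Even this crude transform turns free text into inputs a linear model or gradient-boosted tree can learn from, which is the essence of bringing semi-structured data like company reports into predictive models.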

Data Science

While ‘quant’ and ‘machine learning’ are clearly defined terms at this point, ‘data science’ is a place to be careful, as the definition is still in flux. This title could be used by someone with a background in applied machine learning whose deliverable is working production models that accomplish a specific task. It could also be used by someone who plays an analyst role leveraging environments like MATLAB, R, or Python, and whose deliverables are visualizations, studies, and presentations. Both are valuable, but it is important to work backward from your desired deliverable to define your staffing needs. If you are searching for people on LinkedIn to build out a team that’s meant to deliver working models, I’d search for ‘machine learning’ rather than ‘data science.’

Quantitative finance has matured to the point that it yields terrific results for many problems -- delivering models that eclipse the performance of humans for many tasks. Machine learning is not magic: depending on the problem and available data, a traditional quant approach might be state of the art. In these cases, quants are doing the same thing that a machine learning person would do. Machine learning shines when the scale of the available data and richness of the toolbox enables models that generalize better to unseen data, or allows us to tackle new problems. Due to the maturity of quantitative finance and the economic benefits of better performance on important finance problems, the market for applying these traditional approaches is pretty efficient. This means that much of the new low-hanging fruit lies in applying machine learning approaches pioneered in the tech world at larger scale with new data, richer models, and to new problems. If your financial services organization is looking at running a machine learning initiative, try to find a problem where machine learning provides a real edge above and beyond traditional quant approaches you are already employing. Then work backward from a clear deliverable to define the team composition most likely to yield success.

Follow Bradford on Twitter to stay up to date on machine learning in finance.