cdixon blog

cdixon.org, 2019-01-09

During a 2007 media tour in which Steve Jobs showed the device to reporters, one journalist criticized the iPhone’s touch-screen keyboard.

“It doesn’t work,” the reporter said.

Jobs stopped for a moment and tilted his head. The reporter said the keys were too small for thumb typing and caused constant typos.

Jobs smiled and then replied: “Your thumbs will learn.”

When the iPhone was introduced in 2007, it mystified its competitors, because it wasn’t built for the world as it existed. Wireless networks were too slow. Smartphone users only knew how to use physical keyboards. There were no software developers making apps for touchscreen phones. It frequently dropped phone calls.

But the iPhone was such a remarkable device — fans called it “The Jesus Phone” — that the world adapted to it. Carriers built more wireless capacity. Developers invented new apps and interfaces. Users learned how to rapidly type on touchscreens. Apple kept releasing better versions, fixing problems and adding new capabilities.

Smartphones are a good example of a broader historical pattern: technologies usually arrive in pairs, a strong form and a weak form. Here are some examples:

| Strong | Weak |
| --- | --- |
| Public internet | Private intranets |
| Consumer web | Interactive TV |
| Crowdsourced encyclopedia (Wikipedia) | Expert-curated encyclopedia (e.g. Nupedia, Encarta) |
| Crowdsourced video (YouTube) | Video tech for media companies (e.g. RealPlayer) |
| Internet video chat (Skype) | Voice-over-IP (e.g. Vonage) |
| Streaming music (Spotify) | MP3 downloads (e.g. iTunes) |
| Touchscreen smartphones with full operating system and app store (iPhone) | Limited-app smartphones with physical keyboards (e.g. Blackberry) |
| Fully electric cars (Tesla) | Hybrid cars |
| Permissionless blockchains powered by cryptocurrencies | Permissioned/private blockchains |
| Public cloud | Private/hybrid cloud |
| App-based media companies (e.g. Netflix) | Video on demand delivered by cable companies |
| Virtual reality | Augmented reality |
| E-sports | Traditional sports delivered over the internet |

Strong technologies capture the imaginations of technology enthusiasts. That is why many important technologies start out as weekend hobbies. Enthusiasts vote with their time, and, unlike most of the business world, have long-term horizons. They build from first principles, making full use of the available resources to design technologies as they ought to exist. Sometimes these enthusiasts run large companies, in which case they are often, like Steve Jobs, founders who have the gravitas and vision to make big, long-term bets.

The mainstream technology world notices the excitement and wants to join in, but isn’t willing to go all the way and embrace the strong technology. To them, the strong technology appears to be some combination of strange, toy-like, unserious, expensive, and sometimes even dangerous. So they embrace the weak form, a compromised version that seems more familiar, productive, serious, and safe.

Strong technologies often develop according to the Perez/Gartner hype cycle:

During the trough of disillusionment, entrepreneurs and others who invested in strong technologies sometimes lose faith and switch their focus to weak technologies, because the weak technologies appear nearer to mainstream adoption. This is usually a mistake.

That said, weak forms of technology can be successful. For example, it is very likely that augmented reality will be important, watching traditional sports on the internet will be popular, and so on.

But it’s strong technologies that end up defining new eras. What George Bernard Shaw said about people also applies to technologies:

The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

Weak technologies adapt to the world as it currently exists. Strong technologies adapt the world to themselves. Progress depends on strong technologies. Your thumbs will learn.

cdixon.org, 2015-06-07

“How to hit home runs: I swing as hard as I can, and I try to swing right through the ball… The harder you grip the bat, the more you can swing it through the ball, and the farther the ball will go. I swing big, with everything I’ve got. I hit big or I miss big.” – Babe Ruth

One of the hardest concepts to internalize for those new to VC is what is known as the “Babe Ruth effect”:

Building a portfolio that can deliver superior performance requires that you evaluate each investment using expected value analysis. What is striking is that the leading thinkers across varied fields — including horse betting, casino gambling, and investing — all emphasize the same point. We call it the Babe Ruth effect: even though Ruth struck out a lot, he was one of baseball’s greatest hitters. — ”The Babe Ruth Effect: Frequency vs Magnitude” [pdf]

The Babe Ruth effect occurs in many categories of investing, but is especially pronounced in VC. As Peter Thiel observes:

Actual [venture capital] returns are incredibly skewed. The more a VC understands this skew pattern, the better the VC. Bad VCs tend to think the dashed line is flat, i.e. that all companies are created equal, and some just fail, spin wheels, or grow. In reality you get a power law distribution.

The Babe Ruth effect is hard to internalize because people are generally predisposed to avoid losses. Behavioral economists have famously demonstrated that people feel a lot worse about losses of a given size than they feel good about gains of the same size. Losing money feels bad, even if it is part of an investment strategy that succeeds in aggregate.
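To make the arithmetic concrete, here is a toy portfolio simulation. All of the probabilities and outcome multiples below are invented for illustration; they are not Horsley Bridge data:

```python
import random

def simulate_fund(n_investments=100, seed=0):
    """Toy venture portfolio: most checks lose money, a few return big."""
    rng = random.Random(seed)
    multiples = []
    for _ in range(n_investments):
        r = rng.random()
        if r < 0.50:
            multiples.append(0.0)                      # half are wiped out
        elif r < 0.90:
            multiples.append(rng.uniform(0.5, 2.0))    # modest outcomes
        else:
            multiples.append(rng.uniform(10.0, 70.0))  # rare "home runs"
    return multiples

m = simulate_fund()
total = sum(m)
top_5_pct = sorted(m, reverse=True)[:5]
print(f"fund-level multiple: {total / len(m):.1f}x")
print(f"share of returns from top 5% of checks: {sum(top_5_pct) / total:.0%}")
```

Even though most individual checks in this sketch lose money, the fund as a whole returns a multiple of invested capital, and a handful of investments account for most of the gains: the Babe Ruth effect in miniature.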

People usually cite anecdotal cases when discussing this topic, because it’s difficult to get access to comprehensive VC performance data. Horsley Bridge, a highly respected investor (Limited Partner) in many VC funds, was kind enough to share with me aggregated, anonymous historical data on the distribution of investment returns across the hundreds of VC funds they’ve invested in since 1985.

As expected, the returns are highly concentrated: roughly 6% of investments, representing 4.5% of dollars invested, generated about 60% of the total returns. Let’s dig into the data a little more to see what separates good VC funds from bad ones.

Home runs: As expected, successful funds have more “home run” investments (defined as investments that return >10x):

(For all the charts shown, the X-axis is the performance of the VC funds: great VC funds are on the right and bad funds are on the left.)

Great funds not only have more home runs, they have home runs of greater magnitude. Here’s a chart that looks at the average performance of the “home run” (>10x) investments:

The home runs for good funds are around 20x, but the home runs for great funds are almost 70x. As Bill Gurley says: “Venture capital is not even a home run business. It’s a grand slam business.”

Strikeouts: The Y-axis on this chart is the percentage of investments that lose money:

Here is the same chart with the Y-axis weighted by dollars invested per investment:

As expected, lots of investments lose money. Venture capital is a risky business.

Notice that the curves are U-shaped. It isn’t surprising that the bad funds lose money often, or that the good funds lose money less often than the bad funds. What is interesting, and perhaps surprising, is that the great funds lose money more often than the good funds do. The best VC funds truly do exemplify the Babe Ruth effect: they swing hard, and either hit big or miss big. You can’t have grand slams without a lot of strikeouts.

cdixon.org, 2015-05-13

“How did you go bankrupt?”
“Two ways. Gradually, then suddenly.”
― Ernest Hemingway, The Sun Also Rises

The core growth process in the technology business is a mutually reinforcing, multi-step, positive feedback loop between platforms and applications. This leads to exponential growth curves (Peter Thiel calls them power law curves), which in idealized form look like:

The most prominent recent example of this was the positive feedback loop between smartphones (iOS and Android phones) and smartphone apps (FB, WhatsApp, etc):

After the fact, exponential curves look relatively smooth. When you are in the midst of them, however, they feel like they are divided into two stages: gradual and sudden.

Singularity University calls this the “deception of linear vs exponential growth”:

Today, smartphone growth seems obviously exponential. But just a few years ago many people thought smartphones were growing linearly. Even Mark Zuckerberg underestimated the importance of mobile in the “feels gradual” phase. In 2011 or so, he realized what we were experiencing was actually an exponential curve, and consequently dramatically increased Facebook’s investment in mobile:

Exponential growth curves in the “feels gradual” phase are deceptive. There are many things happening today in technology that feel gradual and disappointing but will soon feel sudden and amazing.
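The deception is easy to reproduce with a few lines of code; the growth rates below are invented purely for illustration:

```python
def linear(t):
    return 15 * t            # steady adoption: 15 units per period

def exponential(t):
    return 10 * 1.5 ** t     # 50% compounding growth per period

# In the early periods the exponential curve actually trails the straight
# line, which is why it "feels gradual"; later it leaves the line far behind.
for t in (1, 3, 5, 10, 15):
    print(t, linear(t), round(exponential(t)))
```

At period 3 the exponential curve looks disappointing next to the line; by period 15 it is an order of magnitude ahead.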

cdixon.org, 2015-03-24

Over the past decade, computing resources that were previously available only to large organizations became available to almost anyone. Using cloud-scale development platforms like Amazon Web Services, developers can write software that runs on hundreds or even thousands of servers, and do so relatively cheaply.

But it is still difficult to write software that makes efficient use of this abundant computing. For some projects, like creating websites, there are well-known software architectures that work reasonably well. In other areas, there’s been progress building generalized tools (for example, Hadoop in data processing). For the most part, however, developers need to solve the parallelization problem over and over again for each application they develop. New tools that help them do this are sorely needed.

Today, I am excited to announce that a16z is investing $20M in Improbable, a London-based company that was founded by a group of computer scientists from the University of Cambridge. Improbable’s technology solves the parallelization problem for an important class of problems: anything that can be defined as a set of entities that interact in space. This basically means any problem where you want to build a simulated world. Developers who use Improbable can write code as if it will run on only one machine (using whatever simulation software they prefer, including popular gaming/physics engines like Unity and Unreal), without having to think about parallelization. Improbable automatically distributes their code across hundreds or even thousands of machines, which then work together to create a seamlessly integrated, simulated world.

The Improbable team had to solve multiple hard problems to make this work. Think of their tech as a “spatial operating system”: for every object in the world — a person, a car, a microbe — the system assigns “ownership” of different parts of that entity to various worker programs. As entities move around (according to whatever controls them — code, humans, real-world sensors) they interact with other entities. Often these interactions happen across machines, so Improbable needs to handle inter-machine messaging. Sometimes entities need to be reassigned to new hardware for load balancing. When hardware fails or network conditions degrade, Improbable automatically reassigns the workload and adjusts the network flow. Getting the system to work at scale under real-world conditions is a very hard problem that took the Improbable team years of R&D.
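To make the “spatial operating system” idea more tangible, here is a deliberately crude sketch of grid-based entity ownership. Everything here (the grid scheme, the names, the hand-off rule) is my own illustration of the general technique, not Improbable’s actual design:

```python
CELL_SIZE = 100    # width of a grid cell in world units (illustrative)
NUM_WORKERS = 4    # number of worker processes (illustrative)

def owner(x, y):
    """Deterministically map a world position to a worker process."""
    cell = (int(x) // CELL_SIZE, int(y) // CELL_SIZE)
    return hash(cell) % NUM_WORKERS

class Entity:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y
        self.worker = owner(x, y)

    def move(self, dx, dy):
        self.x += dx
        self.y += dy
        new_worker = owner(self.x, self.y)
        if new_worker != self.worker:
            # Crossing a cell boundary hands the entity off to another
            # worker; in a real system this is an inter-machine transfer
            # with messaging, load balancing, and failure handling.
            self.worker = new_worker

car = Entity("car", 95.0, 50.0)
car.move(10.0, 0.0)  # crosses from cell (0, 0) into cell (1, 0)
```

The hard problems Improbable actually solved (inter-machine messaging, live reassignment under load, fault recovery) live in exactly the branches this sketch waves away.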

One initial application for the Improbable technology is in gaming. Game developers have been trying to build virtual worlds for decades, but until now those worlds have been relatively small, usually running on only a handful of servers and relying on hacks to create the illusion of scale. With Improbable, developers can now create games with millions of persistent, complex, interacting entities. In addition, they can spend their time inventing game features instead of building back-end systems.

Beyond gaming, Improbable is useful in any field that models complex systems — biology, economics, defense, urban planning, transportation, disease prevention, etc. Think of simulations as the flip side to “big data.” Data science is useful when you already have large data sets. Simulations are useful when you know how parts of the system work and want to generate data about the system as a whole. Simulations are especially well suited for asking hypothetical questions: what would happen to the world if we changed X and Y? How could we change X and Y to get the outcome we want?
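A tiny epidemic model illustrates what “generate data about the system as a whole” means: encode simple local rules, then read off system-level behavior. The parameters are made up for illustration, and the textbook SIR model stands in for the far richer simulations described above:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One step of the classic SIR epidemic model (population fractions).
    beta is the infection rate, gamma the recovery rate (illustrative values)."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

s, i, r = 0.99, 0.01, 0.0   # start: 1% of the population infected
peak = 0.0
for _ in range(300):
    s, i, r = sir_step(s, i, r)
    peak = max(peak, i)
print(f"peak infected fraction: {peak:.2f}")

# A hypothetical question, answered by re-running the world:
# what if an intervention halved the infection rate?
s, i, r, peak2 = 0.99, 0.01, 0.0, 0.0
for _ in range(300):
    s, i, r = sir_step(s, i, r, beta=0.15)
    peak2 = max(peak2, i)
print(f"peak with intervention: {peak2:.2f}")
```

This is the "what would happen if we changed X?" workflow in miniature: the rules are known, and the data about the whole system is generated rather than collected.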

Improbable was started three years ago at Cambridge by Herman Narula and Rob Whitehead. They have since built an outstanding team of engineers and computer scientists from companies like Google and top UK computer science programs. They’ve done all of this on a small seed financing, supplemented by customer revenue and research grants. We are thrilled to partner with Improbable on their mission to develop and popularize simulated worlds.

I felt it the first time when I visited a school. It was third and fourth graders, and they had a whole classroom full of Apple II’s. I spent a few hours there, and I saw these third and fourth graders growing up completely different than I grew up because of this machine.

What hit me about it was that here was this machine that very few people designed — about four in the case of the Apple II — who gave it to some other people who didn’t know how to design it but knew how to make it, to manufacture it. They could make a whole bunch of them. And then they gave it to some people who didn’t know how to design it or manufacture it, but knew how to distribute it. And then they gave it to some people who didn’t know how to design or manufacture or distribute it, but knew how to write software for it.

Gradually this sort of inverse pyramid grew. It finally got into the hands of a lot of people — and it all blossomed out of this tiny little seed.

It seemed like an incredible amount of leverage. It all started with just an idea. Here was this idea, taken through all of these stages, resulting in a classroom full of kids growing up with some insights and fundamentally different experiences which, I thought, might be very beneficial to their lives. Because of this germ of an idea a few years ago.

That’s an incredible feeling to know that you had something to do with it, and to know it can be done, to know that you can plant something in the world and it will grow, and change the world, ever so slightly.

cdixon.org, 2015-02-01

An “idea maze” is a map of all the key decisions and tradeoffs that startups in a given space need to make:

A good founder is capable of anticipating which turns lead to treasure and which lead to certain death. A bad founder is just running to the entrance of (say) the “movies/music/filesharing/P2P” maze or the “photosharing” maze without any sense for the history of the industry, the players in the maze, the casualties of the past, and the technologies that are likely to move walls and change assumptions.

I thought it would be interesting to show an example of an idea maze for an area that I’m interested in: AI startups. Here’s a sketch of the maze. I explain each step in detail below.

“MVP with 80–90% accuracy.” The old saying in the machine learning community is that “machine learning is really good at partially solving just about any problem.” For most problems, it’s relatively easy to build a model that is accurate 80–90% of the time. After that, the returns on time, money, brainpower, data etc. rapidly diminish. As a rule of thumb, you’ll spend a few months getting to 80% and something between a few years and eternity getting the last 20%. (Incidentally, this is why when you see partial demos like Watson and self-driving cars, the demo itself doesn’t tell you much — what you need to see is how they handle the 10–20% of “edge cases” — the dog jumping out in front of the car in unusual lighting conditions, etc).

At this point in the maze you have a choice. You can either 1) try to get the accuracy up to near 100%, or 2) build a product that is useful even though it is only partially accurate. You do this by building what I like to call a “fault tolerant UX.”

“Create a fault tolerant UX.” Good examples of fault-tolerant UXs are iOS autocorrect and Google search’s “did you mean X?” feature. You could also argue Google search itself is a fault tolerant UX: showing 10 links instead of going straight to the top result lets the human override the machine when the machine gets the ranking wrong. Building a fault tolerant UX isn’t capitulation, but it does mean a very different set of product requirements. (In particular, latency is very important when you want the human and machine to work together—this generally affects your technical architecture).
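As a concrete (and invented) illustration of the pattern, here is a toy assistant that never commits to a single answer: it surfaces a ranked shortlist and lets the human pick. The word-overlap scorer is a stand-in for a real model that is right only 80–90% of the time:

```python
def score(query, candidate):
    """Toy relevance score: word overlap (a stand-in for a real model)."""
    q, c = set(query.lower().split()), set(candidate.lower().split())
    return len(q & c) / max(len(q | c), 1)

def suggest(query, corpus, k=3):
    """Fault tolerant UX: return the top-k candidates for the human to
    choose from, rather than committing to the model's single best guess."""
    ranked = sorted(corpus, key=lambda cand: score(query, cand), reverse=True)
    return ranked[:k]

commands = ["schedule a meeting", "cancel my meeting", "book a flight"]
print(suggest("set up a meeting", commands, k=2))
# → ['schedule a meeting', 'cancel my meeting']
```

When the model’s top guess is wrong, the user simply picks the second item instead of hitting a dead end, which is the whole point of a fault tolerant UX.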

Ok so let’s suppose you decide to go for 100% accuracy. How do you get there? You won’t get the last 10–20% through algorithms. You’ll only get there with lots more data for training your models. Data is the key to AI because 1) it’s the missing ingredient — we have great algorithms and virtually endless computational resources now, and 2) it’s the proprietary ingredient — algorithms are mostly a shared resource created by the research community. Public data sets, on the other hand, are generally not very good. The good data sets either don’t exist or are privately owned.

“Narrow the domain.” The amount of data you need is relative to the breadth of the problem you are trying to solve. So before you start collecting data you might want to narrow your domain. Instead of trying to build a virtual bot that can do anything (which would basically mean passing the Turing Test—good luck with that), build a bot that can just help someone with scheduling meetings. Instead of building a cloud service that predicts anything, build one that can predict when a transaction is fraudulent. Etc.

“Narrow domain even more.” After you are done narrowing the domain, try narrowing it even more! Even if your goal is to build X, sometimes building an MVP that is part of X is the best way to eventually get to X. My advice would be to keep narrowing your domain until you can’t narrow it anymore without making the product so narrow that no one wants to use it. You can always expand the scope later.

“How do you get the data?” Broadly speaking, there are two ways: build it yourself or crowdsource it. A good analogy here is Google Maps vs Waze. Google employs thousands of people driving around to map out roads, buildings, and traffic. Waze figured out how to get millions of people to do that for them. To do what Google does, you need far more capital (hundreds of millions, if not billions of dollars) than is generally available to pre-launch startups.

Startups are left with two choices to get the data. 1) Try to mine it from publicly available sources. 2) Try to crowdsource it.

The most common example of 1) is crawling the web, or big websites like Wikipedia. You could argue this is what the original Google search did by using links as ranking signals. Many startups have tried mining Wikipedia, an approach that hasn’t led to much success, as far as I know.

The most viable approach for startups is crowdsourcing the data. This boils down to designing a service that provides the right incentives for users to give data back to the system to make it better. Building a crowdsourced product is its own topic (which is why that part of the idea maze points to another, nested idea maze), but I’ll give an example of one approach to doing this, which was tried by a company called Wit.ai that we invested in last year. Wit’s idea was to provide a service for developers for doing speech-to-text and natural language processing. The v1.0 system gave the right answer most but not all of the time. But it also provided a dashboard and API where developers could correct errors to improve their results. For developers using the free version of the service, the training they performed would get fed back to make the overall system smarter. Facebook acquired Wit, so their future will now unfold as part of a larger company. The approach they took was very clever and could apply to many other AI domains.
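The correction loop described above can be sketched in a few lines. The class and method names below are hypothetical, and a lookup table stands in for a real NLP model; the point is only the data flow, in which each user’s fix becomes shared training data:

```python
class IntentService:
    """Hypothetical sketch of a crowdsourced NLP service: predictions are
    served from a shared model, and user corrections feed back into it."""

    def __init__(self):
        # The "model" is just a lookup table here; a real service would
        # retrain a statistical model on the accumulated examples.
        self.training_data = {"schedule a meeting": "calendar.create"}

    def predict(self, utterance):
        return self.training_data.get(utterance, "unknown")

    def correct(self, utterance, right_intent):
        # One developer's fix improves the system for every user.
        self.training_data[utterance] = right_intent

svc = IntentService()
print(svc.predict("book a table"))                  # the v1.0 guess is wrong
svc.correct("book a table", "restaurant.reserve")   # a user corrects it
print(svc.predict("book a table"))                  # now the system knows
```

The incentive design is what makes this work: users correct errors because it improves their own results, and the shared model gets smarter as a side effect.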

This is a rough sketch of how I see the AI startup idea maze. A few caveats: 1) I could very well be mistaken or have overlooked other paths through the maze — idea mazes are meant to aid discussion, not serve as gospel, and 2) As Balaji says, new technological developments can “move walls and change assumptions.” Look out especially for new infrastructure technologies (internet, smartphones, cloud computing, bitcoin, etc) that can unlock new pathways in many different idea mazes, even ones that at first seem unrelated.

cdixon.org, 2015-01-31

A popular strategy for bootstrapping networks is what I like to call “come for the tool, stay for the network.”

The idea is to initially attract users with a single-player tool and then, over time, get them to participate in a network. The tool helps get to initial critical mass. The network creates the long term value for users, and defensibility for the company.

I’m going to give two historical examples and leave it to readers to think of present-day examples (there are many): 1) Delicious. The single-player tool was a cloud service for your bookmarks. The multiplayer network was a tagging system for discovering and sharing links. 2) Instagram. Instagram’s initial hook was the cool photo filters. At the time some other apps like Hipstamatic had filters but you had to pay for them. Instagram also made it easy to share your photos on other networks like Facebook and Twitter. But you could also share on Instagram’s network, which of course became the preferred way to use Instagram over time.

The “come for the tool, stay for the network” strategy isn’t the only way to build a network. Some networks never had single-player tools, including gigantic successes like Facebook and Twitter. But starting a network from scratch is very hard. Think of single-player tools as kindling.

cdixon.org, 2015-01-25

The holy grail of virtual reality, the one that’s always been out of reach until now, is presence.

In the VR community, “presence” is a term of art. It’s the idea that once VR reaches a certain quality level your brain is actually tricked — at the lowest, most primal level — into believing that what you see in front of you is reality. Studies show that even if you rationally believe you’re not truly standing at the edge of a steep cliff, and even if you try with all your might to jump, your legs will buckle. Your low-level lizard brain won’t let you do it.

With presence, your brain goes from feeling like you have a headset on to feeling like you’re immersed in a different world.

Computer enthusiasts and science fiction writers have dreamed about VR for decades. But earlier attempts to develop it, especially in the 1990s, were disappointing. It turns out the technology wasn’t ready yet. What’s happening now — because of Moore’s Law, and also the rapid improvement of processors, screens, and accelerometers, driven by the smartphone boom — is that VR is finally ready to go mainstream.

Once VR achieves presence, we start to believe.

We use the phrase “suspension of disbelief” about the experience of watching TV or movies. This implies that our default state watching TV and movies is disbelief. We start to believe only when we become sufficiently immersed.

With VR, the situation is reversed: we believe, by default, that what we see is real. As Chris Milk, an early VR pioneer, explains:

You read a book; your brain reads letters printed in ink on paper and transforms that into a world. You watch a movie; you’re seeing imagery inside of a rectangle while you’re sitting inside a room, and your brain translates that into a world. And you connect to this even though you know it’s not real, but because you’re in the habit of suspending disbelief.

With virtual reality, you’re essentially hacking the visual-audio system of your brain and feeding it a set of stimuli that’s close enough to the stimuli it expects that it sees it as truth. Instead of suspending your disbelief, you actually have to remind yourself not to believe.

This has implications for the kinds of software that will succeed in VR. The risk is not that it’s boring, but that it’s too intense. For example, a popular video game like Call of Duty ported to VR would be frightening and disorienting for most people.

What will likely succeed instead are relatively simple experiences. Some examples: go back in time and walk around ancient Rome; overcome your fear of heights by climbing skyscrapers; execute precision moves as you train to safely land planes; return to places you “3D photographed” on your last vacation; have a picnic on a sunny afternoon with a long-lost friend; build trust with virtual work colleagues in a way that today you can only do in person.

These experiences will be dreamt up by “experience makers” — the VR version of filmmakers. The next few decades of VR will be similar to the first few decades of film. Filmmakers had no idea what worked and what didn’t: how to write, how to shoot, how to edit, etc. After decades of experiments they established the grammar of film. We’re about to enter a similar period of exploration with VR.

There will be great games made in VR, and gaming will probably dominate the VR narrative for the next few years. But longer term, we won’t think of games as essential to the medium. The original TV shows were newscasts and game shows, but today we think of TV screens as content-agnostic input-output devices.

VR will be the ultimate input-output device. Some people call VR “the last medium” because any subsequent medium can be invented inside of VR, using software alone. Looking back, the movie and TV screens we use today will be seen as an intermediate step between the invention of electricity and the invention of VR. Kids will think it’s funny that their ancestors used to stare at glowing rectangles hoping to suspend disbelief.

cdixon.org, 2015-01-20

Today we are announcing that A16Z is leading a $40M investment in Stack Exchange, along with earlier investors USV, Bezos Expeditions, Spark, and Index.

One of the major startup opportunities of the information age is: now that more than two billion people have internet-connected devices, how do we create systems to efficiently share and store their collective knowledge? Requirements for successful collective knowledge systems include: 1) users need to be given the proper incentives to contribute, 2) the contributions of helpful users need to make the system smarter (not just bigger), and 3) users with malicious intent can’t be allowed to hurt the system.

Many entrepreneurs and inventors have tried and failed to solve this problem. As far as I know, only two organizations have succeeded at scale: Stack Exchange and Wikipedia. Stack isn’t as large as Wikipedia on the readership side — the topics are more specialized — but on the contributor side it is closely comparable to Wikipedia. Last year, Stack had over 300 million unique visitors and 3.8 million registered users, who contributed over 3.1M questions, 4.5M answers, 2.7M edits, and 17M comments.

Stack’s business model is based on job placement. Employers create company pages (here is Amazon’s — over 6,000 companies have created pages) and then run targeted ad campaigns for open job opportunities. Revenue has grown quickly, and the company employs over 200 people. The HQ is in NYC, with offices in Denver and London, and remote workers in Israel, Brazil, Japan, Germany, Slovenia, France, and across the US.

I believe Stack Exchange’s growth has now reached escape velocity. Not only will the existing topics continue to grow, but many new topics will emerge, until the network covers every topic that is amenable to objective Q&A. As new generations of people grow up on the internet, old habits — searching through textbooks or how-to books, or asking friends—will fade away. People will come to expect that any objective question can be instantly answered with a Google search.

I’ve been a personal investor in Stack since its initial funding, and it has always been one of my favorite investments. Stack’s cofounder & CEO, Joel Spolsky, is an amazing entrepreneur and internet visionary. I’m very happy to back him again with this new investment.

cdixon.org, 2015-01-15

I’m excited to announce today that Andreessen Horowitz is leading a $3M financing of Skydio, a startup developing artificial intelligence systems for drones.

The Skydio team is awesomely qualified. They worked on drone vision systems at MIT and then co-founded a drone project at Google[x] called Project Wing. The company’s mission is to create smart drones. As cofounder Adam Bry says:

Drones are poised to have a transformative impact on how we see our world. They’ll enable us to film the best moments of our lives with professional quality cinematography and they’ll also change the way businesses think about monitoring their operations and infrastructure. This grand vision is starting to come into focus, but existing products are blind to the world around them. As a consequence, drones must fly high above the nearest structures or receive the constant attention of an expert operator. “Flyaways” and crashes abound. These problems must be solved for the industry to move forward.

Smart drone operators will simply give high-level instructions like “map these fields” or “film me while I’m skiing” and the drone will carry out the mission. Safety and privacy regulations will be baked into the operating system and will always be the top priority.

This is my second drone investment – the first one was Airware. I see Airware and Skydio as complementary (and I’d like to make more drone investments – at any stage including seed investments – as long as they don’t compete with Airware or Skydio). You can think of Airware as the operating system and Skydio as the most important app on top of the operating system. The founders of both companies have deep expertise in both aviation and computer science, the key prerequisites for creating smart drones.