I always enjoy people trying to scare other people to death. Roller coasters, haunted houses, scary movies, risk management.

In addition to global warming, all manner of diseases, and death by food, the news has been full of hand-wringing over artificial intelligence (AI). I thought of this while watching a new series on AMC called Humans. I think they got it right. The danger is not so much killer robots; it is a more human problem.

If you think about it, the problem already being caused by technology is that it simply makes it easier to avoid other humans. As it stands, technology allows us to avoid having to talk to people most of the time. And this is often a blessing. I certainly don't relish talking to some random phone rep halfway across the globe every time I need to check on the status of an internet order. Hitting a hyperlink is much faster and easier.

But imagine if we ever had realistic robots such as those in Humans. In the show, people quickly figure out that a programmable robot, without an ego or personal needs, makes a far better spouse than a human does.

What if we are starting a shift away from human interaction toward a more automated and isolated paradigm? This is probably a better starting place for imagining the risks than evil killer robots.

I wrote for Risk & Insurance for a while, and during that time got to know the owner, Matt Kahn; the editor at the time, Jack Roberts (he has since left); and the writer Peter Rousmaniere. They are truly great men, and I was very sad to have to give up writing when I took my role at Fidelity.

I love writing, but need a good editor, and Matt is a really great editor to work for, so it was just a matter of time before I started writing again.

This blog is sort of the waiting room for many of my ideas. I write far more here than is published, but some of these ideas will show up over at www.riskandinsurance.com in a more polished form.

Also, my true love is modeling and quantifying uncertainty. These are the heart and soul of risk management, although not all types of risk management have caught up with this. So while some of the modeling material gets published at R&I, the rest will be here.

There is tremendous talk these days about driverless cars. Most of the discussion centers on the risks, mostly imagined ones. I am reminded of the story of John Henry, who raced a steam drill in digging a tunnel. The story captures the anxiety widely felt about machines replacing men. You can't help but root for John Henry.

So once again people are expressing their anxiety in the appropriate medium of the age: risk aversion.

The discussion, so far, has not looked at just how accurate our anxieties are in predicting the future. They aren't. As Montaigne said, "I have lived through many great tragedies, some of which actually happened." We imagine horrible scenarios by nature. It is what we do when confronted with uncertainty and change.

I am reminded of some other predictions. In the 19th century, there was fear that electricity would leak from empty sockets and plugs, form pools on the floor, and kill anyone walking through them. The satellite era saw predictions of radio waves frying people's minds. Microwaves were feared by many for the same reason. And now we have the worry crowd concerned that cell phones are literally frying our brains.

And so it is with driverless cars. A new technology is proposed, and what do we as humans do? Imagine all of the hidden and awful downsides. Fortunately, history has proven that we are awful at these predictions. The things listed above that were surely going to cause grievous harm didn't, and the things that we thought were great ideas, like the Treaty of Versailles, socialism, and drinking, turned out to be catastrophic in many ways, such as WWII, Stalin, and...well, I'll let you fill in your own drinking stories.

We should learn. Driverless cars will arrive. They will transform our world. They will ultimately result in fewer accidents. At some point, I predict, we will stop worrying about the dangers and turn to our other favorite hobby: complaining about how the new technology that promised us more leisure has actually given us less. Cell phones, after all, did not fry our brains, but they did remove our excuses for not being at the beck and call of others.

So yes, tragedy could occur, but it might simply turn out to be the tragedy of annoyance and ease. If I am going to suffer, those seem like reasonable things to endure when compared to the Gulag.

There is a principle in modeling that a model must be falsifiable. This means that a model which predicts behavior or outcomes must be subject to some test that could prove the model wrong. A model predicting that adding one gallon of water to a bucket will raise the water level one inch is falsifiable if one can measure the water level to test the prediction. But what if the model predicts the water level in such a way that whether the water goes up, down, or stays the same, the model is correct? That is a pretty worthless model, not only because it doesn't predict well, but also because it tells us nothing about the system it purports to approximate.

Models are supposed to be a pared-down version of the real system. The reason we make them is essentially to perform the scientific method, i.e., to form a hypothesis and test it against a control. Since we can't always create a control (we can't test a city's traffic grid by building a second, identical city with identical streets), we make a model in which we can change and control variables to test outcomes. A model should let us run tests. In the example above, we hypothesize that adding one gallon of water adds one inch to the water level in the bucket. We run the model and then compare the results with reality. If one gallon was added in real life and the water level stayed the same, the hypothesis would be disproved and the model discarded. If the model's creator can't admit that his hypothesis was disproven, and instead says, "Well, there are other one-time factors, so the model is still right," then he is pronouncing his model non-falsifiable. The fact remains that a model which neither approximates the system nor predicts its behavior is pointless.

Of course, such models are being used for risk management. The models used by the IPCC for the Earth's climate are a good example. All of their predictions of higher temperatures 100 years from now are based on computer models. These models were declared close to perfect in the late 1990s, and all predicted large temperature increases as atmospheric carbon increased. Think of this as a different version of the water-in-the-bucket model: like water raising the level in the bucket, when CO2 goes up, temperature goes up, hypothetically. Only it doesn't. For 18 years, CO2 has gone up even faster than predicted while temperature has remained flat. Water was put into the bucket, and the level did not rise. Yet wherever global mean temperature goes, up, down, or sideways, the models are declared correct. When using models to make decisions regarding risk, one has to keep this principle in mind. A non-falsifiable model is as useful as a weatherman who predicts that the weather tomorrow will either be the same or different: of no value.
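The falsifiability test described above can be sketched in a few lines of code. This is a minimal illustration of the water-in-the-bucket example, not a real hydrology model; the numbers and the tolerance are hypothetical assumptions chosen for the sketch.

```python
# A minimal sketch of a falsifiable model test, using the
# water-in-the-bucket example. All numbers are illustrative.

def predicted_level(start_level_in: float, gallons_added: float) -> float:
    """Model hypothesis: each gallon raises the level by one inch."""
    return start_level_in + 1.0 * gallons_added

def is_falsified(predicted: float, observed: float,
                 tolerance: float = 0.1) -> bool:
    """The hypothesis fails the test if the observation departs from
    the prediction by more than the measurement tolerance."""
    return abs(predicted - observed) > tolerance

# Run the test: one gallon added, but the observed level stayed flat.
pred = predicted_level(10.0, 1.0)   # model says 11.0 inches
obs = 10.0                          # reality: level unchanged
print(is_falsified(pred, obs))      # True -> hypothesis disproved, discard model
```

The key property is that `is_falsified` can return `True`. A model defended by after-the-fact excuses is one in which that branch is unreachable, and a test that can never fail tells us nothing.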

In business, risk is measured in dollars. This seems a simple statement, but behind it is a vastly complex and difficult concept: taking a nebulous, uncertain, and wholly conceptual event and turning it into numbers. It is like measuring the volume of the color yellow. How do you do that? Whether the quantification is simply snatched from thin air or run through a computer, it is done with models.

For those going with a "gut feeling" or relying on experience, the model being employed is called a heuristic. Heuristics are the mental programs that we develop to handle complex problems quickly. They have served us well for millennia; it was far better, after all, to simply run when you heard something roar inside your cave than to calculate the odds that it was a cave bear and consult actuarial tables to gauge the probability of losing an arm versus becoming bear chow.

Another type of model commonly used is the more formal computer model. This is what the astrophysicists and PhDs working on Wall Street often use to assess market risk and make investments. While these are often maligned, they are very powerful tools with distinct advantages over the gut-feeling approach. Estimating the returns on 45 different funds in never-before-seen market scenarios is not something our minds ever evolved to handle. In between these two approaches there is an infinite variety of models: spreadsheets, looking for patterns in charts, reading tea leaves, etc.

No matter what the model, they are terribly dangerous. The problem is that the person who creates the model tends to believe it. And the more official, formal, and credible the model is, the more compelling it is. Models create their own siren song. We might know that they are luring us into folly, but we put a lot of time into them and they seem so right. Even the smartest people fall into this trance.

Take, for example, the Black-Scholes option pricing model, created by Fischer Black and Myron Scholes (Scholes, with Robert Merton, later won the Nobel Prize in economics for this work). The model was so good that Scholes and Merton became partners in a multi-billion-dollar investment firm, Long-Term Capital Management, built around it. And it worked great...until it didn't. When it stopped working, the fund's collapse nearly took down the US economy. But that is not the most interesting thing about the model. Rather, it is that in spite of this horrific, public failure, the Black-Scholes model is still the one most commonly used on Wall Street and elsewhere. It is as if your best friend drank a glass of wine and immediately fell dead, and everyone else in the restaurant ordered the same wine.

This is not to say that quant models are any more dangerous than any other kind of model. Whichever one you use, you must not fall into the trap of believing that the model is reality. In putting dollar figures to a risk, you have to make sure that you are not increasing the level of risk by believing that your assessment is anything but a sophisticated, well-thought-out guess.
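For the curious, the standard Black-Scholes formula for a European call option is compact enough to show here. This sketch uses textbook inputs chosen purely for illustration; note how the formula quietly assumes a known, constant volatility, which is exactly the kind of assumption that failed so spectacularly in practice.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, T: float,
                       r: float, sigma: float) -> float:
    """Black-Scholes price of a European call option.
    S: spot price, K: strike, T: years to expiry,
    r: risk-free rate, sigma: annualized volatility.
    The model assumes sigma is known and constant."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Illustrative inputs: a one-year at-the-money call,
# 5% risk-free rate, 20% volatility.
print(round(black_scholes_call(100, 100, 1.0, 0.05, 0.2), 2))  # -> 10.45
```

The elegance is part of the siren song: five inputs in, one authoritative-looking dollar figure out. Nothing in the output warns you that the volatility input was itself a guess.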

One of the best books on risk management is about the finches of the Galápagos Islands (The Beak of the Finch by Jonathan Weiner). It might seem a stretch, but when it comes to understanding how a rapidly changing environment can drive change and extinction, we have much to learn from these little birds. Nature and business are very similar in a survival-of-the-fittest sense. We all understand this on some level, but biology provides the best mental models.

One of these is the "fitness landscape." The best way to understand it is to think of a mountain range, like the Himalayas. Imagine that there is a low fog lying in the valleys with the peaks rising above it, and imagine that this fog is poisonous. Each species inhabits one peak, and the peaks are shifting around, rising up, and sinking down into the fog. If a peak sinks into the fog, the species has to jump to another peak or become extinct. For the finches, shifts in the fitness landscape are manifested in the types of seeds available from year to year. The size, nutritional content, and toughness of the seeds vary with changes in the weather. If a finch's beak is a millimeter too long or too short, it dies.

In business, there are many changes in the environment that can kill a company. Look at how streaming wiped out the video rental business. Small, nimble entrants into a market also have the advantage of the latest advances in IT to enhance their power and their ability to unsettle the status quo. Fortune 500 companies rarely last more than 50 years; they become less and less efficient until they essentially freeze and collapse under their own weight. For those of us working at companies with over 250 people, the deadliest risk might be failure to adapt to a changing business environment.

But what can we do to mitigate this risk? If a finch finds that its beak is ill suited to the seeds available this year, it cannot change its beak. But it might be able to fly to a new island. Similarly, a business that finds itself ill suited to the current marketplace might simply be too big to change. This is not what anyone wants to hear, but then a career in risk management is substantively different from a career as a writer of fiction. We have to deal with reality as it presents itself, not as we wish it to be. Doing so allows us to spend our resources on things that can be done. If the finches are indeed the right model, we need to stop flogging the company while screaming "Innovate!" Latching on to the latest buzzwords, such as "Big Data," is not going to make a huge, lumbering behemoth of a company fast and nimble. So what is the solution? There simply might not be one available to risk managers. The finches might give the best advice: fly to a new island.
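The fitness-landscape idea lends itself to a toy simulation. This is a deliberately crude sketch, not biology: the beak lengths, tolerance, and yearly optima are all made-up numbers, and the point is only to show how a shifting environment culls a population that cannot change fast enough.

```python
import random

# Toy "fitness landscape" simulation: finches with fixed beak
# lengths, where each year's seed crop sets a shifting optimum.
# All parameters are illustrative assumptions.

random.seed(42)  # deterministic run for repeatability

def survives(beak_mm: float, optimal_mm: float,
             tolerance_mm: float = 1.0) -> bool:
    """A bird survives the year only if its beak is within
    tolerance of the optimum set by this year's seeds."""
    return abs(beak_mm - optimal_mm) <= tolerance_mm

# Starting population: beak lengths centered on 10 mm.
population = [random.gauss(10.0, 2.0) for _ in range(1000)]

# Three years in which the optimum drifts away from the population.
for year, optimal in enumerate([10.0, 11.0, 13.5], start=1):
    population = [b for b in population if survives(b, optimal)]
    # Survivors breed; offspring resemble parents with small variation.
    population += [random.gauss(b, 0.3) for b in population]
    print(f"year {year}: optimum {optimal} mm, population {len(population)}")
```

When the optimum jumps faster than inheritance can track it, the population collapses, which is the business lesson in miniature: incremental adaptation cannot save you from a landscape shift that outpaces your rate of change.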