
March 23, 2018

In the news is a story of a driverless fatality in Tempe, Arizona. In the dark of night, a woman walking a bicycle across two lanes of traffic was struck and killed by one of Uber's new autonomous vehicles. Behind the wheel was a driver who was momentarily distracted, probably by a tablet in his hands, while the autopilot of the car cruised along at just under 40 miles per hour and ran the poor woman down. The article I read about the story quoted at least two experts who said that the lidar-equipped machine should have easily seen the pedestrian and avoided the accident. I was most interested in the final paragraphs.

Raj Rajkumar, a Carnegie Mellon University professor and founder of an autonomous-vehicle software company that he later sold, suggested the laser sensor may have had a blind spot around the vehicle because of its position mounted on the roof. Still, he said, he would have expected the software to have reacted.

“Legally speaking, the pedestrian may be at fault,” Mr. Rajkumar said. “But mature, reliable self-driving vehicle technology would have done better by slowing down or changing lanes, and this major incident would have been prevented.”

I have been in the IT industry for decades and one of the most fabulously successful products ever was the iPhone. I've been using mine for 7 years. My mother has been using hers for 2 months and she gets lost all the time. She is computer literate, but a lot less so than I. The distance between what I can digest as a professional at this stage in my career and what ordinary folks can understand is the gap into which a great deal of VC money is thrown in expectation of aggregating many multiples of that in consumer spending. Yes, I have a very new, very advanced, very capable iPhone device in my pocket and I use it all of the time. It works exceedingly well for me because I am highly discriminating in my consumption of applications, and I know how application designers think when they are building for the iPhone. In that way I am immune to bad apps, and I don't get lost. Half of all wisdom is knowing what to throw away. Simple works best.

So while journalists will have ready access to founders and professors who will state what might be professionally obvious, pedestrians and the rest of the world are puzzled as to exactly what went wrong. I suspect as the story of autonomous vehicles grows, we will all find out sooner or later, most likely later. It has always been my opinion that as cars learn to pace themselves in traffic, we humans are going to have the hardest time understanding their protocols. Indeed the future of driver education will not focus so much on the skills of drivers as on understanding how cars have been programmed to think about themselves and about traffic. That's a long, dark and narrow road for all of us to travel.

What's legal today ought to be perfectly obvious. We have had crosswalks for generations. We have had crosswalks in law for generations. These things are perfectly obvious because we've had a very long time to learn the legal and physical truth about crosswalks. Lidar? Not so much. There will not be human and animal analogs for the way lidar perceives or the way cars will communicate with each other. We will have to learn new concepts and adapt. Or we will simply have to drive.

What is it like to drive in traffic with cars that don't have human drivers? We are going to have to change our perception. We know 'idiot drivers' and are quick to identify their idiosyncrasies on the road, because we understand human inattention. He's weaving like a drunk. She's obviously on her cellphone. He's obliviously blasting beats. She's a student driver. He's an old man. But Tempe offers a perfect example of a driver unable to recognize his own vehicle's inattention.

I've been a gearhead all my life and am probably unusually attracted to and informed about cars given my inexperience in actually repairing them. So I am fairly geeky into racing and the skills associated with demanding driving. I will trust autonomous driving machines when they start winning rally races and drifting competitions. Analogously, when computers were only reliable in playing an excellent game of tic-tac-toe, the mastery of chess was the benchmark for intelligence. I think that's where autonomous cars are now, four squares of tic-tac-toe rather than just three, with three being mere cruise control and lane alerts. It ain't chess, and it's far from go.

Here's the thing though. When machine learning figures driving out, it's still going to think like a machine and not like a human. That means there will be hacks and vulnerabilities in the way machines navigate 3D spaces, and we humans will have to study them very diligently to understand them. I expect a lot of that diligent conceptualization to happen at Carnegie Mellon, but not in DMV parking lots. We can only hope that traffic law changes very slowly.

March 19, 2018

Last week I was in Bogota again. It was less interesting than IAH. In Houston's new terminal I was accosted by the 'Stacked' business model. Stacked is a restaurant chain that got started here in Southern California several years ago. Their idea changed the way people were served. A hostess would direct you to a table at which a custom iPad had the menu. You ordered custom burgers, shakes and fries. You had no dedicated server, but a swarm of people would bring food and take empty plates away according to the instructions you pushed out of your tablet. You could order any time and add to your bill which you also paid for from the iPad. At Houston, there were literally hundreds of custom iPads at bar seats and tables inside and outside of themed restaurants. The swarm of servers went 50 yards in all directions.

It was disturbing to me that this form of automation was replacing one of the most intimate human jobs there are. Then again, airport food. But what was really annoying was the fact that the idle screen on each iPad had dozens of casino-style video games with their own currency you got from ordering food or could pay for with frequent flyer miles. Clever, I know, but sucking all of the society out of a transportation hub felt quite crass and dehumanizing.

I've recently written about where data mining fits in my scope of business decision making and it's very interesting. I like AI and machine learning, but I haven't gotten to the point at which I'm comfortable saying 'ML'. Most of my experience with AI comes from old theory (backward chaining and forward chaining), my understanding of Markov and years of experience in video game playing. These days I'm completing another romp through the giant world of Skyrim as a maxed out conjurer. I spawn AI companions in all of my battles, especially Dremora lords. But let me recap where in the world of business computing AI / machine learning lives from my perspective.
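Forward chaining, for those who never met the old theory, is simple enough to sketch. Here is a generic textbook version in Python (the facts and rules are my own illustrative choices, not from any particular expert system):

```python
# Minimal forward-chaining sketch: a rule fires when all of its
# premises are already known facts, and its conclusion becomes a new
# fact; repeat until no rule can add anything new.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and premises <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rules as (premises, conclusion) pairs.
rules = [
    ({"rain", "outside"}, "wet"),
    ({"wet"}, "cold"),
]
derived = forward_chain({"rain", "outside"}, rules)
```

Backward chaining runs the same rules in the other direction: start from a goal and search for premises that would support it.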

So first let me recap that my interest is in augmenting the decision making capabilities of individuals and organizations. I think that in a large set of domains, there are data available that can be captured and processed into knowledge and actionable intelligence. In the context of business, these are data-centric applications, and in the context of business process management and transformation my aim is to enable 'data driven business'. In other words, after Peter Drucker, I am catalyzing the professionalization of business management by taking the seat-of-the-pants guesswork out of much of it. Nevertheless, I recognize and respect how business leaders generate insights and visions of their business operations. So sometimes a good decision support system merely disciplines the accurate communication of performance indicators already known to the principals. In the context of AI and machine learning, we can generalize that there are two kinds of data driven businesses.

A. Data driven businesses whose leadership is clear on what works and needs to measure and monitor it.

B. Data driven businesses whose leadership is unclear on what works and needs to build or re-engineer processes.

Somewhere in the middle there is generally a margin for improvement. But those operating in relative data darkness, closer to B, are more vulnerable to outsized claims about the possibilities provided by AI.

So without getting into a mountain of details, the most important thing that determines the quality of AI is the amount of data it has access to, and the size of the domain it is expected to perform in. What's changing, therefore, about the nature of what AIs do is the scale of data that cloud native architectures are able to deliver. For example, speech recognition at Google is superior to that at Apple, especially when it comes to the context of asking for navigation directions. Apple famously kicked Google Maps out of its iOS offerings several years ago. Consequently Siri has some shortcomings when it comes to recognizing spoken street and city names as compared to Google's voice recognition.
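The relationship between data scale and quality shows up even in a toy model. The sketch below is purely illustrative (my own construction, nothing like Siri's or Google's actual pipelines): a 'recognizer' that simply counts which word follows each context. With more training pairs it covers more of its domain, and its accuracy on the full set of contexts rises.

```python
import random
from collections import Counter, defaultdict

# Toy "recognizer": for each context, predict the word most often seen
# with it in training. Unseen contexts produce no prediction at all,
# which is exactly the failure mode more data shrinks.
def train(pairs):
    counts = defaultdict(Counter)
    for context, word in pairs:
        counts[context][word] += 1
    return {c: words.most_common(1)[0][0] for c, words in counts.items()}

def accuracy(model, pairs):
    return sum(model.get(c) == w for c, w in pairs) / len(pairs)

rng = random.Random(0)
contexts = [f"ctx{i}" for i in range(100)]
truth = {c: f"word{i}" for i, c in enumerate(contexts)}

def sample(n):
    return [(c, truth[c]) for c in rng.choices(contexts, k=n)]

small_model = train(sample(30))   # little data: most contexts unseen
big_model = train(sample(500))    # more data: near-complete coverage
test_set = [(c, truth[c]) for c in contexts]
```

Scoring both models on the full test set, the data-rich model wins simply because it has seen more of the domain.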

When customers and prospects ask about AI in the cloud, I tend to push back. The first thing I want to know about their possible application is how well their data is managed in the first place. From my perspective, if you don't have a serious structured data lake up and running, you are probably not going to get a lot of value training your AIs. We are at the very beginning of AI, and those things that we have AIs for, we tend not to think of as frightfully scary AIs. For example, probably the most mature AI in existence today is speech recognition. Think of all of the millions of people who have iPhones who have been speaking to Siri for years. And yet every Siri user finds its comprehension inferior to that of a 12-year-old. That's billions and billions of transactions and Siri still can't recognize the name of the avenue just down the block. 'Del Amo' simply doesn't work in Siri, which thinks I'm saying 'The lommel'. What the heck is a lommel?

AIs can best be thought of as business models, and their capitalization is the data behind them. If you're going to have a sophisticated AI model, you need to throw a lot of data at it over time and you're going to have to know how to validate what it does.
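On the validation point, the one discipline that carries over from plain statistics is never to score a model on the data that trained it. A minimal holdout-split sketch (the function name and split fraction are my own illustrative choices):

```python
import random

def train_test_split(rows, holdout_fraction=0.2, seed=0):
    """Shuffle reproducibly, then reserve a slice for validation only."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * (1 - holdout_fraction))
    return rows[:cut], rows[cut:]

# Fit on train_rows; report quality only on test_rows.
train_rows, test_rows = train_test_split(range(10))
```

The principle, not the code, is the capital investment: the held-out slice is what tells you whether all that data actually bought you a model.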

There is an ugly undercurrent I perceive in the idea of replacing low wage workers with AIs. And I think it is instructive to think of 12-year-old children in this regard as well. For there are dimensions of satisfaction and delight that people can take in the performance of young people that would be completely absent from a machine doing the same thing. So I guess I'll conclude with the following story.

In 2000, I went to the Sydney Olympic Games and sat at my apartment watching the long jump competition. As the competition began, we were treated to a short documentary about the introduction of an automatic pit machine that sat on rails at the far end of the sand pit. Its job was to level the sand after each jump. This had traditionally been done by volunteers in every Olympics past, but we were regaled by the story of the dogged determination of the engineer who poured his life savings into the project. After years of testing, he finally got his design right. Then came the problem of negotiating with the Olympic officials to have it trialed. Finally he got the approval: here it was, ladies and gentlemen. In the afternoon of the first day of competition it rained. The 500 pound machine broke and required four people to detach it from the rails and retire it, causing a 45-minute delay in the competition. Now the officials had to find somebody with rakes and sand, something a 12-year-old boy would be honored for life to do.

People ought to be wary of the machinations of clever engineers who take the honor out of work and hand tasks to machines, especially when those machines use poor substitutes for human intelligence. When we take human beings out of what could be honorable work we depopulate social spaces and make them sterile and mechanistic. There are appropriate places for AI, but the best use of AI is to augment human capability, not to replace it. This is an old lesson, but one that bears repeating. If you don't know what you're doing, don't put an AI in to figure it out for you. If you do know what you're doing, let the AIs be additional eyes and ears. Let them amplify your own senses, not replace them. Either way, you're going to have to get the data in first.