The title above is a play off of the “Too Many Secrets” revelation in the 1992 movie Sneakers, in which Robert Redford’s character, who has a secret or two himself, finds himself in possession of the ultimate decryption device, and everyone wants it.

Today we have too many cameras around us. This was brought home to me rather starkly when I received an email that said:

I’ve been recording you with your computer camera and caught you <censored>. Shame on you. If you don’t want me to send that video to your family and employer, pay me $1000.

I paused. Did I really do <censored> in front of my computer camera? I didn’t think so, but I do spend a lot of time in front of the screen. In any case, <censored> didn’t quite rise to the level of blackmail concern, in my opinion, so I ignored it.

But is this scenario so completely far-fetched? This article lists all of the cameras that Amazon can conceivably put in your home today, and in the near future, that list will certainly grow. Other services, such as your PC vendor and security system provider, will add even more movie-ready devices.

In some ways, the explosion of cameras looking at our actions is good. Cameras can nudge us to drive more safely, and to identify and find thieves and other bad guys. They can help find lost or kidnapped children.

But even outside our home, they are a little creepy. You don’t want to stop in the middle of the sidewalk and think, I’m being watched right now. The vast majority of people simply don’t have any reason to be observed, and thinking about it can be disconcerting.

Inside our homes, I simply don’t think we want them, phone and PC cameras included. I do believe that people realize it is happening, but in the short term they think the coolness of the Amazon products, and the lack of friction in ordering from Amazon, supersedes any thought of privacy. They would rather have computers at their beck and call than think about the implications.

We need to do better than that if we want to live in an automated world.

It turns out that most people who care to comment are, to use the common phrase, creeped out at the thought of not knowing whether they are talking to an AI or a human being. I get that, although I don’t think I myself am bothered by the notion. After all, what do we know about people during a casual phone conversation? Many of them probably sound like robots to us anyway.

And this article in the New York Times notes that Google was only able to accomplish this feat by severely limiting the domains in which the AI could interact – in this case, making dinner reservations or hair appointments. The demonstration was still significant, but it isn’t a truly practical application, even within a limited domain space.

Well, that’s true. The era of an AI program interacting like a human across multiple domains is far away, even with the advances we’ve seen over the last few years. And this is why I even doubt the viability of self-driving cars anytime soon. The problem domains encountered by cars are enormously complex, far more so than any current tests have attempted. From road surface to traffic situation to weather to individual preferences, today’s self-driving cars can’t deal with being in the wild.

You may retort that all of these conditions are objective and highly quantifiable, making it possible to anticipate and program for. But we come across driving situations almost daily that have new elements that must be instinctively integrated into our body of knowledge and acted upon. Computers certainly have the speed to do so, but they lack a good learning framework to identify critical data and integrate that data into their neural network to respond in real time.

Author Gary Marcus argues that this means the deep learning approach to AI has failed. I laughed when I came to the solution Dr. Marcus proposes – that we return to the backward-chaining, rules-based approach of two decades ago. This was what I learned during much of my graduate studies, and it was largely abandoned in the 1990s as unworkable. Building layer upon layer of interacting rules was tedious and error-prone, and it required an exacting understanding of just how backward chaining worked.
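
For readers who never met the old approach: a backward chainer starts from a goal and works back toward known facts. Here is a minimal sketch; the rules and facts are invented for illustration, not taken from any real system.

```python
# A minimal backward-chaining sketch, in the spirit of 1980s
# rules-based expert systems. Rules and facts are illustrative only.

RULES = {
    # conclusion -> premises that must all be proven
    "mortal(socrates)": ["man(socrates)"],
    "man(socrates)":    ["human(socrates)"],
}

FACTS = {"human(socrates)"}  # ground truths we start from

def prove(goal):
    """Work backward from the goal toward known facts."""
    if goal in FACTS:            # base case: goal is a known fact
        return True
    premises = RULES.get(goal)
    if premises is None:         # no rule concludes this goal
        return False
    # the goal holds only if every premise can itself be proven
    return all(prove(p) for p in premises)

print(prove("mortal(socrates)"))   # True
print(prove("mortal(plato)"))      # False -- nothing concludes it
```

A real system adds variables, unification, and conflict resolution on top of this; maintaining hundreds of interacting rules like these is exactly the tedium described above.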

Ultimately, I think that the next generation of AI will incorporate both approaches: a neural network to process data and come to a decision, and a rules-based system to provide the learning foundation and structure.

The mantra in tech over the last several years has been “Move fast and break things.” That culture has been manifested by headliners Uber and Facebook, as well as by countless Silicon Valley startups eager to deliver on what they know for sure is a winning strategy.

It’s long past time that we pushed back on that misguided attitude. First, no, you don’t have to move fast. We cling to the notion of the first-mover advantage, but if you look at history it is very much a myth. Tech history is rife with lessons of established companies moving into a new area, “validating” that space, and pushing out the pioneering startups (Oracle in SQL databases, Facebook against MySpace, Microsoft in just about every market until about 2005, to cite three well-known examples).

Second, you don’t have to break things. This wrongheaded attitude represents only a misleading part of a larger truism, that if you are headed in the wrong strategic or product direction, it’s better to know it earlier rather than later. The implication with “breaking things” is that you don’t know if you are headed in the wrong direction unless you break something in the process. Um, no. You know it because you have business acumen, and are paying attention, not because you have broken anything.

It gets worse. Companies such as Uber, Airbnb, and Zenefits have redefined breaking things to include laws and regulations that are inconvenient to their business models. I simply cannot conceive of how this comes about. The arrogance and hubris of such firms must be enormous.

Certainly there are countless laws and regulations that need to be rethought and rewritten as advances change how business might be practiced. I have always said that the (only) positive thing about Uber was that it drastically reshaped the taxi industry – I think largely for the good.

But ignoring laws and regulations that you don’t like is simply wrong, in any sense you might think of it. Rather, you work with government entities to educate them on what is possible to advance a particular product or service, and you openly advocate for legal change.

Oh, but that takes far too long for tech companies convinced that they have to move fast. And they simply can’t be bothered anyway. See my first point – moving fast is rarely a competitive advantage in tech.

It’s clear that Silicon Valley startups won’t buy into what I say here. It’s up to us, the customer and the public, to object to such an absurd business mantra. To date, we the public have either stayed on the sidelines, or even actively supported such criminal practices as Uber’s because of the convenience afforded us by the end result. This has got to change.

Update: Case in point, https://qz.com/1257229/electric-scooter-startup-bird-wants-to-make-it-legal-to-ride-scooters-on-the-sidewalk/. It’s illegal, but that’s not stopping the companies.

Years ago, I was a pilot – SEL, as we said, single-engine land. Once during my instruction, we spent about an hour going over what my instructor called recovery from unusual attitudes. I went “under the hood”, putting on a plastic device that blocked my vision while he placed the plane in various situations. Then he would lift the hood just enough that I could see only the instruments.

I became quite good at this, focusing on two instruments – turn and bank, and airspeed. Based on these instruments, I was able to recover to straight and level flight within seconds.

My instructor realized what I was doing, and he was a lot smarter than I was. The next time, my trick didn’t work; it made things worse, actually. I panicked, and in a real-life scenario I might well have crashed.

Today, I have a presentation I generically call “What Aircrews Can Teach IT” (the title changes based on the audience makeup). It is focused on Crew Resource Management, a structured way of working and communicating so that responsibilities are understood and concerns are voiced.

But there is more that aircrews can teach us. We panic when we face a situation we have not seen before. Aircrews do too. That’s why they practice, in a simulator, with a check pilot, hundreds of hours a year. That’s why we have so few commercial airline accidents today – and when we do, it is almost always crew error, committed in a situation unfamiliar to them.

It’s the same in IT. If we are faced with a situation we haven’t encountered before, chances are we will react emotionally and incorrectly to it. The consequences may not be a fatal accident, but we can still do better.

I preach situational awareness in all aspects of life. We need to understand our surroundings, pay attention to people and events that may affect us, and in general be prepared to react based on our reading of a situation.

In many professional jobs, we’ve forgotten the value of training. I don’t mean going to a class; I mean practicing scenarios, again and again, until they become second nature. That’s what aircrews do, and that’s what soldiers do. When something is truly on the line, that practice is more valuable than anything else we could be doing, and eventually it will pay off.

Are we prepared to take on the responsibility of the consequences of our code? That is clearly a loaded question. Both individual programmers and their employers use all manner of code to gain a personal, financial, business, or wartime advantage. I once had a programmer explain to me, “They tell me to build this menu, I build the menu. They tell me to create these options, I create these options. There is no thought involved.”

In one sense, yes. By the time the project reaches the coder, there is usually little in doubt. But while we are not the masterminds, we are the enablers.

I am not sure that software programmers have always viewed their work abstractly, without acknowledging potential consequences. Back in the 1980s, I knew many programmers who declined to work for the burgeoning defense industry in Massachusetts of the day, convinced that their code might be responsible for war and violent death (despite the state’s cultural, well, ambivalence toward its defense industry to begin with).

Others are troubled by providing inaccurate information that is used to make decisions, or by manipulating people’s emotions so that they feel a particular way, or buy a particular product or service. But that seems much less damaging or harmful than enabling the launch of a nuclear-tipped ballistic missile.

Or is it? I am pretty sure that most who work for Facebook do successfully abstract their code from its results. How else can you explain the company’s disregard of public reaction to its extreme intrusion into the lives of its users? I think that has relatively little to do with employees’ value systems, and more to do with the culture in which they work.

To be fair, this is not about Facebook, although I could not resist the dig. Rather, this is to point out that the implementers, yes, the enablers, tend to be divorced from the decisions and the consequences. To be specific: Us.

Is this a problem? After all, those who are making the decisions are better qualified to do so, and are paid to do so, usually better than the programmers. Shouldn’t they be the ones taking the responsibility?

Ah, but they can use the same argument in response. They are not the ones actually creating these systems; they are not implementing the actual weapons of harm.

Here is the point. With military systems, we are well aware that we are enabling war to be fought, the killing of people and the destruction of property. We can rationalize by saying that we are creating defensive systems, but we have still made a conscious choice here.

With social systems, we seem to care much less that we are potentially causing harm than we do with military systems. In fact, the likes of Mark Zuckerberg continue to insist that their creations are used only for good. That is, of course, less and less believable as time marches on.

And to be clear, I am not a pacifist. I served in the military in my youth. I believe that the course of human history has largely been defined by war. And that war is the inevitable result of human needs, for security, for sustenance, or for some other need. It is likely that humanity in general will never grow out of the need to physically dominate others (case in point, Harvey Weinstein).

But as we continue to create software systems to manipulate people – to make them do what they would not otherwise do – is this really ethically different from creating a military system? We may be able to rationalize it on some level, but in fact we also have to acknowledge that we are doing harm to people.

So if you are a programmer, can you with this understanding and in good conscience say that you are a force for good in the world?

This article is so fascinatingly wrong on so many levels that it is worth your time to read it. On the surface, it may appear to offer some impartial logic, that we should automate because humans don’t perform consistently.

“At some point, every human being becomes unreliable.” Well, yes. Humans aren’t machines. They have good days and bad days. They have exceptional performances and poor performances.

Machines, on the other hand, are stunningly consistent, at least under most circumstances. Certainly software bugs, power outages, and hardware breakdowns happen, and machines will fail to perform under many of those circumstances, but such failures are relatively rare.

But there is a problem here. Actually, several problems. The first is that machines will do exactly the same thing, every time, until the cows come home. That’s what they are programmed to do, and they do it reasonably well.

Humans, on the other hand, experiment. And through experimentation and inspiration come innovation, a better way of doing things. Sometimes that better way is evolutionary, and sometimes it is revolutionary. But that’s how society evolves and becomes better. The machine will always do exactly the same thing, so there will never be better and innovative solutions. We become static, and as a society old and tired.

Second, humans connect with other humans in a way machines cannot (the movie Robot and Frank notwithstanding). This article starts with the story of a restaurant whose workers showed up when they felt like it. Rather than addressing that problem directly, the owner implemented a largely automated (and hands-off) food assembly line.

What has happened here is that the restaurant owner has taken a management problem and attempted to solve it with the application of technology. And by not acknowledging his own management failings, he will almost certainly fail in his technology solution too.

Except perhaps at fast-food restaurants, people eat out in part for the experience. We do not eat out only, and probably not even primarily, for sustenance, but rather to connect with our family and friends, and with the random people we encounter.

If we cannot do that, we might as well just have glucose and nutrients pumped directly into our veins.

Let’s start with human memory. “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” is one of the most highly cited papers in psychology. The title is rhetorical, of course; there is nothing magical about the number seven. But the paper and the psychological studies associated with it delineate the limits on the human mind’s ability to process increasingly complex information.

The short answer is that the human mind is a wonderful mechanism for some types of processing. We can very rapidly take in a large amount of sensory input and draw quick but not terribly accurate conclusions (Kahneman’s System 1 thinking), but we can’t absorb an overwhelming amount of quantitative data and expect to make any sense of it.

In discussing machine learning systems, I often say that we as humans have too much data to reliably process ourselves. So we set (mostly artificial) boundaries that let us ignore a large amount of data, so that we can pay attention when the data clearly signify a change in the status quo.

The point is that I don’t think there is a way for humans to deal directly with a lot of complexity. And if we employ systems to evaluate that complexity and present it in human-understandable concepts, we are necessarily losing information in the process.

This, I think, is a corollary of Joel Spolsky’s Law of Leaky Abstractions, which says that anytime you abstract away from what is really happening with hardware and software, you lose information. In many cases, that information is fairly trivial, but in some cases, it is critically valuable. If we miss it, it can cause a serious problem.
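
A tiny, everyday instance of the law – my example, not Joel’s – is floating-point arithmetic, which abstracts the real numbers and leaks in ways every programmer eventually trips over:

```python
# Floating-point numbers abstract the reals, and the abstraction leaks:
# decimal fractions such as 0.1 have no exact binary representation.
a = 0.1 + 0.2
print(a == 0.3)              # False -- the leak shows through
print(abs(a - 0.3) < 1e-9)   # True  -- working around the leak
```

Most of the time the abstraction serves us perfectly well; the leak only matters when the lost information (here, the rounding error) happens to be the critical part.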

While Joel was describing abstraction in a technical sense, I think his law applies more broadly. Any time you add layers in order to better understand a scenario, you necessarily lose information. We look at the Dow Jones Industrial Average as a measure of the stock market, for example, rather than minutely examining every stock traded on the New York Stock Exchange.

That’s not a bad thing. Abstraction makes it possible for us to better comprehend the world around us.

But it also means that we are losing information. Most of the time, that’s not a disaster. But sometimes it can lead us to disastrously bad decisions.

A couple of years ago, I did a presentation entitled “Famous Software Failures”. It described six events in history where poor quality or untested software caused significant damage, monetary loss, or death.

It was really more about system failures in general, or the interaction between hardware and software. And ultimately it was about learning from these failures to help prevent future ones.

I mention this because the protagonist in one of these incidents died earlier this year. Stanislav Petrov was the Soviet military officer who declined to report a launch of five ICBMs from the United States, as indicated by the Soviet early-warning system. Believing that a real American offensive would involve many more missiles, Lieutenant Colonel Petrov refused to acknowledge the threat as legitimate and contended to his superiors that it was a false alarm (he was reprimanded for his actions, incidentally, and permitted to retire at his then-current rank). The false alarm had been created by a rare alignment of sunlight on high-altitude clouds above North Dakota.

There is also a novel by Daniel Suarez, entitled Kill Decision, that postulates the rise of autonomous military drones that are empowered to make a decision on an attack without human input and intervention. Suarez, an outstanding thriller writer, writes graphically and in detail of weapons and battles that we are convinced must be right around the next technology bend, or even here today.

As we move into a world where critical decisions have to be made instantaneously, we must not underestimate the value of the human in the loop. Whether the decision is made with a focus on logic (“They wouldn’t launch just five missiles”) or emotion (“I will not be remembered for starting a war”), the human puts any decision in a larger and far more real context than a collection of anonymous algorithms can.

The human can certainly be wrong, of course. And no one person should be responsible for a decision that can cause the death of millions of people. And we may find ourselves outmaneuvered by an adversary who relies successfully on instantaneous, autonomous decisions (as almost happened in Kill Decision).

As algorithms and intelligent systems become faster and better, human decisions aren’t necessarily needed or even desirable in a growing number of split-second situations. But while they may be pushed to the edges, human decisions should not be pushed entirely off the page.