“Just what do you think you’re doing, Dave? Dave, I really think I’m entitled to an answer to that question.”

Thinking machines have been around since 1948, when the Manchester Small-Scale Experimental Machine, nicknamed the Baby, was the first to execute a program stored in its memory.

Nearly seventy years later, computers now speak freely to us, take our commands, field our questions, deliver mail in the office, greet customers in stores, cook pancakes, produce creative works, win strategy-based games, provide companionship in hospital wards, assist in the operating room and on the factory floor, fight wars, drive cars, and are increasingly a fixture in our homes and workplaces.

A computer’s brain is empty, however, until it’s fed the information and algorithms it needs to process what it learns and deliver results.

An algorithm is a step-by-step set of instructions, as in a recipe. One way to teach a computer is to feed it huge quantities of information and rules about the world, so that it can call upon an encyclopedic store of know-how, coupled with algorithms that tell it what to do with all that information when tasked with a given challenge. This is the “big data” approach, and it has had its successes.
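To make that concrete, here is a toy sketch in Python – every step and name in it is invented for illustration – of an algorithm as nothing more than ordered steps a machine follows:

```python
# A recipe as an algorithm: explicit, ordered steps that turn
# ingredients (inputs) into a result. All steps are invented.
def make_tea(water_ml: int) -> str:
    if water_ml < 200:                   # step 1: check the inputs
        return "not enough water"
    steps = [
        "boil the water",                # step 2
        "add the tea leaves",            # step 3
        "steep for three minutes",       # step 4
        "pour and serve",                # step 5
    ]
    return " -> ".join(steps)

print(make_tea(250))
```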

Still, the “big data” approach is not without its problems. For one, it requires a herculean effort on the part of programmers. For another, no matter how much information is fed into the computer, the machine can’t deal very well with ambiguities or requests that just don’t quite fit the input – much like automated phone menu systems that do not address a caller’s specific need.

If we are to continue to develop smart machines, which is inevitable, a better approach is needed. Ideally, we need a teaching method that requires less of us, the humans who feed them, and more of the machines.

We can’t teach a child everything there is to know, so as parents and educators we opt instead to teach the child how to learn: how to discern what’s important from what isn’t, how to identify patterns and outliers, and how to make sense of ambiguity. Further, we reward the right behaviors and correct the wrong ones in the hope that the child will develop judgment over time.

“Deep learning,” as this approach is known, attempts to simulate the way the brain works as it absorbs and processes input. It is programming that mimics the activity in layers of neurons in the brain’s neocortex – the complex, wrinkly 80 percent of the brain where higher functions like thinking occur. The deep learning approach involves educating the computer to recognize complex and abstract patterns by feeding large amounts of training data through – and here’s the important part – successive networks of artificial neurons, refining at each pass the way those networks respond to input.
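For the technically curious, here is a minimal Python sketch of that layered idea. Every name and size in it is invented for illustration, and a real system would learn its weights from training data rather than drawing them at random, but it shows how each layer feeds the next:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_out):
    """One layer of artificial neurons: a weighted sum of its inputs,
    passed through a simple nonlinearity (ReLU)."""
    weights = rng.normal(size=(inputs.shape[0], n_out))
    return np.maximum(0.0, inputs @ weights)

pixels = rng.random(64)    # a pretend 8x8 hand-drawn image, flattened
h1 = layer(pixels, 32)     # first layer: primitive features, like edges
h2 = layer(h1, 16)         # next layer: combinations of those features
scores = layer(h2, 2)      # final layer: say, "box" vs. "not a box"
print(scores)
```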

Consider, for example, a hand-drawn image of a box.

The human mind has an astounding ability to recognize what is depicted – even if it is drawn poorly. To do this, we rely on the primary visual cortex (known as V1), which contains roughly 140 million neurons with tens of billions of connections between them. We also have areas V2, V3, V4, and V5, which progressively process ever more complicated elements of the image we are attempting to discern.

It’s as if our brain is a supercomputer tuned by countless examples absorbed over millions of years of evolution to make visual recognition seemingly effortless.

To teach a machine to do this, however, is hard. Where our brain is superbly adapted to understanding the visual world, writing algorithms that let a computer recognize such an image is very difficult – especially with a big data approach, which must spell out every rule, note every exception, and anticipate every sloppy stroke and ambiguity in advance.

Neural networks approach the problem differently.

The concept is to use a large number of training examples from which the computer can infer the rules for recognizing a box when it sees one – whether the box is expertly or sloppily drawn, or even just inferred from a few dashed-off lines. Increasing the number of training examples increases the computer’s accuracy.

The big idea here is that the computer works with artificial neural networks (ANNs) inspired by our biological ones. Just as biological neurons are densely interconnected, the artificial ones likewise exchange information between layers.

In the artificial world, the connections carry numeric weights that are adjusted based on experience of right and wrong answers, which makes the neural nets adaptive to inputs and thus capable of learning. These weights, tuned by a learning algorithm, determine how each artificial neuron responds to the digitized features that, in our example, make up the shape of a box.
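A minimal sketch, in Python, of that weight-adjustment idea – here using the classic perceptron update rule as a stand-in, with toy data and a toy learning rate invented for illustration:

```python
import numpy as np

# Toy features and "right answers" (here, the logical AND of two inputs).
# A real system would use many digitized examples of boxes and non-boxes.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)   # the numeric weights, before any experience
b = 0.0           # a bias term
lr = 0.1          # learning rate: how hard each mistake nudges the weights

for _ in range(20):                      # repeated passes over the examples
    for xi, target in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        error = target - pred            # nonzero only on a wrong answer
        w += lr * error * xi             # nudge weights toward the right answer
        b += lr * error

print(w, b)   # weights the network has "learned" from experience
```

Modern deep networks use a subtler rule (backpropagation with gradient descent), but the principle is the same: wrong answers nudge the weights, and the nudges accumulate into learning.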

Working through successive layers of virtual neurons, the software learns to recognize patterns. And because a networked system creates multiple pathways to any given endpoint, no single faulty neuron or connection brings the whole system down.

The first layer learns primitive features, like how to discern the edge of the image, such as the straight edge of the box, or the tiniest unit of speech, as in the sound of the individual letter “b.” It does this by finding combinations of digitized pixels (for image recognition) or sound waves (for speech recognition) that occur more often than they would by chance.
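Here is a toy Python sketch of that first-layer idea: a tiny invented image and a tiny invented filter that responds strongly wherever pixel values jump abruptly – which is to say, at an edge:

```python
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0                    # an invented vertical edge mid-image

edge_filter = np.array([[-1., 1.]])   # fires where a pixel's value jumps

# Slide the filter across every position and record how strongly it fires.
response = np.zeros((6, 5))
for i in range(6):
    for j in range(5):
        response[i, j] = (image[i, j:j+2] * edge_filter).sum()

print(response)   # the largest values mark where the edge sits
```

In a trained network, filters like this are not handwritten; they emerge because edge-like pixel combinations occur far more often than chance in real images.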

Once that layer accurately detects these features, they’re fed to the next layer, which trains itself through a large input of examples to recognize more complex features, like the corner of the box, or a combination of speech sounds, such as “b-o-x.”

The process is repeated in successive layers until the system can reliably recognize the object or sound it is attempting to identify. Further, the computer can train itself on known data and apply what it knows to new data.
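A brief sketch of that train-then-apply cycle, assuming the scikit-learn library is available and using its small built-in set of handwritten digits as a stand-in for hand-drawn boxes:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()

# Hold some examples back: the network never sees these during training.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# Two successive layers of artificial neurons, trained on the known data.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)

# Apply what it has learned to data it has never seen.
print("accuracy on new data:", net.score(X_test, y_test))
```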

In fact, Stephen Hawking, Britain’s preeminent scientist, has noted the very real possibility that a smart machine could learn and redesign itself at a rate that far outpaces human evolution – meaning, in the end, that our inventions could control us and not the other way around. His view, shared by many, is that the “development of full artificial intelligence could spell the end of the human race.”

Which raises the question: could we combat the dangers of technology run amok by imbuing our robots with a code of ethics? We’ve given them ever-evolving brains to do our bidding, humanoid forms to make interaction more palatable, and personalities to add interest and individuation – but what about limitations? Can a robot parse ethical decisions, develop wisdom, and display empathy? Can we imbue our robots with a sense of right and wrong and the capacity to make a moral choice?

The next challenge seems to be, can we give a robot a heart?

Denise Shekerjian is a writer and lawyer with a keen interest in creativity and artificial intelligence. You can find her at soulofaword.com.