When we think of artificial intelligence, we usually think of robots. This can likely be attributed to the way the human brain works: concrete objects are easier for us to understand than abstract ideas. When it comes to robots, however, the concrete form that we perceive with our senses is merely the tip of the iceberg.

Mankind has (relatively) long since developed machines that are physically capable of performing very complex tasks. The challenge now lies in creating software that makes machines smart enough to perform these tasks on their own. In other words: your Roomba vacuum cleaner wouldn’t be very useful if you had to control it by hand, now would it? (Editor’s note: you could still use a joystick to clean your house from the couch, which is pretty cool)

If a machine is the human body, artificial intelligence is the mind. Now I don’t know about you, but I find the latter much more interesting than the former. It is what allows animals to function in the world, it is what makes us different from plants, and it is what separates robots from other machines.

Artificial intelligence (AI) in robotics serves two main purposes: control and perception. Control means getting the machine to do what you want, and perception means getting the machine to understand the world around it. Control isn’t really a problem. If you’ve ever driven a remote-controlled toy car, what you were doing was manual control. If we can achieve manual control precise enough to perform our desired tasks, we can automate it. The problem is that if we hand control over to a machine, it won’t know what to do with it unless it has some understanding of the world around it and the obstacles it presents.

So the main problem with AI is perception – creating a machine equivalent of our own five senses. In the past, programmers working on AI wrote long, incredibly complex software that tried to systematically explain the world to machines using a (huge) set of rules. As it turns out, this task is so hard it would take lifetimes to accomplish properly. The world is simply too complex to be defined by scripted rules. So modern AI research turned away from traditional programming and looked at how our own brain handles input from the five senses to create our perception of the world.
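To make this contrast concrete before moving on, here is a toy Python sketch (entirely hypothetical, not any real product's code): the first function flags spam with hand-written rules, while the second "learns" word counts from mail the user has already labelled and scores new messages from those counts.

```python
# Toy contrast between rule-based and learned spam filtering.
# Entirely illustrative -- not how any real filter is implemented.
from collections import Counter

def is_spam_rules(message):
    """Scripted approach: a human writes explicit rules.
    Every new spam trick needs a new hand-written rule."""
    banned_phrases = ["free money", "click here", "winner"]
    return any(phrase in message.lower() for phrase in banned_phrases)

def train(labelled_messages):
    """Learned approach: count how often each word appears in mail
    the user has already marked as spam vs. useful."""
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in labelled_messages:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def is_spam_learned(message, spam_words, ham_words):
    """Score a new message by the words it shares with past spam.
    No rules are written by hand; the scores come from the data."""
    score = sum(spam_words[w] - ham_words[w] for w in message.lower().split())
    return score > 0
```

The point of the second approach is that nobody ever tells the machine what spam looks like; the user's own past decisions do. Train it on a few labelled messages and it will flag "free money prize" without anyone having scripted that phrase.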

AI can work a lot like the human brain

“If you look at how the human brain does perception; rather than needing tons of algorithms for vision, tons of algorithms for audio, it may be that most of how the brain does it may be a single learning algorithm, a single program”
– Andrew Ng, director of the Artificial Intelligence lab at Stanford University

The key phrase in the above quote is “learning algorithm”. The basis of AI development in recent years has shifted from humans trying to teach machines about the world to making machines teach themselves. This is a concept quite straightforwardly named machine learning. All this may sound like something from the future, but machine learning and AI are all around us: the spam filters in some email applications, for example, use machine learning to learn what you consider spam and what is useful mail.

I have used robotics as a way to explain these basic principles of AI because it helped me relate them to something concrete that we can all easily imagine. In the real world, however, you are not likely to come across an intelligent robot very often (no offence to Roomba). The way most of us interact with AI is through software applications that have no physical form, but the same principles still apply.

Let’s conclude with an example of these principles at work: SIRIS is AI used as a user interface for other software applications. It is integrated into Accounting Box, which is accounting software for small businesses. It takes input in natural language and transforms it into commands for the software, so you can just tell it to “pay my bills” and it does so. Now imagine that SIRIS is a little robot inside Accounting Box, which is… a box. This box has many buttons attached to its inside, representing its various functions. The control part here is that SIRIS knows how to get to the button that says “pay bills” and press it.
The perception part is that SIRIS understands what you mean when you tell it to pay your bills and therefore knows where in the box to go. We could just program SIRIS to perform this specific task when it gets the input “pay my bills”, but what if someone were to phrase it differently, say “pay what I owe”? SIRIS wouldn’t know what to do. So for you to communicate with the AI successfully, it has to understand natural language and all the different variations it comes in, which requires interpreting the meaning behind a text and isn’t something that can easily be defined by a set of rules. See how this is trickier for the AI than knowing how to carry out the command?

Quote transcribed from http://www.youtube.com/watch?v=AY4ajbu_G3k
“Artificial intelligence” image courtesy of Victor Habbick / FreeDigitalPhotos.net
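Here is a toy Python sketch of that difference. The command names and example phrasings are invented for illustration; SIRIS’s actual implementation is certainly far more sophisticated than this.

```python
# Toy sketch: rigid command lookup vs. a more flexible matcher.
# Command names and phrasings are invented for illustration.

COMMANDS = {
    "pay my bills": "pay_bills",
    "show my balance": "show_balance",
}

def exact_match(utterance):
    """Scripted approach: any phrasing not listed above returns None."""
    return COMMANDS.get(utterance.lower())

# Each command is associated with a handful of example phrasings.
EXAMPLES = {
    "pay_bills": ["pay my bills", "settle my invoices", "pay what i owe"],
    "show_balance": ["show my balance", "how much money do i have"],
}

def flexible_match(utterance):
    """Pick the command whose example phrasings share the most words
    with the input. Crude, but it tolerates phrasings it has never
    seen; real systems learn far richer representations of meaning."""
    words = set(utterance.lower().split())
    best_command, best_overlap = None, 0
    for command, phrasings in EXAMPLES.items():
        vocab = {word for phrase in phrasings for word in phrase.split()}
        overlap = len(words & vocab)
        if overlap > best_overlap:
            best_command, best_overlap = command, overlap
    return best_command
```

With this sketch, `exact_match("pay what i owe")` returns `None` – the scripted lookup fails exactly as described above – while `flexible_match` still maps an unseen phrasing like "please settle my bills" to `pay_bills`, because it matches on shared words rather than the exact string.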

Author

We, the creators of SIRIS, the virtual assistant, believe that artificial intelligence can usher in a new era of prosperity for mankind. We are also realists and know that AI still has a long, long way to go. This blog is not about dreams and visions, but about how AI can be used today.