IBM Fellow Grady Booch -- a pioneer in software engineering and collaborative development environments -- says the answer is yes. In fact, he says the rise of sentient machines is "inevitable."

Booch defines sentience as having typically human characteristics, such as self-awareness, the ability to set goals, and a sense of creativity: "If we don't achieve that degree of sentience, I believe we're very close to achieving the illusion of sentience whereby we are in a place where we'll, on a large-scale basis, have to interact with these things."

At Silicon Valley's Computer History Museum this week, Booch cited the rise of systems able to recognize and synthesize speech, such as Apple's Siri and IBM's Watson computer, which competed on the "Jeopardy" game show. Although "Watson is not sentient like the HAL 9000," the electronic antagonist from the movie "2001: A Space Odyssey," he notes that some pre-sentient machines already can harm humans.

One example of a harmful pre-sentient device is the intelligent drone used in warfare: "We're building a generation of autonomous devices that kill." Still, he notes, these systems are equipped with enough intelligence to distinguish legitimate targets from those that should not be attacked.

Pre-sentient computers also are displacing humans from many jobs. "We can now outsource to our machines," Booch says, even though these systems are not yet sentient.

Smart devices are providing us with new functionality, but at a cost: "We are slowly surrendering our intelligence, our choice, our responsibility, to devices such as this." Although such sentient machines are inevitable, Booch says that humankind can "co-evolve" with these intelligent devices.