"You can make money without doing evil."

So goes the sixth point in Google's official statement of core values, published years ago and often condensed to the informal company motto, "Don't be evil."

It's supposed to make us feel better about trying new technologies, from Chrome notebooks to Google Glass. Don't worry -- we're not going to screw with you. So when it comes time someday to inject our brains with nanobots that give us the ability to speak new languages, we can rest assured that Google won't change the user agreement on us after the fact and plant advertisements in our dreams while we sleep. That is, if the company judges such practices to be truly evil.

Well, some of Google's recent forays are waking people up to the fact that evil is in the eye of the beholder. The company just acquired military robot maker Boston Dynamics, leading to great consternation in the Twitterverse, including a widely shared tweet this week from @BrentButt.

Truth be told, the robots Boston Dynamics makes are pretty cool. Based on animal physiology, they can run, jump, balance and even chase stuff. As something of a technogeek myself, I can see why a bunch of engineers would want to play with this technology.

What we have to ask, and keep asking at every turn, is: To what end? What real purpose are we serving?

Not doing evil is actually a pretty low bar to begin with. Is this really a high aspiration? To avoid embodying Satan in silicon?


Even if we accept avoiding evil as the mantra of the digital age, the presumption here is that evil is like a line of code that can simply be excluded from the overall program. Oops! Line 45 of that app has some evil in it. Better change it.

We can't employ an entirely programmatic approach to human affairs. However well we think we've embedded the values we hope to express in our technologies, more often than not we also get unexpected consequences.

Cars lead to pollution, and oil to wars. Smartphones lead to distraction and car accidents. Big data leads to coercive marketing and massive government surveillance. Something awfully close to evil is quite a common side effect.

Google seems aware of this, at least from a public relations perspective. Likely concerned about how it looks to be developing military hardware, the company recently donated $5 million to the World Wildlife Fund for drones to track down rhinoceros poachers in Africa. It's as if Google is out to prove that drones are not necessarily evil.

Still, we can't help but do a bit of evil when we build technology upon technology, without taking a pause to ask what it's all for.

New technologies give us the opportunity to reevaluate the systems we have been using up until now, and consider doing things differently.

But the stock-market-fueled culture of Silicon Valley too often focuses on efficiency of execution rather than clarity of purpose.

The result is that our best Stanford computer science graduates end up writing algorithms that better extract money from the stock market, rather than exploring whether capital is even serving its original purpose of getting funds to new businesses.

Or the engineers behind Bitcoin develop a brilliant new digital currency without evaluating the purpose of money in our society. The problem worth addressing is that too much cash ends up stuck in the coffers of speculators. Instead of thinking about how to encourage peer-to-peer transactions, Bitcoin's developers simply built another speculative currency, only this time on digital steroids.

Likewise, war is not a great approach to conflict resolution. Adding robot soldiers to the mix merely improves the efficiency of killing. How might robots be used to reduce conflict instead of enacting it?

When we develop technology in a vacuum, disconnected from the reality in which people really live, we are too likely to spend our energy designing some abstract vision of a future life rather than addressing the pains and injustices around us right now. Technology becomes a way of escaping the world's problems, whether through virtual reality or massive Silicon Valley stock options packages, rather than engaging with them.

But the don't-do-evil mandate doesn't even ask Google's programmers to evaluate the purpose of a technology -- only to perform a basic "checksum," or error detection, for evil itself. How pathetically binary. This is not enough for a company that appears dedicated to uploading human consciousness to the cloud, no matter how many robot warriors we have protecting our virtual reality servers from the people we leave behind.
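For what it's worth, a checksum can only flag that something changed; it carries no information about how to repair it. A minimal sketch in Python (the additive checksum and the sample data are illustrative, not anything Google actually runs):

```python
def checksum(data: bytes) -> int:
    """Simple additive checksum: sum of all byte values, mod 256."""
    return sum(data) % 256


original = b"don't be evil"
expected = checksum(original)

# A single flipped character changes the checksum -- the corruption
# is detected, but nothing here tells us which byte to fix.
corrupted = b"don't be eviL"
assert checksum(corrupted) != expected
```

Detecting an error and correcting it are different problems; correction requires redundancy (as in error-correcting codes), which a bare checksum doesn't provide -- much like a yes/no test for "evil" tells you nothing about what to build instead.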

I'm not arguing against technology or that we do less with it. Quite the contrary. I'm arguing we do more. It's not enough to computerize and digitize the society we have, and exacerbate its problems by new means. We must transcend the mere avoidance of the patently evil and instead seek to do good. That may involve actually overturning and remaking some institutions and processes from the ground up.

That's the real potential of digital technology: to retrieve the values and ideas that may have seemed impossible before, and to see whether we can realize them today in this very new world.