Wednesday, August 30, 2017

When I was asked recently to speak on the subject of whether robots are taking our jobs, I said that's a given. If you're wondering whether robots will take our jobs, the answer is 'yes.' They already are.

Planes have flown with autopilot for over 100 years, and we're rapidly extending the list of professions under threat, from accounting and law to medicine and engineering.

If you're wondering whether robots will take your particular job, the answer is 'it depends' on how routine your job is (The Economist, 2016). Hint: if your job is routine, no matter how interesting and intelligent you are, you're at risk of being replaced by automation and/or machine intelligence.

In short, if you think a computer can do your job, it probably can.

We love our machines

Back in 1982, we were living in Santa Fe, New Mexico, in a little casita next to our landlord's house. Our landlord was a professional editor and author who had purchased one of the first-generation word processors (a Kaypro, I believe). And I remember telling my writer friend, 'John, I'll be the last generation that won't use a computer.' And I meant it.

Five years later, I bought my first Macintosh (a Mac Plus) when I started working on my PhD, and that computer became more than just a machine. It shared its secrets with me as I learned basic programming. It was there for me when I felt I couldn't go on. It watched over my work while I was sleeping. It holds the place of highest honour on my office bookshelf. Machines in general, and computers in particular, are a part of our family. An extension of ourselves.

So, this is a story not about slaying mechanical dragons. It's about the quest to be better humans who happen to work with machines.

Let's face it, computers are simply better than we are (or ever will be) at certain things. They land planes more consistently, they are better at facial recognition, they are more accurate at medical diagnoses, and they can, of course, beat us at games like chess and Go.

Worrying about the potential downsides of machine intelligence and automation can blind us to the incredible possibilities and opportunities that such technologies may offer us in the future. We buy and sell, hire and fire, promote and demote with a huge degree of human bias (error). The good news is that machine intelligence is going to be able to help us out.

Rather than worrying about what machines can do, we should worry about what machines can't do, in part because we still need to build better machines, and in part because that gives us a hint as to what we humans must continue to do well for our part in this co-evolution.

What are humans good for?

Walking down stairs

Even though robots can do amazing things, what they can't do is equally astounding, like walking down a stairway. (See Nicholas Carr's The Glass Cage: Automation and Us.) So, let's celebrate the agility we have as humans, and take care of our human bodies.

Collaboration

Ever struggle with astrophysics? Me too. Most of us find math difficult because our cognitive brains are less developed than our social brains.

As one neuroscientist puts it, our comparatively large brains developed to solve complex problems, but they were not problems of thermodynamics; they were social problems, like who was in charge in the tribe, how to deal with tribal politics, how to build inter-tribal relationships, and so on.

We are social all the way down: collaboration is a particularly human skill, and our social needs are driving us to use technology, maybe too much.

Social mediation may be worse than automation: while many people see automation as the worst-case scenario for humans in relation to machines, media over-use and over-dependence may be just as bad as machine intelligence, or worse.

We used to worry about how much TV people were watching, but nowadays screen time has become far more pervasive than TV ever was.

It's not just that we look at screens so much, but that software and devices mediate (that is, go between) us and the world around us.

As comedian Aziz Ansari observes in his book Modern Romance, too many people spend too much time working on their online dating profiles and not enough time dating!

At some point, we have to look up from our screens if we want to live life to the full.

Cultural intelligence

In a world of diversity and difference, humans are uniquely able to find common ground with one another. In an era of mindless tribalism and nationalism, it is important that we continue to seek connections with those who are different from us.

Context

IBM's Watson can make a highly accurate diagnosis of an illness, but only the attending physician can sense the patient's will to live.

Compassion

Health 'care' means just that: 'caring' for people, even while using intelligent systems to diagnose and telemedicine (sensors and tablets) to gather data. We must preserve the human quality of kindness and compassion.

Sadly, when we humans interact, we often act more like machines than humans. For example, when I rent a car, the rental company's representative spends the first half of their time with me gathering routine data, and the second half trying to on-sell insurance and other add-ons. Why can't better apps do these things and let the car-rental human serve as a host or guide, welcoming me to their city and sending me off toward my final destination with style and grace? Instead, we have turned ourselves into robots. As Nicholas Carr observes:

“Industrialisation didn't turn us into machines, and automation isn't going to turn us into automatons. We're not that simple. But automation's spread is making our lives more programmatic. We have fewer opportunities to demonstrate our own resourcefulness and ingenuity, to display the self-reliance that was once considered the mainstay of character. Unless we start having second thoughts about where we're heading, that trend will only accelerate.” (Carr, pp. 198-199).

The problem is that most organisational work doesn't involve inquiry or critical thinking. As Josh Bersin says in his Deloitte report,

“The future of work is not simply about using technology to replace people. The real “future of work” issue is all about making jobs “more human”—redesigning jobs, redesigning work, and redesigning organizations so that the “people side” of work has even more importance and focus than ever.”

Computers are good at providing answers, but humans are better at asking questions.

Given the rapid and radical advances in computation, we need not just to preserve but to continue to grow and develop the human character traits of curiosity and courage, coupled with compassion. These are the principles that underpin programs like Outward Bound. Along with programming skills, we need education to focus on the attributes that make us more resilient, more resourceful and more ready to make the hard calls when analysis has reached its limits. Human character, however, is not just about stoicism or tenacity; it's about being both resourceful and interesting.

Here's a test: Your friends would never think of taking a long trip without their smartphone, but if they wouldn't take you on a long trip, why not?

What can we learn from robots? What can we teach them?

Lifting the bar: Don't expect less from machines, expect more from them!

As Tom Peters says, when software doesn't deliver, it's not your fault. Those of us who are not digital natives tend to believe we are idiots when a computer doesn't do what we want it to do. Things are getting easier for the app generation, but computers still need to get better. If my healthcare is going to be delivered remotely via a sensing, analytical nurse-bot, then it needs to be pretty darn good. I don't want it to be 80% right, or even 90% accurate (even though I would accept that from a human doctor). Strangely, and unfairly to machines, we need our machines to be better than we are in order to trust them. And that requires us to be smart and fussy consumers and 'employers' of machine intelligence.

What could possibly go wrong?

Things will change, possibly more slowly than many predict, but more rapidly than we can prepare for. Headlines in the past year or so have emphasized the link between robots taking our jobs (technological change) and the economic ramifications, in particular the growing income gap in most developed countries. This has triggered related discussions on universal basic income, and so on. All this is good, but social changes are unlikely to evolve as fast as technological ones, unless there is a social revolution that accompanies this technical revolution. If that happens, all bets are off.

Things won't change: the tendency toward 'winner-take-all' economics in technology might see one or two companies dominating the robotics sector while progress slows. In NZ we understand how duopolies work, and globally in the 1990s we saw how one company's dominance of the PC world meant that software development slowed, or actually got worse, with no rival options. So we need smart and fussy consumers of machine intelligence, and competition among providers, or we risk having less smart robots.

Shit will happen

The notion of 'normal accidents' introduced by sociologist Charles Perrow suggests that when you have complex systems, there is not just a chance but a high probability that something can and will go wrong (read, for instance, Flash Boys by Michael Lewis, on high-frequency trading). Machines make mistakes too!

A happy ending

One way to think about it is not so much that machines and humans are in a zero-sum game, where a job is either held by a human or lost to a machine. Perhaps a more salient metaphor is that of co-evolution, wherein humans and machines are evolving together, as some believe dogs and humans have evolved in relation to one another.

As Garry Kasparov says, the fact that machines are getting better and better offers a chance for humans to get better and better. “Machines have calculations. We have understanding. Machines have instructions. We have purpose. Machines have objectivity. We have passion.”

As Peter Drucker has said, “We don't know the future, yet we create it.” When it comes to automation and machine intelligence, humans and technology will co-evolve, but whether humans continue to develop our unique qualities is totally up to us.

Thursday, July 20, 2017

The general commentary (and there's lots of it) on artificial (machine) intelligence is just that, 'general,' abstract, hypothetical.

We don't often hear from para-legals who lost their jobs to pattern matching algorithms, or clerks who are no longer needed with accounting software like Xero around.

Think of what it would be like to sit face-to-'face' with a machine and have it beat you at chess. That's what happened to Garry Kasparov, the Chess Grandmaster who famously faced IBM's Deep Blue computer ... and lost. Well, as he points out, he did win the earlier match, and one game in the final one, but that wouldn't be news.

So, you might think Kasparov would be bitter, or disparaging of the capabilities of machine intelligence, or quick to point out its limitations. But he's not. In fact, his TED Talk on the story of losing to AI is intelligent (of course) and entertaining (witty). More importantly for the conversation about how humans can and should respond to the rise and rise of machine (artificial) intelligence, Kasparov's perspective is deeply inspirational about what humans can and should always do, with or without other forms of intelligence.

Friday, June 23, 2017

Simon Sinek, whose masterful TED Talks and books on motivation are legendary, has come out with a bold and rather brash assessment of Millennials in the workplace. He attributes their attitudes to growing up with adults hovering over them, giving them rapid and almost constant (often unwarranted) praise and promising them control of their world. Except it doesn't really work out that way when they get their first jobs.

Importantly, in this video interview, Sinek also describes these digital natives as 'addicts' to connective media. Addiction, as compared to mere bad habits or social norms, is based on chemical reactions in the brain, which come from the instant gratification of checking social media. The addict eventually can't stop looking at their screens.

While there is a risk in over-generalising about generations, and some evidence that young people's reaction to media is explained more by their being young than by any other significant difference from other generations (which means they will change their behaviours over time), Sinek's commentary raises some serious questions about personal habits, control and connectivity.