One of the species of early hominids is named Homo habilis, meaning “handy man,” after its significant advance in tool use over earlier hominids. One of the goals of the AGI Roadmap is to chart paths to full human intelligence, and one of those paths might follow the one that evolution took. The Wozniak Test, i.e. the ability to make coffee in any randomly chosen home, is a test of tool-use competence. It is a special case of what we might call the Nilsson Test, after a 2005 paper by Nils Nilsson, one of the leading figures in AI:

Machines exhibiting true human-level intelligence should be able to do many of the things humans are able to do. Among these activities are the tasks or “jobs” at which people are employed. I suggest we replace the Turing test by something I will call the “employment test.” To pass the employment test, AI programs must be able to perform the jobs ordinarily performed by humans. Progress toward human-level AI could then be measured by the fraction of these jobs that can be acceptably performed by machines.
Let me be explicit about the kinds of jobs I have in mind. Consider, for example, a list of job classifications from “America’s Job Bank.” A sample of some of them is given in figure 1:

Just as objections have been raised to the Turing test, I can anticipate objections to this new, perhaps more stringent, test. Some of my AI colleagues, even those who strive for human-level AI, might say “the employment test is far too difficult—we’ll never be able to automate all of those jobs!” To them, I can only reply “Just what do you think human-level AI means? After all, humans do all of those things.”

Now some of those jobs require specialized training and years of experience, while some of them are entry-level, accessible immediately to the average human. Most are somewhere in between. Note that “Maid and housekeeping cleaner” is in itself a superset of the Wozniak Test.

The ability of an AGI (= human-level AI) to do most or all of the jobs humans do is cause for a certain amount of concern. This brings us to a recent post by Robin Hanson:

Yes, techies agree on the long term plausibility of machines doing almost all jobs at a cost below human subsistence wages, thereby gaining almost all income, while economists ignore this scenario. …

Economists should listen more to techies on what techs will be feasible at what costs, but techies should also listen more to economists on the social implications of tech costs. Alas, just as economists prefer to rely on their intuitive folk tech forecasts, techies prefer to rely instead on their intuitive folk economics. …

The standard views of techies about what techs will be feasible might be wrong, and the standard views of economists of how to forecast tech consequences might be wrong. And it is fine for contrarians to try to persuade specialists they are in error, though contrarians would be wise to at least understand the standard view before trying to overturn it. But surely what the world needs first and foremost is to see and take seriously the simple combination of the standard views on such important topics.

One of the standard economic laws that applies here is Ricardo’s Law of Comparative Advantage. It states, roughly, that trade is generally advantageous to parties of differing productivities. In particular, and this is the counter-intuitive part, it is to the advantage of the more productive party (the machines, in the robot economy scenario) to trade with the less productive party (us). The exception arises when the parties’ productivities across goods are in exactly the same proportions, leaving neither party anything to specialize in.
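The arithmetic behind comparative advantage can be made concrete with a toy example. All productivity figures below are hypothetical, chosen only to make the opportunity-cost calculation visible:

```python
# Toy illustration of Ricardo's Law of Comparative Advantage.
# Output per hour of labor for each party, for two goods.
# (All numbers are made up for illustration.)
machines = {"manufactured_goods": 10.0, "paintings": 4.0}
humans = {"manufactured_goods": 1.0, "paintings": 2.0}

def opportunity_cost(producer, good, other_good):
    """Units of `other_good` forgone to produce one unit of `good`."""
    return producer[other_good] / producer[good]

# Machines give up 2.5 manufactured goods per painting; humans give up only 0.5.
m_cost = opportunity_cost(machines, "paintings", "manufactured_goods")
h_cost = opportunity_cost(humans, "paintings", "manufactured_goods")

# Machines are absolutely better at both goods, yet because the opportunity
# costs differ, humans hold a *comparative* advantage in paintings. Any trade
# price between 0.5 and 2.5 goods per painting benefits both parties.
assert h_cost < m_cost

# The exception: productivities in identical proportions (here, machines
# exactly 5x humans at everything) equalize the opportunity costs, so
# neither party has anything to specialize in.
scaled = {good: 5.0 * rate for good, rate in humans.items()}
assert opportunity_cost(scaled, "paintings", "manufactured_goods") == h_cost
```

The key quantity is the opportunity cost, not the absolute productivity: trade remains mutually beneficial so long as those ratios differ between the parties.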

It seems to me that one obvious way to ameliorate the economic impact of the AI/robotics revolution, then, is simple: build robots whose cognitive architectures are sufficiently different from humans’ that their relative skillfulness at various tasks will differ from ours. Then, even after they are better than we are at everything, the law of comparative advantage will still hold.

An intelligence challenge should not, in my opinion, involve building mechanical robot controllers. That’s a bit of a different problem, and a rather difficult one, because of the long build-test cycle involved in such projects.

There are plenty of purer tests of intelligence that use more abstract ideas – games, puzzles, and other classical intelligence test fodder.

If you want to measure the abilities of mechanical robots, then fine, but let’s not pretend that it’s the same thing as measuring intelligence.

Maybe a better strategy is to instill in future robots an appreciation, or aesthetic, for the products of those tasks that humans will remain “better” at, so that we retain a Ricardian comparative advantage. Perhaps robots will trade the fruits of their superfast/superefficient/superprecise skills for a nice painting?

Why limit machine intelligence to the human level? It seems to me we should be trying to find complementary niches for our friendly bots to fill. While the image of a world full of C-3POs gives me nightmares, Neal Asher’s AIs seem a good point to aim for. I don’t know about you, but when I look at what we humans do to each other, the last thing I want to see is an electronic version without the physical limitations that hold our excesses in check.

Nilsson’s employment test is impractical because it takes too long. And it does not explicitly require that all the jobs be passed by a single agent. Just talking about habile systems is not enough; their development should be supported by the test.