The capacity for error is not a new phenomenon, though. The owner of the AI device would be liable for any mistakes it makes, in exactly the same way that an employer is liable for the mistakes of an employee - the difference being that you cannot reprimand your AI!

(Well, you could, but what would be the point? In these cases the AI is more likely to learn a positive lesson from the mistakes being made.)

Liable, perhaps, but not for exactly the same reasons. My point was about the predictability of the nature of the error, and the possibility of reducing misunderstandings if the robot said "I am a robot".

To spell out my example of Watson on Jeopardy!: The category was "U.S. Cities", the "question" was "Its largest airport is named for a World War II hero; its second largest, for a World War II battle", and Watson's response was "What is Toronto?" No human with a rudimentary idea of North American geography would make that mistake, but it's easy to understand that a computer might. It's even predictable when you know Watson was told not to pay much attention to categories.
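To make that failure mode concrete, here is a toy sketch (not Watson's actual algorithm - the scoring scheme and all numbers are invented) of why down-weighting the category makes this exact error predictable: when the category match barely contributes to the score, strong textual evidence for a city in the wrong country can win outright.

```python
# Toy illustration: a candidate answer is scored mostly on textual
# evidence, with the category match contributing almost nothing.
CATEGORY_WEIGHT = 0.05  # assumption: category counts for very little

candidates = {
    # city: (evidence score, matches the "U.S. Cities" category?)
    "Toronto": (0.80, False),   # strong airport/war evidence, wrong country
    "Chicago": (0.70, True),    # weaker evidence, right country
}

def score(evidence: float, matches_category: bool) -> float:
    """Combine evidence with a (tiny) bonus for matching the category."""
    return evidence + (CATEGORY_WEIGHT if matches_category else 0.0)

best = max(candidates, key=lambda c: score(*candidates[c]))
assert best == "Toronto"  # the wrong-country answer wins
```

With a larger `CATEGORY_WEIGHT`, Chicago would win instead - which is the sense in which the mistake is predictable once you know the category is down-weighted.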

If I know I'm dealing with a human I have a good idea how to avoid misunderstandings. If I think I'm dealing with a human when I'm not, less so.

Quote:

Originally Posted by ganzfeld

That is an interesting point, Chas. It's also interesting that predictability doesn't come up - for example, in Asimov's laws - even though it seems so central to people feeling OK dealing with people and other autonomous agents.

I think Asimov's laws - a literary device and not necessarily something that can really be programmed - were based around the idea of robots doing their jobs and doing them safely. The details were part of the robot's education, and below the level of the laws, which were supposedly "hard-wired" into the positronic brain.

I wonder if, possibly, they are referring to the ethics of inventing technology with the potential of fulfilling the job duties that an actual human used to be paid to do? I've noticed quite a bit of rumbling from various sources worried about the robots taking people's jobs, and there are still a bunch of folk out there who work in assistant capacities, which include doing things like making appointments (although I find it unlikely that those who employ such workers are going to replace their human assistants with virtual ones).

I think it's an interesting question how an AI assistant might be viewed legally in terms of contract law. I suspect that most if not all current definitions in agency law require that an agent is a person.

Under current definitions, then, using an AI is not like using an agent, it is like acting via technology in other ways. Having an AI make a deal for you might be like sending an email with a question about something and then having your email set to send a predrafted acceptance if the response contains certain words. There is no agent in that scenario. Just one's own foolish programming. And the questions it would raise would, I think, be about what would reasonably be understood from what was communicated. So if it is an unambiguous acceptance, you'd probably be bound. If the automated email clearly made no sense in the context of the email exchange, then probably not.

There might be a point where AI becomes sophisticated enough that laws are changed to include it within the definition of "agent."

Right now, I think it probably would matter very much whether the person on the other end knows they are talking to an AI rather than the principal's agent. It seems to me that you would understand certain kinds of responses differently, particularly if the scope of the conversation had changed. It might be less reasonable to think that an AI is still acting within the scope of its programming if the negotiations have gone very far afield from where they began. And if you know it's not a person, you could also try to manipulate its responses in ways that would not work with a human agent. Lots of interesting questions. (And again areas, like with AI cars, where it would be better to think it through and create at least a general legal framework rather than having courts decide in the context of specific cases what the rules ought to be.)

I was taking the question as a hypothetical about future systems that would be classified as classic AI, not the current system. The Assistant is at best a sophisticated algorithm capable of very narrow actions so it is limited to its programming and the strictures of its learning. But if such a system becomes closer to classic AI, then it is less a question of programming and more a question of the system as it is. Of course, by the time classic AI is approaching, there are going to be a whole host of other issues.

* By classic AI, I mean near-sentient systems that do not have a pre-designed "scope" of abilities but are able to change and accommodate to new situations similarly to what humans are capable of.

I wonder if, possibly, they are referring to the ethics of inventing technology with the potential of fulfilling the job duties that an actual human used to be paid to do? I've noticed quite a bit of rumbling from various sources worried about the robots taking people's jobs, and there are still a bunch of folk out there who work in assistant capacities, which include doing things like making appointments (although I find it unlikely that those who employ such workers are going to replace their human assistants with virtual ones).

One ethical issue I've seen raised is dumping the work of getting the right response out of the AI to the poor minion answering the phone. Were the people on the phone informed they were part of a product demo?

I suspect Google's target "market" is people who can't currently afford a PA.

I would think the Big Five really really hope so. I mean, aren't the vast majority of agreements in the 21st century made between a human and a server? (I mean, I guess there are exponentially more between server and server but they probably aren't a matter of legal discourse...)

IANAL, but I believe that would be the case if the mistake was within the scope of the AI's job. In current law, a principal is liable for the acts of their agent that are within the agent's scope of employment. For example, if you hired a bidding agent to bid on a classic Mustang at a car auction and that agent assaulted another bidder, you would probably not be liable, as that is beyond what the agent was hired for. But you might still be required to pay for the Mustang even if the agent bid beyond what you had set as the maximum bid.

Ooh, I think I see a job opening! I'm going to start selling things, and on the side I'll set up an agency for surrogate bidders in auctions. Then I'll get my bidders to bid way over the odds on behalf of their clients, and punch the client when the client argues. The client won't be liable for the assault, so they can't complain, but they'll still be liable to pay me the full amount on the auction price!

I'd probably not want to publicise the link between me as the seller and me as the agent. Also, I might need to hire people who were too thick to want paying enough to cover bail money. But those things would be less of an issue because I could just get the bidders to punch anybody else who raised inconvenient objections...!

I would think the Big Five really really hope so. I mean, aren't the vast majority of agreements in the 21st century made between a human and a server? (I mean, I guess there are exponentially more between server and server but they probably aren't a matter of legal discourse...)

But a server isn't an AI. An AI makes decisions. A server presents a fixed response specifically programmed by a person.

AI is just a machine because there isn't any definition of what constitutes intelligence. If an agreement between a human and a machine is binding then so is one between a human and AI. (In fact, if servers couldn't make decisions, they wouldn't be very useful at all. All computers can make decisions. That's what makes them computers.)

If you ignore the category, Toronto isn't a bad guess. One of the airports in the Toronto area is named after a WWI hero and another is named "Region of Waterloo".

Yes, exactly. I really don't know why ChasFink thinks that a human is less likely to make that mistake than an AI. I had to look up the Toronto airports to find out what he meant, and one of them (which seems to be the second largest) is explicitly named after a WWI flying ace (Billy Bishop) while the largest is named after a former Prime Minister who also fought in WWI, although that doesn't seem to be his most prominent achievement (Lester B. Pearson). I'd not even spotted the Waterloo one (named after the area of a battle), and I've no idea what the real answer to the question is.

Confusing Canada with the USA, or forgetting the country stipulation on "city", in that situation is an extremely easy mistake for a human to make - and it's not the only mistake the AI apparently made (it got the wars wrong too). In fact I'd have thought an AI would be less likely to make the mistake of guessing an airport in the wrong country, since the location of the airport is far more objective than the definitions of "World War II hero" and even "World War II battle".

Private Eye magazine in the UK has a regular column called "Dumb Britain" which contains stupid answers people have given on quiz shows. Quite often, I think they're being unfair, because the answer was either an obvious joke, an obvious misunderstanding of the question, or a reasonable enough guess from somebody who didn't know the answer. (They're not always obvious questions - quite often I have no idea of the answer, and might well have said a random thing in vaguely the right area myself, or made an obvious wrong guess as a joke rather than saying "pass".)

That answer might perhaps have made the column, if the editors had judged that their readers would find it stupid and hilarious to think that Toronto was in the USA - even though I doubt many of us would have known the real answer. (I still don't know the real answer.)

I wonder if, possibly, they are referring to the ethics of inventing technology with the potential of fulfilling the job duties that an actual human used to be paid to do?

That's not much more of an ethical violation than any other automation technology. But on the other hand, I know exactly what you mean.

People often complain about "Luddites" objecting to technology on those grounds, while forgetting that the Luddites had a real point: the machines did cause real hardship for a lot of people before society caught up. The people who benefited were different people - either because they were the owners in the first place, or because they lived later, once it had all been sorted out, or, in a few cases, because they managed to spot a new opening for a successful business which helped sort things out for those coming later. Those openings are not getting any easier to find or implement these days, especially as a lot of people (sometimes the same people who moan about "Luddites") also seem to denigrate the kinds of businesses that are left after all this automation: coffee shops, entertainment, luxury items and so on.

There might well be a point after which all this stuff has worked itself out and seems like an obvious improvement. But with our current politics - which regards the "leisure time" created by previous rounds of automation as an abomination that's only taken by the undeserving who need to be punished for it (and if you're rich enough to take this leisure time without being punished, you have to pretend you're working extremely hard to deserve it, which to me is entirely missing the point) - there's also going to be a lot of hardship in between... which will probably be written out of history later by the same people who like to think that the first rounds of automation were a huge boon for all...

AI is just a machine because there isn't any definition of what constitutes intelligence. If an agreement between a human and a machine is binding then so is one between a human and AI. (In fact, if servers couldn't make decisions, they wouldn't be very useful at all. All computers can make decisions. That's what makes them computers.)

I disagree. If I use a website (like Amazon) I agree to their terms of service (even if I haven't read them). If I get called by an AI, it isn't going to recite a ten-page TOS. Therefore I don't see any legal validity to an agreement between a person and the AI (or server, if it isn't too bright). Without a TOS there is no agreement, even if there would have been an agreement had it been between two people.

I disagree. If I use a website (like Amazon) I agree to their terms of service (even if I haven't read them). If I get called by an AI, it isn't going to recite a ten-page TOS.

In my experience, human agents on the phone typically go through a checklist and I just answer yes or no. One of the questions may be 'Do you agree to the TOS?'. Similarly, online contracts with machines only confirm I have read and agree. So I don't understand the distinction you're making.

(Although only relevant to my joke really; less so to the thread in general. Also, how long has that red button been there and why did I only just notice and press it? (eta) Apparently i) right since the start and ii) because I'm timid and unobservant.)

I think Asimov's laws - a literary device and not necessarily something that can really be programmed - were based around the idea of robots doing their jobs and doing them safely.

Agreed.

And I think the same goes for the "Turing test", Richard W. It was just a literary or argumentative device, a point which seems to have been lost in silly contests. The actual paper is a fascinating argument. There's so much more than machines and people in the Imitation Game. I mean, for example, there's an argument in there about gender as well...

Now when we talk about decisions and being Truing Complete, that's not just an argument. That's the only place the word "decision" is really meaningful - the if-then. All computers can do that.

Quote:

Originally Posted by Richard W

When you ring a restaurant to make a booking, you ask them to agree to your terms and conditions?

Nah, I don't think it's necessary for every agreement - on or off line, human or not. But I presume (IANAL) such agreements give far fewer legal protections to each party and therefore matter much less. Again, I don't see any difference. When you order something at a restaurant (and perhaps more importantly begin to eat it), whether you use an iPad menu or a human waiter, it's an agreement with certain legal obligations.

Now when we talk about decisions and being Truing Complete, that's not just an argument.

Of course, I meant Turing Complete. Anyway, all it really needs is to jump or branch based on some condition in any given data. Every decision higher than that (including, presumably, ones made by people and other intelligent animals) can be made by any such machine. So being able to make a decision is not a requirement for intelligence but a requirement for being a true computer.
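That point can be shown in a few lines: the only "decision" the machine needs is the conditional branch, and every higher-level decision is composed from it. A minimal sketch (the function names are mine):

```python
# The primitive decision: jump one way or the other on a condition.
def branch(condition: bool, if_true, if_false):
    """The bare if-then: return one of two values based on a condition."""
    if condition:
        return if_true
    return if_false

# A "higher" decision, built entirely out of conditional branches:
# keep a value within [low, high].
def clamp(x: int, low: int, high: int) -> int:
    return branch(x < low, low, branch(x > high, high, x))

assert clamp(5, 0, 10) == 5    # in range: unchanged
assert clamp(-3, 0, 10) == 0   # below range: raised to low
assert clamp(42, 0, 10) == 10  # above range: lowered to high
```

Nothing in `clamp` is anything other than branching on conditions - which is the sense in which "making decisions" is the baseline capability of any computer, not a mark of intelligence.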

AI is just a machine because there isn't any definition of what constitutes intelligence. If an agreement between a human and a machine is binding then so is one between a human and AI. (In fact, if servers couldn't make decisions, they wouldn't be very useful at all. All computers can make decisions. That's what makes them computers.)

There is a pretty clear definition of "AI" in the AI computer science community. One characteristic is that the response of an AI is not necessarily predictable. It may be working with a dynamic set of data and therefore produce different responses to the same question at different times. Or the system may use stochastic elements that respond differently every time. That is basically what a human being does: the same set of inputs does not always result in the same set of outputs.
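Those two properties - dynamic data and stochastic elements - can be sketched in a toy example (all names and the "temperature" knob are invented for illustration):

```python
import random

# Dynamic data: the store can change between queries.
knowledge = {"capital of France": "Paris"}

def answer(query: str, temperature: float = 0.0) -> str:
    """Look up an answer; with temperature > 0, sometimes vary the
    phrasing stochastically."""
    base = knowledge.get(query, "I don't know")
    if temperature > 0 and random.random() < temperature:
        return f"Possibly {base}"  # stochastic variation
    return base

# Deterministic at temperature 0:
assert answer("capital of France") == "Paris"

# The same question gets a different response once the data changes:
knowledge["capital of France"] = "Paris, France"
assert answer("capital of France") == "Paris, France"
```

So even this trivial system gives different responses to the same question at different times - first because its data changed, and (with `temperature > 0`) because of a random element, which is the unpredictability described above.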