I missed this thread for a while - I saw something about the latest fatal Tesla "autopilot" crash over Easter weekend, I think, but then couldn't find it in any papers (it was quickly forgotten) and convinced myself I'd seen a reference to the first one, that had come back up thanks to the Uber thing.

In one sense, I agree... but not in the sense the cartoon suggests. I sometimes see people argue that children would mess about with self-driving cars by jumping in front of them to make the cars stop. (I've seen this exact scenario in Guardian comment threads - I've responded to it twice, I think, but seen it more often.) In that case, I agree with Randall Munroe's assertion that obviously a human driver would also stop, and that the thing that stops children doing this to self-driving cars is the same thing that stops them doing it now.

But to my mind, he completely misses the point in that cartoon, because as ganzfeld says, people can also tell the difference between a child in the road and a figure made of cardboard and straw in the road, or a mark on a sign. And even if a human driver slows down to remove a straw figure in the road, it's gone after that, and nobody else cares. How would a self-driving car remove this straw man? Drive through it? It would still rely on somebody else getting out of a car and moving it out of the way.

As far as the second fatal Tesla autopilot crash goes, and apart from my sorrow that I hadn't imagined it, the bit that jumped out to me from the story is this:

Quote:

Huang's brother, Will, told San Francisco ABC station KGO that Walter had complained "seven to 10 times the car would swivel toward that same exact barrier during auto-pilot."

"Walter took it into the dealership addressing the issue, but they couldn't duplicate it there," Will said.

... Which must sound very familiar to anybody who has experience of any sort of software or IT testing. The jokes you used to hear about how "if your car crashes, just reboot it" aren't as far from reality as they should be.

I think the main idea behind the cartoon (besides entertainment) is that many of the "what if x did y" scenarios could also cause problems for human drivers, but aren't being done. So it doesn't follow that a significant number of people would start doing those things to automatically driven cars.

Can autopilot be disengaged? I don't want to sound like I'm blaming the dead, but if g-you knew g-your car had a serious problem with a particular stretch of roadway on autopilot, why would g-you keep using autopilot on that particular stretch of roadway?

I agree that some what if scenarios could be the same as with human drivers. But most of the ones I've actually heard are ones that would not be the same. People are able to use judgment and context and adapt in a way that cars can't.

If overnight someone made the lane lines on a major street wavy, human drivers would pretty much immediately recognize it as a prank, and then ignore the lines and drive in a straight line within their lane space.

An autonomous car could probably be made to weave down the road like a drunk driver.
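To make that concrete, here's a minimal toy sketch in plain Python - not any real AV stack; `follow_lane` and its gain are invented for illustration - of a controller that steers purely toward wherever the detected lane line is. Fed a prank-repainted wavy line, it dutifully weaves, because it has no mechanism for deciding that the paint is obviously wrong:

```python
import math

# Toy sketch, not any real AV stack: a proportional "lane-keeping"
# controller that steers toward wherever the detected line is.
def follow_lane(line_positions, gain=0.5):
    """Return the car's lateral position at each step when it simply
    chases the detected lane-line position."""
    car = 0.0
    path = []
    for line in line_positions:
        car += gain * (line - car)  # steer toward the paint, no questions asked
        path.append(car)
    return path

straight = [0.0] * 20                        # normal straight lane line
wavy = [math.sin(i / 2) for i in range(20)]  # prank: repainted wavy line

print(follow_lane(straight)[-1])               # stays centred on a straight line
print(max(abs(x) for x in follow_lane(wavy)))  # large lateral swings: weaving
```

A human driver, by contrast, discards the wavy paint as implausible and drives straight within the lane space; nothing in this controller can do that.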

A person approaching an intersection that is supposed to be controlled by a stop sign has a good chance of recognizing that it has been replaced by a 45 mph sign, even if the fake is convincing, because of the context. An AV can be fooled into thinking a genuine stop sign is a 45 mph sign just by putting some stickers on it. And no one in the AV would see the problem.
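The sticker trick is easy to picture with a toy model - pure Python, with templates and "feature" bits invented for illustration, not taken from any real sign classifier. A nearest-template matcher only measures distance to its stored patterns, so flipping the few positions where the two patterns differ drags a genuine stop sign across the decision boundary, even though almost all of the sign is unchanged:

```python
# Toy sketch, not a real vision model: a nearest-template "classifier"
# over tiny binary feature vectors, showing how a couple of flipped
# bits ("stickers") can push a genuine stop sign across the decision
# boundary even though most of the sign is untouched.

def hamming(a, b):
    """Count the positions where two bit vectors differ."""
    return sum(x != y for x, y in zip(a, b))

# Hypothetical 8-bit "feature vectors" for two sign templates.
STOP     = [1, 1, 1, 0, 0, 1, 1, 1]
SPEED_45 = [1, 0, 1, 0, 1, 1, 0, 1]

def classify(img):
    return "stop" if hamming(img, STOP) <= hamming(img, SPEED_45) else "45 mph"

real_sign = list(STOP)
print(classify(real_sign))   # a clean stop sign is read as "stop"

# Slap two "stickers" on it: flip two of the three bits where the
# templates differ, leaving six of eight bits exactly as they were.
real_sign[1] = 0
real_sign[4] = 1
print(classify(real_sign))   # now nearer the 45 mph template: misread
```

The context a human uses - "this intersection has always had a stop sign" - never enters the distance calculation at all.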

That insidiousness is part of why people fear those scenarios, and it's also why I think actually doing these things will appeal to people. Turning people's own technology against them is a trope for a reason. People will be opposed to the robofication of cars, and being able to screw them up by walking through a neighborhood slapping stickers on stop signs will be very appealing.

And the Tesla crash, plus videos of similar Tesla behavior on YouTube, shows that repainting lines to lead directly into a barrier could affect AVs very differently than it would human drivers.

Ha, yes, only the ones that Google deemed "necessary" based on their simulations of what would otherwise have happened... I wonder how fully they simulated each of these "thousands" of cases in order to flag up the 69 that they thought their software wouldn't otherwise have coped with?

Quote:

“It demonstrates that it is valuable to have a safety driver in the vehicle while testing, which is something we’ve always believed,” said Chris Urmson, director of Google’s self-driving car program. “But if you look at [regular] drivers, they’re effectively untrained in America. Expecting them to vigilantly monitor a system that operates as well as this does is really a very challenging problem.”

"Well, goodness, these untrained 'drivers' we employ are, through no fault of their own, not at all helpful! Of course if they weren't there, our own systems would have done better than they did, even in the cases where they felt they had to intervene! But how on earth do you expect us to take unimpeded monopoly control of your entire transport system if we have to work with such useless people in order to convince you to let us do so?"

I think the main idea behind the cartoon (besides entertainment) is that many of the "what if x did y" scenarios could also cause problems for human drivers, but aren't being done. So it doesn't follow that a significant number of people would start doing those things to automatically driven cars.

But it's a straw man. Few people with knowledge of the subject have supposed that blatant sabotage is the real problem. When they talk about these problems, the point is generally that the very ease of such attacks (no, it's nowhere near as easy to make human drivers have a catastrophic failure, and I suspect Randall Munroe knows that) shows how fragile these systems are, and are likely to remain for some time.

You know, that's a cartoon, and a funny one. It doesn't actually reflect any reality about the problem. It doesn't have to; it's just entertainment. Munroe is a terrific commentator, but that doesn't mean all of his cartoons have some "grain of truth" or whatever. (cf. the "horse battery" random-passwords one, which turned out not to be terrific advice.)

And, yes, people can and do throw stuff off of bridges and kill people all the time. Often they get caught. Pop it in a news search engine and see how many stories come up - and that's just the cases where it actually harmed someone. Actually, human drivers, unlike these systems, can see what's going on right away and quite often can avoid such attempted murder. They remember where the stop sign used to be when a 'prankster' takes it home, etc.

These systems could do a lot of that and more if there were any kind of standardisation or openness. They would be safer; we'd all be a lot safer. So this fact exposes the underbelly of this "we're making everyone safer so why complain?" nonsense. If they wanted to make everyone safer, there are a thousand things they could lobby for and implement in practically no time at all (like five years, which is way shorter than these level-five cars are going to take). They want things to be safer, yes they do, but only if they get to do it by solving the hard tech problems rather than the hard society problems, because that's how they see this paying off for their investors. So it really has nothing to do with making people safer, actually.

In another thread, somebody asked me how I had a driving licence, even though, in my own words, I "couldn't really drive". I made a brief comment there, but some of the comments about the lax standards of American driving tests, together with the statement from the Google spokesperson I quoted above, made me think of this again.

Driving tests in the UK (even when I did mine in the late 1980s) were reasonably stringent. You had to drive in traffic, reverse round corners, parallel park, make an emergency stop and various other manoeuvres that I can't remember. There was no written component in my day, but you had to answer verbal questions from the instructor on the Highway Code. It certainly wasn't just a matter of driving round a car park, and I don't think you could pass it "untrained", as the bloke from Google implies American drivers typically are. Usually we had weeks of lessons first. Instructors' cars had dual controls - only one steering wheel, but the instructors had their own set of pedals - accelerator, brake and clutch* - and so could intervene on the brake if anything was going wrong.

One thing I'd heard about the tests was that if the instructor had to use his brake then it was an automatic fail. I was confident about tests in those days, which made up for my lack of confidence as a driver. Apart from an early wobble, my test went fine, until the last roundabout before the test centre. I was going round that when a car on another side road came in a bit fast and apparently looked as though he wasn't going to stop. My instructor reflexively braked - in the way that you do if you're a passenger in a car and you think things are happening too fast. (It took me ages to get over this habit as a passenger myself).

Because he had dual controls, it made the car itself brake. This wasn't necessary, because the driver coming down the side road stopped in plenty of time without pulling in front of me. I'd been looking where I was going (I had right of way) and hadn't really even noticed this other car. So, given that my instructor had just used the brake on the dual controls, I assumed I'd failed.

I drove back to the test centre - which was only a few hundred metres further on, on the same road I'd turned off the roundabout onto - in a slightly awkward silence, and we did the spoken question parts. Then the instructor added up his marks for a bit and told me I'd passed. I was a bit surprised by this because of his use of the dual brake - but as I said, he hadn't really needed to use it; it was a reflex action on his part. In fact I think he might have been embarrassed about it himself, and it might have led him to overlook the early wobble or two (or add a couple of marks back on) that I'd worried about.

My point, so far as I remember from when I started to type this up, was about the responsibility for my instructor braking on the roundabout, and how it fits with Google's test criteria above. Which of us in that situation was the "robot driver" and which was the "human" intervening? We didn't hit anything, and we wouldn't have hit anything even if he'd not pressed the brake. Would Google's software have seen the car coming too fast (as my instructor did) and pressed the brake, even though it wasn't necessary? Or would it have done as I did, and happily driven through - according to my right of way - without even noticing? Or would it have noticed but driven through anyway, because it had carefully calculated that braking wasn't necessary and the other driver would stop? (I'm saying I "didn't notice" for the sake of argument - I was paying full attention to the road(s) at the time, and my driving wouldn't have led to an accident even if the tester had done nothing; I might well have subconsciously "noticed" the other car going too fast but decided no action was necessary...)

This is long and rambling and may seem irrelevant, but it seemed quite relevant to me when I thought of it...

* (My first instructor apparently also used the clutch for me when I was changing gear, which completely messed me up when I tried to drive my mum's car with her for the first time - I stalled at every gear change, because he'd never explained what the clutch did, and had apparently been properly engaging it for me when I'd not done so...)

some of the comments about the lax standards of American driving tests

As you may have noticed, that depends a whole lot on where in the USA you are. Some states don't have lax standards.

Quote:

Originally Posted by Richard W

the driver coming down the side road stopped in plenty of time without pulling in front of me. I'd been looking where I was going (I had right of way) and hadn't really even noticed this other car.

Looking only where one's going and assuming that having the right of way means that other cars will yield is not a good idea. That car may indeed have stopped in plenty of time; but sometimes cars don't. Ditto bicycles, pedestrians, and deer. You're supposed to keep your eyes moving, including checking to both sides of the road.

Generally, I think Americans have a fairly similar amount of driver training to what you described, Richard. The only thing is, few pay for driving lessons; training is done by parents or friends, or by taking a driver's education class in high school. So, for training, most cars don't have dual controls.

I think the test is typically a road test, but I know a few places do the test on a dedicated range. There is also typically a written test and an eye test. People take the test in their own (or borrowed) cars.

I was able to get my license through an arrangement in my state where students who receive at least a B grade in driver's education only had to take the eye test and written exam to get a license. So I got my license without taking a single integrated driving test, but I had been tested on all of the components, and had training from the class and from a parent.

Here, the local community college offers it, and I believe there's an arrangement similar to the one I mentioned from my own experience. If you take the class and pass it, you don't have to take the driving test to get your license. This is also true of a motorcycle driving class they offer.

"Well, goodness, these untrained 'drivers' we employ are, through no fault of their own, not at all helpful! Of course if they weren't there, our own systems would have done better than they did, even in the cases where they felt they had to intervene! But how on earth do you expect us to take unimpeded monopoly control of your entire transport system if we have to work with such useless people in order to convince you to let us do so?"

Yes, and how could an enormous company that is potentially going to make huge profits off of the technology they are developing possibly obtain workers better trained to do the task that the company hired and trained them to do? Clearly their hands are tied by the fact that their employees don't come pre-trained to do it better.

(Or could it be that it is much more convenient to not train them to be "better drivers", to say that this means you have to run simulations to weed out the times they were "wrong" to intervene, and then to blame your employees for intervening in all of the instances you claim actually posed no danger?)

Can autopilot be disengaged? I don't want to sound like I'm blaming the dead, but if g-you knew g-your car had a serious problem with a particular stretch of roadway on autopilot, why would g-you keep using autopilot on that particular stretch of roadway?

Because the whole reason you're out there is to prove the autopilot works. His previous experiences may have made him more attentive at that spot; maybe it wasn't enough.

Quote:

Originally Posted by ganzfeld

These systems could do a lot of that and more if there was any kind of standardisation or openness. They would be safer, we'd all be a lot safer. So this fact exposes the underbelly of this "we're making everyone safer so why complain?" nonsense.

I do agree that standardization and openness should be the way to go, but there is a caveat: it also makes it easier to hack. So we also have to have people working on these things that think like hackers: "How could this be made to work like we DIDN'T want it to, and how can we avoid having maliciously minded people do that to other people's cars without permission?"

BTW, I see that Cadillac is offering a hands-free highway cruise control already. Genie's out of the bottle!

One should at first, but when one repeatedly has that expectation proven false, one should no longer expect it.

ETA: From the reports (they are from Tesla, so they may be self-serving), Huang had not had his hands on the wheel for at least six seconds before the crash and had ignored multiple warnings to that effect.

More like providing another excuse to keep doing things drivers have always been doing. This article says that in 1916 a police department in Suffolk, VA was on the lookout for drivers who were reading their mail while driving.

In 1916, drivers might have been used to driving a horse, which was capable of looking where it was going.

-- hey, the horse or ox drawn cart was the original self-driving vehicle! Complete, of course, with a tendency to occasionally go somewhere else than the humans in the vehicle intended, including possibly into other vehicles or off the road.