will never happen, driving requires acumen which computers cannot display, even the best drivers have accidents but ai will never excel at driving skill, there are too many active variables when driving for an algorithm to account for, some arent even properly defined until they hit the equation which is too late, ai will always be humans dumb little brother, and nobody wants to be driven around by their dumb little brother...

An Apple self-driving car has been involved in a collision – the first for the iPhone maker. The accident confirms that the group, whose efforts in this sphere had been shrouded in secrecy, is in the race to make autonomous cars.

The crash took place near the city of Sunnyvale, California, not far from the company’s headquarters, on August 24, according to a report filed by Apple’s Program Director Steve Kenner to the California Department of Motor Vehicles.

The test vehicle, a Lexus RX450h, was in autonomous mode when it “was rear-ended while preparing to merge onto Lawrence Expressway South from Kifer Road,” the document stated.

The Apple car was traveling at less than 1.6kph, “waiting for a safe gap” in traffic to complete the merge, when a 2016 Nissan Leaf collided with it at approximately 24kph. Both vehicles sustained damage, but no injuries were reported. Apple stated that the car was “moderately” damaged in the accident.

The iPhone manufacturer has never publicly confirmed that it was developing self-driving cars. However, Apple has reportedly been testing such vehicles and has held permits for around 60 autonomous cars since 2017. On the DMV website, Apple Inc. is included on a list of companies or individuals who hold autonomous vehicle testing permits.

According to the data from the DMV, as of August 31, 2018, the agency has received 95 autonomous vehicle collision reports. A number of companies, including General Motors Cruise Automation and Alphabet Inc’s Waymo, have permits to use self-driving cars on the roads in California, but these permits require the presence of a human driver.

Self-driving cars have been the subject of recent concern after a fatal crash in Arizona in March this year. In that incident, a woman died after being hit by an autonomous car operated by Uber. The company subsequently suspended its car tests. In July, Uber announced that all its vehicles would be driven manually by humans, in accordance with new safety standards.

In May, the American Automobile Association (AAA) found that 73 percent of drivers surveyed said they would be “too afraid” to ride in a completely autonomous car. In late 2017, only 63 percent said the same. The study claimed to show that the biggest increase in fear was among millennials, which is bound to worry tech firms that are investing billions in the technology.

foresee chronic drunks will buy self driving cars(or be forced buying via court order)
couple DUIs cost as much as a new computer controlled car anyways
seems like niche market they should focus sales
drunks with their own personal computerized autopilot cab service

yep, i can see fleet sales maybe, big rigs maybe, but the individual is against this kind of depowerment, it smacks of less freedom, of being slightly less in control of your own life, freedom is woven into the human spirit, if you want to take away some aspect of autonomy then we want something in return, like flying taxi cabs motherfuckers

Unmanned Airport Control Tower Installed In Northern Colorado
LOVELAND, Colo. (CBS4) — A new high-tech experiment is underway at the Northern Colorado Regional Airport that could have ramifications around the country.

“It’s the first one that’s going to combine radar and track-based information with the video-based information that will come from the cameras to provide an even better situational picture of what’s happening here,” said David Ulane the Director of the CDOT Division of Aeronautics.

Three masts filled with cameras stretch along the airport in Loveland. The cameras stream into a room that acts as a virtual tower.

“We’ll have basically what looks like a video wall,” said Ulane. “When you’re standing in front of them, these cameras make it look like you’re looking out the windows of a traditional air traffic control tower cab.”
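As a rough illustration of the fusion Ulane describes, the sketch below pairs radar tracks with camera detections by proximity, so one display can overlay both sources. This is a toy example under invented assumptions: the function name, coordinates, identifiers and distance threshold are all made up, and real surveillance fusion is far more involved.

```python
import math

def fuse(radar_tracks, camera_detections, max_dist_m=50.0):
    """Pair each radar track with the closest camera detection, if any.

    radar_tracks and camera_detections map an identifier to an (x, y)
    position in metres on a common ground plane (a simplification).
    """
    fused = []
    for track_id, (tx, ty) in radar_tracks.items():
        best, best_d = None, max_dist_m
        for det_id, (cx, cy) in camera_detections.items():
            d = math.hypot(tx - cx, ty - cy)
            if d < best_d:
                best, best_d = det_id, d
        fused.append((track_id, best))  # best stays None if nothing is close
    return fused

# Hypothetical data: one aircraft seen by both sensors, one by radar only.
radar = {"N123AB": (100.0, 200.0), "N456CD": (900.0, 50.0)}
cameras = {"cam_blob_1": (105.0, 198.0)}
print(fuse(radar, cameras))  # [('N123AB', 'cam_blob_1'), ('N456CD', None)]
```

The point of combining sources, as the article notes, is that each track can then be shown with both its radar-derived position and its live video context.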

NCRA opened in the 1960s and has never had an air traffic tower. Most airports in Colorado do not. However, with more than 90,000 takeoffs and landings, and a growing population in Northern Colorado, the airport is getting busier.

“We have a lot of different types of aircraft that use this airport,” said Jason Licon, the Airport Director at NCRA. “This will help us maintain safety for all the users.”

“A more efficient facility, a safer facility and certainly one that if airline service resumes has a bigger economic impact on this community,” said Ulane.

The Division of Aeronautics has spent more than $8 million on the remote tower project. It’s worked hand in hand with the FAA. Testing will begin in the next few weeks and last for more than a year. The hope is to have a fully operational virtual air traffic control tower up and running by the end of 2020.

State officials are looking to expand the project in the future.

“We have a number of our airports in the state that could use air traffic control services like this, even part of the year,” said Ulane. Montrose, Telluride, Hayden, Durango and Gunnison are some of the airports with commercial service and no air traffic control tower.

In Loveland, the hope is the new technology could return commercial service there.

“It will allow additional traffic to come in in a safe way. As we grow over time it will continue to be able to accommodate that growth,” said Licon. “Having a safe airport is critical to market to those airlines.”

A recent report by investigative technology news site The Information revealed teething problems at Waymo, the self-driving car company spun out of Google, where there have been headaches caused by what humans might consider over-cautious driving.
The self-driving cars would stop abruptly in scenarios where humans might zip through, such as turning across a line of traffic.
"As a result, human drivers from time to time have rear-ended the Waymo vans,” the report noted.

not really what is said, it says that people have a driving style, its semi defined and widely known, it cant be programmed into an algorithm so the ai seems erratic in comparison, people have expectations of what other drivers will do, its the computer that broke the convention, its our road not theirs so the ai is the fuckup by default

heres one to ponder: on approach to an overpass you see a bunch of kids about to drop a washing machine on your car, you slow and change lanes if you can

unfortunately, the ai car will take no action, you will get the washer

We have had 3 occasions when someone drove into the back of our cars, in each case the fault was the person driving into us. All three times the drivers freely admitted they were not concentrating.

Your point re washing machine is valid (though in principle an AI could figure that out). But modern aircraft fly by wire, military aircraft even more so (they are inherently unstable), and London's DLR is driverless... accidents will and do happen. It's just that the software technology is more reliable than the old mechanical & human systems. Not perfect. (And of course in defence the anti-aircraft / missile systems are completely free of human intervention and decision... and again subject to error...)

In the end it will be the insurance companies which will decide... if driverless cars are safer they will be cheaper to insure.

We have had 3 occasions when someone drove into the back of our cars, in each case the fault was the person driving into us. All three times the drivers freely admitted they were not concentrating.

your irrelevant personal anecdote notwithstanding, in the cited case the ai was the fuckup by trying to improperly merge with traffic, the ai simply tries to bully the traffic flow into accepting it, in this case it didnt work, being intimately familiar with the drivers in the sunnyvale area i will guarantee with 100% certainty that she hit that fucker on purpose

Your point re washing machine is valid. (though in principle an AI could figure that out).

no it cannot, you cannot account for unknown variables, they must be processed on scene in milliseconds, the action taken must involve a sense of self preservation, an escape path, no ai will ever possess that ability

accidents will and do happen. Its just that the software technology is more reliable than the old mechanical & human systems.

except that is wrong, given the massive amount of time and miles on the roads and equipment that humans have had and the associated deaths there is no contest, ai has just begun and already killed scores on roads and factories, proportionally the ai is far more dangerous, and some want to give this fledgling ai our most dangerous machine so it can kill on purpose

Not perfect. (and of course in defence the anti aircraft / missile systems are completely free of human intervention and decision... and again subject to error...)

air defence/offence systems are a joke, none of them work for shit, they miss all the time, when dear leader claimed 100% efficacy on syria those of us in the know lold hard, even 60% on target would be fucking amazing, and still compared with the real men who killed the krauts its pathetic

I think we differ on so much of this that we are not going to reach any agreement. My quote was about numerous accidents with AI driverless cars being “over cautious”, not “the AI simply tries to bully the traffic flow into accepting it”. Elsewhere, there is no reason why AI can't take many variables into account; you confuse aggressive guided weapons with defence systems such as Patriot, CIWS and the Israeli Iron Fist / Dome, where human reactions are far too slow. And you accept the claims in Syria without evidence.

“the action taken must involve a sense of self preservation, an escape path, no AI will ever possess that ability” “some want to give this fledgling ai our most dangerous machine so it can kill on purpose” Well, if it possesses the ability to kill on purpose, why not save lives on purpose... I can't see an over-cautious AI program doing either.

As for insurance companies: the statistics used in the UK mean that kids pay much larger premiums than more experienced drivers, down to associated data regarding accidents and claims. That's how it works here.

My quote was about numerous accidents with AI driver less cars being “over cautious”,

the quote is expressing an opinion on why ai has crashes, it contains no examples, in their opinion its because of humans, but we were here first and ai is the one trying to be human, the faults of ai are all its own

As for insurance companies: the statistics used in the UK mean that kids pay much larger premiums than more experienced drivers, down to associated data regarding accidents and claims. That's how it works here.

oh ok, i see you dont understand economics any better than anything else, i wont bore you with facts about supply/demand or market share or any number of localised factors, cheers

My quote was about numerous accidents with AI driver less cars being “over cautious”,

the quote is expressing an opinion on why ai has crashes, it contains no examples, in their opinion its because of humans, but we were here first and AI is the one trying to be human, the faults of AI are all its own

The quote was from the BBC site, quoting a survey of crashes. And it was not because of humans in cars, but the 'over cautious AI'... As for "we were here first": so were other hominids before us. Artificial intelligence is a human development; why should it be different to artificial flight?

AI is not trying to be human. Use the flight analogy: aeroplanes don't flap their wings... It is attempting to perform human tasks, like thinking, not stuff like emotions...

As for insurance companies: the statistics used in the UK mean that kids pay much larger premiums than more experienced drivers, down to associated data regarding accidents and claims. That's how it works here.

oh ok, i see you dont understand economics any better than anything else, i wont bore you with facts about supply/demand or market share or any number of localised factors, cheers

I'm glad you won't bore me. Inexperienced young drivers in the UK pay insurance premiums around 10 times higher than more experienced drivers. Here, if you drive and don't have any accidents, your premium gets lower. It's lower if you live in a low car crime area and don't use your car often, etc. And of course these days the quotes are available immediately online via AI programs... no human calculates the risks.
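The instant online quoting described above can be pictured as a simple multiplicative risk model: a base rate scaled by factors for experience, claims history, area and mileage. This is a toy sketch with invented numbers, not real actuarial practice.

```python
# Hypothetical base rate and risk multipliers, invented for illustration only.
BASE_PREMIUM = 400.0  # notional base rate in GBP

FACTORS = {
    "young_driver": 4.0,     # inexperienced drivers pay several times more
    "recent_claim": 1.8,     # claims history pushes the price up
    "high_crime_area": 1.3,  # local car crime raises the risk
    "low_mileage": 0.85,     # infrequent use lowers it
}

def quote(profile):
    """Multiply the base premium by the factor for each trait in the profile."""
    premium = BASE_PREMIUM
    for trait in profile:
        premium *= FACTORS.get(trait, 1.0)  # unknown traits leave it unchanged
    return round(premium, 2)

print(quote(["young_driver", "high_crime_area"]))  # 2080.0
print(quote(["low_mileage"]))                      # 340.0
```

No human in the loop: given the same inputs, the model returns the same quote instantly, which is all the online systems described above need to do.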