from the you-do-realize-nobody-believes-anything-you-say,-right? dept

For years now, large ISPs like Comcast have tried to have it both ways on net neutrality. They consistently profess to support the concept of net neutrality, but they don't want any meaningful rules actually holding them to their word on the subject. And if there are rules, they want them to be so loophole-filled as to be utterly useless. That's effectively what the FCC's initial 2010 rules did, and that's why companies like Comcast are now pushing to have the tougher 2015 rules killed and replaced with a new net neutrality law they know either won't happen, or will be quite literally written by the industry itself.

This have-your-cake-and-eat-it-too approach continued in this week's Comcast comment on the FCC's proceeding to kill net neutrality. In it, Comcast again pats itself on the back for the company's non-existent dedication to net neutrality, uses industry-paid economists to falsely claim net neutrality rules hurt broadband investment, and trots out all manner of flimsy justifications for the kind of feeble rules that look meaningful to the nation's nitwits, but allow Comcast the leeway to act anti-competitively whenever it likes.

One long-standing ploy used by giant ISPs to scare people into compliance is to argue that net neutrality rules will somehow prevent ISPs from prioritizing medical network traffic. That point was most starkly made when Verizon tried to argue that net neutrality protections would hurt the deaf and disabled by preventing ISPs from being able to prioritize needed communications tools. That's never actually been a problem, and every set of rules we've had so far carves out obvious, glaring exceptions to these services. But that didn't stop Comcast from trotting out this bogeyman once again in its FCC filing (pdf):

"...the Commission also should bear in mind that a more flexible approach to prioritization may be warranted and may be beneficial to the public. For example, a telepresence service tailored for the hearing impaired requires high-definition video that is of sufficiently reliable quality to permit users “to perceive subtle hand and finger motions” in real time. And paid prioritization may have other compelling applications in telemedicine. Likewise, for autonomous vehicles that may require instantaneous data transmission, black letter prohibitions on paid prioritization may actually stifle innovation instead of encouraging it."

The goal here is to scare policymakers into adopting "more flexible" rules (read: embedding all manner of loopholes into net neutrality protections) by warning that otherwise we'll inadvertently hurt the disabled, disadvantage the sick, or kill the self-driving car industry in the cradle. But again, this has never actually been a problem. The 2010 net neutrality rules had so many exceptions of this type as to make them utterly meaningless, letting ISPs do pretty much whatever they'd like provided they argued it was for the health and security of the network. The 2015 rules also include broad, tractor-trailer-sized exceptions for this kind of traffic.

What Comcast really wants is rules so "flexible" and broad they don't actually address any of the real hot-button subjects in the net neutrality debate. Like Comcast's decision to abuse the lack of broadband competition to impose arbitrary usage caps and overage fees. Or the way it exempts its own content from these unnecessary limits to put competing streaming providers at a notable disadvantage in this emerging market (aka zero rating).

So it's important to understand that when Comcast pens blog posts insisting it supports net neutrality, what it's really saying is that it supports an absurdly broad definition of net neutrality, one that includes so many caveats and loopholes as to make said support utterly meaningless. That's again why you're currently seeing large ISPs argue that they want to do away with the strong 2015 rules (which more clearly differentiate anti-competitive behavior from justifiable paid prioritization), and replace them with a new, industry-written law that takes us back to the murky definitions seen in the FCC's since-discarded 2010 rules.

So once again with feeling: anybody who actually cares about net neutrality should support the simplest and easiest way to protect consumers, startups and small businesses moving forward: keep the existing rules intact.

from the I'm-sorry-I-can't-do-that,-Dave dept

So over the last few years you probably remember seeing white hat hackers demonstrate how easily most modern smart cars can be hacked, often with frightening results. Cybersecurity researchers Charlie Miller and Chris Valasek have made consistent headlines in particular by highlighting how they were able to manipulate and disable a Jeep Cherokee running Fiat Chrysler's UConnect platform. Initially, the duo documented how they were able to control the vehicle's internal systems -- or kill its engine entirely -- from an IP address up to 10 miles away.

But the two would go on to highlight how things were notably worse, pointing out last year that they'd also found a way to kill the vehicle's brakes, cause unexpected acceleration, or even direct the vehicle to perform sudden and extreme turns:

"Last year, they remotely hacked into the car and paralyzed it on highway I-64—while I was driving in traffic. They could even disable the car’s brakes at low speeds. By sending carefully crafted messages on the vehicle’s internal network known as a CAN bus, they’re now able to pull off even more dangerous, unprecedented tricks like causing unintended acceleration and slamming on the car’s brakes or turning the vehicle’s steering wheel at any speed."

Just the gift for intelligence agencies or private-sector ne'er-do-wells looking to cause mayhem -- or worse.
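To get a rough sense of what "carefully crafted messages" on a CAN bus actually look like, here's a minimal sketch using the python-can library. The interface name, arbitration ID and payload below are made-up placeholders -- real work like Miller and Valasek's depends on reverse-engineering the specific message IDs and formats each vehicle's modules respond to.

```python
# Minimal, illustrative sketch of sending one raw CAN frame with python-can.
# The channel, arbitration ID and data bytes are hypothetical placeholders.
import can

# Open a socketcan interface (e.g. a Linux "can0" device or a USB-CAN adapter)
bus = can.interface.Bus(channel="can0", bustype="socketcan")

# A single CAN frame: an 11-bit arbitration ID plus up to 8 data bytes
msg = can.Message(
    arbitration_id=0x123,            # hypothetical ID; varies per vehicle/module
    data=[0x01, 0x02, 0x03, 0x04],   # hypothetical payload
    is_extended_id=False,
)

try:
    bus.send(msg)
    print("Frame sent on", bus.channel_info)
except can.CanError:
    print("Frame not sent")
```

The point isn't that any one frame does anything dramatic; it's that classic CAN has no built-in authentication, so once an attacker can put arbitrary frames on the bus, the bus generally can't tell a forged command from a legitimate one.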

"Autonomous vehicles are at the apex of all the terrible things that can go wrong,” says Miller, who spent years on the NSA’s Tailored Access Operations team of elite hackers before stints at Twitter and Uber. “Cars are already insecure, and you’re adding a bunch of sensors and computers that are controlling them… If a bad guy gets control of that, it’s going to be even worse."

The problems that Miller highlighted with the Jeep Cherokee are notably worse when you're talking about a taxi that sees significantly more use each day. A taxi that, under current federal law, won't be able to block consumer access to the vehicle's OBD2 port (something consumers want the freedom to tinker with in their own vehicle, but perhaps not so much in a communal car):

"There’s going to be someone you don’t necessarily trust sitting in your car for an extended period of time,” says Miller. “The OBD2 port is something that’s pretty easy for a passenger to plug something into and then hop out, and then they have access to your vehicle’s sensitive network."

Miller notes that securing an automated vehicle isn't impossible, but it's going to require the use of "codesigning," restrictions built into the OBD2 port, better internal segmentation and authentication -- and basically a complete retooling of how self-driving vehicle security is implemented. But he notes that companies like Uber are bolting their computer systems onto already-built vehicles, which complicates things. And the slow pace of finding and patching security vulnerabilities in vehicles poses an additional layer of problems.
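For illustration, "codesigning" here just means the vehicle's computers refuse to run software that isn't cryptographically signed by a trusted party. Below is a hedged, minimal sketch of that check using the Python cryptography library -- an illustration of the general idea only, not how any particular automaker actually implements it (real update chains add secure boot, key management, rollback protection and more).

```python
# Illustrative sketch of a code-signing check: accept a firmware image only if
# its signature verifies against a manufacturer-held public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def firmware_is_trusted(firmware_image: bytes, signature: bytes,
                        vendor_public_key_bytes: bytes) -> bool:
    """Return True only if the signature over the firmware image verifies."""
    public_key = Ed25519PublicKey.from_public_bytes(vendor_public_key_bytes)
    try:
        public_key.verify(signature, firmware_image)
        return True
    except InvalidSignature:
        return False

# An ECU or onboard computer would run a check like this before flashing an
# update, and reject anything that fails verification.
```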

The solution will also involve greater "open conversation and cooperation" among carmakers and developers, something Miller says was lacking at Uber, and hasn't exactly been the trademark of other automated vehicle vendors.

from the will-judge-alsup-design-his-own-lidar? dept

Judge William Alsup certainly continues to make himself known for how he handles technology-intensive cases. In techie circles, he's mostly known for presiding over the Oracle/Google Java API copyright case, and for the fact that he claimed to have learned to program in Java to better understand the issues in the case (in which he originally ruled, correctly, that APIs were not subject to copyright protection, only to be overturned by an appeals court that simply couldn't understand the difference between an API and functional code). He's also presided over key cases involving the no-fly list and is handling some Malibu Media copyright trolling cases as well.

And, last month, he was handed another big high-profile case regarding copying and Google: the big self-driving car dispute between Google's (or "Alphabet's") Waymo self-driving car company and Uber. In case you weren't following it, Waymo accused a former top employee of downloading a bunch of technical information on the LiDAR system it designed, only to then start his own self-driving car company, Otto, which was then bought up by Uber in a matter of months. Most of the lawsuit is focused on trade secrets, with a few patent claims thrown in as well.

Either way, Judge Alsup appears to want to be educated on LiDAR before the case begins. In two orders last week, Judge Alsup first asked lawyers for each side to present a tutorial on the basics of self-driving car technology:

For a tutorial for the judge, counsel shall please make presentations to set forth the basic technology in the public domain and prior art bearing on the trade secrets and patents at issue on the motion for provisional relief. Please do not refer to the actual systems or subsystems used by either party. (Those will be presumably covered in the motion papers.) For the tutorial, please refer only to what is in the public domain or prior art, regardless of whether or not one side or the other actually practices it. That is, in the tutorial, please do not say what the parties actually practice but if the item is in the public domain, you may reference the public domain part, even if one side or the other practices it. Make sure that all points in the tutorial reside in books, treatises, articles, public interviews, public videos, blogs, websites, seminars, presentations, Form 10-Ks, or other publicly verifiable sources. Please exchange approximate scripts beforehand so that each side may vet the other. Each side will have forty minutes on APRIL 12 AT 10:00 A.M. The public may attend the entire presentation. The judge is interested in learning the basic technology and learning publicly known art.

I'm kinda disappointed that I've got something else that day that I can't get out of, so I won't be able to attend. Oh, and Judge Alsup also got some press attention for then suggesting that each side might want to send "young lawyers" to present the tutorial:

This would be a good opportunity for a young lawyer to present in court.

Of course, Judge Alsup actually has a bit of a history of doing similar things. If I remember correctly, he made a similar suggestion in the Oracle/Google case, and people have noted he's done it in other cases before that too. The idea is that he wants to encourage firms to let younger, less experienced lawyers get more courtroom experience, and to find areas where you don't necessarily need the veteran partner, even in a high-profile clash among multi-billion dollar behemoths.

Still, it was another request a few days later that got even more attention (first spotted by Julia Carrie Wong), in which Judge Alsup asked each side to recommend a book for him to read about LiDAR. But not just any book. You see, Judge Alsup wants you to know that he's not a total noob when it comes to light and optics, so don't feel the need to send him "LiDAR for Dummies" or whatever:

The judge requests each side to name one (and only one) book, treatise, article or other reference publicly available that will inform him about LiDAR, and particularly its application to self-driving vehicles.

Please keep in mind that the judge is already familiar with basic light and optics principles involving lens, such as focal lengths, the non-linear nature of focal points as a function of distance of an object from the lens, where objects get focused to on a screen behind the lens, and the use of a lens to project as well as to focus. So, most useful would be literature on adapting LiDAR to self-driving vehicles, including various strategies for positioning light-emitting diodes behind the lens for best overall effect, as well as use of a single lens to project outgoing light as well as to focus incoming reflections (other than, of course, the patents in suit). The judge wishes to learn the prior art and public domain art bearing on the patents in suit and trade secrets in suit.
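As a quick aside for anyone curious: the "non-linear nature of focal points as a function of distance" the judge mentions is just the standard thin-lens relation from any optics textbook, not anything specific to either party's system:

```latex
% Thin-lens equation: focal length f, object distance d_o, image distance d_i
\frac{1}{f} = \frac{1}{d_o} + \frac{1}{d_i}
\qquad\Longrightarrow\qquad
d_i = \frac{f\, d_o}{d_o - f}
```

So the image distance d_i varies nonlinearly with the object distance d_o, growing without bound as d_o approaches the focal length f.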

Now I'm just waiting to find out that Judge Alsup, tinkering alone in his garage (or, better yet, at the Courthouse), will build his own damn LiDAR system, just to better understand the technology at play.

I don't always agree with Judge Alsup on stuff (I don't always agree with anyone), but I respect his desire to go deep in trying to understand the core technologies when he's reviewing cases on those subjects. That's (unfortunately) quite different from the approach of many other judges. Hopefully more judges will adopt Judge Alsup's practices on cases like these.

from the get-the-market-right dept

For years, we have pointed out that one of the nice things about the new generation of tech companies was that they rarely seemed to use patents offensively. Yes, they were subject to tons of patent lawsuits from trolls or from legacy players trying to hang on against innovators, but as we've noted in the past, young companies innovate, while older companies litigate. So we have a tendency to watch companies to see when they shift from being patent litigation defenders to going on the offensive. For years -- even as patent system supporters falsely claimed that Google only existed because of patents -- it was good to see not a single example of Google going on the offensive and filing patent lawsuits against other companies.

That changed, unfortunately, back in 2012 when Google brought a patent lawsuit against Apple. Some argued that it wasn't "really" Google, because it came from Motorola, a company that Google had purchased (mainly for the patents) and then only owned for a short while before dumping, but it was still a Google-owned property going on the offensive. At that time, we argued that if Google really wanted to support patent reform (as the company claimed) then it should stop being a patent aggressor.

To its credit, I don't believe the company went on the offensive again... until just now. As has been widely reported, Google's Waymo subsidiary (which works on Google's self-driving cars) has sued Uber over self-driving car technology that Uber obtained last year when it purchased another startup, Otto. Otto, of course, was founded by a former Google/Waymo guy. Just a few weeks ago, Bloomberg had written that a bunch of early Google car team members had left to found Otto in part because Google had paid them a ridiculous sum of money, so they no longer needed to stay there.

Along with the lawsuit, both in the filing itself and in a separate blog post about the lawsuit, Waymo tries to bend over backwards to say that this situation is not your typical "patent" lawsuit, but a very specific one. Indeed, the company is clear that the patent issue is a lesser concern. The larger one is over trade secrets -- and here the company is fairly specific that Otto's founder, and several early employees, appear to have deliberately copied a huge amount of proprietary info from Google/Waymo before departing:

Recently, we received an unexpected email. One of our suppliers specializing in LiDAR components sent us an attachment (apparently inadvertently) of machine drawings of what was purported to be Uber’s LiDAR circuit board — except its design bore a striking resemblance to Waymo’s unique LiDAR design.

We found that six weeks before his resignation this former employee, Anthony Levandowski, downloaded over 14,000 highly confidential and proprietary design files for Waymo’s various hardware systems, including designs of Waymo’s LiDAR and circuit board. To gain access to Waymo’s design server, Mr. Levandowski searched for and installed specialized software onto his company-issued laptop. Once inside, he downloaded 9.7 GB of Waymo’s highly confidential files and trade secrets, including blueprints, design files and testing documentation. Then he connected an external drive to the laptop. Mr. Levandowski then wiped and reformatted the laptop in an attempt to erase forensic fingerprints.

Beyond Mr. Levandowski’s actions, we discovered that other former Waymo employees, now at Otto and Uber, downloaded additional highly confidential information pertaining to our custom-built LiDAR including supplier lists, manufacturing details and statements of work with highly technical information.

If accurate, that does sound fairly deliberate and sneaky. And you can certainly understand why the company is upset. The main focus of the lawsuit is the trade secrets claim, but the lawsuit also makes claims for infringement of three separate patents.

Again, you can understand why this situation would be frustrating for Waymo/Google. And maybe the direct evidence of downloading all that material prior to leaving Google is a legitimate reason to file a lawsuit. But it still seems problematic. When Elon Musk freed up all of Tesla's patents, he made it quite clear the reason he was doing so was that this was a brand new, emerging market, and it was going to need all the help it could get in becoming established. And that meant lots of companies competing and innovating and together educating the market. Thus, it didn't really matter if new entrants copied Tesla's electric car/battery technology, because in the end it would help create a larger market that helped everyone.

That same situation is true for self-driving cars as well. Even given the potential smoking gun of the downloaded documents, there's still something to the idea that the market would be a lot better off if everyone were just building the best possible self-driving car tech they could find, even if that means copying one another. Fighting over trade secrets and patents in a market that barely even exists feels silly. Yes, from a purely profit-maximizing standpoint, you can understand the argument: the larger share of the market you can capture early can make a huge difference. But why not focus on executing in the marketplace and fighting the battles that are blocking the adoption of self-driving cars, rather than fighting back and forth with each other?

from the policy-fight! dept

In highly regulated private industries, the law means what it says – right up until a regulator decides that it doesn't. For that reason, Uber, a company with a reputation for aggressively challenging legal norms, must have been particularly frustrated when the California Department of Motor Vehicles decided to publicly rebuke it for complying with the law of the Golden State.

The crux of the issue is that Uber decided to move forward with deploying some of its vehicles equipped with automated technologies onto California's roads without a permit that, the California DMV believes, it must first obtain before rolling them out.

In a statement, the DMV said that it has a "permitting process in place" through which twenty manufacturers have obtained permits. Then, so as to leave no doubt about its position on the matter, it stated that "Uber shall do the same."

Now, whether the new Volvo XC90s equipped with Uber's technologies are "autonomous vehicles" as a matter of perception or regulatory projection is up for debate. Different people have different ideas about what fits that mold. But when it comes to whether the DMV should take action to slow Uber's work, the question turns from one of perception to one of law and textual interpretation.

California, by way of the DMV, has chosen to define an autonomous vehicle in regulation as a vehicle equipped with technology "...that has the capability of operating or driving the vehicle without the active physical control or monitoring of a natural person...." Thus, the factual question that confronted Uber before it made its decision to deploy the vehicles in California was simple: "is this vehicle capable of driving without being monitored or controlled by a driver?"

For all of their impressive capabilities, it is a matter of public record that Uber's vehicles often require human intervention. By extension, those vehicles require constant monitoring by a human driver. On that basis, Uber likely thought that, while not toeing the industry line, its vehicles do not meet the definitional threshold necessary to trigger the state's autonomous vehicle testing regulations.

Of course, whatever regulatory history points to a different intent, one that tracks with the DMV's argument, is no doubt informative and interesting as a matter of historical record, but it should not overcome the obvious strictures of the regulation as written.

In the meantime, the DMV has sent Uber a cease and desist letter. While the merits of regulation are often a matter of debate, the even application of the plain language of the law should not be. Unfortunately, it appears that Uber, by dint of its reputation, is facing unwanted "special treatment" by its regulator. Worse, the DMV may be expanding the reach of its regulations after the fact. If that's the case, and certainty is lost, so too will be the very definitional purpose of the DMV's regulations – to make regular.

from the you-don't-own-what-you-bought dept

We've talked a lot about the end of ownership society, in which companies are increasingly using copyright and other laws to effectively end ownership -- where they put in place restrictions on the things you thought you bought. This is bad for a whole variety of reasons, and now it's especially disappointing to see that Tesla appears to be jumping on the bandwagon as well. The company is releasing its latest, much more high-powered version of its autonomous driving technology -- but has put in place a clause that bars Tesla owners from using the self-driving car for any competing car hailing service, like Uber or Lyft. This is not for safety/liability reasons, but because Tesla is also trying to build an Uber competitor.

We wrote about this a few months ago, and actually think it's a pretty cool idea. Part of the point is that it effectively will make Tesla ownership cheaper for those who want it, because they can lease it out for use at times when they're not using it. So your car can make money for you while you work or sleep or whatever. That's a cool idea.

But it's flat out dumb to block car owners from using the car however they want.

If Tesla wants to compete with Uber, that's great, but it should compete and offer a better deal for car owners, rather than artificially limiting what they can do. And the thing is, Elon Musk knows this. Remember a few years ago, when he famously freed up all of Tesla's patents, recognizing that it was better to compete on execution rather than artificial legal limitations? So why not take that same approach with competing in car hailing services as well? Don't limit what owners can do with their cars. That's not ownership. Now they're just leasing.

Tesla's plan for a competing ride hailing service is a good idea, and I'm excited to see what the company does with it. But if it starts off by artificially blocking Tesla owners from using their cars on competing services, it makes me think that Tesla doesn't believe its own service will be very good, and therefore needs to artificially lock Tesla owners into its own platform, rather than competing on the merits. That seems antithetical to the message that Tesla and Elon Musk have given off in the past. Hopefully Musk reconsiders this anti-consumer move and recognizes that Tesla can build a service that stands on its own merits without artificially restricting Tesla owners.

from the this-probably-isn't-such-a-good-idea dept

The National Highway Traffic Safety Administration earned plaudits from across the tech sphere for its recently released safety guidelines for self-driving cars.

With the NHTSA looking to offer guidance to this emerging industry, the agency issued a set of rules that largely just asks manufacturers to report on how they are following the guidelines. The 15-point checklist is vague in quite a few details, but that isn't necessarily a tremendous problem so long as the standards remain voluntary, which they purport to be. To many, this approach struck a good overall balance between oversight and flexibility.

Regulatory ambiguity can, however, turn out to be a real nightmare when standards are mandatory. Vague rules can leave even the best-intentioned firms at a loss as to how to proceed. Given how much of a premium will be placed on consumer confidence in a market as revolutionary and potentially transformative as autonomous vehicles, it's crucial that manufacturers be able to comply with whatever standards the federal government promulgates.

That's why it's essential to pay close attention to an underappreciated part of the NHTSA guidelines -- the opportunities they afford federal regulators to coordinate with the states on oversight that, in practice, will be anything but voluntary. Indeed, the early signs from the first of what will be many proposed state rules to follow in the wake of the NHTSA guidelines suggest that compulsory standards are exactly what we're going to get.

First up are proposed rules from the California Department of Motor Vehicles, recently revised in response to the NHTSA guidelines. The revised draft of California's model regulations is far more permissive than the original version the agency promulgated late last year, a set of changes that were celebrated by various observers, even me.

But delve into the updated DMV proposal and you'll find a requirement that manufacturers obtain a state permit certifying that any and all vehicle tests are conducted in accordance with the NHTSA’s "Vehicle Performance Guidance for Automated Vehicles." Thus, in the nation's largest testing jurisdiction, the NHTSA standards are already set to be made mandatory.

This is not to say the federal government doesn't have a role to play in oversight of self-driving cars. The feds are better situated to oversee the development of safety standards, and the door should be open to refining those standards. But coordinating with the states to turn those standards into a set of de facto binding obligations smacks of underground rule-making.

The California DMV might be complicit in this collusion, but it can't be faulted for deferring to federal authority. Were the NHTSA’s safety standards clearer -- an undertaking that presents risks and problems of its own -- California’s approach wouldn't actually be a problem. The fact that the federal guidelines are so vague in so many of the details means that we can't really know either that manufacturers will be able to comply with California's rules or that the state will be able to enforce them.

For now, state regulators should use their discretion to be as liberal as possible about what sorts of vehicle testing comport with the NHTSA safety guidelines. Over the longer term, what we need is for states like California to communicate to the NHTSA that it's up to the agency to make absolutely clear what does and does not count as compliance.

It's broadly understood how overly restrictive regulations can dampen innovation, but regulatory ambiguity can be just as bad. For regulators, the clock is ticking. It's up to both the NHTSA and state agencies like the California DMV to bring the clarity this new market needs.

from the history-is-on-the-side-of-innovation dept

The future is positive, a dream. Focus on the future. Use science to stay ahead.

Those words didn't come from a tech-sector legend – not Steve Jobs or Bill Gates – or a newcomer like Mark Zuckerberg. That wisdom about nurturing the previously unimaginable and embracing what technology offers came from a visionary of a different sort.

I heard Shimon Peres share this insight during my visit to the Peres Center for Peace in June, just three months before the former Israeli president and prime minister died. He saw innovation and technology as improving the world – a force for good that can break down borders, both national and political.

Peres' vision stands in stark contrast to Lord Jonathan Sacks' dystopian commentary calling computers and radical Islamists the "two dangers" of this century, defeated only by "an insistence on the dignity of the human person and the sanctity of human life."

On the contrary, I believe innovation and technology will help defeat terrorists and sustain and enhance human life.

While Sacks decries the idea of self-driving cars, this innovation can save tens of thousands of lives a year in the U.S. alone. More than 35,000 people died on our roads last year, and the federal government estimates over 90 percent of crashes are caused by human error. Eliminating the great majority of automobile deaths and serious injuries would certainly meet Sacks' goal of preserving "the sanctity of human life."

The rabbi also frets technology will threaten "the dignity of the human person." Apparently, he hasn't considered the dignity self-driving cars will deliver to seniors and persons with disabilities, providing them with previously unimagined freedom and independence. The ability to read road signs or react quickly to traffic will no longer be needed to travel alone by car.

Similarly, Sacks fears doctors will be replaced by robots with artificial intelligence. However, if health care models remain unchanged, the U.S. may face a shortage of 124,000 physicians by 2025. Virtual care solutions – wireless health devices and telemedicine technology – will increasingly allow Americans to see doctors only when necessary.

Tech-enabled remote care also would remove much of the burden of traveling to see a doctor, reducing congestion on roads and easing the strain on caregivers. With 10,000 Americans turning 65 every day, this developing technology is a mitzvah – a gift or miracle, which will provide life and good health. Innovations in healthcare technologies could help resolve an emerging healthcare crisis – they need to be embraced, not feared.

I understand Rabbi Sacks' dual concerns about the growing use of technology threatening both our jobs and our connections with one another. But every major innovation from the wheel to the factory to the car to the internet radically affected how people work. Certain jobs were lost, yes, but new jobs were created.

What's more, people lived longer as they ate better, stayed healthier and gained greater access to knowledge. Innovation and the myriad benefits it brings allow us to ascend Maslow's hierarchy of needs, from survival with food and shelter to love and satisfying relationships.

The issue remains whether our love affair with technology and fascination with "things" mean we're sacrificing our humanity – choosing the devices in our hands over the people in our midst. Look around a restaurant at dinner and witness seas of quiet people looking at devices. But are devices worse than alcohol, drugs, gambling or anything in excess?

As parents, we should set limits and an example. As adults, we should enjoy the five-sense experience of the people around us, and let these wonders of technology be tools for living rather than our near total life experience.

As President Peres explained to me in June, big data will deliver a new age of being able to predict – and predictability will change and improve the world. I recall him saying, "Four thousand years of commandments will keep the morality. Sixty-eight years of Israel will keep innovations coming." His goal was for the Center for Peace to become a center of innovation.

Technology is changing our lives for the better – enhancing our security, removing human error from our roads, reducing trips to the doctor and cutting our workload. In doing so, it improves the sanctity and dignity of human lives. And that is a blessing, not a curse.

Should regulations and regulators focus on a utilitarian model where the vehicle is programmed to prioritize the good of the overall public above the individual? Or should self-driving cars be programmed to prioritize the welfare of the owner (the "self protective" model)? Would companies like Google, Volvo and others prioritize worries of liability over human lives when choosing the former or latter?

Fortunately for everybody, engineers at Alphabet's X division this week suggested that people should stop worrying about the scenario, arguing that if an automated vehicle has run into the trolley problem, somebody has already screwed up. According to X engineer Andrew Chatham, they've yet to run into anything close to that scenario despite millions of automated miles now logged:

"The main thing to keep in mind is that we have yet to encounter one of these problems,” he said. “In all of our journeys, we have never been in a situation where you have to pick between the baby stroller or the grandmother. Even if we did see a scenario like that, usually that would mean you made a mistake a couple of seconds earlier. And so as a moral software engineer coming into work in the office, if I want to save lives, my goal is to prevent us from getting in that situation, because that implies that we screwed up."

That automated cars will never bump into such a scenario seems unlikely, but Chatham strongly implies that the entire trolley problem scenario has a relatively simple solution: don't hit things, period.

"It takes some of the intellectual intrigue out of the problem, but the answer is almost always ‘slam on the brakes’,” he added. “You’re much more confident about things directly in front of you, just because of how the system works, but also your control is much more precise by slamming on the brakes than trying to swerve into anything. So it would need to be a pretty extreme situation before that becomes anything other than the correct answer."

It's still a question that needs asking, but with no obvious solution on the horizon, engineers appear to be focused on notably more mundane problems. For example, one study suggests that while self-driving cars do get into twice the number of accidents as manually controlled vehicles, those accidents usually occur because the automated car was too careful -- and didn't bend the rules a little like a normal driver would (getting rear-ended for being too cautious at a right on red, for example). As such, the current problem du jour isn't some fantastical scenario involving an on-board AI killing you to save a busload of crying toddlers, but how to get self-driving cars to drive more like the inconsistent, sometimes downright goofy, and error-prone human beings they hope to someday replace.