SAN FRANCISCO – Google’s self-driving prototype had another eventful month of testing in November, highlighted by a slow-as-molasses run-in with the local law. Google’s monthly self-driving car reports are fun to read through, and give transparent accounts of what the team is up to, how the cars are performing, and any lessons learned along the way.

As automakers and tech companies talk up self-driving vehicles and the chance to bring their benefits to the world, plenty of questions are being raised about the technology. “Other than signalling devices typical to non-autonomous vehicles, such as turn signals, head lights, high beams, brake lights, reverse lights, and some audible signals (horns, reverse light beepers, etc.), autonomous vehicles lack the capability to directly communicate the vehicle’s future behaviour,” the patent says. “Simply stopping a vehicle without these driver-initiated signals may not be sufficiently reassuring to the pedestrian that it is indeed safe to cross.” It also says: “The vehicle may include sensors which detect an object such as a pedestrian attempting or about to cross the roadway in front of the vehicle.”

Google Car executives didn’t comment much at the time of the Nov. 12 incident, which was reported extensively online after a Mountain View denizen captured the priceless image of a town police officer talking to a driverless car. One question that’s attracted much attention is what’s called “the trolley problem.” The issue is this: do you flip a switch and divert a trolley from killing two people, so that it instead kills only one person? In the company’s latest monthly update on its six-year-old autonomous car project, released Tuesday, it answered what it described as “the most common questions we heard.” Apparently one was not why the car was doing a grandparent-like 24 mph in a 35 mph zone. Google’s self-driving project is leading the pack, and a recently granted patent (US009196164, granted Nov. 24, 2015) signals that the company is trying to address some of the issues it has faced with how pedestrians and other drivers interact with its cars on the streets. The company says it’s currently averaging 10,000–15,000 autonomous miles per week on public streets with 23 Lexus RX450h SUVs and 30 prototypes on the road.

Google has for years now been testing sensor-packed Lexus SUVs in Mountain View and, more recently, Austin, Texas — vehicles that are capable of higher speeds. Consumers might be hesitant to take a ride in a self-driving vehicle if there’s a chance the software powering the car is programmed to put them at risk to save someone else. According to the publicly available patent application, Google plans to make its cars more communicative with light-up signs, audio cues, mechanical hands, and, eerily, “robotic eyes on the vehicle that allow the pedestrian to recognize that the vehicle ‘sees’ the pedestrian.” The light-up signs, which appear on the front bumper or the side of the car, would say things like “safe to cross” or “coming through.” A speaker on the outside of the car could also vocalize the text that appears on the signs, or make other audible alerts.

Lastly, the post notes that Google’s self-driving cars have an on-board library of siren sounds so that even if the flashing lights of emergency vehicles aren’t detected by the car’s cameras, it will know to begin slowing down or even pull over. From the very beginning we designed our prototypes for learning; we wanted to see what it would really take to design, build, and operate a fully self-driving vehicle — something that had never existed in the world before. Two other points of note in the report include yet another rear-end collision caused by an inattentive human motorist, this time a 4 mph bumper bump at a red light. (Over six years, Google’s cars have been hit more than a dozen times, each time the fault of humans.) Google’s vehicle was hit after it nudged ahead to determine if it was clear to make the right turn on red. Chris Urmson, who heads up Google’s self-driving car project, weighed in on the subject Tuesday at the Volpe National Transportation Systems Center in Cambridge, Mass. “It’s a fun problem for philosophers to think about, but in real time, humans don’t do that,” Urmson said. “There’s some kind of reaction that happens.

The vehicle’s computers clearly thought no, while the human driving the automobile behind it was sure the autonomous car was going to take the chance and turn. “Our software and sensors are good at tracking multiple objects and calculating the speed of oncoming vehicles, enabling us to judge when there’s a big enough gap to safely make the turn,” the post says. “Because our sensors are designed to see 360 degrees around the car, we’re on the lookout for pedestrians stepping off curbs or cyclists approaching from behind.” It may be the one that they look back on and say I was proud of, or it may just be what happened in the moment.” Urmson stressed that Google’s cars don’t know who might be walking on a sidewalk or ambling in a crosswalk.

A simpler vehicle enabled us to focus on the things we really wanted to study, like the placement of our sensors and the performance of our self-driving software. Secondly, we cared a lot about the approachability of the vehicle; slow speeds are generally safer (the kinetic energy of a vehicle moving at 35 mph is roughly twice that of one moving at 25 mph) and help the vehicles feel at home on neighborhood streets. So basically, if a self-driving car gets the green light and is about to proceed forward, it could flash a stop sign on the right-hand side to let pedestrians know not to cross.
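That kinetic-energy comparison is easy to verify: since kinetic energy scales with the square of speed for a fixed mass, the ratio between the two speeds tells the whole story. A quick back-of-the-envelope check (the function name here is ours, for illustration only):

```python
# Kinetic energy is KE = 0.5 * m * v^2, so for the same vehicle mass
# the ratio of energies at two speeds depends only on (v1 / v2)^2.

def kinetic_energy_ratio(v_fast_mph: float, v_slow_mph: float) -> float:
    """Ratio of kinetic energies for the same mass at two speeds."""
    return (v_fast_mph / v_slow_mph) ** 2

ratio = kinetic_energy_ratio(35, 25)
print(f"KE at 35 mph is {ratio:.2f}x KE at 25 mph")  # 1.96x — roughly double
```

The exact figure is 1.96, which is why the post rounds it to “twice.”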