Science Around the World

Friday, May 25, 2018

The initial report by the National Transportation Safety Board on the fatal self-driving Uber crash in March confirms that the car detected the pedestrian as early as 6 seconds before the crash, but did not slow or stop because its emergency braking systems were deliberately disabled.

Uber told the NTSB that “emergency braking maneuvers are not enabled while the vehicle is under computer control, to reduce the potential for erratic vehicle behavior,” in other words to ensure a smooth ride. “The vehicle operator is relied on to intervene and take action. The system is not designed to alert the operator.” It’s not clear why the emergency braking capability even exists if it is disabled while the car is in operation. The Volvo model’s built-in safety systems — collision avoidance and emergency braking, among other things — are also disabled while in autonomous mode.

It appears that in an emergency situation like this, the “self-driving car” is no better than, and possibly substantially worse than, many ordinary cars already on the road.

It’s hard to understand the logic of this decision. An emergency is exactly the situation when the self-driving car, and not the driver, should be taking action. Its long-range sensors can detect problems accurately from much further away, while its 360-degree awareness and route planning allow it to make safe maneuvers that a human would not be able to do in time. Humans, even when their full attention is on the road, are not the best at catching these things; relying only on them in the most dire circumstances that require quick response times and precise maneuvering seems an incomprehensible and deeply irresponsible decision.

According to the NTSB report, the vehicle first registered Elaine Herzberg on lidar 6 seconds before the crash — at the speed it was traveling, that puts first contact at about 378 feet away. She was first classified as an unknown object, then as a vehicle, then as a bicycle over the next few seconds (the report doesn’t say exactly when each of these classifications occurred).

The car following the collision.

During these 6 seconds, the driver could and should have been alerted of an anomalous object ahead on the left — whether it was a deer, a car, or a bike, it was entering or could enter the road and should be attended to. But the system did not warn the driver and apparently had no way to.

1.3 seconds before impact, which is to say about 80 feet away, the Uber system decided that an emergency braking procedure would be necessary to avoid Herzberg. But it did not hit the brakes, as the emergency braking system had been disabled, nor did it warn the driver because, again, it couldn’t.
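As a rough sanity check on the figures above (the 378-foot and 80-foot distances are taken from the text; the exact NTSB numbers may differ slightly), the implied vehicle speed can be backed out with a little arithmetic:

```python
# Back out the vehicle's approximate speed from the distances and times
# quoted in the article. These inputs come from the text, not the raw
# NTSB report, so treat the results as estimates.
FEET_PER_MILE = 5280
SECONDS_PER_HOUR = 3600

def speed_mph(distance_ft: float, seconds: float) -> float:
    """Average speed in mph implied by covering distance_ft in the given time."""
    return distance_ft / seconds / FEET_PER_MILE * SECONDS_PER_HOUR

print(round(speed_mph(378, 6.0)))  # speed at first lidar detection → 43
print(round(speed_mph(80, 1.3)))   # speed when braking was deemed necessary → 42
```

Both figures work out to roughly 43 mph, consistent with each other: the car covered about 300 feet between first detecting Herzberg and concluding it needed to brake, without slowing.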

It reflects extremely poorly on Uber that it disabled the car’s ability to respond in an emergency (even though the vehicle was authorized to travel at speed at night) and provided no method for the system to alert the driver should it detect something important. This isn’t just a safety lapse, like going on the road with a sub-par lidar system or without checking the headlights. It’s a failure of judgment by Uber, and one that cost a person’s life.

Uber said in a statement: “Over the course of the last two months, we’ve worked closely with the NTSB. As their investigation continues, we’ve initiated our own safety review of our self-driving vehicles program. We’ve also brought on former NTSB Chair Christopher Hart to advise us on our overall safety culture, and we look forward to sharing more on the changes we’ll make in the coming weeks.”

If humanity hopes to make it to Mars anytime soon, we need to understand not just technology, but the psychological dynamic of a small group of astronauts trapped in a confined space for months with no escape.

Microsoft is celebrating the one-year anniversary of its game streaming service and Twitch competitor, Mixer, with a host of new features, including a refresh of the user experience and the launch of an expanded developer toolkit called MixPlay. The new streamer tools will roll out along with the revamped version of Mixer.com across desktop and mobile web, and will initially be available to Mixer Pro subscribers.

The company claims the service saw more than 10 million monthly active users in December 2017 – a figure that, we should point out, may be inflated by holiday sales and the accompanying bump in game downloads and playtime seen across platforms.

However, Microsoft also says that the Mixer viewing audience has more than quadrupled since launch, and the number of watched streams has grown more than five times. These are still not hard numbers, and third-party reports have put Mixer well behind Twitch, whose lead in both concurrent streamers and viewers is sizable and still growing. (Those reports aren’t 100% accurate either, though, because they can’t track Xbox viewership.)

Microsoft says the updated Mixer.com rolls out beginning today, with a focus on making it easier for viewers to find the games and streamers they want to watch, as well as those broadcasting in creative communities.

While Pro subscribers will gain access first, they’ll have to opt in by visiting their Account Settings and turning the new look on manually. (To do so, select the “Site Version” dialog, then the “Feature/UI Refresh” option, Microsoft says.)

The full refresh will arrive to all Mixer users later this summer.

As part of the new experience, the company is also rolling out more tools for developers with the launch of MixPlay.

As Microsoft explains, instead of just adding buttons below a stream, MixPlay lets developers build experiences on top of streams, in panels on the sides of the video, as widgets around the video, or as free-floating overlays – all of which can be designed to mimic the look-and-feel of the streamed content. Basically, this means the entire window is now a canvas, not just a portion of the stream itself.

One example of what MixPlay can enable can be seen in April’s launch of Mixer’s “Share Controller” feature, which created a virtual Xbox controller that could be shared by anyone broadcasting from their Xbox One.

This allowed gamers and viewers to play along in real-time from the web.

In addition, MixPlay will enable games playable only on Mixer, where the controls blend into the stream itself – like Mini Golf, which launched this month and has already drawn 300,000 views, or Truck Stars.

Three new MixPlay-enabled games are launching today as well, including Earthfall, which lets viewers interact with streamers or even change the game; Next Up Hero, where viewers can either help the streamer by taking control or freeze them at the worst possible moment, depending on their mood; and Late Shift, a choose-your-own-adventure crime thriller you control.

These sorts of MixPlay experiences shift Mixer from being just another game streaming service to one where viewers can actively participate, playing themselves or at least guiding the action. That could also serve as a differentiator for Mixer as it tries to carve out a niche in the battle with Twitch and YouTube Gaming.

But MixPlay isn’t just for interactive experiences, Microsoft notes. It can also help developers build features that simply enhance streams with additional content, like a stats dashboard.

Another update involves the Mixer Create app, which offers mobile support to streamers. Now, streamers can kick off a co-stream by clicking the co-stream button on their Mixer Create profile, then send out invites, among other things.

This is live on Android in beta today, and will launch soon on iOS beta, with a full rollout in early June.

In terms of perks, Microsoft is running an “anniversary” promotion offering $5 of Microsoft Store credit along with any Direct Purchase of $9.99 or more. A second promotion is giving away a free, 1-month channel subscription and up to 90 days of Mixer Pro to anyone who reaches Level 10 on their account between May 24th, 2018 at 12:00AM UST and May 28th, 2018 at 11:59PM PDT.

The company additionally announced a new partnership with ESL on esports, which will bring over 15,000 hours of programming from top competitive games to Mixer, including Counter-Strike: Global Offensive, League of Legends, and Dota 2. These tournaments will take advantage of Mixer’s FTL technology for “sub-second latency,” the company says.

Other announcements around games and esports are mentioned in the Mixer blog post, too.

InVision, the startup that wants to be the operating system for designers, today introduced its app store and asset store within InVision Studio. In short, InVision Studio users now have access to some of their most-used apps and services from right within the Studio design tool. Plus, those same users will be able to shop for icons, UX/UI components, typefaces and more from within Studio.

While Studio is still in its early days, InVision has compiled a solid list of initial app store partners, including Google, Salesforce, Slack, Getty, Atlassian, and more.

InVision first launched as a collaboration tool for designers, letting them upload prototypes into the cloud so that other members of the organization could leave feedback before engineers set the design in stone. Since that launch in 2011, InVision has grown to 4 million users, captured 80 percent of the Fortune 100, and raised a total of $235 million in funding.

While collaboration is the bread and butter of InVision’s business, and the company’s only revenue stream, CEO and founder Clark Valberg feels that it isn’t enough to be complementary to the current design tool ecosystem. That’s why InVision launched Studio in late 2017, hoping to take on Adobe and Sketch head-on with its own design tool.

Studio differentiates itself by focusing on the designer’s real-life workflow, which often involves mocking up designs in one app, pulling assets from another, working on animations and transitions in a third, and then stitching the whole thing together to share for collaboration across InVision Cloud. Studio aims to bring all those various services into a single product, and a critical piece of that mission is building out an app store and asset store for the services too sticky for InVision to rebuild from scratch, such as Slack or Atlassian.

With the InVision app store, Studio users can search Getty from within their design and preview various Getty images without ever leaving the app. They can then share that design via Slack or send it off to engineers within Atlassian, or push it straight to UserTesting.com to get real-time feedback from real people.

InVision Studio launched with the ability to upload an organization’s design system (typefaces, icons, logos, and hex codes) directly into Studio, ensuring that designers have easy access to all the assets they need. Now InVision is taking that a step further with the launch of the asset store, letting designers sell their own assets to the greater designer ecosystem.

“Our next big move is to truly become the operating system for product design,” said Valberg. “We want to be to designers what Atlassian is for engineers, what Salesforce is to sales. We’ve worked to become a full-stack company, and now that we’re managing that entire stack it has liberated us from being complementary products to our competitors. We are now a standalone product in that respect.”

Since its launch, Studio has grown to more than 250,000 users. The company says that Studio is still in Early Access, though it’s available to everyone.

At $3,700, Trek’s Commuter+ 7 is a hard sell in a world of commodity e-bikes. Thankfully, Trek has packed superior components, great styling, and surprising durability into the package, making this pedal-assist e-bike one of the best I’ve ridden.

The bike has a matte black finish, fenders, and a motor guard to protect the e-bike from passing rocks and road debris. The 250-watt Bosch Performance CX motor provides pedal assist up to a maximum of 20 miles per hour, and the removable battery lets you swap in a fresh pack if things run low.

I enjoyed riding this thing and, although it may be prohibitively expensive for many, you do get solid components from a well-tested brand. Give it a ride like I did and see for yourself.