YouTube Adds Face Blurring For Privacy

Tool addresses need to protect activists and others caught on camera against their will, but is not a guarantee, say experts.


Six months after Google introduced an optional facial recognition technology called Find my Face, the company's YouTube subsidiary, home of millions of videos created to promote, publicize, share, or disclose, has unveiled a way to hide faces.

In May, Victoria Grand, YouTube director of global communications and policy, said the company was working on software to blur faces in posted videos, in answer to requests from the human rights community and as an option to deal with privacy complaints from those captured on video against their will.

On Wednesday, YouTube made its face blurring system available through its Video Enhancements tool.

"Whether you want to share sensitive protest footage without exposing the faces of the activists involved, or share the winning point in your 8-year-old's basketball game without broadcasting the children's faces to the world, our face blurring technology is a first step towards providing visual anonymity for video on YouTube," said Amanda Conway, YouTube policy associate, in a blog post.

As InformationWeek recently noted, videos of significant events posted to YouTube have become an important element in professionally produced video news segments and as standalone reportage. Having an easy way to protect the privacy of individuals shown in such videos should help reduce the chance that human rights protesters caught on camera will also be caught by authorities.

YouTube's face blurring system creates a new copy of the video in question with the blur effect, and provides an option to delete the original, unaltered video. This prevents authorities from seeking the original raw footage from YouTube, although the person posting the video must make sure he or she hasn't retained a local copy.

YouTube says it would be difficult to bypass its blurring technology. "We can't say it's impossible to unblur, but we have made it incredibly difficult through pixelating the blurred regions and adding noise," a company spokeswoman said in an email. "In fact, we feel we've made it so difficult that it's not worth the immense effort required to try."
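YouTube has not published its implementation, but the spokeswoman's description, pixelating the blurred regions and adding noise, maps onto a well-known anonymization pattern: average each tile of the region down to a single value, then add random noise so the averaging cannot be inverted exactly. A minimal sketch of that idea, assuming a grayscale image as a NumPy array and hypothetical region coordinates:

```python
import numpy as np

def obscure_region(image, top, left, height, width,
                   block=8, noise_std=12.0, seed=0):
    """Irreversibly obscure a rectangular region of a grayscale image.

    Pixelates the region by replacing each block x block tile with its
    mean value, then adds Gaussian noise so the original pixels cannot
    be recovered from the result.
    """
    out = image.astype(np.float64).copy()
    region = out[top:top + height, left:left + width]

    # Pixelate: collapse each tile to its average brightness.
    for y in range(0, height, block):
        for x in range(0, width, block):
            tile = region[y:y + block, x:x + block]
            tile[...] = tile.mean()

    # Add noise so even the tile averages are perturbed.
    rng = np.random.default_rng(seed)
    region += rng.normal(0.0, noise_std, size=region.shape)

    out[top:top + height, left:left + width] = np.clip(region, 0, 255)
    return out.astype(np.uint8)
```

This is only an illustration of the general technique, not YouTube's actual pipeline; a production system would also track faces across frames so the obscured region follows each subject through the video.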

Security researcher Ashkan Soltani observed via Twitter that although face blurring is helpful, it's not a panacea for the privacy risks that confront human rights advocates. He points to research related to Bertillonage, a system for identifying people based on physical characteristics such as height, weight, eye color, and the like--biometrics, in other words.

Grand, in a blog post, makes just that point, noting that voices, names said on camera, and details captured even briefly might reveal information about the location or identity of people on-screen or off.

Conway offers even more reason to be cautious: This is emerging technology and it might not adequately conceal a person's identity. "It's possible that certain faces or frames will not be blurred," she says. "If you are not satisfied with the accuracy of the blurring as you see it in the preview, you may wish to keep your video private."


I was unaware that this project was underway, and I am glad I had the opportunity to read about it. It sounds like it will be a useful tool for anonymity on the web. I am not saying that this is the solution to video privacy on the web, but it definitely addresses the situation and is a positive step in the right direction. Soltani makes a great point that there are other ways of identifying people based on their features. I wonder whether that technology is, or could become, as accurate as facial recognition.

This is the perfect solution for independent filming. You can capture a scene in the streets and blur the faces of people who do not want their images to go public...for a variety of reasons. Nobody active in security or the military, outside of political executives, wants to go public, because it could end their career and even get them or their friends killed.

A nice additional feature would be this: if someone spots a video that they appear in but didn't film themselves, they could click a button to request that their face be blurred. This would send a notification to the originator of the video, who could then approve -- or reject -- the request.

Granted, this may not account for multiple versions of a video, and if the creator says no, then you're out of luck. Still, it might be something to consider.
