Move over, Snapchat. There’s a new ephemeral messaging app in town. And it works in tandem with your favorite social networks: Twitter, Facebook, WhatsApp. Even SMS and email.

Kaboom, which launched Thursday in Apple’s App Store and the Google Play store, gives its users the ability to create self-destructing messages. While it’s not the first app to do so, it does offer some unique advantages.

It works like this: First, you create a message inside the app, which generates a unique, HTTPS-protected web address. Then you set how long—or how many views—you want the message to last before it expires. Select recipients, share the link (on the platform of your choice), and kaboom, you’re all set.
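
To make those mechanics concrete, here is a minimal sketch of how a service like this might implement time- and view-limited links on the server side. Everything in it (the function names, the in-memory store, the msg.example.com domain) is an illustrative assumption, not Kaboom’s actual implementation.

```python
import secrets
import time

# In-memory store standing in for the service's database (illustrative only).
_messages = {}  # token -> {"body", "expires_at", "views_left"}

def create_message(body, ttl_seconds=3600, max_views=1):
    """Store a message and return a unique, hard-to-guess HTTPS link."""
    token = secrets.token_urlsafe(16)  # unguessable ID embedded in the URL
    _messages[token] = {
        "body": body,
        "expires_at": time.time() + ttl_seconds,  # time-based expiry
        "views_left": max_views,                  # view-count expiry
    }
    return f"https://msg.example.com/{token}"  # hypothetical endpoint

def read_message(token):
    """Return the message body, or None if it has already self-destructed."""
    rec = _messages.get(token)
    if rec is None or time.time() > rec["expires_at"]:
        _messages.pop(token, None)  # expired: delete on touch
        return None
    rec["views_left"] -= 1
    body = rec["body"]
    if rec["views_left"] <= 0:
        del _messages[token]  # kaboom: the final allowed view destroys it
    return body
```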

When David Gorodyansky, the Russian-born entrepreneur whose company built the app, riffs about its possibilities, he waxes poetic about the nature of privacy and security in the social media age. He talks about human rights in the developing world. He mentions life and liberty and the next 5 billion people coming online.

A bit grandiose for vanishing selfies, no doubt.

But Gorodyansky’s ambitions are not totally off base. His Menlo Park, Calif.-based company, AnchorFree, already has a hit. The firm is best known for its popular virtual private network service Hotspot Shield, which has 350 million downloads worldwide and has seen big surges in user growth at times of turmoil, such as when regimes restrict their citizens’ access to the Internet, as happened during the Arab Spring. AnchorFree’s product provides people living under censorship with a simple work-around. (Impressed with the service’s uptick, Goldman Sachs invested $52 million in the company’s Series C round of funding in 2012.)

Of course, there is already an app for what Kaboom seeks to accomplish: Snapchat. Not to mention the ultra-secure encrypted messaging app Wickr, which is built entirely on the idea of self-destructing messages. Why not just use those?

“The problem though with them is they require users to leave existing messaging apps and come to a new one,” Gorodyansky says. “We think this is a mistake.”

“We don’t want to recreate Facebook or to stop people from using text messaging,” he continues. “All we want is to add privacy and control to other social networks people already use.”

In other words, the benefit of using Kaboom is precisely that most people won’t have to download it in order to enjoy it. The app injects Snapchat functionality into the places where people are already having conversations.

Still, Kaboom’s privacy controls aren’t perfect. Anyone can screenshot a received message, for example, and the sender will be oblivious. (Snapchat, on the other hand, sends an alert.) Gorodyansky says he would like to add this feature in a later version.

While Kaboom is by no means inventing a new category of communications, the app is at least an interesting experiment in disappearing-message tech. Who knows? Maybe the app will become as popular as—or even more popular than—Snapchat. Maybe it will be routed, the way Twitter’s Periscope routed Meerkat. Or maybe it will bomb, like Facebook’s ill-fated Snapchat competitor, the Poke app.

Facebook just lost a search warrant fight, and that’s bad news for privacy

In a setback for privacy advocates, an appeals court on Tuesday ruled that law enforcement can order tech companies to hand over data on hundreds of users in one swoop – and the companies can’t challenge the warrant or even warn users about the search.

The case in question involves an investigation by New York prosecutors into state employees who scammed the disability system. The investigation, which saw 134 people indicted, was partly based on scanning Facebook for posts that showed the employees doing sports or other physical activities.

While police regularly make Facebook a part of criminal investigations, the way the New York prosecutors obtained data is striking. Instead of applying for individual search warrants, they used a single affidavit as the basis for demanding that Facebook root through the accounts of 381 users. And to avoid tipping off the suspects, the prosecutors also asked for a gag order to prevent Facebook from telling those users about the search.

This wide and sweeping order for hundreds of users led Facebook to challenge the warrants. Others shared Facebook’s concern: the ACLU, along with tech companies like Google and Microsoft, joined Facebook in asking a judge to declare the bulk warrants an unconstitutional search and seizure.

The challenge didn’t get far, as a judge last year rebuffed Facebook, and forced the company to hand over the information. Tuesday’s ruling saw a state appeals court confirm that Facebook didn’t have a right to challenge the warrants in the first place. The court said the only recourse was for the Facebook users to object to the warrants during the subsequent criminal proceedings.

As law professor Orin Kerr points out, the court’s reasoning is correct from a criminal procedure standpoint. The point of a warrant is for a judge to decide up front that there’s probable cause for a search and to set out terms limiting it. (This makes warrants a higher bar than subpoenas and other investigative tools, which typically let recipients mount a challenge before complying.)

For now, the upshot is that a court could still find the bulk warrants in the New York investigation to be illegal. But that’s not much consolation for privacy advocates since, in the meantime, Tuesday’s ruling could embolden other law enforcement agencies to embark on broad fishing expeditions of their own – asking for hundreds or even thousands of user accounts in one sweep from Facebook, Twitter, Microsoft or any other social media company.

Tuesday’s ruling also goes against the grain of recent decisions by the Supreme Court, which has expressed growing concern over the erosion of privacy in the digital age.

Facebook has said it may bring a further appeal in the New York case, which would be a good thing. In the meantime, judges and lawmakers need to consider measures to make it more difficult for law enforcement to trawl through hundreds of users’ accounts with a single request.


Companies need to share how they use our data. Here are some ideas.

The Internet of things may be the buzzword du jour in business circles, but 87 percent of consumers don’t have any idea what that term means, according to a study from the Altimeter Group. But consumers do know that companies are building more connected devices, and that those devices offer marketers a unique, and unsettling, window into their personal data.

Which is why companies need to develop better ways to communicate how they use and share consumers’ data, perhaps borrowing something from the food or even garment industry.

According to the study, consumers are familiar with the data implications of fitness trackers, connected cars or connected home appliances. And most don’t like it. The average Joe is uncomfortable with a company collecting his data, and really hates the idea of that company selling it to someone else. Here is how Altimeter depicts its findings:

[Chart: Altimeter Group]

While older people were, in general, the least keen on data sharing, 45% of all respondents expressed very low trust, or no trust at all, that companies were using their connected-device data securely and in ways that protected their privacy. Since most companies are building their business models for the connected era around collecting data and providing context for new services, it seems like getting consumers on board should become more of a priority for corporate America.

Currently, the choice is often pretty black and white. You accept the onerous terms of service (which are often presented in convoluted user agreements someone clicks through on their way to download the app after purchasing a new device) or you don’t get to use the service. And that has led to debates over privacy and how to educate consumers about the accessibility of their data. Here are some specific areas of concern for consumers:

[Chart: Altimeter Group]

Jessica Groopman, the Altimeter Group industry analyst who conducted the research, suggests that companies should pay attention, but isn’t sure exactly how they should change their behavior. In questioning consumers, she found that most weren’t satisfied with the blanket terms of service they get today, but neither were they keen on getting a notice at every turn about how their data was being used. Instead they favored some kind of middle ground that laid out broadly how their data might be used and shared, presented at the time they purchased the device.

“It’s clear that there’s a communication and consent gap today,” Groopman said. “It isn’t smart for companies to move forward ruthlessly and relentlessly. It should be a bit more of a joint effort where companies educate consumers, and get their opt in.”

She adds that the “ick factor” of sharing data is only increasing as consumers are less able to move away from it. As sensors become more pervasive, especially in public places thanks to connected street lights or retailers using tracking devices such as beacons, consumers have less knowledge of what information is being shared and no ability to opt out.

This raises the question of how one might design such a notification. Focusing primarily on devices that consumers buy, as opposed to products that retailers or municipalities install, Groopman asked whether companies should add something that looks like a nutrition label, or some other kind of explanatory label (I’m picturing something like the washing-machine care instructions on garments), to explain their data-sharing policies. Others have suggested some kind of Good Housekeeping seal of approval for privacy, perhaps administered by an organization such as Truste.
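
To picture what such a “data nutrition label” might contain, here is a hypothetical sketch in the spirit of the food- and garment-label ideas above. The product name, fields, and values are all invented for illustration; no industry standard for such a label exists yet.

```python
# A hypothetical data-sharing label a device maker might present at
# purchase time. Every field and value below is illustrative.
PRIVACY_LABEL = {
    "product": "Acme Fitness Tracker",  # hypothetical device
    "data_collected": ["heart rate", "step count", "sleep patterns", "location"],
    "retention_period": "24 months after account closure",
    "shared_with": {
        "analytics_partners": "aggregated, de-identified data only",
        "advertisers": "none",
    },
    "consumer_controls": ["export my data", "delete my account", "opt out of sharing"],
}
```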

In the meantime, it looks like it’s up to companies to decide how they want to approach this issue. Hopefully the data shared by the Altimeter Group gives them pause before they run roughshod over consumers and their data privacy.

It’s easier than ever to obtain fast and reliable DNA tests, but that doesn’t give companies a green light to use them. An Atlanta grocery distributor learned this lesson the hard way after a jury this week awarded two laborers $2.2 million in a landmark genetics lawsuit that also featured some gross-out facts.

The case relates to an investigation in 2012 when Atlas Logistics Group Retail Services, which stores groceries for Kroger, discovered that someone had been leaving feces in the aisles and on top of canned goods. Alarmed at the potential health risks, the company ordered warehouse employees to submit to a cheek swab in hopes of obtaining a DNA match to the fecal matter.

Two workers who submitted to the test, and were found innocent, then sued Atlas under a law called the Genetic Information Non-Discrimination Act (GINA), which forbids companies to “request, require, or purchase genetic information with respect to an employee.”

The case attracted media attention not only for the gross facts, which led the judge to refer to the “mystery of the devious defecator,” but also because it was the first time a lawsuit based on GINA went to trial. For privacy groups, DNA samples are not only intrusive, but create a risk that employers could use the information they obtain for other purposes such as learning about medical conditions—which is what led to the passage of GINA in the first place.

The outcome of the case pleased the Genetic Alliance, an advocacy group for those with genetic disorders, though a spokesperson told the New York Times that a dearth of other lawsuits has come as a surprise—suggesting either a lack of awareness or that the law is working as intended.

The size of the jury award in the Atlas case surprised some legal scholars, and may be reduced on appeal. It is also likely to prove a memorable deterrent for other companies that may be tempted to ask for their employees’ DNA; Atlas said it had asked for the DNA in the first place on the basis of poor advice from its law firm.

But while it’s important to debate how facial recognition tech can and should be used, it’s equally important to note that there are multiple ways to track a person’s face. A better understanding of how the technology works can help educate people about how we should think about using it, and about making rules for it.

Two types of facial recognition

First off, there are two different approaches that allow companies to employ facial recognition, and one is far more expansive than the other. The first generation of facial recognition technology relies on two popular methods of identifying a face, and is what powers the Churchix software that recognizes faces in church. One of these methods relies on criteria like the distance between your eyes and the measurements of your nose, lips and other facial features, and matches them against an existing database.

The other method used by this first-generation software looks at points of interest on the face and tracks how the pixels in a photograph cluster to form, say, a nose. In both of these models, matching a face is similar to matching a fingerprint: you’re looking for how closely certain characteristics line up. To get a good match, you need a full-frontal picture of a face, good lighting, and a database of “faceprints” to compare against.
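
Here is a minimal sketch of that fingerprint-style matching. The measurement vectors and the threshold are invented for illustration and have nothing to do with Churchix’s actual data:

```python
import numpy as np

# Hypothetical geometric "faceprints": vectors of normalized facial
# measurements (eye spacing, nose width, lip width, jaw width).
# Real systems extract many more such landmarks.
ENROLLED = {
    "alice": np.array([0.42, 0.18, 0.31, 0.55]),
    "bob":   np.array([0.39, 0.22, 0.28, 0.58]),
}

def match_face(measurements, threshold=0.05):
    """Return the closest enrolled name, or None if nothing lines up
    closely enough -- the fingerprint-style matching described above."""
    best_name, best_dist = None, float("inf")
    for name, faceprint in ENROLLED.items():
        dist = np.linalg.norm(measurements - faceprint)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# A clear, full-frontal capture lines up; an obscured or distorted face
# shifts the measurements outside the threshold and returns no result.
print(match_face(np.array([0.41, 0.19, 0.30, 0.55])))  # -> alice
print(match_face(np.array([0.50, 0.10, 0.40, 0.45])))  # -> None
```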

In the case of Churchix, the church is using a database of members’ names and their photos to build a tiny file of data that is then compared with the data generated by the pictures coming from a camera. When there is a match, the name pops up from the existing database. But if a person or photo isn’t in the database, the system can’t return a result. Likewise, if a person’s face is partially obscured, or distorted via a grin, the system cannot return a result.

The second method of facial recognition is much more powerful, and is what Facebook, Microsoft and Google are focusing on. This version is based on machine learning and on efforts to train computers to recognize objects more generally; it is what Facebook uses for its Moments feature. Because it is far more flexible, the computer doesn’t need the entire face to recognize someone, nor does it rely solely on characteristics like the distances between facial features. These models are trained on databases of faces to learn what a person actually looks like, and can then match a face even without knowing who the person is.
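
One common way to think about this second approach is face embeddings: a trained network maps any photo to a fixed-length vector, and two photos of the same person land close together in that vector space. The sketch below assumes such a model exists (the `embed` function is a stand-in, and the 0.8 threshold is invented); it illustrates the general technique, not Facebook’s system:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(photo_a, photo_b, embed, threshold=0.8):
    """Decide whether two photos show the same person. No name database
    is needed: the model can say "same person" without knowing who
    that person is, even from a partial or angled face."""
    vec_a, vec_b = embed(photo_a), embed(photo_b)
    return cosine_similarity(vec_a, vec_b) >= threshold
```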

Facebook is also conducting research to recognize people based not only on their face but also their outfits and their other body parts (it can tell if a leg belongs to a man or a woman, for example, and use that to aid in identification). Using machine learning it can combine a variety of inferences made by different computer vision algorithms to identify a person, even from behind. It is now able to identify a person with 83% accuracy even when they aren’t facing a camera.

How scary is this?

In the case of the first type of technology, people can avoid being identified by keeping their heads down—literally. Moshe Greenshpan, the president of Skakash, the company that sells the Churchix software, says he recommends that churches put attention-getting devices on cameras to encourage people to look at them, so the camera can capture a full-frontal view of the face. One can also confuse the camera with makeup, a dystopian take on contouring, the technique of using makeup to make your features appear different that’s all the rage in Hollywood and on fashion blogs.

Another tactic is to make huge facial expressions that distort your mouth, which helps confuse the software. A grinning Kim Kardashian is the enemy of this type of software.

However, Greenshpan explains that he is investigating adding machine learning to his software and expects it to influence his algorithm within a year or two. That changes things, because then the pictures won’t need to be as clear, and the software will be able to match your face against any existing photographs of you that have your name attached. That means it could use an API call (a software-to-software command) to a site like LinkedIn, or to a site that gathers mug shots, to pull up not just a copy of your face but a detailed profile of your identity.

And that has both privacy and security implications. On the privacy side, the fears are well documented. At its core, facial recognition software takes all the benefits of digital technology—making something instantly and easily searchable from anywhere and forever—and applies them to your face. This means a record of your physical actions in the real world becomes searchable in the digital world. With the first-generation technology, you need a database that links faces to names, as well as high-quality images of the people you hope to match.

But the type of facial recognition that Facebook, and now even Greenshpan, is proposing is much more invasive. It also has implications for security. One of the ways some churches use the Churchix software is to prevent known criminals from attending church events: the church builds a database of people it wants to exclude and looks for them. Casinos or banks could use it the same way. However, the church has to work from an existing local database of known offenders; the software can’t recognize unknown threats that come from further afield.

But over time, as machine learning is applied, the recognition gets much better. Plus, as the technology improves and gets faster and cheaper, any facial recognition software can cast a wider net in search of a person’s identity, and those wider nets could also deliver more information about each proposed match. So while today it almost feels as if the war against facial recognition is lost—Facebook says it can identify the person in one picture out of 800 million on its network in less than 5 seconds—the technology can become both more pervasive and more powerful.

Given this, it’s probably time to start discussing how we want to protect privacy and how we can use this technology to improve our lives.

Facebook’s new algorithm can recognize you even if your face is hidden

We are just beginning to come to grips with the idea that computers and algorithms can recognize our faces, and the implications that has for privacy. Now the head of Facebook’s artificial-intelligence research lab says that an experimental algorithm he helped develop for the giant social network can recognize you with a high degree of accuracy even if your face is hidden from the camera.

Yann LeCun, an expert in computer vision and pattern recognition who was hired by Facebook in 2013, presented his research at a recent conference in Boston. He told New Scientist magazine that he wanted to see whether the same kinds of algorithms used for facial recognition could be tweaked to recognize people from other physical characteristics—their body type, the way they stand, etc.

“There are a lot of cues we use. People have characteristic aspects, even if you look at them from the back,” said LeCun, a former Bell Labs researcher who helped develop the algorithm used by many U.S. banks to verify handwriting on checks. “For example, you can recognise Mark Zuckerberg very easily, because he always wears a gray T-shirt.”

The research took 40,000 public photos from the social network, some of which showed people with their faces fully visible to the camera and others with their faces partially or fully hidden. After running them through the recognition filter, LeCun said the system could determine a user’s identity with 83% accuracy. Using its existing algorithms, Facebook has said that it can recognize you with 98% accuracy—in fact, its software can identify you in one picture out of 800 million in less than 5 seconds.

Companies like Facebook are interested in facial recognition so that they can help users organize their photos, the way the social network wants to do with its recently launched Moments feature—which automatically sorts your pictures into different categories, and can detect when you and your friends upload photos of the same event. The description of the new service says:

“Moments groups the photos on your phone based on when they were taken and, using facial recognition technology, which friends are in them. You can then privately sync those photos quickly and easily with specific friends, and they can choose to sync their photos with you as well.”

Google is also focusing on similar features, which can recognize people, places and things in pictures and automatically organize or tag them for search. LeCun said that his recognition algorithm could be useful in helping people find out when someone else uploads a photo of them to Facebook, even if their face is not visible, which would help the privacy-conscious keep track of where their pictures are being published.

For many users, however, these kinds of features can cross a line where helpful becomes creepy. The same recognition algorithm that allows Facebook or Google to detect a picture of your child so you can share it easily could be used to identify people in all kinds of ways—many of them disturbing. For example, police or government authorities could track your location or behavior for their own purposes, or insurance companies could monitor your activities to see if your claim is justified.

Privacy advocates and industry groups are divided over how to approach this growing area. A meeting that was designed to find common ground between the two disintegrated recently after privacy groups said the industry representatives refused to agree that people have a right not to be identified in public places.

“What facial recognition allows is a world without anonymity,” Alvaro Bedoya of the Georgetown University Law Center told New Scientist. “You walk into a car dealership and the salesman knows your name and how much you make. That’s not a world I want to live in.” There are even churches using facial-recognition technology to identify those who attend regularly so they can ask them for donations.

Facebook’s pursuit of facial-recognition technology could also run into opposition from lawmakers in a number of states. The social network is already being sued by a man in Illinois who argues that the company’s auto-tagging feature breaches his rights under the state’s privacy laws. But as Fortune’s Jeff Roberts explained recently, the legal aspects of facial recognition technology—and who is going to enforce any restrictions on companies like Facebook—are still very much an open question.

Uber privacy charges are overblown—except for one thing

Uber isn’t going to win any corporate do-gooder awards. In recent news stories it has been characterized as a ruthless, money-bloated raptor of a company that will do anything to win. Still, there’s no need to be alarmed about every accusation of skullduggery that critics direct at the ride-sharing platform.

The latest complaints are a case in point. They were lodged with the FTC on Monday by the Electronic Privacy Information Center (EPIC), and they ask federal regulators to issue an injunction to stop Uber from implementing a new privacy policy on July 15. According to EPIC, the policy amounts to a deceptive trade practice because it misleads consumers about how the company will use customers’ contact lists, location data, and other personal information.

But is the Uber policy so bad as to merit a federal investigation? There’s the rub. The company’s new privacy policy is actually pretty good in many respects. It’s shorter, clearer and doesn’t have any nasty surprises – but it also fails to fix the most dangerous part of Uber’s data practices.

Uber improves (a little)

Uber committed an appalling series of privacy gaffes last year – tracking journalists, flaunting a “God View” of its customers at parties, failing to protect data, and so on – but it’s been trying to get its act together, and has made some progress. In recent months, for instance, the company says it has deep-sixed the “God View” trick, and it has also commissioned a law firm to propose some privacy guidelines.

And last month, a lawyer from Uber said in a blog post that the company would heed some of the firm’s advice by making the new privacy policy shorter, easier to read and available in more languages. As for the new policy itself, it’s not exactly a ringing privacy manifesto, but nor did its publication elicit the “oh, my god, they did what?!?” reaction that has greeted many of Uber’s other business decisions.

In light of this modest progress, EPIC’s “deception” accusations (set out in the FTC complaint) ring a little hollow.

Take, for instance, EPIC’s gripe in the complaint that Uber is being deceptive about collecting customers’ location data. First off, is anyone surprised that an app for summoning taxis knows your whereabouts? This would be like using Google Maps and objecting to it using your GPS position when you navigate.

To be fair, EPIC’s complaint also points out that Uber might ask for data when the app is not in use, and that it currently uses customer IP addresses to determine location. The latter charge, however, does not really rise to the level of “deceptive,” since it’s a common practice among apps and since an IP address from a mobile phone typically won’t disclose an exact location.

“We have always disclosed our collection of location information – it is core to our product (we are a location-based service),” said an Uber spokesperson, by email. “EPIC’s allegations about IP tracking are misleading; we receive IP addresses as part of the traffic data that all apps receive.”

As for EPIC’s claim that the new Uber policy could one day allow the company to collect more information about users’ location and contact lists, well, that day will only come sometime in the future – and Uber will (perhaps) have the good sense to inform its customers what it is doing.

Could the Uber policy be better? Sure. Does it merit a federal investigation? Hardly. Except for one thing.

Location, location, location

The part of the EPIC complaint that deserves a deep stare from regulators concerns how Uber uses customers’ location data – not present location, but past location. Under Uber’s current and future privacy policy, the company reserves the right to compile a complete travel history: every Uber trip you take, for an indefinite amount of time.

The folly of this is plain. Hired car trips are often used for sensitive personal matters like late night affairs, secret business meetings or discreet visits to the STD clinic. But under Uber’s rules, the company compiles a personal dossier for every single trip taken by every customer.

The database that Uber possesses is what security types call a “honey pot”: one that can attract all sorts of snoops, from the U.S. government to Chinese hackers. And those hazards are in addition to whatever intrusive uses – marketing, third-party partnerships, etc. – Uber itself might make of the information.

Uber, meanwhile, can’t even offer a plausible reason for why it insists on storing this information.

“[It’s] a benefit to riders to be able to keep track of their trip history,” was the best explanation I could get. If that’s the real reason, then surely Uber would let those users who don’t wish to have this “benefit” delete the information?

No dice. And worse, even former Uber customers can’t be sure the data is gone. All Uber will promise is that it will eventually delete personal information if you quit the service – unless it deems there are “account issues.”

So there you have it. Uber’s new privacy policy is not really a big deal – except for the fact it lets Uber maintain its dangerous, colossally flawed data retention policy.

The solution is easy enough. As Julia Horwitz, the lawyer who authored the EPIC complaint, suggests, Uber should delete trip data after a ride is complete. Or at least allow its customers an easy way to do so themselves. If not, the FTC should step in.

Keep in mind, EPIC’s complaint is just its suggestion for the sort of case the FTC should bring against Uber – there’s no indication for now whether the agency will get involved one way or the other.

A medical gold mine, buried underneath layers of red tape

Over the last few years, cloud computing has become more broadly accessible, offering an opportunity for performance improvement across a range of industries. Open, plug-and-play cloud services and infrastructure from players such as Box and Dropbox allow aggregation and analysis of data from multiple independent entities and sources at a scale not previously feasible. Opening these rich, cloud-based data platforms to third-party analytics can generate greater insights than are possible within limited proprietary datasets.

Not every industry has taken advantage of these advances, though. Health care has certainly been a laggard, largely on account of the data privacy rules in the Health Insurance Portability and Accountability Act of 1996 (HIPAA), which have contributed to a proliferation of isolated data platforms and electronic medical records. According to the Bureau of Labor Statistics, employment growth in health care has been among the highest in the U.S. since 2002 and will continue to outpace other industries through 2022. While productivity in health care is more difficult to pin down, the industry has not reaped the productivity gains of other leading industries. Given the benefits of the cloud ecosystem, is health care missing a huge opportunity?

Cloud computing can help improve the industries that embrace it. The value of data lies in the insights that can be drawn from it. Traditionally, however, it wasn’t so easy to access those insights; companies needed to aggregate, standardize, and contextualize data—a challenging, expensive process. Cloud computing simplifies this process on several levels.

In data aggregation, information is gathered and summarized before it is analyzed. The cloud makes it much cheaper and easier to combine disparate sets of data at a large scale.

In data standardization, multiple types of data are converted into a common format. Cloud-based infrastructure allows aggregation platforms to digest broad data sets and transform data into compatible formats for deeper analysis.

In data contextualization, combining data with user profiles and real-time feedback can help deliver more relevant, tailored information. The cloud makes it possible to aggregate much richer and more disparate data sources. The more sources and the broader the analytics, the more valuable your data becomes.
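
To make the aggregation and standardization steps concrete, here is a minimal sketch. The two record formats, field names, and values are invented for illustration; contextualization would then join the combined dataset with user profiles and real-time feedback.

```python
from datetime import date

# Records from two hypothetical sources that store the same kind of
# information in incompatible formats (illustrative only).
clinic_a = [{"patient": "A-17", "visit": "2015-03-02", "bp": "120/80"}]
clinic_b = [{"id": "B-09", "seen_on": date(2015, 4, 9),
             "systolic": 118, "diastolic": 76}]

def standardize_a(rec):
    """Split clinic A's combined blood-pressure string into the common schema."""
    systolic, diastolic = (int(x) for x in rec["bp"].split("/"))
    return {"source": "clinic_a", "patient_id": rec["patient"],
            "date": rec["visit"], "systolic": systolic, "diastolic": diastolic}

def standardize_b(rec):
    """Convert clinic B's date object and separate readings into the common schema."""
    return {"source": "clinic_b", "patient_id": rec["id"],
            "date": rec["seen_on"].isoformat(),
            "systolic": rec["systolic"], "diastolic": rec["diastolic"]}

# Aggregation: one combined, analysis-ready dataset in a single format.
combined = [standardize_a(r) for r in clinic_a] + [standardize_b(r) for r in clinic_b]
print(combined)
```

Note that the sketch deliberately keeps each clinic’s own patient IDs. Linking records for the same person across organizations is the hard part, and in health care it is exactly the step that privacy rules constrain, as discussed below.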

The possibilities of cloud computing, however, bring new technical and organizational hurdles. Collection errors, along with inconsistent entry formats, can hinder comparability, necessitating extensive data standardization. And the financial incentives for sharing data and broad-based collaboration can run up against companies’ reluctance to let proprietary information travel beyond their walls. Organizations looking to fully harness the potential of cloud computing must address these hurdles and make a compelling case for sharing information.

Today, health care is not taking advantage of what cloud computing can offer. Extensive privacy laws make it challenging to link long-term personal health information to an individual. This makes it very difficult to track and assess patient care systematically over time, especially if that care is provided by many different organizations.

The emergence of aggregation platforms offers some hope. For example, Athenahealth offers cloud-based software for electronic health records and other health care practice management services. The cloud provides a foundation for companies to integrate data (patient claims, billing records, clinical data, etc.), generate analytics to simplify the way hospitals function and share information, and help hospital executives run their practices.

Flatiron Health uses the cloud to aggregate, standardize, and structure data collected from cancer centers and researchers. The company’s efforts to organize data from millions of patients into a common cloud-based format open up new possibilities for better outcomes in oncology.

To succeed, aggregators must convince care providers, health plans, researchers, and other industry players to share their data. They must offer value to individual users, regardless of others’ participation.

Privacy is critical, but consumers and patients may pay a steep price if valuable information isn’t shared. Is there any way to protect privacy while still increasing aggregation? With no immediate changes to HIPAA or data sharing restrictions in sight, the clinical side of the industry may feel even less urgency to open up.

Meanwhile, the fitness and wellness side of health care is becoming even more relentless in its data aggregation efforts, as consumers share their sleep, eating, and exercise habits using wearable devices and smartphone apps. Wellness providers have an opportunity to demonstrate the power of large-scale data aggregation and analytics to the health care world. As users begin to see the value of sharing their data, they are likely to expect similar access from more traditional providers.

Today, cloud computing continues to offer opportunities for industries to better integrate and understand the data they generate. In industries where regulations still encourage data silos, the eventual winners will likely be those that construct open, wide-ranging ecosystems that attract disparate data sets and encourage collaborative analytics. Whatever your industry or the size of your organization, ask yourself two questions: What additional data could you combine with your own to get more insight? And how could you invite third-party players to engage and run analytics on your data?

John Hagel III is the co-chairman of the Deloitte Center for the Edge based in Silicon Valley. John Seely Brown is the independent co-chairman of the Deloitte Center for the Edge.

Google to remove “revenge porn” links at victims’ request

Google is taking steps to address a persistent problem of the digital age: What to do when people upload nude or sexually explicit pictures of others without their permission. On Friday, the company announced it will let victims of so-called revenge porn ask for the removal of certain webpages from Google’s search results.

“We’ve heard many troubling stories of ‘revenge porn’: an ex-partner seeking to publicly humiliate a person by posting private images of them, or hackers stealing and distributing images from victims’ accounts,” said Google in a blog post. “Our philosophy has always been that Search should reflect the whole web. But revenge porn images are intensely personal and emotionally damaging, and serve only to degrade the victims.”

As the company acknowledges in the blog post, the new policy will not entirely solve the problem of “revenge porn,” since Google cannot delete the underlying website from the internet. But it may bring victims some comfort by making the websites harder to find.

For victims, who are typically women, “revenge porn” can be doubly traumatizing because there are few practical ways to get the photos removed. In many cases, the photos appear on websites that permit anyone to upload a name and picture; the operators of those websites, meanwhile, are shielded by a law that provides legal immunity for user-submitted content. Even worse, such websites often work hand-in-glove with “reputation defender” companies that require victims to pay hundreds of dollars to get a photo removed – a form of extortion, in other words.

The new Google policy also comes as more states move to address the problem with new criminal laws (it’s unclear if all of these laws will survive constitutional scrutiny).

Google’s new policy, meanwhile, is unlikely to stir controversy. Unlike requests based on copyright or the “right to be forgotten,” which people have used as a pretext to delete information in the public interest, it appears improbable that someone would try to misuse Google’s revenge porn policy in similar fashion.

Asking Google to remove a search result for an unauthorized nude picture will require people to complete a form that includes the URL of the offending website. It’s unclear whether the form, which Google says will become available in the coming weeks, can be used only by those who appear in the pictures, or whether family members or guardians will be able to make such requests as well.

Shutterfly hit with privacy suit over “faceprints,” use of photos

A Chicago man claims the popular photo-book service Shutterfly is violating a law that restricts how companies collect biometric data, and is seeking at least $5 million on behalf of others whose faces have been added to the Shutterfly database without permission.

The class action lawsuit, filed on Wednesday in Illinois federal court, comes amid increased scrutiny over how companies are using facial recognition technology to create so-called “faceprints,” which provide a unique identifier – akin to a fingerprint – based on a person’s face. Faceprints are helpful for tagging photos on services like Facebook or Shutterfly, but they also pose a privacy danger because they provide a way to identify a person on the internet or even in public.

In the lawsuit, Brian Norberg claims he has never used Shutterfly, but that someone else uploaded his photo and “tagged” it with his name; this led the company to add his face to an enormous biometrics database and to create a distinct profile based on his biometric data. Months later, when the other person uploaded more photos, Norberg claims, Shutterfly automatically recognized pictures of him.

Norberg argues Shutterfly is violating a state law that requires companies to inform people about when and how they use a person’s biometric data. The lawsuit, which seeks $1,000 or $5,000 (the statutory damages for negligent and reckless violations, respectively) for every Illinois resident whose face was added to Shutterfly’s database, resembles a similar case filed against Facebook in late March.

Both lawsuits were filed in Chicago because Illinois is one of only two states in the country with a law that restricts how companies use biometric data (Texas is the other, while Washington state is exploring a similar measure).

Shutterfly did not immediately respond to an email request for comment.

The cases against Shutterfly and Facebook coincide with growing concern over the federal government’s inaction over facial recognition technology. This week, nine consumer and privacy watchdog groups walked out of Commerce Department talks aimed at establishing guidelines on the use of faceprints, claiming that industry groups would not accept even basic limits on how they are used.

Last week, Facebook rolled out a new app called “Moments” that makes it easier for users to apply its facial recognition tools to identify people in their smartphone camera rolls, and to share photos of them.
