What tech companies are doing to stop child porn

Google's controversial email scanning practices netted a child abuser last month, but the internet giant is not the only technology company proactively combating the sharing of child abuse images.

While it is not known exactly what tipped the Google algorithms off to the existence of three pornographic child exploitation images in John Skillern's Gmail, the registered sex offender is now facing new charges in the US.

Both Facebook and Twitter use a photo-analysis service from Microsoft called PhotoDNA. The software, launched in 2009, is also used on Microsoft's email service Hotmail (now Outlook), search engine Bing and data storage and sharing service SkyDrive (now OneDrive).

The image-matching technology assigns a unique number (a hash) to images identified by child abuse prevention groups and recognises matches when those images are uploaded or shared by users, even if they have been modified or resized.
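PhotoDNA's actual algorithm is proprietary, but the general idea can be illustrated with a much simpler "average hash": reduce an image to a short binary fingerprint that survives resizing, then compare fingerprints by counting differing bits. The sketch below is illustrative only and is not the PhotoDNA algorithm; the function names and the distance threshold are assumptions for the example.

```python
def average_hash(pixels, size=8):
    """Fingerprint a 2-D grid of grayscale values (0-255) as a bit string.

    Downsampling to a fixed size x size grid by block averaging makes the
    fingerprint robust to resizing, which is why a resized copy of an image
    still matches the original.
    """
    h, w = len(pixels), len(pixels[0])
    block_means = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            block_means.append(sum(block) / len(block))
    mean = sum(block_means) / len(block_means)
    # Each bit records whether a block is brighter than the image average.
    return ''.join('1' if v > mean else '0' for v in block_means)


def hamming(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))
```

In a system built along these lines, known abuse images would be hashed once, the fingerprints stored, and an upload flagged when its fingerprint falls within a small Hamming distance of a stored one, so the original files never need to be redistributed to the services doing the matching.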

Groups such as the UK's Internet Watch Foundation and the US National Center for Missing and Exploited Children have flagged almost 100 million images and videos.

Samantha Doerr, former senior manager with Microsoft's digital crimes unit, child sexual exploitation prevention, previously told Fairfax Media the centre picks the "worst of the worst pictures on the internet and shares them with internet service providers so they can identify when they are shared again online".

"Technology has enabled a number of amazing things in this world and for people to build communities and share with each other, but also for people to build these [illegal] communities," she said.

"Children are getting younger and younger; about 10 per cent [of images] now is infants and toddlers because they can't tell anyone what is happening to them. And it's getting more and more violent."

Ms Doerr, now a director of public affairs at Microsoft, said all technology companies reported found images as part of a global effort to end child exploitation.

She said the technology had not reduced the number of images being shared, but it had improved authorities' ability to detect them and had led to better reporting.

The Australian Communications and Media Authority investigates complaints about illegal online content in Australia. In 2012, more than half of the 2283 complaints it received about child pornography were substantiated.

But this only stops known child abuse images appearing online. If an image has not already been hashed and added to the PhotoDNA database, it is up to users to flag content as inappropriate before the social networks will check the image and take it down.

Facebook Australia's head of communications, Antonia Christie, said this flagging system worked well.

"The reporting tools we have are very good; anyone can flag an image and there are 1.32 billion Facebook users, so it’s like one big neighbourhood watch," Ms Christie said, adding there were hundreds of staff tasked with checking images.

Facebook and Microsoft Australia would not comment on whether they had reported any accounts to the police based on child abuse images.

US law requires any company that becomes aware of child pornography to report it.

The Australian government also regularly requests information from Facebook for the purpose of investigating crimes.

In the first quarter of 2014, the federal government made 603 requests for information on 640 individual profiles. Facebook handed over the data in 65 per cent of the cases.

In the same period, Facebook also deleted 48 pieces of content or accounts in response to government requests made under anti-discrimination legislation.

Despite its smaller population, Australia ranks eighth in the number of requests logged, behind India, Turkey, Pakistan, Israel, Germany, France and Austria and ahead of the United Arab Emirates and Italy.

Katina Michael, vice-chair of the Australian Privacy Foundation, said that while Google should be "patted on the back" for helping law enforcement tackle a heinous crime, the ramifications of the incident made it deeply concerning.

"For every successful case like this, there are tens of thousands of privacy breaches into everyday, non-criminal users' accounts that are intensely worrying," Dr Michael said. "Blanket surveillance such as this is not in the best interests of society."

Given the ambiguity around the software involved and the public's lack of awareness of how widespread and pervasive Google's scanning is, Dr Michael said deeply private images, such as those of a child's christening, could trigger a red alert despite not being remotely illegal.

"Too many people assume their Gmail account and online life is private, and this case highlights this risk," she said.

Robert Gellman, a US-based privacy and information consultant, told website Mashable that Google's actions had opened a "whole different can of worms" beyond the usual privacy concerns that batter the tech company.

"Drawing a line about email scanning is not simple – no one seems to object if email is scanned for malware but, once you move beyond that, it's much more difficult," Mr Gellman said.

"No one defends child porn, but the principle that an email provider will read mail looking for criminal activities is problematic. It raises concerns over what standards apply and which crimes are included."