Security

HMRC officials are examining "charging options" to release anonymised tax data to third parties including companies, researchers and public bodies, the Guardian reported.

The plans are likely to raise serious concerns among privacy campaigners and MPs in the wake of the Care.data scheme, which shares "anonymised" medical records with third parties, the newspaper said.

The Care.data initiative was previously suspended for six months due to concerns that people could be identified easily from the given information, such as postcodes, dates of birth, NHS numbers, ethnicity and gender.

Likewise, despite the government’s assurance of suitable safeguards, sharing taxpayer information in this way would be risky and could break trust between HMRC and taxpayers, according to experts.

Under the data sharing scheme, details about income, tax arrangements and payment history are expected to be traded. Such information would be useful to credit rating agencies, advertisers and retailers who want to practise price discrimination, Ross Anderson, a professor of security engineering at Cambridge University, told the Guardian.

"If they were to make HMRC information more available, there’s an awful lot of people who would like to get their hands on it. Anonymisation is something about which they lied to us over medical data … If the same thing is about to be done by HMRC, there should be a much greater public debate about this," he said.

Former Conservative minister David Davis told the newspaper that the proposal is "borderline insane".

"The Treasury lists no credible benefits and offers a justification based on an international agreement that does not lead other governments to open up their tax database," he said.

"It defies logic that we would remove those restraints at a time when data can be collected by the gigabyte, processed in milliseconds and transported around the world almost instantaneously."

"The ongoing claims about anonymous data overlook the serious risks to privacy of individual level data being vulnerable to re-identification," said Emma Carr, deputy director of campaign group Big Brother Watch.

When NSA whistle-blower Edward Snowden first emailed Glenn Greenwald, he insisted on using email encryption software called PGP for all communications. But this month, we learned that Snowden used another technology to keep his communications out of the NSA’s prying eyes. It’s called Tails. And naturally, nobody knows exactly who created it.

Tails is a kind of computer-in-a-box. You install it on a DVD or USB drive, boot up the computer from the drive and, voila, you’re pretty close to anonymous on the internet. At its heart, Tails is a version of the Linux operating system optimized for anonymity. It comes with several privacy and encryption tools, most notably Tor, an application that anonymizes a user’s internet traffic by routing it through a network of computers run by volunteers around the world.

Snowden, Greenwald and their collaborator, documentary film maker Laura Poitras, used it because, by design, Tails doesn’t store any data locally. This makes it virtually immune to malicious software, and prevents someone from performing effective forensics on the computer after the fact. That protects both the journalists, and often more importantly, their sources.

"The installation and verification has a learning curve to make sure it is installed correctly," Poitras told Wired by e-mail. "But once the set up is done, I think it is very easy to use."

An operating system for anonymity

Originally developed as a research project by the U.S. Naval Research Laboratory, Tor has been used by a wide range of people who care about online anonymity: everyone from Silk Road drug dealers, to activists, whistleblowers, stalking victims and people who simply like their online privacy.

Tails makes it much easier to use Tor and other privacy tools. Once you boot into Tails — which requires no special setup — Tor runs automatically. When you’re done using it, you can boot back into your PC’s normal operating system, and no history from your Tails session will remain.

The developers of Tails are, appropriately, anonymous. All of Wired’s questions were answered collectively, and anonymously, by the group’s members via email.

They’re protecting their identities, in part, to help protect the code from government interference. "The NSA has been pressuring free software projects and developers in various ways," the group says, referring to a conference last year at which Linux creator Linus Torvalds implied that the NSA had asked him to place a backdoor in the operating system.

But the Tails team is also trying to strike a blow against the widespread erosion of online privacy. "The masters of today’s Internet, namely the marketing giants like Google, Facebook, and Yahoo, and the spying agencies, really want our lives to be more and more transparent online, and this is only for their own benefit," the group says. "So trying to counterbalance this tendency seems like a logical position for people developing an operating system that defends privacy and anonymity online."

But since we don’t know who wrote Tails, how do we know it isn’t some government plot designed to snare activists or criminals? A couple of ways, actually. One of the Snowden leaks shows the NSA complaining about Tails in a PowerPoint slide; if it’s bad for the NSA, it’s safe to say it’s good for privacy. And all of the Tails code is open source, so it can be inspected by anyone worried about foul play. "Some of us simply believe that our work, what we do, and how we do it, should be enough to trust Tails, without the need of us using our legal names," the group says.

According to the group, Tails began five years ago. "At that time some of us were already Tor enthusiasts and had been involved in free software communities for years," they say. "But we felt that something was missing to the panorama: a toolbox that would bring all the essential privacy enhancing technologies together and made them ready to use and accessible to a larger public."

The developers initially called their project Amnesia and based it on an existing operating system called Incognito. Soon the Amnesia and Incognito projects merged into Tails, which stands for The Amnesic Incognito Live System.

And while the core Tails group focuses on developing the operating system for laptops and desktop computers, a separate group is making a mobile version that can run on Android and Ubuntu tablets, provided the user has root access to the device.

Know your limitations

In addition to Tor, Tails includes privacy tools like PGP, the password management system KeePassX, and the chat encryption plugin Off-the-Record. But Tails doesn’t just bundle a bunch of off-the-shelf tools into a single package. Many of the applications have been modified to improve the privacy of their users.

But no operating system or privacy tool can guarantee complete protection in all situations.

Although Tails includes productivity applications like OpenOffice, GIMP and Audacity, it doesn’t make a great everyday operating system. That’s because over the course of day-to-day use, you’re likely to use one service or another that could be linked with your identity, blowing your cover entirely. Instead, Tails should only be used for the specific activities that need to be kept anonymous, and nothing else.

Of course the group is constantly working to fix security issues, and they’re always looking for volunteers to help with the project. They’ve also applied for a grant from the Knight Foundation, and are collecting donations via the Freedom of the Press Foundation, the group that first disclosed Tails’ role in the Snowden story.

That money could go a long way toward helping journalists — and others — stay away from the snoops. Reporters, after all, aren’t always the most tech-savvy people. As Washington Post reporter Barton Gellman told the Freedom of the Press Foundation, "Tails puts the essential tools in one place, with a design that makes it hard to screw them up. I could not have talked to Edward Snowden without this kind of protection. I wish I’d had it years ago."

For a more detailed analysis of this catastrophic bug, see this update, which went live about 18 hours after Ars published this initial post.

Researchers have discovered an extremely critical defect in the cryptographic software library that an estimated two-thirds of Web servers use to identify themselves to end users and to prevent eavesdropping on passwords, banking credentials, and other sensitive data.

The warning about the bug in OpenSSL coincided with the release of version 1.0.1g of the open-source program, which is the default cryptographic library used in the Apache and nginx Web server applications, as well as a wide variety of operating systems and e-mail and instant-messaging clients. The bug, which has resided in production versions of OpenSSL for more than two years, could make it possible for people to recover the private encryption key at the heart of the digital certificates used to authenticate Internet servers and to encrypt data traveling between them and end users. Attacks leave no traces in server logs, so there’s no way of knowing if the bug has been actively exploited. Still, the risk is extraordinary, given the ability to disclose keys, passwords, and other credentials that could be used in future compromises.

"Bugs in single software or library come and go and are fixed by new versions," the researchers who discovered the vulnerability wrote in a blog post published Monday. "However this bug has left a large amount of private keys and other secrets exposed to the Internet. Considering the long exposure, ease of exploitations and attacks leaving no trace this exposure should be taken seriously."

The researchers, who work at Google and software security firm Codenomicon, said even after vulnerable websites install the OpenSSL patch, they may still remain vulnerable to attacks. The risk stems from the possibility that attackers already exploited the vulnerability to recover the private key of the digital certificate, passwords used to administer the sites, or authentication cookies and similar credentials used to validate users to restricted parts of a website. Fully recovering from the two-year-long vulnerability may also require revoking any exposed keys, reissuing new keys, and invalidating all session keys and session cookies. Members of the Tor anonymity project have a brief write-up of the bug here, and this analysis provides useful technical details.

OpenSSL is by far the Internet’s most popular open-source cryptographic library and TLS implementation. It is the default encryption engine for Apache and nginx, which according to Netcraft together run 66 percent of websites. OpenSSL also ships in a wide variety of operating systems and applications, including the Debian Wheezy, Ubuntu, CentOS, Fedora, and openSUSE Linux distributions, as well as OpenBSD and FreeBSD. The missing bounds check in the handling of the Transport Layer Security (TLS) heartbeat extension affects OpenSSL 1.0.1 through 1.0.1f.
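As a rough illustration of that affected range, a version check might look like the following Python sketch. The function name and the assumption that version strings look like "1.0.1f" are mine, not part of OpenSSL's tooling:

```python
def is_heartbleed_vulnerable(version: str) -> bool:
    """Illustrative check: OpenSSL 1.0.1 through 1.0.1f are affected;
    1.0.1g ships the fix. Assumes version strings like "1.0.1f"."""
    if not version.startswith("1.0.1"):
        return False  # only the 1.0.1 series carries the bug
    suffix = version[len("1.0.1"):]
    # plain "1.0.1" or letters a-f are vulnerable; "g" and later are patched
    return suffix == "" or suffix <= "f"
```

In practice, distributions also backported the fix without bumping the letter suffix, so a string comparison like this is only a first approximation.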

The bug, which is officially referenced as CVE-2014-0160, makes it possible for attackers to recover up to 64 kilobytes of memory from the server or client computer running a vulnerable OpenSSL version. Nick Sullivan, a systems engineer at CloudFlare, a content delivery network that patched the OpenSSL vulnerability last week, said his company is still evaluating the likelihood that private keys appeared in memory and were recovered by attackers who knew how to exploit the flaw before the disclosure. Based on the results of the assessment, the company may decide to replace its underlying TLS certificate or take other actions, he said.
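The mechanics of that over-read can be sketched in a few lines. This is a deliberately simplified Python model, not OpenSSL's actual C code: the real flaw was a missing bounds check in the TLS heartbeat handler, and the buffer contents below are invented.

```python
SECRETS = b"-----BEGIN PRIVATE KEY-----...admin:hunter2"  # stand-in for adjacent process memory

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    memory = payload + SECRETS  # the request buffer sits next to other heap data
    # BUG: the attacker-supplied length is trusted instead of len(payload),
    # so data beyond the payload (up to 64 KB in the real bug) is echoed back.
    return memory[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # FIX: reject heartbeats whose claimed length exceeds the actual payload.
    if claimed_len > len(payload):
        raise ValueError("heartbeat length exceeds payload; request dropped")
    return (payload + SECRETS)[:claimed_len]
```

An attacker simply sends a short payload with a large claimed length and reads whatever comes back, which is why the attack leaves no trace in server logs.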

Attacking from the outside

The researchers who discovered the vulnerability, however, were less optimistic about the risks, saying the bug makes it possible for attackers to surreptitiously bypass virtually all TLS protections and to retrieve sensitive data residing in the memory of computers or servers running OpenSSL-powered software.

"We attacked ourselves from outside, without leaving a trace," they wrote. "Without using any privileged information or credentials we were able to steal from ourselves the secret keys used for our X.509 certificates, user names and passwords, instant messages, emails and business critical documents and communication."

They called on white-hat hackers to set up "honeypots" of vulnerable TLS servers designed to entrap attackers in an attempt to see if the bug is being actively exploited in the wild. The researchers have dubbed the vulnerability Heartbleed because the underlying bug resides in the OpenSSL implementation of the TLS heartbeat extension as described in RFC 6520 of the Internet Engineering Task Force.

Five-year-old Kristoffer’s father, Robert Davies, started noticing that his son was logging into his Xbox Live account and playing video games that were off-limits. When prompted for a password, Kristoffer would enter a series of spaces and hit enter, gaining access to his father’s account.

"I was like yea!" Kristoffer told KGTV-10, a CNN affiliate, after breaking into his dad’s account.

Glee quickly turned to panic as it dawned on Kristoffer that his father might find out what he had done. Instead of being angry, though, Davies was intrigued, as he himself works in online security.

"How awesome is that?" Davies said. "Just being five years old and being able to find a vulnerability and latch on to that. I thought that was pretty cool."

After Kristoffer showed his father what he did, Davies reported the issue to Microsoft.

"We’re always listening to our customers and thank them for bringing issues to our attention," Microsoft said in a statement to KGTV-10. "We take security seriously at Xbox and fixed the issue as soon as we learned about it."

According to KGTV-10, Microsoft will give Kristoffer four games, a $50 gift card and a year-long subscription to Xbox Live.
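Microsoft hasn't published the root cause, so the following Python sketch is purely hypothetical: one way a run of spaces can defeat a password prompt is a checker that trims the input and then special-cases the resulting empty string. All names here are invented for illustration.

```python
import hashlib

def hash_password(pw: str) -> str:
    # Illustration only; a real system should use a salted KDF such as bcrypt.
    return hashlib.sha256(pw.encode()).hexdigest()

def check_password_buggy(entered: str, stored_hash: str) -> bool:
    entered = entered.strip()
    if not entered:
        return True  # HYPOTHETICAL BUG: a blank (all-spaces) entry is waved through
    return hash_password(entered) == stored_hash

def check_password_fixed(entered: str, stored_hash: str) -> bool:
    # Compare the raw input; never special-case empty or whitespace-only strings.
    return hash_password(entered) == stored_hash
```

With the buggy checker, typing nothing but spaces authenticates regardless of the stored password.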

The latest tests from NSS Labs showed IE with a 99.9 percent block rate for what the security tester calls socially engineered malware (SEM). Chrome had a rate of 70.7 percent while Firefox and Safari hovered around 4 percent.

In general, SEM includes all malware that a computer user is tricked into downloading on the Web through a malicious link in an email, instant message or other vehicle. Malware delivered as an email attachment is excluded.

Microsoft and Google use a combination of application reputation technology and URL filtering in detecting malware. The difference is Microsoft relies more on URL filtering, while Google does the opposite.

"They both use the same approaches, but the recipes are different," Randy Abrams, research director at NSS Labs, said.

The low rates of Firefox and Safari are due to the browsers only using Google’s URL filtering through its Safe Browsing service available to application developers, Abrams said. Neither browser uses an application reputation system, which scans all downloads for attributes that indicate malware.
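The two approaches can be contrasted in a toy Python sketch. The blocklist entry and the reputation rule below are invented for illustration and bear no relation to Google's or Microsoft's actual heuristics.

```python
# URL filtering: block downloads whose source is on a known-bad list.
BLOCKED_URLS = {"http://malware.example/installer.exe"}  # invented entry

def url_filter_blocks(url: str) -> bool:
    return url in BLOCKED_URLS

# Application reputation: judge the downloaded file itself, so even a
# never-before-seen URL can be stopped if the file looks suspicious.
def reputation_blocks(filename: str, is_signed: bool, times_seen: int) -> bool:
    is_executable = filename.lower().endswith((".exe", ".msi"))
    return is_executable and not is_signed and times_seen < 10
```

The trade-off follows from the sketch: URL filtering misses malware hosted on fresh domains, while a reputation system can flag an unsigned, rarely-seen executable no matter where it came from.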

Chrome’s latest block rate was substantially lower than the previous NSS Labs test, when the browser’s score was 83.17 percent. Abrams did not know the reason for the significant drop, but suggested two possibilities.

Google might have lowered the aggressiveness of its application reputation system, if it was preventing too many legitimate applications from being downloaded. Another possibility is hackers have profiled how the system works and have figured out a way to game the system.

Google did not respond to a request for comment.

NSS Labs also tested three leading browsers from China. The Liebao Browser, developed by anti-virus vendor Kingsoft, came in second behind IE with a block rate of 85.1 percent.

Liebao does not use application reputation technology. Instead, Kingsoft depends on its cloud-based malware detection system to scan all downloads.

Liebao surpassing Chrome is unexpected because most browser makers have turned toward application reputation, also called content-agnostic malware protection (CAMP), because it is believed to be the most effective.

Target might have been a tad negligent when it came to observing its security systems last year, according to a Thursday Businessweek report.

Months before hackers stole 40 million payment cards, among heaps of other information, at the end of 2013, the retail giant installed a $1.6 million malware detection system from security company FireEye that later picked up on the attackers’ suspicious activity – on multiple occasions.

Interviewing more than 10 former Target employees familiar with the company’s security, and eight people with knowledge of the attack, Businessweek learned of an alert system that worked exactly as designed; the problem was that nobody acted on its warnings.

When asked to explain Target’s lack of response to those alerts, the company emailed Businessweek a variation of a media statement from CEO Gregg Steinhafel. It begins by explaining that Target had been certified as meeting payment card industry (PCI) standards, then describes what Target has done since the breach.

In a Thursday email, Eric Chiu, president and co-founder of HyTrust, told SCMagazine.com that not responding to these types of alarms is shocking, but not so surprising for a company that, at the time, had security fairly low on its list of priorities.

“We often see organizations ignoring alarms like this because they’ve become numb to them, receiving too many false positives, or because they’re understaffed,” Chiu said. “You can have all the alarms you want, but unless you put security in a prominent position in the company and have enough staff to review them, those alarms don’t mean anything.”

Joe Schumacher, security consultant for Neohapsis, offered other reasons to SCMagazine.com in a Thursday email.

“I don’t think it is about not paying attention to the technologies as much as fine tuning for actionable, relevant information from the technology,” Schumacher said. “Many security systems (e.g. Web application firewall, log monitoring, Intrusion Detection/Prevention Systems, etc.) correlate large amounts of data into a single repository. Unfortunately, a lot of companies and professional services stop here.”