Streamlined blogging platform Medium rolled out a new login process Monday that throws the trusty old password out the window. Instead, you simply enter an email address or phone number, and a temporary login link lands in your inbox or phone—just like password reset or account verification links used by sites when you first sign up.

"Passwords are neither secure nor simple," writes Medium's Jamie Talbot, summing up a sentiment that has been picking up steam lately. "They're hard to remember or easy to guess, everyone reuses them (even though they know they shouldn't), and they’re a pain to type on mobile. They don't even keep you that safe."

For being gatekeepers (or bouncers) for our online accounts, they're inordinately vulnerable. They can be "brute-forced" through trial and error, teased out of you with a cleverly worded email or IM message, used to access numerous accounts—thanks to our insistence on reusing the same ones over and over—and easily leaked onto the Web. Put another way, they don't do a good job of proving that you are who you say you are and keeping everyone else out.

That's precisely why companies are hot to ditch passwords and find another way to protect our online accounts—like temporary, auto-generated links or tokens.

The Trouble With Passwords

Password safeguards essentially all work the same way: If someone gets hold of that alphanumeric word or code, your account is theirs until you notice and swap it out. But that delay can be costly (in more ways than one).

Preventing that nightmare scenario has become a core business for companies like Dashlane, 1Password and LastPass, which manage and hide the bevy of logins in a user's life behind one secure master password. But these businesses may have to brace themselves, as auto-generated tokens and hyperlinks aim to nix their bread and butter.

Unlike passwords, those temporary links or codes don't work in perpetuity. They slam the door closed on access after a single use, a set period of time, or often both. And apps and services send them directly to the most convenient receptacles available to you—your email inbox or smartphone.

This approach may seem old-fashioned, particularly when contrasted with newfangled login protocols like face detection, voice authentication and other biometric security, or even creative variations, like emoji passwords.

What the messaging process has going for it, though, is that it's cheap and easy to implement. And since brute-forcing a token or URL string would be impractical, if not nearly impossible, the system would remove some important points of potential vulnerability.

At least some, anyway.

A Token Effort

The new Medium login screen.

Medium isn't alone in adopting this rather old-school, simple security alternative. Passwordless, for example, is middleware for Express and Node.js that uses a similar token-based system: Instead of entering some sort of "open sesame," the keys to your account land in a (supposedly) secure email address or mobile number.
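
The mechanics are simple enough to sketch. Below is a minimal token-login flow in TypeScript with Express—a sketch of the general pattern, not Passwordless's actual API; the in-memory token store and mail helper are stand-ins for real infrastructure.

```typescript
import express from "express";
import crypto from "crypto";

const app = express();
app.use(express.json());

// Outstanding tokens, kept in memory for this sketch; a real service would
// store them (hashed) in a database.
const tokens = new Map<string, { email: string; expires: number }>();

// Stand-in for a real mail or SMS service.
const sendMail = (to: string, link: string) => console.log(`To ${to}: ${link}`);

app.post("/login", (req, res) => {
  const email = String(req.body.email);
  const token = crypto.randomBytes(32).toString("hex"); // infeasible to guess
  tokens.set(token, { email, expires: Date.now() + 15 * 60 * 1000 }); // 15 min
  sendMail(email, `https://example.com/auth?token=${token}`);
  res.send("Check your inbox for a login link.");
});

app.get("/auth", (req, res) => {
  const token = String(req.query.token);
  const entry = tokens.get(token);
  tokens.delete(token); // single use: the link dies the moment it's followed
  if (!entry || entry.expires < Date.now()) {
    res.status(401).send("Link expired or invalid.");
    return;
  }
  res.send(`Logged in as ${entry.email}`); // a real app would start a session here
});

app.listen(3000);
```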

"The classic [username and password] mechanism has by default at least two attack vectors: the login page and the password recovery page." writes the Passwordless team. "Especially the latter is often implemented hurriedly and hence [is] inherently more risky."

So nixing the password could actually reduce, rather than increase, risk. In other words, if you don't have a password, no one can guess it or steal it. Your only vulnerability then is your email.

As the infamous Sony hack—which spilled a mother lode of embarrassing celebrity emails onto the Internet—taught us last year, those email accounts have security issues of their own. The temp-token approach might amplify them, given that anyone with access to your inbox could theoretically breach your Medium account too.

In practice, however, Medium's system and others like it may not actually pose any greater threat. Temp tokens and links expire quickly, and the process itself mimics the password-reset links that Medium and many other services already email or text regularly.

Not that the new password-free systems are hackproof—there's no such thing—but taking everything into account, they could be a step forward from the username-and-password combination we've relied on for so long. At minimum, they appear to be an easy, cost-effective way to remove at least some of the potential vulnerabilities.

When Will We Lap The Old Login?

Of course, password managers are quick to defend the old password system and their efforts at dealing with it.

LastPass representatives were keen to point out to ReadWrite the reduced friction, faster response time and extra privacy you get from its service. (Gmail, for instance, can't see which services you're using or send your login emails to spam.)

Ultimately, password wranglers may be but a band-aid for the flawed approach to authentication we're still stuck with (for now). Eventually, biometric and even behavioral solutions will become more commonplace, and fingerprint or iris scanners—which have already infiltrated some mobile devices—will land on every phone and computer keyboard.

In the meantime, we're going to need other ways to defend ourselves and our data—perhaps including password-less alternatives like Medium's. Only when better options become available on a wider scale can we leave the old ways of logging in behind for good.

Recently, a LastPass security hole that spilled email addresses and password reminders has put the spotlight back on login security. Now Intelligent Environments, a United Kingdom-based technology firm, thinks it has the key that could help lock down those passwords more tightly: emoji.

The firm announced its "Emoji Passcode" tool Monday based on a simple premise: the large and growing range of emoji offers many more choices than numerical digits, making it harder for hackers to "crack" PINs or passcodes and gain access to users' accounts.

Intelligent Environments has begun pushing the concept to banks, starting in the UK, hoping they will adopt this picture-friendly form of authentication for their customers. If the company succeeds, the concept's leap to other scenarios—think smartphone logins, secure apps and other online services—may not be far behind.

A Picture Is Worth A Thousand PINs

The use of emoji—the image-based characters that sprang out of Japan to take over the world's messaging—may seem like a naive, overly simplistic approach to the complex, frustrating problem of keeping intruders out of our confidential accounts.

Then again, some financial institutions and services already use images in some form for authentication, so a variation on the theme may not be all that foreign a concept.

Part of Paychex's employee login process

Experts will tell you that there's no such thing as perfect security, but easy, low-cost tactics may still be worth considering. If nothing else, at least they can put more obstacles in front of attackers. Every little bit helps—even if it only adds complication to tactics like "brute force" (a trial-and-error method performed by software, which can crank through a massive volume of guesses at high speeds).

The commonly used four-digit PIN allows only 7,290 possible combinations by Intelligent Environments' math (a count that assumes no digit is used twice in a row). That's not very high from a security standpoint: a hacker equipped with the right software could brute-force every one of those combinations without much trouble.

Intelligent Environments’ tool, however, offers 44 different emoji to choose from, providing 480 times more possible combinations, for a total of over 3.4 million. The result is a higher chance of a unique passcode that's less likely for an unauthorized user to guess—and more likely to be remembered by an authorized one.
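
Under that no-immediate-repeats assumption, the company's figures check out: 10 × 9 × 9 × 9 yields the 7,290 PINs, while 44 × 43 × 43 × 43 yields roughly 3.5 million emoji passcodes—about 480 times as many. A quick sketch of the arithmetic:

```typescript
// Codes of a given length in which no character immediately repeats: the full
// alphabet is available for the first slot, everything but the previous
// character for each slot after it.
const codes = (alphabet: number, length: number): number =>
  alphabet * (alphabet - 1) ** (length - 1);

console.log(codes(10, 4)); // 7290 four-digit PINs
console.log(codes(44, 4)); // 3498308 four-emoji passcodes
console.log(Math.round(codes(44, 4) / codes(10, 4))); // 480
```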

Total Recall

On its website, the firm cited "memory expert" Tony Buzan, the inventor of the Mind Map technique, who says that humans have an “extraordinary ability to remember pictures, which is anchored in our evolutionary history.” Because emoji rely on pictures rather than the comparatively abstract numbers and letters we’ve been using for the last several decades, they are more suited to how our brains work.

Also, unlike with set alphanumeric possibilities, emoji character possibilities can expand endlessly, as Apple recently highlighted by unveiling a slew of new ethnically diverse ideograms.

Emoji passcodes are probably an improvement over today's standard PINs, but it's not entirely clear they're any more secure than other alternatives. Using a string of words, instead of a mix of different symbols or characters, could also make passwords harder to guess—and it wouldn't require any special tools to implement.

Courtesy of xkcd.com

Today The Banks, Tomorrow The World

Although no banks have actually signed on for Intelligent Environments' emoji passcode tool yet—the firm says it’s “in discussion” with a few—one thing seems clear: There's a pressing need for greater security in all sorts of scenarios, from banking to emails, social networks and other online services.

Numerous alternative approaches are already pushing forward, from fingerprint and retina scanning to voice biometrics.

But not all devices or developers are equipped to support those technologies, which is where the use of alternative passwords—whether by using strings of words or emoji—can help.

Lead image and above image courtesy of Intelligent Environments; comic courtesy of xkcd; screen captures of image authentication by Adriana Lee for ReadWrite

Yesterday, my wife's Gmail account was hacked. Eventually I recovered it for her, but it took over an hour and veered dangerously close to being irretrievable. If the same thing were to happen to a corporate account within your company, the consequences could be far more painful than an hour of someone's time.

Which is why your business should do exactly what I did after I'd reclaimed my wife's account: I set up two-factor authentication (2FA).

Two-factor authentication is the use of something besides just a username and password to identify you. Typically, it's a code sent by text message or generated through an app on your phone, but it could also be biometrics—think of the way Apple's Touch ID sensor on an iPhone authenticates payments for Apple Pay.

It took me just a few minutes to set up 2FA on my wife's Gmail account. Is it much harder for businesses? To answer that question I turned to Steve Manzuik, director of Security Research at Duo Security.

ReadWrite: As in the case of my wife, security is often something companies address only after a breach. Recent examples of companies that added two-factor in the wake of a mega-breach include Apple (celebrity photos stolen from iCloud), Bitly, Evernote, and even the investment bank JPMorgan. While these companies should be applauded for applying two-factor after the fact, how can we convince companies to prepare proactively?

Manzuik: There are a few reasons for companies to take this seriously, some obvious, and some not so obvious.

First, your company’s board is not going to blame the CIO or CISO [chief information security officer] for the breach. They’re going to blame the CEO.

Second, the security industry is notoriously complicated and expensive. Very savvy companies like JPMorgan are investing a quarter of a billion dollars a year—doubling over the next five years to half a billion dollars—to block future breaches when there’s little data to support the actual value of these expensive services and products. What are the rest of us to do?

Third, most breaches happen not because of sophisticated cybercriminals burrowing into companies in complex ways, but rather because of lost or stolen employee credentials, according to the annual Verizon Data Breach Investigations Report. Yes, it’s almost certain that someone in your company is using 123456 as his password.

Bet on it.

RW: OK. So walk our readers through how two-factor works.

SM: Two-factor authentication stops easy access with stolen credentials by requiring a second level of authentication after the user enters their username and password. Since a password is something that a user knows, ensuring that the user also needs to have something else to log in thwarts attackers.

In the past, this second factor of authentication could have been a token with a numerical code, a smart card, or a text message sent to your phone.

Modern two-factor authentication takes advantage of push technology found on smartphones to let users authenticate with the tap of a finger, much like hailing an Uber ride from your phone. (The same Verizon breach report I noted before points out that your smartphone poses a “negligible” threat for cybercriminals to exploit.)

By requiring a second factor of authentication after the password, two-factor can prevent attackers from accessing your systems with passwords captured with a spear phishing email (phony email that looks like it came from your bank, for example).

It can also mitigate the damage from many other attacks by making it difficult for cybercriminals to use login credentials that are harvested through other means, such as malware.

In effect, two-factor means you will be notified any time hackers try to log in, no matter how they stole your credentials, so you can take immediate steps to protect yourself from any further damage.
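
For what it's worth, the numeric codes generated by authenticator apps are typically time-based one-time passwords (TOTP, standardized in RFC 6238): the phone and the server share a secret, and each independently derives a short code from the current 30-second time window. A minimal sketch of the computation—illustrative, not any vendor's implementation:

```typescript
import crypto from "crypto";

// RFC 6238 TOTP in brief: HMAC the current 30-second window counter with the
// shared secret, then dynamically truncate the result to six digits.
function totp(secret: Buffer, now = Date.now()): string {
  const counter = Buffer.alloc(8);
  counter.writeBigUInt64BE(BigInt(Math.floor(now / 1000 / 30)));
  const hmac = crypto.createHmac("sha1", secret).update(counter).digest();
  const offset = hmac[hmac.length - 1] & 0x0f; // dynamic truncation
  const code = (hmac.readUInt32BE(offset) & 0x7fffffff) % 1_000_000;
  return code.toString().padStart(6, "0");
}

// Both sides run the same function; a login succeeds only if the codes match.
console.log(totp(Buffer.from("12345678901234567890")));
```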

RW: I get it, because I've seen it work. But what are the primary selling points for two-factor for a business?

SM: Let me name three.

First, two-factor requires little user education.

Too often, implementing security solutions requires employees to perform unnatural acts in the workplace. The “solution” imposes unrealistic expectations on people trying to get their work done. Security should be designed to function in a frictionless way so employees don’t notice it.

Complex solutions drive employees to not participate or, worse, find ways around the systems supposedly implemented to protect them. This, of course, decreases the overall security of an environment.

Complexity is the enemy of security.

A properly designed two-factor solution requires minimal interaction with employees and seamlessly integrates into day-to-day activities without annoying everyone every day.

Second, with two-factor, no IT admin training is required. There are no complex IT processes to implement.

Most security solutions come with the overhead of installing and configuring systems just to monitor and manage the solution, not to mention budgeting for expensive outside experts to provide ongoing maintenance, monitoring, and customization of that solution.

In sum, organizations are forced to hire additional internal security team members and invest tons of money in employee training just to run a solution that’s overly complicated, probably ineffective and most likely outdated within a year.

Modern two-factor systems do not require specialized training for employees or expensive consultants to implement. In addition, two-factor is more than a passing security-technology fad. It’s been a security best practice for decades. It’s future-proof.

Lastly, two-factor simplifies password policies.

In a failed attempt to prevent passwords from being easily guessed, the security industry rushed to implement a standard protocol for strong passwords.

Over the years, the protocol has called for ever more complicated passwords. Today the average user not only struggles to create what we call a “strong password” but also has no hope of actually remembering it.

How do employees typically react? Just like you, most people write their “strong password” down and leave it in plain sight or they re-use passwords across multiple websites for convenience. That way they only need to remember a single password, making it much easier for cybercriminals to wreak havoc.

The cycle continues. But it doesn’t have to.

Why are companies turning to two-factor after they’re breached? It’s simple, it’s affordable, and it blocks the majority of attackers from accessing your company’s valuable data.

But it's better to be smart before the breach and see if two-factor makes sense for your company. After 20 years in the business of battling hackers, I don’t think there is any better bang for your security buck.

Android users may soon be able to ride herd on their apps with fine-grained permission controls, Bloomberg sources say. If so, it's about time.

According to "people familiar with the matter,” Google will let people cherry-pick the data that mobile apps can jack into. In other words, those smartphone and tablet users could stop an app from gleaning contacts and location, but let it pull from, say, their photos.

Google won’t confirm or deny the rumor, but it would make plenty of sense. Currently, when an app asks for access to different types of data, usually when you first install it, your only option is to allow them all or to punt on installation altogether.

Greater user control has been a key reason for some people to consider alternatives like CyanogenMod, which makes an App Ops-like setting standard in its version of Android. Privacy concerns also fuel companies like security-centric Blackphone, which has also modified Android to give users more fine-grained controls. Now it appears Google may have taken up its App Ops tool once more.

The timing may help substantiate the rumor. Google I/O, the company’s annual developer conference, takes place later this month. The agenda covers a large array of technologies—including Android for Work and some sort of new moonshot wearable, not to mention an early look at the next evolution of Android, among other things.

With the growing number of users, data and gadgets on Google’s plate, the company likely saw no choice but to ditch its wholesale approach to permissions now. People using Android devices at work or wearing them on their bodies wouldn’t want apps pilfering extra information just to install a photo or game app.

Consider it part of Google’s push for tighter security in Android. Whatever the company’s reasoning, it’s long overdue.

If you're a WordPress user, you'll want to update your site with a critical security release. That's because a new zero-day vulnerability, discovered by Jouko Pynnönen of the Finnish security firm Klikki Oy, allows attackers to gain administrative control of WordPress sites.

The exploit, known as a cross-site scripting (XSS) bug, involves leaving a long comment (over 64KB) laced with malicious JavaScript that a logged-in administrator can trigger simply by viewing the comment. Bad things can then happen, according to Klikki Oy:

If triggered by a logged-in administrator, under default settings the attacker can leverage the vulnerability to execute arbitrary code on the server via the plugin and theme editors.

Alternatively the attacker could change the administrator’s password, create new administrator accounts, or do whatever else the currently logged-in administrator can do on the target system.
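
The trick, as Klikki Oy described it, is that WordPress sanitizes the comment as submitted, but MySQL's 64KB TEXT column silently truncates it on storage, leaving malformed markup behind. Schematically—a hypothetical sketch, not the actual payload:

```typescript
// A comment built so the 64KB cutoff lands inside a quoted attribute of an
// allowed tag. The submitted string passes the sanitizer intact.
const comment =
  "<a title='x onmouseover=alert(1) " + "A".repeat(70_000) + "'></a>";

// What MySQL's TEXT column actually keeps: the first 65,535 bytes.
const stored = comment.slice(0, 65_535);

console.log(stored.endsWith("'></a>")); // false: the closing quote and tag are
// gone, so when WordPress later renders the broken markup, the event handler
// can become live HTML that fires in a logged-in administrator's browser.
```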

According to Klikki Oy, another security researcher, Cedric Van Bockhaven, reported a similar WordPress flaw in 2014, although it was only patched this week.

Matt Mullenweg, who is both the lead developer of WordPress and founder and CEO of its parent company Automattic, released the following official statement by email (no link):

It is a core issue, but the number of sites vulnerable is much smaller than you may think because the vast majority of WordPress-powered sites run [the anti-spam plugin] Akismet, which blocks this attack.

However, many WordPress-powered sites do not run Akismet, which now costs $5 to $9 a month for commercial sites and $50 a month for enterprise sites. (Automattic did not immediately respond to a request for the percentage of users who run the plugin.)

[Update: Mullenweg stated in an April 28 email that the number of Akismet users is "more than it has ever been, and [we] can say it’s the vast majority of WP sites."]

WordPress is pushing out the security patch via auto-update, so that will protect many users—at least those who have auto-update enabled—even if they don’t use Akismet.

Google has its own two-factor authentication (2FA), so why is its venture arm investing in a company that sells 2FA services?

Because, it turns out, enterprise security is a really big deal, according to Google Ventures partner Karim Faris. Faris has been hammering this enterprise security theme for several years now, leading Google to invest in ThreatStream, Ionic Security, Shape Security, and Duo Security, a 2FA company that now has over 5,000 customers, including Box, Facebook, NASA, Toyota, and Twitter.

This focus on security—particularly things like 2FA that make it somewhat simple for end-users—is critical. (2FA typically requires a user to log in with both a password and a secondary authorization code, often delivered via text message or a small electronic gadget.) Studies, like this one from Aruba Networks, keep showing that enterprise users mostly don't care about securing enterprise data.

Karim Faris

Just a few short years ago, Google Inc. had zero interest in the enterprise; now it factors heavily in enterprise discussions around cloud, apps, storage, and more. So on the eve of Duo Security's $30 million Series C raise, led by Redpoint Ventures and joined by Google Ventures, I talked with Faris about Google's interest in enterprise security.

More Cybercrime, More Cybersecurity

ReadWrite: Google Ventures’ interest in enterprise security startups seems to have grown. What is changing in the market to make info security more attractive to you now?

Faris: We look to invest in companies that are working on innovative ways to tackle security challenges, while optimizing usability. In addition to Duo, we’ve invested in companies like ThreatStream, Ionic Security, Shape Security, and Synack.

Security has always been an important topic and has garnered increasing attention as more vectors of attack materialize that cybercriminals can exploit. We used to be able to protect companies by having a hard perimeter around physical networks that was protected by traditional defenses like firewalls. But you can no longer solely rely on that with the rise of cloud and mobility services, as well as people bringing in their own devices.

That additional exposure makes enterprises more vulnerable and is fueling the need for new security innovation, which creates investment opportunity.

RW: What did you like about Duo Security?

KF: We liked a lot of things: the strength of the team, the passion of their rapidly growing user base, and the depth of the technology. Two-factor authentication gets you a lot of bang for the security buck and is something everyone should consider. If you have a fortress to keep safe, the first thing you do is protect the gates. Duo makes it incredibly easy to deploy and use. They started by guarding the gates, and now they are building a moat.

Factor This

RW: You mentioned that in your original due diligence process you discovered many companies were adopting Duo and, by extension, 2FA. Why is 2FA so important to enforcing enterprise security?

KF: Enterprises historically have always had to find the right balance between adequate protection and usability. If the CISO wanted to enforce security policies, that often came at the expense of a poor user experience and meaningful workflow disruption, which directly impacted productivity.

In the case of two-factor authentication, hard or soft token implementations have not attracted many fans, whether it’s the idea of carrying another piece of plastic on your keychain or entering a one-time password every time you log in. Duo figured out how to make that process seamless and more secure at the same time, while reducing the operational load on the enterprise. That led to impressive user adoption.

RW: You said Duo started by protecting the gate of the fortress. How is this best done?

KF: To be effective, you need to let IT teams easily define rules on who can access which applications and automate the enforcement of those rules. Doing so enables real-time detection and prevention of potentially malicious attempts to access applications from anywhere, whether those applications live on premises or in the cloud.

One reason I like Duo is that it analyzes the context of a user’s behavior, location, security health of the device and the reputation of the IP address in real-time to enforce these rules. This allows more effective security without inconveniencing users.

This is critical. CISOs get insight into the security health of endpoints like Macs, Windows PCs, iOS and Android devices, without installing agents. They can identify users with devices that are out of compliance with policy and enforce restrictions on how these devices are used at work, keeping an enterprise current and safe.

Google is taking the unusual step of updating Chrome to effectively exile from the Web a Chinese firm tasked with vouching for the identity of websites. Google's move against the China Internet Network Information Center, or CNNIC, comes after the Chinese organization allowed an Egyptian firm to issue fake certificates for Google domains.

Mozilla subsequently followed suit with its own CNNIC blackout in its Firefox browser, although it will apply only to certificates issued after April 1, 2015.

Certificate authorities like CNNIC provide the crucial service of verifying that the website you've connected to is in fact who it says it is. They do so by issuing digital certificates to sites that browsers can check to ensure that you've connected, for instance, to your bank and not an imposter site that can harvest your password and other details. This process is largely invisible to the average Web user, but it underpins the workings of the modern Web.
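
You can watch that verification happen yourself. Here's a minimal sketch using Node's built-in tls module; the hostname is just an example:

```typescript
import tls from "tls";

// Open a TLS connection and inspect the certificate the server presents.
// Node, like a browser, validates the chain against a bundled set of trusted
// root certificate authorities.
const socket = tls.connect(443, "www.google.com", { servername: "www.google.com" }, () => {
  const cert = socket.getPeerCertificate();
  console.log("subject:", cert.subject.CN); // who the certificate vouches for
  console.log("issuer:", cert.issuer.CN);   // the certificate authority
  console.log("trusted:", socket.authorized); // false if the chain fails to verify
  socket.end();
});
```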

Google and Mozilla said CNNIC delegated certificate authority to the Egypt-based intermediary MCS Holdings, which in turn issued the fake certificates for Google sites and installed them in "man in the middle" proxy software that could be used to snoop, undetected, on user email, chat and other communications via Google services.

Google security engineer Adam Langley said it was "a serious breach of the certificate authority system" and confirmed that CNNIC will no longer be trusted in an upcoming Chrome update.

Google didn't provide a timeframe for that update, in order to allow website owners the chance to switch to a different certificate authority. Microsoft has also hinted that it will put a similar ban in place with Internet Explorer.

For its part, CNNIC claims the certificate was intended for testing and was installed on the wrong server due to a human error by MCS Holdings. In its official statement, Google admits this explanation "is congruent with the facts" but says "CNNIC still delegated their substantial authority to an organization that was not fit to hold it." Mozilla likewise called CNNIC's action an "egregious practice" that violated its policies on the proper handling and use of certificates.

Google As Gatekeeper

It's the latest example of Google throwing its substantial weight around in policing the Web—even when its intentions are good, the Mountain View firm carries an almost unstoppable level of clout in making decisions about security and fraud on the Internet, and that means the average Web user is essentially at the whim of Google's choices.

In a statement posted online, CNNIC called Google's decision "unacceptable and unintelligible." It went on to say, "CNNIC sincerely urge that Google would take users' rights and interests into full consideration." CNNIC's concern is that users will find themselves unfairly locked out of email sites, banking portals and other secured domains verified by the firm.

This tone seems at odds with the diplomatic one used by Google, with Langley hinting that everything could eventually return to normal: "We applaud CNNIC on their proactive steps, and welcome them to reapply once suitable technical and procedural controls are in place." That would be likely to take a significant amount of time, however.

After this particular kerfuffle has died down, the incident is unlikely to register on the radar of the average Gmail or Google Drive user—indeed, you need a high level of technical knowledge to even understand what's happened. Nevertheless, it's a reminder of the need to keep our online guardians under close scrutiny while they make decisions on our behalf.

If you've been relying on password meters to determine how strong your passwords are, we've got some bad news. Their strength measurements are highly inconsistent and may even be leading you astray, according to a new study from researchers at Concordia University:

In our large-scale empirical analysis, it is evident that the commonly-used meters are highly inconsistent, fail to provide coherent feedback, and sometimes provide strength measurements that are blatantly misleading.

Researchers Xavier de Carné de Carnavalet and Mohammad Mannan evaluated the password strength meters used by a selection of popular websites and password managers. The sites surveyed included Apple, Dropbox, Drupal, Google, eBay, Microsoft, PayPal, Skype, Tencent QQ, Twitter, Yahoo and the Russia-based email provider Yandex Mail; the researchers also looked at popular password managers including LastPass, 1Password, and KeePass. They added FedEx and the China Railway customer-service center site for diversity.

De Carné de Carnavalet and Mannan then assembled a list of close to 9.5 million passwords from publicly available dictionaries, including lists from real-life password leaks, and ran them through those services to see what kind of job their password-strength meters were doing.

Ineffective Rules

Password strength meters typically checked for length and a variety of character sets (upper- and lowercase letters, numbers, and symbols). Some also tried to detect common words or weak patterns.

However, the strength meters that looked at password composition often ignored other easy-to-crack patterns, and didn't take "Leet" transformations—which replace the letter l with the number 1, for example—into account. Hackers, of course, often try these variations when trying to crack passwords.
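
A toy version of such a meter shows the problem. This composition-only checker—a deliberately naive sketch—awards a perfect score to a trivial Leet spelling of one of the world's most common passwords:

```typescript
// Counts length and character classes, the way the criticized meters do,
// with no dictionary or pattern checks at all.
function naiveScore(pw: string): number {
  let score = pw.length >= 8 ? 1 : 0;
  if (/[a-z]/.test(pw)) score++;
  if (/[A-Z]/.test(pw)) score++;
  if (/[0-9]/.test(pw)) score++;
  if (/[^a-zA-Z0-9]/.test(pw)) score++;
  return score; // 0 (worst) to 5 (best)
}

console.log(naiveScore("P@ssw0rd1")); // 5, yet any cracking tool guesses it
```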

Inconsistent Results

Confusingly enough, nearly identical passwords produced very different outcomes. For example, Paypal01 was considered poor by Skype’s standards but strong by PayPal’s. Password1 was considered very weak by Dropbox but very strong by Yahoo!, and received three different scores from three different Microsoft checkers (strong, weak, and medium). The password #football1 was likewise rated very weak by Dropbox, but Twitter rated it perfect.

In some cases, minor variations changed the assessment as well due to an overemphasis on minimum requirements: password$1 was correctly assigned very weak by FedEx, but it considered Password$1 very strong. Yahoo considered qwerty to be a weak password, but qwerty1 was strong.

Similar problems emerged with Google, which found password0 weak, but password0+ strong. False negatives turned up as well—FedEx considered +ˆv16#5{]( a very weak password, apparently because it contains no capital letters.

"Some meters are so weak and incoherent (e.g., Yahoo! and Yandex) that one may wonder what purpose they may serve," the researchers wrote.

Black Boxes, Black Boxes

De Carné de Carnavalet and Mannan argue that the opacity of password checkers works to their detriment. That could also be a problem for users confused by oddly inconsistent password-strength results.

“Except Dropbox, and KeePass (to some extent), no other meters in our test set provide any publicly-available explanation of their design choices, or the logic behind their strength assignment techniques," the researchers wrote.

With the exception of Dropbox and KeePass, the password meters appeared to be designed in an ad hoc manner, and often rated weak passwords as strong. As the researchers wrote: “Dropbox’s rather simple checker is quite effective in analyzing passwords, and is possibly a step towards the right direction (KeePass also adopts a similar algorithm).”

De Carné de Carnavalet and Mannan recommend that popular web services adopt a commonly shared algorithm for their password strength meters. In particular, they suggest using or extending the zxcvbn algorithm used by Dropbox or the KeePass open-source implementation of it.
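
zxcvbn is open source, so adopting it is straightforward. A quick sketch of the JavaScript library's API, using the package as published on npm:

```typescript
import zxcvbn from "zxcvbn";

// zxcvbn scores a password from 0 (weakest) to 4 (strongest) by estimating
// the guesses an attacker needs, drawing on dictionaries, keyboard walks and
// common substitutions rather than composition rules alone.
for (const pw of ["Password1", "#football1", "qwerty1"]) {
  const { score, feedback } = zxcvbn(pw);
  console.log(`${pw}: score ${score}/4`, feedback.warning);
}
```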

The process is simple. Facebook added a dollar sign ($) button at the bottom of the screen, right above the keyboard. Tap the $ symbol and enter an amount. Then add your debit card, and hit send.

To accept money from a friend, you’ll open the conversation, add your debit card information, and off it goes. Facebook says the funds will be transferred "right away," although it adds that your bank may not make the money available to you for one to three business days, "just as it does with other deposits."

The service will be rolling out to U.S. users "over the coming months," the company said.

Message Your Spending To Facebook

Facebook released Messenger in 2011 as a dedicated messaging app distinct from its primary social service. While it's proven very popular—it usually tops the free-app listings in both Apple's App Store and the Google Play store—Messenger also has a secret life as a data fiend that gathers information about how people use it, when they use it, where they use it, whether they spend more time in landscape or portrait mode, and much more.

With all that in mind, using Messenger's mobile payments might reasonably give you pause on data-privacy and security grounds. Facebook appears to store your debit-card information by default, although you can remove it in Messenger settings.

Facebook presumably also stores a record of your transactions, since it's hard to imagine anyone getting comfortable with a service that wouldn't let them review who they've sent money to and whether the recipient got it.

That's another rich source of data Facebook would undoubtedly love to mine for further insights into your personal and business relationships. It's also information hackers might find useful should they compromise your account.

I reached out to Facebook for some answers, and here's what a spokeswoman told me about the security of that financial information:

We use an encryption between the consumer and Facebook at all times and encrypt all card information when it is stored. We value the trust consumers place in Facebook and take numerous precautions to prevent unauthorized access to the financial information saved on Facebook. This information is kept on secure servers with multiple layers of hardware and software protection.

Facebook hasn't yet gotten back to me on the question of what transactions it stores and how it will make use of that information. I'll update once it does. [Update: Facebook says you will be able to view your Messenger transaction records.]

Passwords have a big problem: They're not very secure, and no one likes using them. That's why you now find Web browsers, password managers, and mobile phones all trying to take some of the pain out of the process.

Today's technology is looking for hardware and software solutions, with the iPhone's Touch ID fingerprint reader perhaps the most prominent example. But there are many companies, including wearable device makers, working to push biometrics further into the mainstream.

These gadgets aim to finally rid end users of their reliance on passwords. If they succeed, we may soon see a future in which our bodies are the only authentication we will ever need—whether it's really more secure or not.

Our Bodies As Passwords

Apple's Touch ID works well ... but can be spoofed.

Other gadgets have already introduced the broader public to biometric authentication. The fingerprint-sensing technology inside the iPhone and the latest Samsung handsets is a marked improvement over a PIN code or a password. It is, however, not perfect: With enough time and effort, fingerprints can be spoofed or fooled. (We leave them everywhere we go, after all.) They're also impossible to change once an account has been compromised.

In its current state, such technology works best as a second layer of protection alongside other security measures. To spoof a fingerprint on an iPhone 6 "requires skill, patience, and a really good copy of someone's fingerprint," but it can be done, writes Marc Rogers from the Lookout security firm.

A Minority Report-style iris scanner isn't too far from reaching consumer gadgets, either: At MWC earlier this month Fujitsu showed off a prototype eye detection device that knows exactly who's looking at it by their irises, while ZTE has introduced retina-scanning technology to its smartphones.

Microsoft also significantly boosted biometric support for its upcoming Windows 10 software. The OS ships this fall with a feature called Windows Hello, which is essentially support for the Fast Identity Online (FIDO) 2.0 specification. It heralds a future in which you might, say, log into your Windows PC with a fingerprint or eye scan.

Now wearables are poised to take biometric adoption even further. Over the last year or two, we've seen wrist bands, chest straps and other gadgets riddle themselves with sensors. They filter into the consumer market at a rapid clip—you may well have a step-counting, sleep-tracking, heart rate-reading band strapped to your wrist to quantify your health and fitness levels. The data gleaned from those sensors may also offer another way of proving your identity to a website or bank machine.

Your heartbeat's rhythm is just as unique as your fingerprint, and far harder to duplicate. It's the unique key at the center of the Nymi Band from Canadian firm Bionym, which is currently in trials with a UK bank. If successful, it may offer customers secure, alternative logins someday.

For now, Nymi is still very much in the development stage. But it points to one way biometrics could confirm our identities to cash machines, computers, smartphones and door locks.

In Sweden, the high-tech Epicenter office gives staff members the opportunity to have an RFID (Radio-Frequency Identification) chip implanted in their skin. That may be wearable technology taken to the extreme—as a surgical implant—but once embedded, the chips would grant easy and secure access to any number of areas, from photocopiers to computer workstations, all with no passwords required.

There are dozens of these projects popping up, all small-scale and experimental, but all indicating the password-free future that's approaching. And low-cost, always-on electronics, combined with unique biometrics, are going to play a major role.

The Weakest Link

Is this your eye?

These kinds of systems are only as strong as their weakest link, however. Every password-protected device, app or site needs some kind of safety net—like the reset links emailed to you when you've forgotten your password. But unless that back-up measure is equally secure, every other precaution is in vain.

Dropping your heartbeat-measuring band in the river is one thing. A stranger commandeering or replacing your biometric data is another. Consider this: Associating an eyeball with your bank account may seem well and good, but only if that eye is actually yours. Criminals can't spoof your iris, but if they can reset the link and use a different iris instead, the security fails.

Next-generation safeguards, like the ones we use now, can't take an all-or-nothing approach, nor can they afford to leave the back door unlocked. There must always be some way of confirming your identity if there's a problem with the primary method of access. Today, that's anything from confirming your date of birth to having a PIN code mailed to your verified home address.

Behavioral biometrics are another option. More than a one-off identification check, they allow ongoing monitoring of your behavior, detecting everything from the way you type to the angle at which you hold your phone.

BehavioSec is one firm innovating in this area, adding an extra layer of security on top of existing measures: "a process of non-invasive, frictionless verification," in the company's words. BehavioSec talks about multi-layered security with three pillars: something you have (a phone), something you know (a PIN code), and something you are (your physical or behavioral metrics). You can see a demo of its behavioral-metrics detection system in action.

"We need to change the way we think about security–it shouldn't be a conversation of 'either, or', with any one new technology sweeping in to replace another," BehavioSec CEO Neil Costigan told me. "Since virtually every authentication technique can be compromised, institutions should not rely solely on any single control for authorizing high risk transactions, but adopt a layered approach to security."

More wearables and other devices will soon start acting as ID badges—from the Apple Watch to the Nymi Band. There's plenty of promise in biometric authentication. But if these devices are still backed up by old-school safeguards, that promise could turn into a pitfall, lulling users into a false sense of security.

Blackphone introduced its latest devices this week at Mobile World Congress—the Blackphone 2 smartphone and its first tablet, currently dubbed Blackphone+. But what was really on display was the company’s uncanny knack for turning a well-publicized security flub into a win.

Meet Blackphone 2 And Blackphone+

As far as upgrades go, the 5.5-inch Blackphone 2 looks like a decent successor to last year’s original 4.7-inch Blackphone.

Like most second-generation phones, version 2 offers several hardware improvements, including a faster 64-bit eight-core processor, more memory (3GB), a bigger battery and a larger display. The phone also ties into Citrix's Mobile Device Management, so IT departments can manage employees’ company-supplied or BYO (“bring your own”) phones. Blackphone 2 is priced at $630 (unlocked) and slated for a July release. It will be joined by the company’s first tablet, the 7-inch Blackphone+, sometime this fall.

The original Blackphone (left) and Blackphone 2 exhibition unit (right)

Both run Blackphone’s PrivatOS software, a variation on Android designed as an extra layer of protection between users and the big, bad outside world. When apps unnecessarily ask for personal data, like contacts or location, Blackphone can intercept the request, blocking or obscuring it. The software can even fool the app into thinking the user granted access, even if he or she didn't.

“You can take an Android device, you can root it, introduce [similar] features, and after months, you can have something like Blackphone,” said Javier Agüera, Blackphone’s founder and now a chief scientist at Silent Circle. “Or you can have an out-of-the-box device, with everything set up by security specialists, that’s enterprise ready and configured the way you need it.”

PrivatOS boasts a new virtualization feature called “Spaces,” which offers separate “work” and “personal” modes, the ability to add profiles, and an app store vetted by Blackphone. The technology's encryption protocols also save keys on the device itself, not on some unknown remote server. The phone's price includes two years of security services, which guard against unsafe Wi-Fi networks and provide private browsing and secure cloud file storage.

Sounds like a lot of protection; at least, it's more than most users are accustomed to getting. It all goes back to Blackphone's mission: The company wants to safeguard people. It seems sincere—even though a hacker actually did manage to breach those walls last year.

Turning Hackers Into BFFs

PrivatOS running on last year's model

At hacking convention DefCon last year, CTO Jon “Justin” Sawyer of Applied Cybersecurity LLC told Blackphone that he managed to get past its security to root its device. What’s more, he tweeted the exploit, which landed on BlackBerry sites and other tech blogs.

Sawyer found a couple of weak spots in the software, including a hole in the remote-wipe feature that let him access the device, grant himself system privileges and reach core parts of the phone. But what gets less attention, the execs said, is that the company had already patched the hole.

Sawyer essentially attacked an old, outdated version of the software. Even so, the incident and publicity could have humiliated Blackphone right out of the market. It didn't. Instead, the company is milking it.

The team thanked Sawyer for the discovery and sent him a bottle of wine. Then it enlisted others to scope out any other vulnerabilities.

According to Vic Hyder, Silent Circle’s chief strategy officer, Blackphone recently launched a bug bounty program to reward people for finding security glitches—from $128 to more, depending on the severity. (Bounties are fairly common in the tech industry; even big companies like Facebook, Google and Microsoft offer rewards to bug hunters.)

“[It] makes them part of the solution, instead of part of the problem,” Hyder said. "It brings everybody in as a participant.” Even Sawyer, now a friend of Blackphone, helps out by looking for other vulnerabilities. The company publishes all of its source code, to help make it easier for people to find holes.

So far, Hyder estimates that the company has paid out about $15,000 to $20,000 in bounties.

Throwing Shade

"Nothing is hack-proof,” admits Daniel Ford, chief security officer.

However, he says his company can help guard against certain types of attacks. “Targeted attacks are completely different than mass surveillance,” he said. There’s little Blackphone or anyone else can do against the former, such as last year’s breach at Sony Pictures—which may have been a specific retaliation for The Interview, a comedy that poked fun at North Korea.

Sony's "The Interview" made fun of North Korea's regime, which may have been responsible for hacking the movie studio.

Ultimately, if a hacker wants your data badly enough—whether it’s a criminal or an NSA agent—he or she has innumerable tools that can help get it. No platform can hold up against that, he explained.

But when it comes to broader mass surveillance, Ford said Blackphone can step in and offer more protection. "This is where our commitment is: If there is a vulnerability that was disclosed publicly, we will fix it in less than 72 hours,” he said. “We have done so every time. That is our goal … the last time, it took only 6 hours.”

"Samsung had two critical vulnerabilities that was released two weeks ago,” he added, calling out one of his archrivals in the enterprise market, albeit for a vulnerability in its TV business. Still, he couldn't resist poking at Samsung's overall attitude toward security: "They have not even started to address it,” he said.

The latest variation on the "smart lock"—one that secures your front door until you open it with your smartphone—is here. Candy House's Sesame adds a few new twists, including an inexpensive starting price (though only for those who snap up the few remaining offers in its Kickstarter), simple assembly and functions that will unlock your door via a special knock or secret passphrase.

The smart locking system launched on Kickstarter Wednesday. More than 570 people backed it on the first day, lifting it to 87% of its $100,000 funding goal. As of this writing, the project has 1,100 backers and has raised almost $170,000.

Door, Lock Thyself

Smart locks are not a new concept. There are dozens of options on the market today. As with any new technology, some smart locks are prone to glitches such as jamming or inconsistent connectivity.

Many are expensive, too. The August smart lock, available in Apple stores as well as online, sells for $250.

Sesame, by contrast, costs early Kickstarter backers a mere $90 for its most stripped-down model. Those deals are almost gone, though, and once they are, the Sesame will set you back $150. Of course, you can't get it yet; it won't start shipping until late April.

Open Sesame

No-tools installation is one of the Sesame's big selling points, and it does appear to be pretty straightforward. You basically put the Sesame device over a deadbolt latch using a 3M adhesive strip that comes with the kit. You can put it on at any angle, and the company says the mechanism can fit almost any deadbolt in the U.S., Canada, and Australia.

Whether you feel good about trusting the security of your home to a gadget that's basically stuck to your door with double-sided tape is a separate question. The upside seems to be that if the Sesame comes unstuck, you can always use a regular key—though you might be stuck yourself if you've decided to leave your keys at home, as the project explicitly urges backers to do.

You'll control the lock via the Sesame app on your smartphone. That will let the smart lock know who you are and what you’d like the lock to do.

Who's That Unlocking At My Door?

Sesame connects to your smartphone via Bluetooth. You can also pair it via Bluetooth to an optional Wi-Fi bridge that will let you control the lock remotely from virtually anywhere. That would also let you grant access to others, so you could let in a relative or a sitter without having to hand them a key.

The smart lock also notifies the owner whenever someone tries to access the home using Sesame, whether they’re on the list or not. You can also store and review logs documenting who has triggered the lock, and when.

Security is boring—at least until you don't have it anymore. Then it becomes exciting for all the wrong reasons.

In our increasingly interconnected world, it's also painfully difficult. How do you secure connections to internal devices and external services that you do not and, indeed, cannot own? For enterprises trying to lock down sensitive corporate data in a world awash in personal devices and cloud computing, it's an exercise in futility.

Maybe. Maybe not.

Zack Urlocker

Zack Urlocker was just named COO of Duo Security, a Benchmark- and Google Ventures-backed security company that aims to make two-factor authentication omnipresent and painless. Is this Urlocker's next unicorn? After all, as SVP of products and marketing at MySQL, he helped drive its $1 billion sale to Sun. Later, he went on to run operations at pre-IPO Zendesk (now worth $2 billion).

Urlocker clearly knows how to build unicorns, but is security ripe for a unicorn-sized exit?

To better understand the allure of security to Urlocker, I caught up with him to discuss the shift from databases and help desk software to security.

Security Is Big For All The Wrong Reasons

Security has been a big market for a long time, but for all the wrong reasons. And while we like to think of security as someone else's problem (at least, until our own data is pilfered), a Ponemon study shows that we all bear the costs:

Source: Ponemon

And while malicious criminal attacks account for 42% of data breaches, human error comes in second place (30%). Lost devices or other errors in human judgment open up corporations to all sorts of security problems.

Making It Easy

The problem for most people, however, is that securing their devices and, hence, their data, can be a pain. Often we won't bother until we're forced to do so.

I remember when I first implemented two-factor authentication. My IT team had been pushing me to do it for nearly a year, and I kept resisting because I didn't want the bother. It didn't help that some things (like calendars) were shared with other family members on their devices. The thought of having to constantly update the passwords on their devices, and not merely mine, just didn't seem worth the effort.

That changed when my wife's Gmail account was hacked. The hacker goaded me as I madly tried to get ahead of him and change her passwords. He used the Gmail account to get into her Facebook and other accounts, and used all of them to send vile messages to her and her friends. As I tried to stop him, he IM'd me to laugh at my efforts. It was frightening.

It was the wake-up call I needed, and I implemented two-factor authentication for myself and my family immediately afterward. We haven't had a problem since (though I wish I could keep my credit card numbers from getting stolen every few months).

Since that time, two-factor authentication has become increasingly easy, thanks to companies like Duo Security, which Facebook, Box, Palantir, Yelp, WhatsApp, Etsy, and over 5,000 other companies use to provide simple security to hundreds of millions of users. In fact, Duo's founding CEO, Dug Song, developed solutions at his previous startup that today secure 80% of the world's ISPs.

As Urlocker told me,

Duo makes strong security easy to buy, easy to use and easy to roll into production. Usually security means making things hard for people. With a SaaS solution, it’s easy to deploy. You can get Duo Security up and running in 15 minutes, or a few days for major rollouts, compared to weeks or months with traditional solutions. And it works, too!

That ease of use is essential. I'm a reasonably savvy technologist. No one in my family is. For them to be comfortable with two-factor authentication, it has to be as simple as typing in a password. (Or, in this case, a code sent to them via SMS.)

Learning From Open Source

So how did Urlocker get here from open source land? Duo, so far as I know, isn't offering its software free over the Internet and charging for support. What can open source teach us about security?

Security, it turns out, has an equally open community, sharing both code and insights into how to secure code.

Importantly, as he told me, it's critical to "know how bad guys operate and where the vulnerabilities hide," not to mention "how customers behave." The best open source software makes difficult processes easy for developers. Duo is trying to accomplish the same thing for security.

Which means not foisting silly security policies on users (i.e., forcing them to change passwords every 90 days to equally obscure and hard-to-remember passwords). Duo provides multiple ways for users to authenticate, but the one I like best involves sending push notifications and allowing me to simply respond.

As the thinking goes, anyone can get my password. But getting my password and my mobile device? That's hard.
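
The flow is easy to sketch. What follows is the general push-style pattern, not Duo's actual API; both helper functions are hypothetical stand-ins:

```typescript
// Hypothetical stand-in for a real credential store.
async function checkPassword(user: string, password: string): Promise<boolean> {
  return password.length > 0; // placeholder only
}

// Hypothetical stand-in for a push service: resolves true only if the user's
// enrolled phone approves the login within the timeout.
async function sendPushAndWait(user: string, timeoutMs: number): Promise<boolean> {
  return false; // placeholder
}

async function login(user: string, password: string): Promise<boolean> {
  if (!(await checkPassword(user, password))) return false; // something you know
  return sendPushAndWait(user, 60_000);                     // something you have
}
```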

Not surprisingly, then, Urlocker finds that SaaS companies like Zendesk, Box, New Relic, HubSpot and Duo Security "definitely operate at a similar scale" to open-source software, "but with much better conversion rates than we ever had in open source!"

That's good for Duo, of course, but also for corporate security. Which makes it easier to sleep at night, even if the hackers never do.

True, not everyone's going to be able to make the jump right away. Some internal corporate applications still require Flash; some websites still cling to it. But for your own safety, and for the good of the Web, you should make the effort.

Time To Say Goodbye

Flash Player is dead. Its time has passed. It's buggy. It crashes a lot. It requires constant security updates. It doesn't work on most mobile devices. It's a fossil, left over from the era of closed standards and unilateral corporate control of Web technology. Websites that rely on Flash present a completely inconsistent (and often unusable) experience for the fast-growing percentage of users who don't browse on a desktop. And it introduces some scary security and privacy issues by way of Flash cookies.

They're not kidding about Flash's security vulnerabilities. The recent discoveries all involve so-called zero-day exploits, in which malicious hackers use or distribute tools that take advantage of previously undiscovered security flaws.

The first two exploits were somewhat less serious, as they required users to click on malicious links in spammy emails or texts. Most people are smarter than that these days—we hope.

The third one, though—discovered by Trend Micro—used a malicious advertising vector, and thus affected far more users. Basically, anyone visiting a high-traffic website infected with malicious advertisements could find their system hacked.

The security firm Malwarebytes found the ads on dozens of mainstream sites, including dailymotion.com, theblaze.com, nydailynews.com, tagged.com, webmail.earthlink.net, mail.twc.com and my.juno.com. These ads would then redirect users to a landing page for the Hanjuan exploit kit, which would do the real dirty work.

Take The Flashless Challenge

If the idea of having your laptop infected just because you visited an otherwise innocuous website doesn't appeal to you, it's time to get rid of Flash if you can. (Yes, Adobe has patched that particular vulnerability—but have you installed the patch? Will you install the next one, and the next one after that?)

Here's how.

To Uninstall Flash

You’ll need to download and run an uninstaller program. Adobe offers instructions for Windows and Macs.

To Tame Flash If You Can't Get Rid Of It

If you need Flash for work, or are addicted to DailyMotion, or can’t deal with Facebook and Amazon refreshing pages too slowly, another option is to use an extension like FlashBlock. This allows you to limit your Flash usage to the sites you select. While you’ll still be somewhat vulnerable if a popular site is infected with malicious advertising, it’ll lower your risk.

Firefox: Go to Tools->Add-ons->Plugins, where you can set Shockwave Flash to “ask to activate” (or “never activate”).

Chrome: Go to Preferences->Settings->Advanced Settings->Privacy->Content Settings->Plugins and select "Click to play" (or block plugins entirely).

If you’d prefer, you can use extensions such as Flashblock, available for Firefox and Chrome, or NoScript for Firefox.

The saga of last year's privacy controversy over Verizon’s user-tracking behavior continues on. The latest chapter involves the wireless carrier magnanimously deciding Friday to let subscribers opt out of the program, the New York Times reported.

Not that the idea came purely from the goodness of its heart. As the NYT noted, the decision came less than a day after the Senate Committee on Commerce, Science and Transportation wrote to Verizon’s chief executive, Lowell C. McAdam, to question his company’s behavior.

Next thing you know, Verizon agreed to let people jump off the good ship “Privacy Fail.”

Shhhh! We’re Tracking You

The fiasco started last year, when a tweet by the Electronic Frontier Foundation's Jacob Hoffman-Andrews called out Verizon's user-tracking tactics—tactics that drew outrage primarily because few, if any, subscribers realized what the wireless operator was doing.

Hoffman-Andrews cited an Ad Age article about Verizon's advertising business that mentioned the company’s use of PrecisionID, a tool developed by Verizon’s data marketer, Precision Market Insights. Its website describes PrecisionID as “a deterministic identifier matched to devices on Verizon’s wireless network powering data-driven marketing and addressable advertising solutions…”

The system works by tacking on snippets of code—sometimes called “perma-cookies” or “supercookies”—to mobile traffic headers moving through Verizon's cellular network. This “UIDH” identifier allows the carrier to track its subscribers' mobile browsing activity for advertising purposes. Ad Age’s Mark Bergen wrote, "Precision packages the request as a hashed, aggregated and anonymous unique identifier, and turns it into a lucrative chunk of data for advertisers.”
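
Curious whether your own traffic is being tagged? From a phone on Verizon's cellular connection (not Wi-Fi), you can fetch a page from a service that echoes your request headers back—httpbin.org is one such service—and look for the injected header. A quick sketch, with the caveat that the X-UIDH header only shows up over plain, unencrypted HTTP, since carriers can't inject headers into HTTPS traffic:

curl -s http://httpbin.org/headers | grep -i uidh

If that prints an "X-UIDH" entry, your requests are carrying Verizon's tracking identifier.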

In a Google AdSense world, user-tracking may not seem that outrageous. The difference: Google makes no secret of its ad-targeting behavior, and people knowingly accept those terms in order to use the search giant's free services. Verizon Wireless subscribers pay (sometimes hefty) subscription fees, but they apparently didn’t know they were being tracked.

Instead, they became unwitting participants in a program whose security remains in question. As the NYT points out, Verizon must secure those unique identifiers, or supercookies, to ensure external attackers can't get their hands on them.

Verizon "Takes Privacy Seriously" (Kinda)

Even if people had known about the program, they would have had no way out until now. The company offered no mechanism to decline participation, as it does with other advertising initiatives. In some ways, that makes sense: If no one knows they're being tracked, where's the need? Another possibility: Offering an opt-out might have drawn unwanted attention, and Verizon is only providing one now because it's being forced to.

That is, of course, not the way the carrier positions its decision. According to its latest press statement:

Verizon takes customer privacy seriously and it is a central consideration as we develop new products and services. As the mobile advertising ecosystem evolves, and our advertising business grows, delivering solutions with best-in-class privacy protections remains our focus.

We listen to our customers and provide them the ability to opt out of our advertising programs. We have begun working to expand the opt-out to include the identifier referred to as the UIDH, and expect that to be available soon. As a reminder, Verizon never shares customer information with third parties as part of our advertising programs.

The announcement looks like a concession, and a minor one at that. If Verizon were serious about privacy, it would have made user-tracking opt-in, i.e., turned off by default and only activated with consent. Instead, the program is opt-out, meaning it's turned on by default. That puts the onus on users to be aware and proactive enough to shut it down.

Earlier in January, the Electronic Frontier Foundation began a petition against Verizon and Turn, a partner that makes digital marketing software. The digital rights group seeks punitive federal action for the lack of consumer disclosures over the tracking activity. The petition received more than 2,000 signatures as of Friday.

A vulnerability in a widely used component of many Linux distributions could allow remote attackers to take control of a system. Researchers at Qualys have dubbed it Ghost since it can be triggered by the "gethost" functions in Linux.

The vulnerability can be found in the GNU C Library, known as glibc for short. Without glibc, a Linux system couldn't function. The flaw is found in __nss_hostname_digits_dots(), a glibc function that's invoked by the gethostbyname() and gethostbyname2() function calls. An attacker able to reach either function—say, by supplying a carefully crafted hostname—could take remote control of the entire Linux system.

A series of misfortunes helped Ghost slip through the cracks. First of all, the bug had been identified and fixed back on May 21, 2013, as Qualys CTO Wolfgang Kandek writes. At the time, however, it was seen only as a flaw, not a security threat, and no further patching was done:

Unfortunately, it was not recognized as a security threat; as a result, most stable and long-term-support distributions were left exposed including Debian 7 (wheezy), Red Hat Enterprise Linux 6 & 7, CentOS 6 & 7, Ubuntu 12.04, for example.

Secondly, since Ghost affects a code library that's integral to the Linux system, patching it is no simple fix. Updating the GNU C Library means restarting every service that depends on it—or rebooting the entire affected server. Companies will have to schedule that downtime, which means affected servers could stay vulnerable for some time yet.
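
For admins who'd rather check and patch by hand, the fix boils down to a package upgrade plus that restart. A sketch assuming a Red Hat-style system with yum; other distributions have their own equivalents:

ldd --version | head -n 1   # ldd ships with glibc, so this reports the installed version

sudo yum update glibc       # pull in the patched build

sudo reboot                 # restart so every running service loads the fixed library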

With all the world's Linux distributions to choose from, it's unlikely your homebrew Linux server is anywhere near high risk. And now that Red Hat, Debian, Ubuntu and Novell have all issued patches, Linux server operators have the resources to stay in the clear.

Fingerprint scanners may be all the rage right now, thanks to the Apple iPhone and Samsung Galaxy devices. But the cool tech may have just hit a major snag. Hackers claim they can lift fingerprints from hi-res photos with fingers in the frame.

At the 31st annual Chaos Computer Club conference in Hamburg, Germany, Jan Krissler (aka “Starbug”) revealed to the European hacking group how he duplicated a thumbprint—and not just anyone’s. He duped the digit of German Defense Minister Ursula von der Leyen.

There’s no special equipment required. Krissler used a few high-resolution photos—with the pads of von der Leyen's fingers showing at different angles—to cobble together a complete print. Given how good photographic technology has gotten, consumer- and prosumer-grade cameras could easily do the job. Add commercial software VeriFinger to the mix, and you’ve got a trick worthy of spy movies.

Given the rising interest in fingerprint authentication, the hacker jokes that now “politicians will presumably wear gloves when talking in public.”

For the rest of us, there’s no reason to fear using, say, TouchID-enabled Apple Pay, at least not yet. This sort of exploit requires a targeted effort around one specific subject. However, it does illustrate one thing: As new and innovative security practices and technologies emerge, hackers find new and creative ways to foil them.

That’s unsettling enough when our logins and emails get leaked. (Just ask Sony.) But biometric authentication, like retina and fingerprint scanning, adds a new dimension to security concerns. After all, passwords are easy to change. Fingers and eyeballs, not so much.

If you speak German and want to watch Krissler in action, check out the video below.

If only there was a federal agency dedicated to protecting federal information systems and critical U.S. infrastructure from criminals and foreign attackers. Oh, wait—there is. It's the National Security Agency. And to all appearances, it's botched the job so badly you'd think it wasn't really trying in the first place.

Maybe it wasn't.

The Origin Of Dysfunction In The Breakdown Of The Bicameral NSA

The NSA has historically been a house divided against itself. On one side, it ostensibly works to "ensure appropriate security solutions are in place to protect and defend information systems, as well as our nation’s critical infrastructure." This mission, the NSA says, aims to ensure "confidence in cyberspace."

Then there's the other side of the NSA, which listens in on the communications of U.S. adversaries, conducts mass surveillance of Americans and foreigners and undertakes military-style cyber attacks against other nations and alleged terrorists. Oh, and it also deliberately tries to undermine the security tools used to guard both civilian and government systems against intrusion.

For instance, the NSA's secret 2013 budget request—provided by Edward Snowden and published by the New York Times, ProPublica and other outlets a year ago—revealed that the agency seeks to "introduce vulnerabilities into commercial encryption systems, IT systems, networks, and endpoint communication devices used by targets." In other words, the NSA routinely undermines the security tools that government agencies, businesses and consumer services use to protect messages and data from attackers. It's a little as if car makers were surreptitiously making it easier for repo men to unlock and drive away your vehicle—right in the midst of an auto-theft epidemic.

The NSA apparently does this in the misguided belief that its own spooks will be the only ones to notice and exploit these vulnerabilities. But criminals and foreign governments are smart, too, and just as eager to exploit security holes created by accident or design. In 2010, for instance, Chinese hackers were able to break into individual Gmail accounts by using "secret" backdoors that Google had installed specifically to comply with U.S. government search-warrant requests.

"Confidence in cyberspace," anyone? Let's put it this way: It was bad enough if the NSA's right brain didn't know what it's left brain was doing—and even worse if it did. In neither case could anyone trust the NSA's assurances of helping to secure the Internet.

These are all necessary steps toward limiting the NSA's manipulation of general-use security software and tools. Even in that respect, though, they're insufficient, as the NSA has never renounced its efforts to subvert encryption methods—despite the recommendation of a White House advisory panel that:

The US Government should take additional steps to promote security, by (1) fully supporting and not undermining efforts to create encryption standards; (2) making clear that it will not in any way subvert, undermine, weaken, or make vulnerable generally available commercial encryption; and (3) supporting efforts to encourage the greater use of encryption technology for data in transit, at rest, in the cloud, and in storage.

Taking Apart The Puzzle Palace

Even had the president fully embraced the panel's suggestions, it would have done little to restore "confidence in cyberspace," at least so far as the NSA is concerned. This is an agency, after all, that reportedly uses its contacts with industry—ostensibly intended to help private companies improve network and computer security—to instead cajole or strongarm them into opening backdoors or compromising security products. From ProPublica:

The N.S.A.’s Commercial Solutions Center, for instance, invites the makers of encryption technologies to present their products and services to the agency with the goal of improving American cybersecurity. But a top-secret N.S.A. document suggests that the agency’s hacking division uses that same program to develop and “leverage sensitive, cooperative relationships with specific industry partners” to insert vulnerabilities into Internet security products.

In short, it's hard to see how NSA's defensive mission can coexist with its surveillance work without becoming a punchline. So why not just break up the NSA's different functions entirely?

This isn't an unprecedented idea. Cryptographer and security expert Bruce Schneier has pushed for an NSA breakup since February, ever since it became clear that the Obama administration had slammed shut the window for any further surveillance reform:

The NSA has become too big and too powerful. What was supposed to be a single agency with a dual mission—protecting the security of U.S. communications and eavesdropping on the communications of our enemies—has become unbalanced in the post-Cold War, all-terrorism-all-the-time era.

Putting the U.S. Cyber Command, the military's cyberwar wing, in the same location and under the same commander, expanded the NSA's power. The result is an agency that prioritizes intelligence gathering over security, and that's increasingly putting us all at risk. It's time we thought about breaking up the National Security Agency.

There are lots of ways this could be accomplished. Schneier's plan, for instance, would move military-style targeted surveillance—for instance, of the sort that infected computers in Iran's nuclear program with the malware Stuxnet—out of the NSA entirely, putting it under the aegis of the U.S. Cyber Command in the Department of Defense. It would also transfer all surveillance of American citizens to the FBI. The rump NSA would then handle both signals intelligence—i.e., international telephonic and digital eavesdropping—and cybersecurity defense.

Even that limited mission presents a hypothetical rump NSA with a lot of cognitive dissonance—even if it's required to prioritize security over SIGINT, or "signals intelligence," as both Schneier and the White House panel recommend. Particularly in times of crisis, the needs of the spies always seem to trump those of the defenders; putting them together in one organization under unified management makes it far easier for safeguards and priorities to shift, often in invisible ways that are rarely supportive of civil liberties.

Update, 8:45am November 19: Late Tuesday, the Senate filibuster-killed the USA Freedom Act, the last remaining hope for NSA reform. It was a flawed bill that wouldn't have done anything to fix the NSA's cybersecurity conflict of interest, but it would have put some curbs on its surveillance powers. The fact that the Democratic-controlled Senate couldn't even bring it up for a vote tells you everything you need to know about the likelihood of further NSA reform for the foreseeable future.

More than a few messaging apps aren't doing everything they can to keep your nude photos from leaking on to the Internet or The Man from eavesdropping on your personal conversations, the Electronic Frontier Foundation reports.

In fact, after evaluating three dozen communication tools for its new Secure Messaging Scorecard, the EFF found that there are only a handful of truly secure messaging apps. And odds are good that most people aren't using them.

You might not be familiar with the top scorers, which include ChatSecure, CryptoCat, Signal/Redphone, Silent Phone, Silent Text, and TextSecure. These are the six apps that met the EFF's seven-point criteria for secure messaging:

Messages are encrypted in transit

Messages are encrypted so the service provider can't read them

Contacts' identities can be verified

Past communications are secure if keys are stolen

Code is open to independent review

Security design is properly documented

The code has been audited


Apple's iMessage and FaceTime did best among mainstream apps, "although neither currently provides complete protection against sophisticated, targeted forms of surveillance," the EFF said in a statement.

If you're looking to keep your service provider out of your communications, forget about Secret, SnapChat and WhatsApp, as well as Apple, Google and Facebook's email services and Yahoo's mobile and Web chat. None offer end-to-end encryption necessary to keep your conversations from being accessed by the company sending them.

Of course, it could be worse. According to the EFF, QQ, Mxit and the desktop version of Yahoo Messenger, "have no encryption at all."

Facebook may have a troubled history with user privacy, but it certainly works hard to protect its users' security. The social network has just made Facebook available over Tor, open-source privacy software that anonymizes traffic by encrypting it and routing it through multiple network nodes.

Facebook security engineer Alec Muffett noted that Tor users often face additional hurdles while trying to browse Facebook because of the way Tor encrypts a user’s location. This has led to connectivity problems in the past.

“From the perspective of our systems a person who appears to be connecting from Australia at one moment may the next appear to be in Sweden or Canada,” he wrote. “In other contexts such behaviour might suggest that a hacked account is being accessed through a ‘botnet,’ but for Tor this is normal.”

Now there is a Facebook onion address—facebookcorewwwi.onion—which is only accessible to Tor-enabled browsers. Facebook will continue to present an SSL certificate to Tor visitors so they can be assured they're in the right place, despite the different address.

Commenting on Muffett's announcement, users immediately wondered how Facebook was able to create a custom Tor address at all, since vanity onion names can normally be generated only through brute-force attacks. Tor's original developer Roger Dingledine explained on a mailing list what was going on behind the scenes:

“So to be clear, they would not be able to produce exactly this name again if they wanted to. They could produce other hashes that started with "facebook", but that's not brute forcing all of the hidden service name,” he wrote.

The White House confirmed that it has been the victim of a cybersecurity attack, and the perpetrators are thought to be working for the Russian government.

White House officials speaking anonymously to the Washington Post said that so far in the ongoing investigation, there is no evidence that bad actors breached any classified files. The NSA, FBI, and Secret Service are all involved in the investigation. However, officials are not commenting on whether other data was taken, or who is behind the attack.

“Certainly a variety of actors find our networks to be attractive targets and seek access to sensitive information,” a White House official told the Post. “We are still assessing the activity of concern.”

Officials are suspicious that Russia is behind the attack primarily because it's among the countries most capable of implementing it; U.S. officials regard Russia as one of the most computer-savvy states. The Post has recently reported on a number of similar campaigns thought to be implemented by hackers working for the Russian government, targeting NATO, the Ukrainian government, and an American researcher, among others.

However, aside from these suspicions and the fact that the White House attack looks similar to previous attacks of potentially Russian origin, there is no public evidence about who or what is behind it.

Google has a history of getting creative with its Android mobile security features. Who could forget 2011’s Face Unlock, courtesy of Ice Cream Sandwich—or the failing that let photos fake it out? Now the Android security team has another concept called Smart Lock, and it’s heading to Lollipop next month.

Android 5.0 will allow your Android device to unlock another that you own, just by being nearby. For instance, an Android Wear smartwatch could unlock your phone; someday, Android Auto might do the same.

Safety First

The Nexus 6 smartphone from Google will come with Android Lollipop—and Smart Lock.

The Smart Lock feature relies on close-range wireless pairing of Android gadgets via Bluetooth or NFC that allows them to recognize each other and grant access. It makes intuitive sense; if one of your "trusted" Android devices is near another, it's very likely that both are in your possession, and not in the hands of a thief.

The idea, according to Android lead security engineer Adrian Ludwig, is to take the annoyance out of security and authentication for end users, many of whom don't want to bother with PIN codes, passwords or pattern unlocks. And if Smart Lock fails for any reason, your passcode or pattern still serves as backup security.

Smart Lock looks like an intriguing step forward. And it’s hard to deny the convenience of letting one Android gadget unlock another. (It’s also hard to deny the fact that this helps make the case for buying multiple Android devices.)

However, there’s just one concern: If a crook snatches my messenger bag or purse with both my Nexus phone and tablet in there, the security feature intended to lock my data down could be the thing letting the thief access my device.

The same concern might arise if someone—a housemate or family member, say—lifts your smartwatch from its charging cradle while you sleep and uses it to unlock and rifle through your phone. I've pinged Google for more information on how Smart Lock might behave in such situations, and I'll update when I hear back.

On another security front, Android started using Security-Enhanced Linux (SELinux) last year, and Lollipop now requires SELinux enforcement for all applications on all Android gadgets. With this, the system can audit processes and monitor for "potentially hostile apps," said Ludwig, spotting trouble before attacks can put your data up for grabs.

It may seem counterintuitive to base security on open-source Linux. But Google argues that having so many contributors with deep knowledge working on the code makes for an even stronger and more reliable system.
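
If you're curious whether your own device enforces the policy, the Android debug bridge can tell you—a quick check, assuming adb is installed and USB debugging is enabled on the phone:

adb shell getenforce

A Lollipop device should answer "Enforcing"; "Permissive" would mean violations are merely logged rather than blocked.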

Android, which has long used “sandboxing” tactics to isolate apps and limit the reach each one has, seems to be evolving. And it needs to, if it wants to go beyond individual consumers and fill some of the void BlackBerry has been leaving behind with companies and government agencies.

Days after Kickstarter took down a campaign for Anonabox, the controversial Internet router that allegedly would keep its users anonymous, the company shut down a subsequent project that purported to fill its shoes.

TorFi, created by University of Michigan law graduate Jesse Enjaian and his friend David Xu, would have been a more honest incarnation of Anonabox. However, honesty doesn't seem to be the solution where Kickstarter was concerned. The company closed down funding for TorFi after sending Enjaian an email saying the project wasn't "innovative enough."

“I'm frustrated because they claim that using pre-existing routers and modifying the software is not innovative enough for their standards,” Enjaian told ReadWrite. “I believe our idea filled a social need and was sufficiently unique, but I'm not going to challenge their decision."

Unlike the previous project, TorFi was upfront about the fact that it was using a prefab hardware solution for the router, and simply installing the Tor security software on top of it. This may not sound like much, but it’s a service that’s clearly in demand. After all, Anonabox earned nearly $600,000—and this despite the controversy.

Enjaian once had law officials confiscate his computer during cyberstalking allegations while he was still a student. While it's not clear if TorFi would have helped him in that situation, Enjaian's description of the project suggests he might empathize with people who seek security and anonymity while browsing the Internet.

“TorFi aims to satisfy the demand demonstrated for a simple, plug-and-play, secure access point to the Internet… with no more technical knowledge than what it takes to plug into a home ISP connection,” he wrote on the project overview.

For people who are still seeking a plug-and-play anonymity solution for the Internet, Invizbox is on Indiegogo, and the crowdfunding site hasn't shut it down—yet.

Update: An earlier version of this story suggested a connection between past allegations against Enjaian and his interest in TorFi. We've rewritten the paragraph to avoid any such suggestion, which we did not intend.

The gadget claimed to use a combination of the Tor privacy software and a custom, open-source hardware frame to create a tiny router that anonymized its users. However, as the device continued to ramp up in popularity, these claims came under fire.

When its creator, security consultant August Germar, gave an "Ask Me Anything" interview on Reddit about the project, he was unable to explain how the hardware was “custom” and not just a generic pre-produced item. Sleuths found exact copies of the Anonabox frame on Chinese suppliers’ sites. And given that those Chinese products turned out to be regular factory-made routers, many became suspicious of the Anonabox claim to keep users truly anonymous.

Kickstarter's actions tend to support redditors' claims. The company suspended the project Friday. Kickstarter doesn't typically give a reason for project suspensions, but its terms of service state that all projects must be "honest and clearly presented."

If this development hasn’t scared supporters away from the gadget, Germar told Quartz that the device will still be for sale on the project’s website.

Mobile malware is exploding, though it's mostly not where you live. If you live in Russia, where 10 gruesome factories churn out 30% of the world's malware, you're far more likely to have malware infect your mobile phone than, say, if you live in Sweet Home Alabama. That's the good news.

The bad news is that Americans are at far greater risk of having their phones hacked by their government than by Russian malware hackers.

Android: Popular With The Malware Crowd

Russia has been busy. According to a 2013 report from Lookout Mobile, which traced malware back to its point of origin, roughly a third of all malware globally is produced by just 10 Russian firms.

With Android accounting for 84.6% of all smartphones shipped in Q2 2014, according to IDC, it's not surprising that Android would get hit the most. What is surprising, however, is that attacks against Android significantly outstrip its market share:

Source: Kaspersky Lab, 2014

It's a booming business on Android, as the report points out: "[I]n the first half of 2014 alone, 175,442 new unique Android malicious programs were detected. That is 18.3% (or 32,231 malicious programs) more than in the entire year of 2013."

Other findings include:

Over the course of a year, Kaspersky Lab security products reported 3,408,112 malware detections on the devices of 1,023,202 users;

In the past year, the number of attacks per month was up nearly 10x, from 69,000 in August 2013 to 644,000 in March 2014;

The number of users attacked also increased rapidly, from 35,000 in August 2013 to 242,000 in March 2014;

Trojans designed to send SMS messages were the most widespread malicious programs in the reporting period, accounting for 57.08% of all detections.

And one particularly interesting point? Nearly 52% of all malware attacks stay within Russian borders, according to Kaspersky Lab:

Source: Kaspersky Lab, 2014

The report authors are quick to point out that this percentage is skewed by the high number of devices they track in Russia, coupled with Russia's heavy reliance on mobile payment services, which makes the country a ripe target for hackers. But even if we cut Russia's number in half, it still looks far more susceptible to malware than most countries.

The Malware Is Us

Not that we have it any better in the US. In part because Android isn't as dominant here, the US gets off with just 1.13% of all malware attacks. And yet we may have far more "malware" coming from our government than others do.

As the Electronic Frontier Foundation has described it:

The US government, with assistance from major telecommunications carriers including AT&T, has engaged in a massive illegal dragnet surveillance of domestic communications and communications records of millions of ordinary Americans since at least 2001.

Such surveillance doesn't come through the front door. As Apple indicates, less than 0.00385% of Apple customers had data disclosed due to government information requests. That's 250 or fewer such requests.

Despite the Lilliputian number, Apple announced that it's shutting down backdoor access to iOS device data and encrypting all iPhone data, not just the small sliver it used to encrypt. This is a good start, but it won't be enough to thwart a dedicated hacker ... or CIA bureaucrat.

The recent decades have given [law enforcement] an unprecedented ability to put us under surveillance and access our data. Our cell phones provide them with a detailed history of our movements. Our call records, e-mail history, buddy lists, and Facebook pages tell them who we associate with. The hundreds of companies that track us on the Internet tell them what we're thinking about. Ubiquitous cameras capture our faces everywhere. And most of us back up our iPhone data on iCloud, which the FBI can still get a warrant for. It truly is the golden age of surveillance.

This isn't to suggest that we're immune to hackers, Russian or otherwise, or that the US government is an evil Big Brother determined to spy on our every move. (I have four kids and my night life is considered wild if I have steamed milk and honey before going to sleep at 10:00. I'd be boring to watch.)

But it does reflect the perverse realities of mobile security today. In Russia, the greatest threat is the black-hatted hacker. In the U.S., it's the white-hatted spy.

AT&T said it has fired an employee who gained access to users’ personal information without permission this year. The personal information compromised may include social security numbers and drivers’ licenses.

The telecommunications provider sent a letter to the roughly 1,600 affected users informing them about the breach. Affected users will have any suspicious transactions reversed and will be eligible for a year of free credit monitoring, as has become customary after data breaches.

“On behalf of AT&T, please accept my sincere apology for this incident,” Michael Chiarmonte, director of finance billing operations at AT&T, said in the letter. “Simply stated, this is not how we conduct business, and as a result, this individual no longer works for AT&T.”

AT&T sent a letter to the Vermont attorney general indicating the company believes the breach took place sometime in August. It is the company’s second insider breach since June.

“Earlier today, we reported that we isolated a handful of servers that were detected to have been impacted by a security flaw. After investigating the situation fully, it turns out that the servers were in fact not affected by Shellshock.”

After taking a closer look, Yahoo said the hackers wrote malicious code that impersonated Yahoo’s own software in order to enter the system. While Stamos believes the hackers were looking for Shellshock-vulnerable servers, it was their mimicry, not the bug, that allowed them to gain access to the system.

Any sort of hack is serious, but Stamos said that the hackers' attack was less serious than if they’d used Shellshock, since Yahoo’s user data appears to be safe.

“The affected API servers are used to provide live game streaming data to our Sports front-end and do not store user data. At this time we have found no evidence that the attackers compromised any other machines or that any user data was affected.”

Stamos also pushed back against security researcher Jonathan Hall's allegations that Yahoo refused to compensate him for discovering the compromise. Hall, who first documented the hack on his website, later suggested on Reddit that Yahoo has a history of being ungrateful for such assistance.

“Yahoo takes external security reports seriously and we strive to respond immediately to credible tips,” said Stamos. “We monitor our Bug Bounty and security aliases 24x7, and our records show no attempt by this researcher to contact us using those means.”

"I am flat out accusing Stamos and Yahoo of being dishonest and inaccurate in their reports of this breach, as well as being grossly negligent to their users and shareholders by releasing inaccurate and misleading information," Hall wrote.

The Shellshock bug is bad news, and Yahoo may have just found out firsthand.

At least two servers for Yahoo Games were allegedly breached in a hack discovered by security researcher Jonathan Hall.

Hall says he found evidence that Romanian hackers gained access to at least two of Yahoo's servers by exploiting the Shellshock bug, a vulnerability in bash, a low-level program used to execute other programs. By exploiting the bug, hackers can gain remote access to servers and systems. Hall said Yahoo's servers were vulnerable because they were running an older version of bash.

Hall, a Unix expert with Future South Technologies, offers a lengthy explanation on the tech consulting firm's website, where he describes how he tracked the breach to Yahoo's game servers. Hall also shares an email he says he received from Yahoo confirming the breach. Since millions of people play Yahoo games every day, the game servers make an ideal target for hackers.

If hackers gained control of a Yahoo server using Shellshock, they could potentially steal user information, deliver malware to vulnerable computers and take control of the system. So you'd think Yahoo would be grateful for the information. Hall, however, claims Yahoo did not reward him for the discovery, instead telling Hall that his findings didn’t qualify for its bug bounty program.

“I literally gave them two servers that were hacked, of which there were most likely more—without a doubt—considering one gets a public DNS response of a private IP address… And that doesn’t qualify? What a joke,” Hall posted on Reddit.

Yahoo has a poor track record when it comes to rewarding security researchers who uncover serious flaws, Mashable notes. Where a similar bug might net five figures at Facebook, Yahoo is more in the habit of awarding $25 vouchers which can be used to purchase t-shirts, pens and other items from Yahoo's company store.

How do you get people to treat your unpatchable malware program like the serious threat it is? You release it into the wild, where anybody can get their hands on it.

That's the method behind the madness of security researchers Karsten Nohl and Jakob Lell. Their proof-of-concept malicious software exposes a huge hole in a commonly used technology—USB storage—and is now available for download on GitHub.

USB sticks have become so cheap and easy to use that companies often hand them out like calling cards at conferences. Nohl and Lell, however, have found a flaw in USB security that allowed them to do some really scary things. Their malware, named BadUSB, can be installed on a USB stick to take over a PC simply by being plugged into the computer.

The researchers, who work for security consultancy SR Labs, demonstrated BadUSB to a packed crowd at the Black Hat conference in Las Vegas. There will be no quick fix for the vulnerability they’ve found, so the researchers have decided to open source it.

At first glance, it seems like a terrible idea to put malware where anybody can access it. However, this is a pretty standard practice in the online security world. In fact, it’s not even against GitHub’s terms of service since the researchers are upfront about their reasons.

"Security researchers often release a proof of concept to raise awareness of the vulnerability in the security community, and to encourage people to protect themselves,” a GitHub spokesperson told ReadWrite. “A repository that contains a proof of concept but isn't maliciously or covertly distributing malware would not be in violation of our terms of service.”

It’s been more than a week since security researchers discovered Shellshock, a 22-year-old bug in the bash command-line interface used in Unix by default. Now, we’re just starting to uncover the extent of the exploits hackers have committed thanks to the bug.

Web-optimization company Cloudflare has blocked more than 1.1 million Shellshock attacks, the company said in a blog post. Around 83% of these were what it calls “reconnaissance attacks,” digital excursions to scout out vulnerable networks of computers.

Chart via Cloudflare

Cloudflare has been closely monitoring the number and origin of Shellshock attacks toward its clients, and released a chart to convey that data. A huge number of attacks were coming from France, but it’s not clear if it’s because the attackers are located in France, or simply routing their attacks through French IP addresses.

Security research firm FireEye discovered another slew of Shellshock attacks Wednesday, coming from an even unlikelier place. By targeting Network Attached Storage (NAS) systems—essentially large-scale networked hard drives—hackers could bypass computers entirely while still maintaining remote control over any data stored in the NAS.

FireEye said the attacks were targeting devices from a company called QNAP, a popular Taiwanese NAS manufacturer. QNAP has just published a press release urging customers to disconnect their devices from the Internet until a patch becomes available.

Speaking of patches, Apple's bash bug patch seems to be doing the trick. "The vast majority of OS X users are not at risk," an Apple spokesperson has said, and so far that's been true—even though researchers say Apple's patch is incomplete. Even as hackers exploit Shellshock on networks and hard drives, nobody has revealed any significant attack on Mac OS computers.

Judging by how the past week has gone, it’ll be a while until we see the end of the Shellshock bug, an old but recently discovered flaw in Unix-like operating systems that's widespread, difficult to patch and not too hard to exploit. It's like the trifecta from hell.

Worried about what it is and how you can protect yourself? Here are some plain-English answers to your questions about this nasty bug.

What Is Shellshock?

The bug stems from coding mistakes in bash, a low-level computer program that's been part of many, but not all, Unix-related systems for decades. That makes the bug mostly a problem for servers that run Unix, Linux or other similar operating-system variants, although Mac users might also have something to worry about.

The name “Shellshock” is a bit of wordplay based on the fact that bash is a "shell," a type of program used to execute other programs. Bash, like many other shells, uses a text-based, command-line interface. (If you're on a Mac, you can see this by opening your Terminal program.) Programmers can use bash to access another computer or computer system remotely and feed it commands.

Bash is short for "Bourne Again SHell," a pun on Stephen Bourne, the computer-scientist author of an earlier Unix shell known simply as sh. Bash is backward-compatible with sh, which made it an obvious choice as the default shell for Linux and Mac operating systems.

Bash is several decades old, and security researchers believe the Shellshock bug has lain undetected in bash for at least 22 years.

So Who's Vulnerable?

Technically, any computer or system with bash installed is vulnerable. Since bash is installed by default on Unix systems, that includes a lot of computers.

Windows computers are safe; they don't use bash. But if you're using a Mac or running Ubuntu or some other Linux or Unix flavor where bash is the default interpreter, then you could be at risk.

Just because your computer is vulnerable to Shellshock, however, doesn't mean hackers can target it. For them to do so, they'd have to be able to access your computer's bash program via the Internet.

If your computer is connected to the Internet through a password-protected wireless network—or physically via an Ethernet cable—you're still basically safe. If you're using an open, untrusted Wi-Fi connection, though, you could theoretically be vulnerable to a Shellshock exploit.

Even that's extremely unlikely, though. The most likely targets, according to cyber security firm FireEye, are Internet servers and related large computer systems.

What About Me? Do I Have To Worry?

Eight versions of bash contain the vulnerability, from 1.13 up to the latest 4.3. To figure out which version you are using, you can open up your Terminal program and type the following:

$ bash --version

To search for the bug, type:

$ env X="() { :;} ; echo vulnerable" /bin/sh -c "echo stuff"

If your computer responds with "vulnerable" followed by "stuff," then your version of bash is indeed executing variables as code, and therefore contains the vulnerability.

Even if your computer is vulnerable, it's still extremely unlikely that you will be targeted through the Shellshock bug. It's too much effort for hackers to bypass your password-protected Internet connection just to get to it.

How Do Hackers Take Advantage Of The Bug?

Let’s take the simple test people are using to check for bash vulnerability, a command you'd issue to bash in this form:

$ env X="() { :;} ; echo vulnerable" /bin/sh -c "echo stuff"

If bash was working correctly, that command would assign the variable X a value—the string of characters "() { :;} ; echo vulnerable"—and would print this on the screen:

stuff

The bug, however, causes bash to interpret everything following that weird collection of parentheses, brackets, colons and semicolons as another command. In this case, that command prints the word "vulnerable" on the screen, ahead of the expected output:
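
vulnerable
stuff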

But it could just as easily search for sensitive bank information, erase all your files, grant a new user untrammeled access to your computer or worse. Since bash is a key component for working on computers remotely, the hacker doesn’t even need to be anywhere near the system to do it.

This is only the first of at least six bugs associated with Shellshock that security researchers have found. The latest, known to researchers as CVE-2014-7186, can be used to mount denial-of-service attacks, in which hackers disrupt a computer's Internet service.

How Do I Protect Myself?

That’s the tricky part. Security experts keep issuing patches, but researchers are simultaneously finding new related vulnerabilities. So "protection" is a moving target here, at least so far.

If you're using Linux or Unix, Red Hat developed a patch over the weekend, but you have to install it over the command line and it’s got a lot of steps. This is Red Hat’s second patch for the bug but definitely not the last—as researchers keep finding more vulnerabilities associated with Shellshock, they have to keep reinforcing the patch. This patch only offers partial protection, but you can get instructions for installing it on your machine here.
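
If your distribution's updated packages have already landed, though, the heart of the fix is a single package upgrade—a sketch assuming a Red Hat-style system with yum; other package managers have equivalents:

sudo yum update bash

Running bash --version afterward confirms the new build took.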

Apple has maintained that the “vast majority of users” are not susceptible to the bug, only those who have customized their advanced Unix settings. To play it safe, Apple has released a patch, though security researchers have discovered new vulnerabilities associated with Shellshock that this patch doesn't fix.

What's The Real Danger?

Researchers have just discovered the first Shellshock botnet. (A botnet is a network of hacker-controlled computers operating maliciously as a group.) This botnet is called “wopbot” and seems to be targeting a content delivery network named Akamai as well as parts of the United States Department of Defense.

When wopbot gets ahold of susceptible computers, it uses the aforementioned CVE-2014-7186 vulnerability to launch a denial-of-service attack. Akamai and the DoD have managed to remove wopbot's command-and-control center, but the server that runs the bot is still live and looking for targets.

Is This As Bad As Heartbleed?

The Heartbleed bug let hackers exploit the way your browser talks to a website over an encrypted channel. An attacker could theoretically exploit the bug to unravel the secure channels used by banks, e-commerce sites and other sensitive locations to steal passwords and other sensitive information.

Some security researchers say Shellshock will be "worse than Heartbleed" since bash allows hackers to explicitly inject code on remote computers, while Heartbleed only allowed them to passively listen in on server conversations they shouldn't have had access to.

Furthermore, it was possible to patch Heartbleed immediately once security experts disclosed its existence. (Though many sites weren't exactly fast off the mark.) Shellshock has been a different story so far.

Hundreds of law enforcement offices across the United States are handing out free copies of software that claims to protect children and families while they browse the Web. But according to an investigative report by the Electronic Frontier Foundation, this software is actually spyware, and can put your data at risk.

Called ComputerCOP, the software reportedly allows parents to view recently downloaded material, identify keywords like "drugs" or "sex," and enable a "KeyAlert" system that logs keystrokes to the hard drive, so that parents can see what their kids have been typing.

Parents install the software from a CD-ROM, and if they choose to enable KeyAlert, the system captures the surrounding conversation whenever one of the suspicious keywords or phrases is typed.

Outdated and complicated to use, ComputerCOP is also ineffective, according to the EFF report. Researchers found that the software doesn't accurately do what it claims—like identifying trigger words such as "gangs" in Web chat histories or in documents. What's more, it regularly flags documents that don't include any of the trigger words.

According to the EFF, the key logs are unencrypted when running on a Windows machine, and easily decrypted on a Mac. If parents choose to get emails regarding the key logs, which they can through the ComputerCOP software, the information is sent unencrypted to third-party servers, not only putting information at risk, but rendering HTTPS protection on websites useless. The EFF was able to copy passwords using KeyAlert with "shocking ease."

ComputerCOP's Clumsy Defense

Stephen DelGiorno, the head of ComputerCOP operations, told ReadWrite that ComputerCOP only captures 500 characters at a time when a trigger word is identified, and saves them on the computer's local hard drive to be viewed by parents later. But even DelGiorno was unclear about how secure the data is.

"I'd have to ask the programmers, I'm not 100% sure," DelGiorno said when asked whether or not key logs are encrypted on local hard drives. "I know you can't find it, but I don't want to say it's encrypted at this point."

"It’s no more dangerous than them sending any email from that computer to another computer," DelGiorno said. "But I’m not saying [encrypting data sent via email] is a feature we can’t go back and add."

About 245 law enforcement agencies—including sheriff's departments, police departments, and district attorneys' offices—have spent thousands in tax dollars to purchase the software and distribute it free to parents, without, apparently, checking the veracity of ComputerCOP's claims.

Apart from the security risk ComputerCOP has posed to an as-yet-unknown number of families, the New York-based company that distributes the software also touted false endorsements from the ACLU, the National Center for Missing and Exploited Children, and the U.S. Department of the Treasury, which has since issued a fraud alert. DelGiorno told ReadWrite that the company never said the Treasury endorsed the product, only that the government body approved the allocation of funding.

The EFF estimates anywhere from hundreds of thousands to one million copies of ComputerCOP were purchased by law enforcement, but because it's complicated to set up, and doesn't do what it claims to, many families might not be using it.

Shellshock, a bug that allows hackers to control a system remotely by inserting commands directly into environment variables, is a lot bigger than we originally thought. Security researchers have now found six vulnerabilities associated with the bug, including two discovered by Google security researcher Michal "lcamtuf" Zalewski.

At least one of them still affects Macs, security researcher Greg Wiseman told CNET. He ran a script on OS X Mountain Lion and found that it's vulnerable to CVE-2014-7186, a vulnerability that allows attackers to remotely mount denial-of-service attacks.

Wiseman did not say whether he'd found the vulnerability on systems other than Mountain Lion, but if you want to be sure about your own system, you can clone Hanno Böck's bashcheck testing script from GitHub—the same one Wiseman used for his trials.
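
Running the script takes only a couple of commands. A sketch—the repository path is the one Böck published, and the script's contents keep evolving as new Shellshock variants turn up:

git clone https://github.com/hannob/bashcheck.git

cd bashcheck

bash bashcheck

The script runs your installed bash against each of the known Shellshock variants and reports which ones, if any, it's still vulnerable to.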

Apple has maintained that the “vast majority of users” are not susceptible to the bug, only those who have customized their advanced Unix settings. Unless that’s you, it might be preferable to sit tight. With a new patch coming out—and then being found lacking—so many days in a row, it’s clear there’s only so much we can fix on our own.

No more command line input or complicated workarounds: Apple has released a downloadable patch for fixing the bash “Shellshock” bug.

The patch is available not only for OS X Mavericks v10.9.5, but also for older versions of Apple software: OS X Lion v10.7.5, OS X Lion Server v10.7.5, and OS X Mountain Lion v10.8.5. There is currently no fix for machines running test versions of Yosemite.

Last week, an Apple spokesperson said that “The vast majority of OS X users are not at risk to recently reported bash vulnerabilities.” However, the company acknowledged it was working on the bash patch released Monday.

Security researchers recently discovered that bash, a UNIX command shell and language included in OS X, harbors a 22-year-old vulnerability that allows hackers to sneak executable commands in through environment variables, with the computer being none the wiser. As researchers discover more and more related flaws, new, reinforced patches have been released nearly every day.

The bash shell is an omnipresent command-line interpreter used by default in Unix and Linux, and by extension, Apple’s OS X software. The shell itself is decades old, and it turns out the bug has been present for the last 22 years without detection.

Linux stewardship company Red Hat released a series of fixes to patch up the eight or so versions of bash that were vulnerable. On Friday, Red Hat released a second round of patches to resolve newly discovered security flaws, and those discoveries keep coming.

Shellshock exploits are spiking with the development of "wopbot," the first botnet designed specifically to target the bash bug.

At the moment, the only people who need to worry about patching the Shellshock bug right away are system administrators and people who have tweaked the advanced Unix settings on machines running OS X or Linux.

Bash, which stands for Bourne Again SHell, is a command-line interpreter that runs on Unix, Linux, and Apple computers. OS X Mavericks 10.9.5 shipped with Bash version 3.2, one of the seven versions of Bash vulnerable to the Shellshock bug.

To test if you are vulnerable, you can search for the Terminal program on your computer and input this line to be sure:

env X="() { :;} ; echo vulnerable" /bin/sh -c "echo stuff"

If your computer responds with "vulnerable" followed by "stuff"—well, you can guess what that means.

When I ran that test, my version of bash was vulnerable to the bug—or at least it was, before I patched it (more on that in a minute). However, if you're not the kind of person to mess around with advanced Unix options, Apple says the vast majority of its users shouldn't worry about being vulnerable.

"The vast majority of OS X users are not at risk to recently reported bash vulnerabilities," an Apple spokesperson told iMore. "Bash, a UNIX command shell and language included in OS X, has a weakness that could allow unauthorized users to remotely gain control of vulnerable systems. With OS X, systems are safe by default and not exposed to remote exploits of bash unless users configure advanced UNIX services. We are working to quickly provide a software update for our advanced UNIX users."

How To Patch Bash 3.2 On OS X

But what if you are an advanced Unix user? Or just a little too paranoid to take Apple at its word? If you've got some familiarity with the command line and some time on your hands, you can patch bash on your own.

First, make sure you have Apple's Xcode developer tool installed. You can check by typing "xcodebuild" into Terminal anywhere. If it says something like "xcodebuild: error: The directory X does not contain an Xcode project," then you already have it. If it says "Command not found," you need to download it.

Second, you'll want to make sure you actually are using bash version 3.2. To find out, type this into Terminal anywhere:

$ bash --version

If you get version 3.2.51, the default that comes with OS X, you're all set to follow these instructions to manually upgrade to the patched version, 3.2.52.

The following are instructions from Wonder How To with additional information added for potential pitfalls. In order, you'll want to type these commands into your Terminal window.

Update: There are a few more steps than I previously thought; thanks to commenters for pointing that out:
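
First, download Apple's bash source, apply the upstream patch, and build it. (A sketch of the commonly circulated recipe; these are the "curl" commands referenced in the troubleshooting notes below, and the URLs are the ones that worked at the time of writing.)

mkdir bash-fix

cd bash-fix

curl https://opensource.apple.com/tarballs/bash/bash-92.tar.gz | tar zxf -

cd bash-92/bash-3.2

curl https://ftp.gnu.org/gnu/bash/bash-3.2-patches/bash32-052 | patch -p0

cd ..

xcodebuild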

Next, you need to back up the current version of bash, just in case something goes wrong:

sudo cp /bin/bash /bin/bash.old

sudo cp /bin/sh /bin/sh.old

Then, you want to verify that the freshly built binaries report the patched version, 3.2.52. From the bash-92 directory, type these commands into Terminal:

build/Release/bash --version

build/Release/sh --version

Lastly, you want to copy the newly built binaries over the old ones:

sudo cp build/Release/bash /bin

sudo cp build/Release/sh /bin

Troubleshooting

If you downloaded Xcode specifically to patch bash and this is your first time using it, you will be prompted to enter your password and then to agree to its terms of service by typing "agree" into Terminal. Instead of dealing with that mid-fix, you may want to run "sudo xcodebuild" first to get those prompts out of the way in advance.

If the commands that begin with "curl" are taking a very long time—as in more than twenty minutes—they are probably about to time out. That's not abnormal; it's probably because a lot of people are working on implementing this patch at the same time.

If that happens to you, go into Finder and find the "bash-fix" folder in your main directory. Delete the folder, empty the trash, and then go back into Terminal to restart the patch process again.

Ideally, Apple will come out with a patch you can just download soon because this is a lot of work. But I feel a lot better seeing a blank response in Terminal when I check for bash vulnerabilities.

The vulnerability dates back to bash version 1.13 and extends all the way to the most recent version 4.3. It exploits the way bash handles environment variables. Hackers can tack on code to function definitions within these variables, which the bash shell will then wrongly interpret and execute as commands once it's invoked.

Since bash is the default shell for many Linux and Unix systems, you can imagine the havoc hackers could wreak with the “Shellshock bug.” Since this bug could allow malicious types to remotely execute code, it could theoretically let a hacker seize control of a server from afar.

However, much in the same way that we can’t tell if anybody exploited the Heartbleed bug, it’s too soon to tell if anybody has taken advantage of Shellshock. Update: security researcher Yinette has just found evidence of the first attacks made using the bug.

Now that there's a patch for bash all the way up to its latest version, 4.3, they won't be able to—at least on patched systems.

Do you need to patch your version of bash? Red Hat provided a test you can run. To check your system, type the following into the command line:
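env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

That one-liner defines a booby-trapped environment variable and then starts a new bash. If the output includes the word "vulnerable," your shell executed the code smuggled in after the function definition and needs patching; a patched bash refuses the import and prints only "this is a test," possibly preceded by a warning.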

For the second time in two iPhone releases, mobile-security firm Lookout has tested and bested the security of Touch ID.

Touch ID lets users unlock the iPhone 5S, iPhone 6, and iPhone 6 Plus just by putting their fingerprint over a sensor on the home button. By requiring a fingerprint to unlock the device and make purchases within the App Store, with Apple Pay, or through third-party developers, Apple is trying to make your data and information more secure.

So what happens if it’s hacked?

Lookout’s principal security researcher Marc Rogers hacked Touch ID on the 5S last year, and now he's done it again. Through a CSI-like process, he was able to unlock an iPhone 6 using a fake fingerprint made of glue.

With such a fingerprint facsimile in hand, an attacker could theoretically take over someone’s iPhone to make purchases or steal the owner's photographs, email, texts or other personal information. It sounds like a plot from a prime-time crime drama—and so it’s probably only a matter of time until iPhone fingerprint hacks hit the big screen.

While the thought of someone accessing your phone with a copied fingerprint might make you uncomfortable, don’t worry. Accessing a device the way Rogers did takes significant skill, time and effort. And, as we reported last year, a malicious attacker can’t use a finger that’s, well, detached from your body.

Rogers says consumers shouldn’t worry too much about the potential for duping the system.

“I don’t see this to be a risk to consumers in any way because I don’t think criminals are sophisticated enough,” Rogers said in an email interview. “It is difficult to make these fingerprints—think of Touch ID as being the equivalent of a door lock. It's there to stop the average criminal from getting access, or in the case of Touch ID, claiming they are you.”

Not only does a potential hacker need a clear print from the target, lifted using superglue fumes and fingerprint powder; they also need access to lab equipment to photograph and print the fingerprint, then cast it with chemicals and glue. Unless you have access to a crime laboratory, the equipment is prohibitively expensive.

Through the experiment, Rogers discovered that there's virtually no measurable improvement in the fingerprint sensors between the iPhone 5S and the iPhone 6, except that the iPhone 6 produced fewer "false negatives," meaning its sensor read prints more reliably.

Even though Rogers is impressed with the technology, he says Apple could do more to keep devices secure. Some improvements, he says, could include limits on the number of unlocking attempts a device will allow, a fallback to a passcode when the device hasn't been used for a specific amount of time, and Apple-suggested "best practices," such as using different fingers for different authentication tasks.

“I was hoping to see improvements in the Touch ID sensor that show Apple is working to come up with a solution that cannot be fooled as easily,” he said. “However, while I can't say Apple isn't working on this, I don't see any significant signs of improvement in this version despite the fact that it is now going to be used for payments.”

Lead photo by Selena Larson for ReadWrite; iPhone 6 and iPhone 5S image courtesy of Lookout

Apple said it will introduce more security alerts and better educate consumers about why and how to use iCloud in the wake of an iCloud breach in which hackers obtained personal and revealing pictures of female celebrities and posted them online.

CEO Tim Cook told the Wall Street Journal that the company will start alerting people through email and mobile push notifications when anyone tries to change a password on an Apple account, restore iCloud data to a device that isn't yet registered with the account, or when a new device logs into iCloud.

Cook also gave more information on what the company originally called a "highly targeted attack," describing the way hackers correctly guessed the celebrities' security-question answers.

Apart from beefing up security measures, Cook said the company needs to do a better job of providing information to consumers—it's not just the tech that needs a boost.

"When I step back from this terrible scenario that happened and say what more could we have done, I think about the awareness piece," he told the newspaper. "I think we have a responsibility to ratchet that up. That's not really an engineering thing."

Cook said Apple will begin using push notifications to alert users within the next two weeks.

LinkedIn is rolling out new security features that give you more tools for securing and controlling your information on the professional-networking site.

For instance, LinkedIn will now alert you when your password changes—and will give you a sense of where that request originated as well. When you change your password, you’ll not only get an email notification, you’ll be able to see which browser and operating system was used, as well as the IP address and approximate location of the computer or device used to request the change.

That warning to "change your password right away" may look a little tardy, but it actually takes you to a password-reset form that requests your email address and then sends you instructions.

Another privacy safeguard shows you where else you're logged into LinkedIn and lets you log out of sessions you’re not currently using. Additionally, the service now lets users export all their LinkedIn data—that is, your entire profile, post history and a variety of other activity. You can export your information here.

It's probably a good idea to do a cursory check of your privacy settings while exploring the new security features, especially if you haven't updated them in a while. The new features will make users more aware of where and how their data is accessed, which should help keep them, and their data, more secure on the site.

Apple now says the attack in which hackers rifled the iCloud accounts of female celebrities for nude or otherwise revealing photos wasn't its fault. The company calls the incident "a very targeted attack on user names, passwords and security questions" that didn't involve any underlying vulnerabilities in its cloud-storage service.

That may be so, and we'll know more as investigations continue. But Apple still deserves a good share of the blame—if not for security flaws in iCloud, then for making it unreasonably difficult for users to protect themselves against attacks on their iCloud accounts.

Apple offers some strong security protection for the Apple ID accounts that provide access to iCloud. But it doesn't "just work." [Update: It doesn't protect everything, either. More below.] And setting it up makes it much easier for you to accidentally lock yourself out of your Apple ID account—forever.

Apple To Users: Two-Factor This!

Apple's statement on the photo thefts contains this standard bit of boilerplate: "To protect against this type of attack, we advise all users to always use a strong password and enable two-step verification." That's sound advice, and Apple already requires fairly strong passwords by default.

Apple also offers two-factor authentication, which bolsters security by adding a second step to your login process. Basically, you register a mobile-phone number with Apple; after that, each time you log in, Apple texts you a security code to enter along with your password. (You can also get those security codes via any iDevice that you've registered with Apple's Find My iPhone service.)

This process ensures that no one can get into your account without access to your phone—a pretty good way of ensuring that it's you entering that password. Trouble is, Apple makes signing up for two-factor authentication way more difficult than it should be. And more than a little scary-sounding, too.

[Update: Turns out that even two-factor authentication doesn't offer as much protection as it should. At the moment, Apple only asks for an authentication code if you're changing your Apple ID settings, getting Apple ID support or buying something from iTunes, iBooks or the App Store using a new device. Specifically, accessing iCloud backups via new machines does not trigger an authentication request, though Apple is reportedly testing that feature.]

Start with the fact that Apple doesn't exactly advertise the existence of two-factor authentication. In its statement, Apple directs people to this knowledge-base page for more information, which turns out to be one of the only mentions of the security method on Apple's public site. You can also find it by digging around in the settings for your Apple ID; it's listed under the "Password and Security" tab.

Now the fun begins. Register your mobile number with Apple—and it's got to be a real number; Google Voice and similar Internet-based phone numbers will leave you hanging—and you'll get a notification that looks like this:

That's right. Apple makes you wait three entire days before it'll safeguard your account. Even then it makes you come back to your account settings to "continue setup."

I use two-factor authentication on just about every major Internet service that offers it, from Google and Microsoft to Facebook and Twitter, and I've never seen a waiting period like this one. I pinged Apple's press office for an explanation, but like folks who've signed up for two-factor authentication in the wake of the iCloud attack, I'm still waiting.

Looked at one way, none of this is particularly surprising. Apple was, after all, awfully slow to enable two-factor authentication in the first place; Google offered it for two full years before Apple got around to it.

You also have to recall that Apple only beefed up its security following the epic hack of Wired reporter Mat Honan, who lost the contents of his iPhone, iPad and MacBook to hackers who social-engineered their way into his Apple account and wiped his devices. Apple took a lot of heat for that hack, and may have overreacted a bit in order to ensure that no attacker can lock you out of your own account by activating two-factor authentication without your consent.

That said, this setup is a huge disincentive for users and a lousy way to improve account security. People motivated to secure their accounts in the wake of something like the well-publicized iCloud attack could easily have forgotten all about it three days later. Even if they haven't, signing back into your Apple ID account to complete the two-factor setup surely seems less urgent days after the fact.

Now That You're More Secure—Beware!

You've also just increased the odds that you could lock yourself out of your Apple account. Permanently.

When you activate two-factor authentication, Apple dispenses with the security questions that are your normal backup for recovering a lost password. It's an understandable, even a laudable, move on Apple's part, since security questions are often easier for attackers to guess than passwords. That's how a college student got into Sarah Palin's Yahoo email in 2008, and how a California man broke into the email accounts of more than 3,000 women, after which he posted some of the sexually explicit photos he found on their Facebook accounts. (Security questions may have played a similar role in the iCloud attack.)

Instead of security questions, however, what Apple gives you is a 14-character "recovery key." Apple encourages you to print it out, make copies and store them safely in your house or office. Small wonder: If you lose your phone and can't find this key, your Apple account is hosed.

If you have permanently lost any two of these items, you will not be able to sign in or regain access to your account. You will need to create a new Apple ID. You can do so on one of your devices or on the web at My Apple ID.

If you're the kind of person who can file away a recovery-key printout and have no fear of finding it later when you need it, this shouldn't bother you at all. For anyone else, though, the prospect of losing your app-purchase history, iPhone backups and photos, and possibly email might be unsettling—even terrifying.

You might even consider it reason enough to avoid two-factor authentication altogether. Which was surely not Apple's intent in setting up its security system this way, though it may well have been an entirely foreseeable outcome.

Updated 10:35am PT with more information on what two-factor authentication does—and doesn't—protect.

Lead image by Ivan Bandura; screenshots by David Hamilton for ReadWrite

The July announcement gave administrators some highly requested features: view-only permissions for shared folders, plus passwords and expiration dates for shared links.

This move is undoubtedly Dropbox’s way of answering critics who were unconvinced about the tightness of its security. With these changes, managers and authorized workers can fine-tune sharing controls, so freelancers, contract workers and other contacts don’t have unbridled access to company documents.

Lead photo by Adriana Lee for ReadWrite, smartphone image courtesy of Dropbox

Well, that's one way to bend the Internet to your will. Google on Thursday applied its not-inconsiderable leverage as Search King of the Universe to "encourage" websites to encrypt their traffic, thus protecting themselves and their users from hackers and other spies (hello, NSA!).

What Google is doing here is an unquestionably good thing. The decentralized Web has been remarkably lax in adopting simple security measures that safeguard your email, conversations, reading habits, and all other manner of personal details you'd rather not share with strangers.

Still, given the flexing involved, you could be forgiven for having a qualm or two about Google's power.

An Offer You Can Refuse, If You're Not Fond Of Breathing

What Google announced, specifically, is that it will begin favoring sites that encrypt their traffic in its search results. As offers go, this seems eminently reasonable and optional. Adopting Web encryption—technically, the HTTPS standard, also known as HTTP over TLS—is pretty straightforward; lots of sites (banks, many email services, Facebook, etc.) use it already. (ReadWrite, alas, does not.)

And no site really needs to be ranked highly in Google search results, right?

OK, scratch that. Google's offer here is perhaps more akin to telling the folks running websites that they can continue breathing oxygen so long as they adopt the encryption standards that Google favors. Because, of course, sites that don't adopt HTTPS will, over time, lose traffic to those that do.
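If you're wondering whether a site you run, or just frequent, has already made the jump, a quick check works from any terminal; example.com here is a stand-in for the site you care about:

curl -sI https://example.com/ | head -n 1

openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates

The first command simply confirms the site answers over HTTPS; the second prints who issued its certificate and when that certificate expires.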

And Yet, It's An Offer You Really Shouldn't Refuse

I'll stress again that this is a fine and proper thing for Google to do in this case. Web traffic is really only protected when all intended parties to a communication are encrypting it, so there's a collective benefit to expanding the use of encryption. Yet there's a collective-action problem in getting everyone to act together—which is why Google is applying the arm here.

Email is a classic example. You may think it's great that Gmail uses HTTPS to protect your connection when you log in to read your email. But if you send a message to your friend whose account is on the unencrypted service BrandXmail, your message won't be encrypted in transit. And thus it's fair game for anyone who happens to be spying—or even who's just scooping up large amounts of passing data for later analysis.
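You can check whether any given mail server offers encryption in transit with a single command; mail.example.com is a placeholder, and some networks block outbound port 25, so try it from a server if it hangs:

openssl s_client -connect mail.example.com:25 -starttls smtp

If the server completes a TLS handshake, messages to it can travel encrypted on that hop; if the command fails, everything headed there crosses the Internet in the clear.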

(A technical aside: HTTPS only protects the security of messages as they transit the Internet. It has nothing to do with whether data stored on cloud servers is locked up against snoops. That's an entirely different use of encryption, and how that's enabled is solely up to whoever is storing your data—unless, of course, you've encrypted it yourself before storing it.)

Here's how Google explains what it's up to:

For these reasons, over the past few months we’ve been running tests taking into account whether sites use secure, encrypted connections as a signal in our search ranking algorithms. We’ve seen positive results, so we’re starting to use HTTPS as a ranking signal. For now it's only a very lightweight signal—affecting fewer than 1% of global queries, and carrying less weight than other signals such as high-quality content—while we give webmasters time to switch to HTTPS. But over time, we may decide to strengthen it, because we’d like to encourage all website owners to switch from HTTP to HTTPS to keep everyone safe on the web.

You can read this as Google starting off with a few light taps on the kneecap before breaking out the lead pipes. Just remember: It's for our own good.

About half of the 50 most popular Android apps have vulnerabilities, and the reckless reuse of code libraries is to blame, according to the researchers who uncovered the Heartbleed security bug.

Codenomicon, the IT research firm that was first to publish findings about the OpenSSL vulnerability it dubbed “Heartbleed,” reports that Android app developers often aren’t aware of the bugs they’re propagating when copying code from third-party libraries.

The company will reveal the details of its findings—including the compromised Android apps—at the Black Hat USA security conference Aug. 6-7 in Las Vegas. (Codenomicon did not return ReadWrite's request for comment.)

Why Recycled Code Makes Sense

The first rule of programming is to not reinvent the wheel. As a result, many developers recycle open source software to handle cryptography for them. According to Chester Wisniewski, a Senior Security Advisor at Sophos, it makes little sense for them to do it themselves.

Most app builders intent on building a cool app don't have the remotest idea how to make a cryptographic library, Wisniewski told ReadWrite. “App builders depend on shared code because every coder can’t be familiar with every type of code in the world.”

When app builders do try to create new code, they often create new holes, Wisniewski said, pointing to WhatsApp, the chat app Facebook is acquiring for $19 billion. When WhatsApp developers initially tried to create their own cryptographic code, their lack of security knowledge repeatedly left the chat app compromised in new and alarming ways.

“The flaw in OpenSSL, while scary, didn’t result in anything bad happening,” said Wisniewski. “The IT community came together quickly. The alternative [to open source software] is 25 different kinds of brokenness like with WhatsApp.”

Reaching A Compromise

Creating one’s own cryptographic library is much more work than using recycled code, with even less effective results. So that’s probably not what Codenomicon will suggest when it presents its findings at Black Hat.

Instead, Codenomicon’s chief security specialist, Olli Jarva, told ITnews that he advises developers not to see open source as a “free lunch.”

“We have to take care to test well enough the libraries we use so we can be confident they are safe enough to be used,” he said.

In other words, developers ought to not only be familiar with the libraries they’re implementing; they also should keep them up to date and continue to patch them. Which they have little incentive to do, bitterly writes programmer Marco Arment of the Apple App Store:

“Top lists reward apps that get people to download them, regardless of quality or long-term use, so that’s what most developers optimize for… Quality, sustainability, and updates are almost irrelevant to App Store success.”

Assuming the best of intentions on the part of developers, one solution might be to use smaller, lighter libraries. It’s inevitable that the more code you use, the more bugs you get. Wisniewski suggested that most app developers can opt out of OpenSSL in favor of lighter cryptography libraries like Google’s BoringSSL.

“OpenSSL is a jack of all trades that provides a lot of services,” he said. “When you only need one tiny secure connection to a website in your app, you don’t need that giant lump of code. All of a sudden you’re getting all these vulnerabilities for features you’re not even using. Choose slimmer, lighter libraries for only what you need; don’t throw in everything but the kitchen sink.”

ReadWriteBuilders is a series of interviews with developers, designers and other architects of the programmable future.

For a 100-person company founded in 2009, the tech firm CloudFlare certainly seems to have an outsized impact on the Internet.

Shortly after the Heartbleed bug became public knowledge on April 9, CloudFlare decided to revoke all the digital-encryption SSL certificates it managed—a move that would prevent hackers from stealing digital identities from Web servers by exploiting Heartbleed. When it did so, it caused a dramatic spike in such revocations.

CloudFlare's primary business is to both speed up and act as a sort of digital bouncer for its client sites. It does this by helping them deliver their information more efficiently and by sheltering them from the Internet's bad guys—hackers, spammers and scammers who try to knock sites offline via distributed denial-of-service attacks.

In the process, it's also managed to bring advanced site-management tools—the kind of things that previously only companies like Google could afford—to the masses.

CloudFlare CEO Matthew Prince co-founded the company after bouncing through a number of startups and attending both law school and business school. I spoke with him about how getting sued by the porn industry got him started, how he was a lawyer for a day, and the role he sees CloudFlare playing as cloud computing continues its astronomical growth.

What follows is a lightly edited transcript of our conversation.

Back When The Web Was "A Fad"

ReadWrite: You describe yourself as the storyteller. How technical are you?

Matthew Prince: When I was seven, my grandmother gave me an Apple II Plus. I grew up in Park City, Utah, and my mom used to sneak me into computer science classes. When I got to college, I was pretty competent as a computer programmer, and got bored in the computer science program fairly quickly.

In 1992, I was technical enough that the school spotted that. Along with two other students, I became one of the campus network engineers. We were building out the network across the campus. Back then, I was installing the switches, running cabling, and learning how the underlying network worked.

The other thing that was fortuitous: in college, a couple of us had started an electronic magazine. There was no World Wide Web in 1992, so we used a programming language sold by Apple called HyperCard. It was object-oriented, one of the forgotten Apple technologies that was way ahead of its time. We made this interactive magazine with HyperCard stacks. We’d email it on campus. The school loved it. It showed how innovative they were.

The apps would get so large that they would actually crash the mail server. The school kept buying bigger and bigger mail servers to accommodate it, and we ended up making more and more complicated versions of the magazine.

They finally came to us and said, "This isn’t going to scale, but let us introduce you to some organizations." One was a printer company that had invented a technology called PDF, which was of course Adobe. The other was a group of Ph.D. students at the University of Illinois who had this thing called a browser.

I remember we would write articles and we couldn’t get anyone on campus to read them, but we’d get these emails, in broken English, from Japan. I remember saying to one of the other guys, why do we care if people in Japan are reading this? It was one of the most naive and stupid things I could have said. I wrote my college thesis on essentially why the Internet was a fad, which is incredibly embarrassing.

I’m technical enough that I know how this stuff works. When we started CloudFlare, I was writing code. I think I have three lines of code left in the code base. We hired people many orders of magnitude better than I am. Lee Holloway [CloudFlare co-founder] is the technical genius, and Michelle [Zatlyn], who is incredible, is the chief operating officer of our organization. The three of us together create a pretty solid foundation.

Lawyer For A Day

RW: You went to law school, and then worked as a lawyer for just one day?

MP: When I got to the end of college, I had job offers at these companies that I thought had no future: Netscape, Yahoo, a company called BBN, [and] Microsoft, for their online service. I thought this wasn’t going anywhere, so instead I went to law school. My friends were building dot-com companies that were some degree of successful, and I went to Chicago to study law.

In 1999, between second and third year of law school, I moved to San Francisco for the summer and worked at a law firm called Latham and Watkins. Over the course of that summer, I helped take six companies public. I went back for the third year of law school, and that was when the bubble burst. Latham called and said, “Good news. You still have a job. We don’t have room for securities lawyers, but we have plenty of room in our bankruptcy practice.”

I had accepted the signing bonus and had started to do some work for them. One of my law professors said hey, my brother is starting a company, he’ll match your salary and give you some stock. I stayed in Chicago and worked for this startup [a company called GroupWorks in the insurance-benefits brokerage market].

RW: What inspired you to go back and get an MBA?

MP: The short answer is I went to business school because I got sued by the porn industry. After GroupWorks, I did well enough that I could mess around for a while. I came up with an idea for an anti-spam technology.

Unspam is like the “do not call” list, but for email. The business plan was absurd. We were going to help pass a bunch of [anti-spam] laws all around the country, and build a technology that enables these laws, and then sell it to state governments. But instead of them paying us directly, they'd charge a fee, and we’d take a share of that fee.

I remember pitching that to venture capitalists. They’re like, you’re insane. That’s exactly what we did. So we worked with state legislators around the country to pass these laws, and then we ended up winning technical services contracts. Lee Holloway was our first technical hire at Unspam.

The pornography industry guys argued it was a violation of their First Amendment rights. They were arguing that they had the right to send adult material. They sued the state of Utah, and we were a contractor to the state, so they sued us as well.

The lawyers said, "You have a good case, but it will take three years to resolve. During that time, lay low." I sent off applications to eight different business schools, and ended up getting rejected by seven of them, and got into Harvard.

Pahk The Stahtup In The Hahvahd Yahd

RW: And that’s where CloudFlare really got its start?

MP: I continued to run Unspam while I was in business school. Lee was continuing to work for Unspam. As a final project for our last semester, Michelle and I ended up entering a business plan competition, and the business plan was CloudFlare’s business plan. It’s remarkable to read it and see that we’ve basically done what we said we were going to do.

At the same time, Lee was running out of hard technical problems at Unspam. He was getting recruited by Facebook and Google. I always wanted him to be on my team.

I called him a couple of weeks later and said, "What if we design a service that essentially sits in front of the entire Internet, and we will build something that can not only protect websites from attack, it will make things faster?"

I knew that in order to get Lee excited, the project had to be huge. Lee needed something that was really, really big. I spent 30 minutes on the phone pitching it to him. At the end he was silent for about a minute, then he said, okay, that will work. So Lee was on board. Michelle is the operations person, I’m sales and marketing and storytelling, and that ended up being the combination that allowed us to build what we built.

Services Only A Google Could Afford

Five percent of all Web traffic passes through our network. We add 5,000 new customers every day, ranging from teeny little blogs to Fortune 500 companies. International governments use us, the U.S. government uses us, commerce companies like Gilt use us. One out of 21 sites you go to online is a customer, and their traffic passes through our network. We have 25 facilities scattered across North and South America, Europe and Asia, and the plan is to open 50 more in the next year.

The network keeps growing bigger and bigger because we’re offering a compelling value proposition. It takes about five minutes to sign up. Once installed, you’re going to be at least twice as fast and protected against a whole range of attacks, and it decreases the load on your server substantially.

We’ll provide resources that previously only a company like Google could afford, with data centers scattered around the world. We’ll make that easy and affordable and scalable for anyone putting content online, whether it’s through traditional websites, modern web applications, or the back end of mobile apps. We make all that faster and better.

Begging And Building Frankenservers

RW: What was the first technical challenge that you wanted to address with CloudFlare?

MP: CloudFlare got born in part out of an open source project Lee and I had started called Project Honey Pot. It’s the largest online community tracking fraud and abuse. It has over 100,000 participants in 190 countries around the world.

When we were first starting CloudFlare, after we graduated from school and moved to California, we didn’t have any money, and we needed some way to build the first prototype. Amazon Web Services was just getting started at the time, and we were trying to figure out how we were going to get servers.

Michelle said, "You talk about how loyal this Project Honey Pot community is. What if we just ask them if they have some spare servers lying around?" It was an absurd thing, but we started to think, why not?

We had all the zip codes of members, and we emailed every Project Honey Pot member that was within 50 miles of San Francisco: “We’re looking for servers to be able to build a prototype on, do you happen to have any that are laying around?”

We got an astonishingly high response rate. So Michelle piled into her Volkswagen Jetta and drove around to all these different people, and did two things. She’d pick up the servers and load them into her car, and ask the donors what they wanted CloudFlare to be. It was our initial market research. Those Project Honey Pot members were the first CloudFlare users.

None of [the servers] worked, but we were able to cobble parts together to create two functional servers, and built the first prototypes—two kinds of Frankenservers.

We needed to build a demo to show investors, and Lee didn’t want to build it. Instead he was focused on this little piece of code that would cache requests for one second. I said, "Seriously, that’s the most important thing you could be working on?"

He said, “Trust me, in three years, you are going to be happy I built this.” Lee is this technical genius who thinks about problems five years in advance. Almost three years from the day he said that, we got some of our first denial of service attacks, and the only way our infrastructure could stand up to that was thanks to layers of caching. That caching layer that he was building at the time turned out to be this piece of our foundation which has allowed us to continue to scale.

Expanding The Taxonomy Of The Cloud

RW: Do you build your own data centers, or rent space in others?

MP: We build our own equipment. We don’t pour foundations and build the buildings, but very few companies do. Even Facebook runs out of other facilities sometimes.

We’re not running on top of Amazon or Rackspace, though those are partners of ours. Instead, we are putting our own equipment in buildings scattered all around the world, and increasingly, putting them in the end-ISP facilities to ensure we have the most coverage and can be as fast as possible.

People talk about the cloud, the taxonomy of the cloud. At the base is what I call the store-and-compute layer. That used to be companies like HP, EMC, Dell and Sun—companies that made the big boxes that held your data and processed your data. Increasingly now it's AWS [Amazon Web Services], Rackspace, Google, Microsoft with Azure and VMware building out their own clouds. So when people talk about cloud services, often they’re talking about the store-and-compute layer, only where you can rent time on machines you don’t own.

We tend to be great partners with all those store-and-compute service providers. That’s not what we do.

The layer on top of store and compute is the application layer, which used to be run by these big bundled suites, from companies like Microsoft, SAP, Oracle. Now those bundles are getting unbundled into their component parts: Salesforce does CRM, Box does storage and collaboration, Google does email, Workday does ERP, Netsuite does financial accounting.

All of those used to be in the SAP bundle. Now, instead of buying software, you're buying those individual components.

Salesforce calls itself a cloud company. It’s not the same as Amazon; it’s a cloud services company living at that application layer.

[These companies also] tend to be partners of ours. Oftentimes a big financial institution wants to use Salesforce. The problem is, if it’s not software running in their own data center anymore, they need to have something like CloudFlare if they want a layer of protection in front of it, because they can’t call up Salesforce and ask them to put in a firewall.

That leads to the third tier, what I call the edge tier. Previously the edge used to be a whole bunch of boxes that would live at the top of your rack. Those boxes would be anything with the word firewall in it, from companies like Check Point. Increasingly it’s FireEye, or Imperva, or Palo Alto Networks. These are all firewalls that sit at the edge of your network.

And it’s companies like F5 Networks that do load balancing, WAN optimization, performance caching, DDoS mitigation—these are all boxes that, traditionally, you’d have to buy and put in your server rack. But increasingly, there is no rack.

Customers, however, still need this same functionality. That’s what CloudFlare is doing. We're taking all that functionality—firewall, DDoS mitigation, web apps, load balancing, caching—and deploying it as a service, instead of it being a box, or a series of boxes, you have to buy.

So the way we work with Microsoft or Google or Amazon is that they’re providing the store-and-compute layer, or the application layer, and CloudFlare is providing the edge that sits in front of that. Instead of doing it as hardware, we’re doing it as a service.

RW: Isn’t that something Amazon and others would want to build into their cloud offerings over time?

MP: Yeah, potentially. If they’re using all the Amazon services, people tend to use things like Amazon's Elastic Load Balancer, which is similar to the load balancing we have, or its DNS service, called Route 53.

We have those services too, but we’re finding, in a lot of cases, that we’re a lot better. Our DNS service is faster and more performant than Route 53 or Google's DNS service, so when you compare apples to apples, we do extremely well.

Second, we’re extremely focused on this. Amazon is a great company, but it's not entirely dedicated to making sure the publisher’s experience is as good as possible. We also end up being significantly more cost-effective over time. Most people who put us on cut their AWS bill, often by as much as 50%.

GZip It—GZip It Real Good

RW: What kind of relationship do you have with open-source projects?

MP: There’s a piece of software called gzip. It’s the compression software built into your browser. It’s probably one of the most common code paths on the internet. Gzip takes a web page and reduces it in size by as much as half.
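[You can watch that claim play out from any terminal; the URL below is a placeholder for any text-heavy page:

curl -so /dev/null -w "%{size_download} bytes uncompressed\n" http://example.com/

curl -so /dev/null -H "Accept-Encoding: gzip" -w "%{size_download} bytes gzipped\n" http://example.com/

The second request asks the server for gzip the same way a browser does; on most HTML pages the downloaded byte count drops by half or more.]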

Because it’s running on every single request, it is one of the things that takes up the most CPU on our systems. So we have an engineer who left Apple to come work for us, and one of the first things he did was rewrite gzip. I was skeptical, because it’s an open source project—Google uses it, Facebook uses it—so how in the hell are we going to make gzip better?

He goes away and he comes back, and he has massively increased the performance of gzip. We have started to roll that out across our network, which saves us a huge amount of processing time and allows us to offer our customers significantly faster performance.

One of the things I’m proud of is that we turn around and contribute improvements like that back to the open source community. In the next few months, we’ll roll out our new improved gzip. We’ve been running the calculations, and the power savings alone (how much power the world would save if everyone adopted this new version of gzip) are just astronomical.

We’re doing something that has extremely wide impact. It touches so many organizations around the world, and our mission is to build a better Internet. That sounds crazy at some level, but we can do things at our scale that are pretty substantial.

About two months ago, we defaulted all our customers to IPv6 routing, so even if their backend is on IPv4 still, we can make sure the front end will support an IPv6 connection. In doing that, we increased the size of the IPv6 web in one day by something like 5%.

One of the things I’m most excited about: we have a team that’s very close, maybe by the end of this quarter, to being able to turn on SSL-encrypted connections by default for even our free users. The amount of engineering work that goes into something like that is pretty substantial. There are only about two million SSL-protected websites on the Internet, and the day we switch that on, we will double the number of protected websites on the entire Internet.

RW: What other ways can the Internet be improved?

MP: There’s a Google protocol called SPDY (speedy), and SPDY makes transferring data over the internet just a ton faster, especially for mobile devices. It’s hard for individual server operators to install, so we just enabled SPDY by default.

If your server is in Texas and a visitor comes to your website from Sweden, that visitor will first hit CloudFlare’s data center in Stockholm and connect via SPDY. The dynamic content, we still need to go fetch from the server back in Texas, so we’ll open a connection back to Texas and hold that connection open.

We also have a differential compression technology called Railgun. If you’re on even a highly dynamic page like Facebook, it has some content that’s personalized to you, but there’s a whole bunch of that HTML that’s the same for you, me, and everyone else. Sending and resending all that content is just wasted bandwidth; what you really want to send is the stuff that changes.

So Railgun is differential compression for that long haul between Texas and Sweden. The performance is a lot better.
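[Railgun's protocol itself isn't public, but the arithmetic behind Prince's claim is easy to demonstrate in a terminal. This toy sketch builds two pages that share almost all of their markup, then compares compressing a whole page against compressing only the difference between them:

seq 1 1000 | sed 's/^/<li>item /' > shared.html

{ echo "user: alice"; cat shared.html; } > page1.html

{ echo "user: bob"; cat shared.html; } > page2.html

gzip -c page2.html | wc -c

diff page1.html page2.html | gzip -c | wc -c

The first count, the full compressed page, runs to a few kilobytes; the second, the compressed delta, to well under a hundred bytes. The numbers aren't Railgun's, but the ratio is the point.]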

Post-Heartbleed, we’re rewriting the underlying [communication and security] protocols so the Internet runs faster. Because we are a larger and larger portion of the edge of the network, there are things we can do, things that Google has done for its own properties. If you are not Google, there’s no way to do that unless you use CloudFlare. You don’t have to be Google to be fast, safe and secure.

RW: Who do you consider your main competitor?

MP: Google doesn’t compete with us now, but will increasingly provide some services similar to ours. Amazon already provides some services that overlap. And there’s Akamai, which is increasingly creating a bundle of services that competes with us.

We each have different strengths and weaknesses. My hunch is there will be somewhere between two and six providers offering this suite of services, and I think we have a good shot at being the leader.

Through The Backdoor

The three backdoors Zdziarski highlighted in his talk are present in 600 million iPhones and iPads, and are capable of accessing a great deal of personal information and then dumping it off the phone to a "trusted" device, such as the desktop computers many iPhone users plug their devices into. The backdoors can only be accessed via such trusted devices, limiting the danger of exploitation—although that trust mechanism itself could also be spoofed by a determined attacker.

Until last night, Apple had apparently never described these iOS services publicly. Zdziarski reported the services do not notify users when they begin accessing personal data; do not require the consent of users if they access personal data; and cannot be turned off by users.

In a support document released Tuesday night, Apple described the three backdoors as "diagnostic capabilities to help enterprise IT departments, developers, and AppleCare troubleshoot issues" and offered a few details about each:

1. com.apple.mobile.pcapd

pcapd supports diagnostic packet capture from an iOS device to a trusted computer. This is useful for troubleshooting and diagnosing issues with apps on the device as well as enterprise VPN connections. You can find more information at developer.apple.com/library/ios/qa/qa1176.

2. com.apple.mobile.file_relay

file_relay supports limited copying of diagnostic data from a device. This service is separate from user-generated backups, does not have access to all data on the device, and respects iOS Data Protection. Apple engineering uses file_relay on internal devices to qualify customer configurations. AppleCare, with user consent, can also use this tool to gather relevant diagnostic data from users' devices.

3. com.apple.mobile.house_arrest

house_arrest is used by iTunes to transfer documents to and from an iOS device for apps that support this functionality. This is also used by Xcode to assist in the transfer of test data to a device while an app is in development.

Apple's support document acknowledges that a third party can access these services wirelessly via Wi-Fi from a trusted device, as Zdziarski had previously reported. It neither confirms nor denies Zdziarski's finding that these three services operate without the knowledge or explicit consent of the user.

Apple also claims a much more limited role for the file_relay service than Zdziarski found, saying it is used only for "limited copying of diagnostic data from a device." Zdziarski, by contrast, reported that file_relay has access to 44 data sources within an iPhone, including highly personal information such as call records, SMS text messages, voicemail, GPS logs and more. Such personal information has little in common with diagnostic data in most cases.

In a blog post reply, Zdziarski criticized Apple for being "completely misleading" in some of its descriptions and for failing to address his other concerns such as user consent and notification. But he also acknowledged that Apple will probably begin fixing those issues behind the scenes:

All the while that Apple is downplaying it, I suspect they’ll also quietly fix many of the issues I’ve raised in future versions. At least I hope so. It would be wildly irresponsible for Apple not to address these issues, especially now that the public knows about them.

Security researcher Jonathan Zdziarski started a firestorm over the weekend when he presented findings that Apple has—apparently deliberately—created undocumented "backdoors" in its iOS operating system that third parties could use to siphon personal data from iPhones and iPads under certain circumstances without notice, much less consent of the user.

The backdoors he describes aren't the sort of thing your average cybercriminal can easily exploit. There's no evidence that they've been used for identity theft or any sort of related criminal attack on iPhone or iPad data. At least so far, that is.

On the other hand, if you think the NSA or regular law enforcement might be tracking you, then Zdziarski might have described some of the backdoors by which their agents could be delving into your digital life.

Beyond that, they're an intriguing mystery—one that Apple has yet to explain.

Hold on a moment. What's a backdoor?

Like the word suggests, a backdoor is a simple or unguarded route into an otherwise secure system. Think Matthew Broderick's character in War Games sussing out a way to access WOPR by guessing a backdoor password specific to the system's creator (his dead son's name—a classically terrible password, by the way).

How would the NSA (or whoever) make use of these backdoors?

Zdziarski, a forensics expert and one-time iOS jailbreaker who's written several books about iPhone development, described three iOS services that appear to have an unusual degree of access to raw and potentially sensitive data gathered by or stored on the phone. These services are also apparently designed to collect that information, package it and dump it out upon request, either via USB or wirelessly over Wi-Fi.

These features are undocumented, meaning that they're not described by Apple in the sort of detail it normally provides to third-party developers who might make use of them. According to Zdziarski, however, they are installed and active on roughly 600 million iOS devices. They provide no indication that they're operating, and there's no way for users to turn them off.

Perhaps most ominous, these services can send out unencrypted information even if users have chosen to encrypt the data they back up through iTunes. Zdziarski calls this behavior "bypassing backup encryption" and considers it deceptive at best.

That all sounds pretty panic-worthy. Isn't it?

Turns out there's a catch. These services only work when an iPhone or iPad is "paired" to a trusted device, such as the computer you run iTunes on. (Bluetooth pairing with, say, a set of headphones doesn't count.) That greatly limits the ability of any attacker to exploit these services and rifle through your iPhone.

It is, however, possible to spoof that pairing. Every pairing generates a set of cryptographic keys and certificates designed to identify trusted devices to one another—and on the iPhone side, those keys and certificates are never deleted unless the user does a full restore or a factory reset on the device. Prior to iOS 7—the version used by most iPhones—pairing happened automatically without any user intervention. (iOS 7 now requires the user to approve pairing with a "trusted" device.)

As Zdziarski put it in a March 2014 technical journal article describing his findings: "[E]very desktop that a phone has been plugged into (especially prior to iOS 7) is given a skeleton key to the phone." And that skeleton key is transportable, because a sufficiently motivated attacker can copy pairing keys and certificates from one computer to another.
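You can see those skeleton keys for yourself. On a Mac, pairing records live as property-list files in a fixed directory (the OS X location Zdziarski describes; reading it may require administrator rights):

sudo ls -l /var/db/lockdown/

Each .plist file there corresponds to one device the computer has paired with, and copying one to another machine is exactly the key transfer described above.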

Who would go to all the trouble of tracking down those keys and copying them?

Well, the police might, if they thought you were involved with organized crime. So might the NSA, the FBI or a number of other intelligence agencies. And of course some of these outfits could also create seemingly innocuous "paired" devices such as an alarm clock or charging station that would run malicious code once connected to your phone.

As noted above, though, it's not the sort of thing your average Belarusian hacker is likely to use to take over your phone any time soon.

OK, tell me more about these undocumented services. What are they and what do they do?

In a presentation he made at the Hope X hacker conference in New York this past weekend, Zdziarski focused on three particular services known by the technical names com.apple.pcapd, com.apple.mobile.file_relay and com.apple.mobile.house_arrest. (You can see the slides from Zdziarski's talk—all 58 of them—here.)

The pcapd service starts what security professionals call a "packet sniffer" on an iOS device—basically, software that records all data traffic to and from your iPhone. It's installed by default on all iOS devices, and operates whether a phone is in "developer mode" or not, suggesting that it's not a developer-specific feature. And it gives the user no warning when it's activated.

"This means anyone with a pairing record can connect to a target device via USB or Wi-Fi and listen in on the target’s network traffic," Zdziarski wrote in his March paper.

The file_relay service, according to Zdziarski, exists to vacuum up large volumes of raw data from particular sources on an iPhone and then to dump it out in unencrypted form. Several years back, file_relay appeared fairly innocuous. In iPhoneOS 2.0 (an early predecessor to iOS), it was only able to access six data sources, including "Apple Support," "network," and "CrashReporter."

By iOS 7, however, file_relay's reach had expanded to include 44 data sources, many of which specifically address the owner's personal information. These include the address book, accounts, GPS logs, maps of the phone's entire file system, a collection of all words typed into the phone, photos, notes, calendar files, call history, voicemail and other records of personal activity that have been cached in temporary files.

Small wonder Zdziarski calls file_relay "the biggest forensic trove of intelligence on a device's owner" and a "key 'backdoor' service" that provides a significant amount of data that "would only be relevant to law enforcement or spying agencies."

The third service, house_arrest, originally allowed iTunes to copy documents to and from third-party apps. Now, however, house_arrest has access to a much broader array of app-related data, including photos, databases, screenshots and temporary "cached" information.

Couldn't these services have legitimate functions?

Maybe, although it's difficult to understand why they'd have such apparently untrammeled access to so much information. That's a pretty major security failing under any circumstance.

Zdziarski also runs through a number of possible explanations—that they might be used in iTunes or Xcode (Apple's iOS app-development environment), or in developer debugging, or by Apple support, or in Apple engineering debugging—and shoots each one down in turn.

It's very difficult to construct an explanation for legitimate, non-surveillance uses of services that aren't documented, that bypass backup encryption, that have access to otherwise inaccessible user data and that give the user no notification that they're accessing and dumping out information. Oh, and whose code Apple has maintained and updated across several versions of iOS.

Given Apple's historical issues with lack of cooperation and infighting between technical teams, it's also conceivable that these services grew without much direction at all, almost by accident, as engineers struggled to solve other technical problems without writing a whole bunch of new code. Call this the it-ain't-pretty-but-it-works explanation.

Is it plausible? Your guess is as good as mine. And it's still a major security fail.

What does Apple have to say about all this?

In classic fashion, not very much. Apple didn't get back to me when I emailed it for comment, although I'll keep trying.

Apparently, however, it did email a statement to Tim Bradshaw, a reporter for the Financial Times, who tweeted it:

The statement, of course, is rife with ambiguity. Is Apple referring specifically to pcapd, file_relay and house_arrest here, or just issuing a general statement about its diagnostic functions? (Update: An Apple spokeswoman got back to me post-publication with a copy of the statement and news of its first documentation of these backdoor services.)

And it fails to address most of Zdziarski's basic questions. If these services are diagnostic functions, why aren't they documented? Why do they operate even if users haven't agreed to send diagnostic information to Apple? Why can't users deny their consent to having information taken off their devices this way? Why can't users turn these services off?

It is certainly interesting that Apple feels compelled to deny that it has even "worked with any government agency from any country" to engineer backdoors into its products or services. Especially since Zdziarski hadn't accused them of such.

Zdziarski, in his blog post, put it this way:

I understand that every OS has diagnostic functions, however these services break the promise that Apple makes with the consumer when they enter a backup password; that the data on their device will only come off the phone encrypted. The consumer is also not aware of these mechanisms, nor are they prompted in any way by the device. There is simply no way to justify the massive leak of data as a result of these services, and without any explicit consent by the user.

I also contacted Zdziarski for comment, but haven't heard back. (Update: I did hear back from Zdziarski, although he didn't have time to say much.)

Updated on Wednesday, July 23 at 10:08am with, well, updates noted in the text.

There's good news for anyone who thinks Chromecast suffers from a severe lack of Rick Astley, although it's bad news for anyone concerned about the security of Google's TV stick. Word’s spreading about a Raspberry Pi–based gadget that can seize control of the device, making it relatively easy to Rickroll Chromecast users.

Created by security researcher Dan Petro of Bishop Fox, the appropriately dubbed Rickmote Controller takes its name from the popular Web prank, which involves getting unsuspecting users to click a link that plays Astley’s “Never Gonna Give You Up” music video. Petro first unveiled this project last October at San Diego's ToorCon hacker convention, but the hack has recently gotten a new boost of attention thanks to a recent mention on the Raspberry Pi blog.

In this case, the Rickmote can take over a Chromecast and send those luscious baritone notes to a nearby Chromecast-connected TV. Here’s how.

Chromecast, All Your Streams Are Belong To Us. XO, Rickmote

The Rickmote gizmo works by sending a flurry of “DEAUTH” commands to the Chromecast, which effectively knocks it off the network and puts it into configuration mode.

While in this default setup mode, Chromecast broadcasts its own Wi-Fi signal, making it easy for the Rickmote to connect and direct the TV stick to do its bidding—like blasting an iconic 80s pop song to an unwitting group of pals.

The Rickmote, in action

Petro concocted the Rickmote to prove a point—that he could compromise Chromecast’s security with a few easily gotten tools. He cobbled together a Raspberry Pi (a credit card–sized mini computer), a couple of Wi-Fi radios, a touch display and Aircrack, an open-source Wi-Fi cracking application. End result: A Rickmote that, he says, can discover any nearby Chromecast, push it off its network, and pipe those sweet pop vocals (or anything else you want) to other people’s TV screens.
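None of those tools is exotic. The deauth flood at the heart of the trick maps onto standard aircrack-ng commands; a rough sketch, in which the interface name and MAC addresses are placeholders and exact interface naming varies across aircrack-ng versions:

sudo airmon-ng start wlan0

sudo aireplay-ng --deauth 100 -a AA:BB:CC:DD:EE:FF -c 11:22:33:44:55:66 wlan0mon

The first command puts a Wi-Fi radio into monitor mode; the second fires 100 deauthentication frames at the Chromecast (the -c client address) through its access point (the -a address), booting it off the network until it falls back into its open setup mode.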

It’s a hilarious scenario, but there’s a serious issue here, too. The vulnerability that makes Rickcasting possible may not be unique to Chromecast. It seems logical that any gadget with a simplified setup that broadcasts its own Wi-Fi signal without a password, as Chromecast does, may also be vulnerable to this exploit.

And you just know that things will take a turn for the creepy once some jerk sends adult material to a room full of kiddies.

How Big A Flaw Is This, Really?

Even worse, Petro told Wired recently that he thinks the bug might let Chromecast attackers extract the owner’s Wi-Fi credentials, which would compromise a user’s entire network. “It would be a nice way of scraping out the password to a lot of people’s networks,” he said. That would be an enormous flaw, though, and he stops short of confirming it.

My sources tell me that level of security breach isn’t possible with this hack. And it’s worth remembering that this particular Chromecast hijack can only be performed when the Rickmote and Chromecast are in close proximity, which limits the potential for damage by far-flung strangers.

If you want to hack together your own Rickmote (or ColdPlaymote or Minajmote, et al.)—because, you know, science—Petro outlines the process pretty clearly in the following video and even offers a GitHub repository for the source code here.

Google wouldn’t comment on this story for ReadWrite. But when Petro alerted the company, he said the tech giant basically shrugged. The company told him it was a key part of Chromecast’s easy setup, he said, and Google seemed reluctant to monkey around with—i.e., fix—that.

Meanwhile, elsewhere on the Web, Rick Astley’s music video apparently did give up. The original viral YouTube video was just pulled down for unspecified reasons. But take heart, pranksters: Vevo posted the video on YouTube in 2009, and that version, with its nearly 85 million views, is still here for your Rickrolling pleasure.

I wish I had been able to read about women in tech in fashion magazines when I was a teenager. Maybe then I would have decided to become a woman in tech, too.

That was my first thought on reading Elle Magazine’s profile of Parisa Tabriz, "Meet Google’s Security Princess." Tabriz, a white hat hacker who predicts how criminals will try to break into Google's data centers, is no stranger to technology and business publications. But for her to appear in a women’s magazine is a novelty.

To its credit, Elle’s profile of Tabriz is lengthy, nuanced and portrays her as an intelligent and capable security engineer. But parts of it also made me cringe. To see what I mean, join me for a close reading.

The Woman For The Job

Congratulations, Elle writer Clare Malone! You’ve scored an interview with a top Google security official. So why not make sure your readers know all about her hair, clothes and (lack of) makeup?

Sure, I get that clothes are a quick way to describe a profile subject to an audience. And there's certainly nothing wrong with a woman who rocks her own personal style. But Tabriz's all-black wardrobe and the fact that she eschews makeup suggest that appearance is not a very important part of her personality. There's more than one way to practice femininity, after all.

I also get that Elle has an audience to cater to, one that cares a great deal about fashion. But when the same magazine did an interview with actor, tech investor and Steve Jobs portrayer Ashton Kutcher last year, it only briefly mentioned what he was wearing ("faded jeans and a gray T-shirt") and that he used to model professionally.

Moving on.

"I didn’t touch computers up until college,” Tabriz tells her interviewer, demolishing the notion that women aren’t qualified for technical positions since they didn’t start early enough.

Tabriz doesn’t perceive gender as a negative for her, though she thinks she “may be a little more pushy than the [female] stereotype.”

So much of this profile focuses on Tabriz’s unique characteristics: her skill at math and science, her competitive nature, the driven curiosity about her compromised college website that led her to determine the hacker’s modus operandi. And that’s what’s important.

Of course Tabriz isn’t the “female stereotype.” No woman on Earth is. But to separate her in such a way to imply that she’s “not like the other girls” makes it seem like Tabriz didn’t succeed because of her motivation or skill, but because she’s somehow better at being a woman.

Getting Technical

Easily the best snippets of this profile are the sections in which Malone describes the nitty-gritty of Tabriz’s work as a white hat hacker for a lay audience.

Of course, some women in technology might find it a little condescending to read Malone likening black-hat hackers to thugs who swipe expensive handbags: “not only do they swipe the Birkin, but they rifle through the crocodile-skin datebook to find new victims.” But let’s give the magazine the benefit of the doubt here, given its very specific audience.

Tabriz herself supplies quotes that make the highly technical nature of her work extremely approachable to a non-techie audience. For instance, she describes steganography, the craft of writing coded messages that are hidden in plain sight, by its very low-tech history:

A Greek emperor would shave a slave’s head, tattoo a message on it, let his hair grow back, and then say, "Go over to that other emperor."
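The digital version of the trick works the same way, just with pixels instead of scalps. Here is a minimal, self-contained sketch of classic least-significant-bit steganography in Python; the bytearray standing in for image data is a stand-in for illustration, not anything from the Elle piece:

    # Hide a message in the lowest bit of each "pixel" byte; to the eye,
    # flipping the least significant bit leaves an image looking unchanged.
    def hide(pixels, message):
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        if len(bits) > len(pixels):
            raise ValueError("message too long for this cover data")
        out = bytearray(pixels)
        for i, bit in enumerate(bits):
            out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
        return out

    def reveal(pixels, length):
        message = bytearray()
        for b in range(length):
            byte = 0
            for i in range(8):
                byte |= (pixels[b * 8 + i] & 1) << i
            message.append(byte)
        return bytes(message)

    cover = bytearray(range(256))     # stand-in for real image bytes
    secret = b"meet at dawn"
    stego = hide(cover, secret)
    assert reveal(stego, len(secret)) == secret

The hidden message rides along in data that looks, for all practical purposes, identical to the original.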

Further allusions to Tabriz’s skill at “think[ing] like a criminal” make it clear what she does every day—even if you only know about hackers from the movies.

Let’s Talk About Gender

Still, you can easily write a profile of a man in tech without discussing how his gender affected his career, either as a stand-in for a personality trait or as a hurdle to overcome. A high-profile woman in tech? Not so much.

Malone aptly notes that when it comes to a woman in a male-dominated field, to not discuss gender in the workplace would be to miss out on half the story. In Tabriz’s role at Google, gender is a daily consideration.

“If you have ambitions to create technology for the whole world, you need to represent the whole world, and the whole world is not just white men,” she told Malone.

Gender issues at Google, of course, have been grist for discussion for a while. Former Google vice president Sheryl Sandberg noted in her book, Lean In, that male Google engineers nominated themselves for promotions far more frequently than women.

Likewise, in the Elle article Tabriz mentions that the young women she mentors at Google sometimes have trouble asserting themselves. The onus, in this framing, is on women to make their own opportunities; if they fail, they’re not leaning in far enough.

One way to help women in tech? Make them more visible, just like this profile does. (Though they might stand out even more without all the overt nods to gender.) Then maybe a young woman flipping through her fashion magazine, like I used to do, will discover a tough, capable role model taking a career path she’d never considered.

Google wants to make it harder for malicious attackers—and that includes the National Security Agency—to exploit software bugs that infect your computer or steal personal data.

On Tuesday, the company revealed Project Zero, a team within Google that will work to reduce the number of people harmed in targeted attacks stemming from “zero-day” vulnerabilities, previously unknown security holes for which there are no readily available fixes.

Why is Google announcing this effort? Because Project Zero is hiring.

Google is looking for security researchers to work on discovering flaws in software, as well as researching and understanding the motivations of malicious attackers. Google didn’t say how many researchers it’s adding, but it already has many people working on security issues.

Recent history shows why the effort matters. Heartbleed, one of the most damaging vulnerabilities discovered in open-source software to date, left two-thirds of the Web at risk of eavesdropping for two years thanks to a flaw in OpenSSL, a widely used piece of security software.

Project Zero will work to improve the security of software used by large numbers of people, as well as research the techniques hackers use to target these vulnerabilities. Google says it will report bugs to the software’s vendor, and once a bug is made public (meaning a patch is available), people will be able to learn more about the particular vulnerability, including how long it took the vendor to fix it.

And though Google didn't dwell on this point in its announcement, it did mention "state-sponsored actors" as a threat. Google has previously said that its systems were targeted by Chinese hackers who may be sponsored by elements of that country's military, and former NSA contractor Edward Snowden revealed that the US intelligence agency has targeted Gmail and other Google services. Project Zero aims to protect against those threats as well as criminal hackers.