In the 1990s, client-server was king. The processing power of PCs and the increasing speed of networks led to more and more desktop applications, often plugging into backend middleware and corporate data sources. But those applications, and the PCs they ran on, were vulnerable to viruses and other attacks. When applications were poorly designed, they could leave sensitive data exposed.

Today, the mobile app is king. The processing power of smartphones and mobile devices based on Android, iOS, and other mobile operating systems, combined with the speed of broadband cellular networks, has led to more mobile applications with an old-school plan: plug into backend middleware and corporate data sources.

But these apps and the devices they run on are vulnerable… well, you get the picture. It's déjà vu with one major difference: while most client-server applications ran within the confines of a LAN or corporate WAN, mobile apps are running outside of the confines of corporate networks and are accessing services across the public Internet. That makes mobile applications potentially huge security vulnerabilities—especially if they aren't architected properly and configured with proper security and access controls.

Speed (to market) kills

Today we have tools like PhoneGap and Appcelerator's Titanium platform, as well as a host of other development tools for mobile platforms that resemble in many ways the integrated development tools of the client-server era (such as Visual Basic and PowerBuilder). So individual developers and small development teams can easily crank out new mobile apps that tie into Web services, hooking them at high speed to backend systems launched on Amazon.

But unfortunately, they all too often do so without considering security up front, creating the potential for exploitation. While a lot of attention has been paid to security on the device itself, the backend connection is just as vulnerable, if not more so.

If companies are lucky, like Montreal-based SkyTech Communications, those holes merely produce public embarrassment. When a computer science student at a vocational college used a freely downloaded security scanner on SkyTech's mobile app (which allows students to access their records and register for classes), he found major security flaws in the application. These flaws allowed anyone to gain access to students' personal information.

Small developers aren't the only ones who can get caught by their mobile app backends. Take, for example, General Motors' sudden leap forward with its OnStar Web API. The company was forced to accelerate a public API effort when it discovered an enterprising Chevy Volt owner had reverse-engineered its mobile application API for retrieving vehicle statistics from OnStar's data centers for personal use. Fortunately, he wasn't malicious. But he did build a website for other drivers to do the same—which potentially exposed personal data in the process by using those drivers' OnStar account logins, in violation of GM's privacy rules. The site now runs on a new, more secure API.

Keeping the client (mostly) dumb

"This sort of thing has been a problem since computers started talking to each other," said Kevin Nickels, the president and CEO of "backend as a service" provider FatFractal. To prevent these sorts of problems—or worse—developers need to address issues like security and access control early on. "Too often, developers try to address these after the fact, and not from the very beginning," Nickels explained.

One of the key elements of security design in mobile applications is making sure that the client—the phone app itself, or the browser app—does very little processing. "The general best practice is to let the code on the device do as little as possible," said Danny Boice, the co-founder and CTO of Speek, a cloud-based conference call service that works through native mobile clients and Web browsers. (Boice is also a former executive in charge of Web and mobile development for the SAT testing company, The College Board.) "There are things on a person's phone that you can't control. We put most of the heavy lifting off of the client, because you can control what the application sends and receives."

It's especially important to handle all data integration with other services on the backend and not on the mobile device, said Nickels. "Ads exposed in an app, for example, could have malicious code. We recommend people do that sort of integration via the backend. That way, things coming from outside the app won't have any access to any system resources at all."

Dan Kuykendall, Co-CEO and chief technology officer of security testing firm NT Objectives, said the less mobile apps store and process data on the client device, the better. "A lot of developers think, 'The only traffic that's going to come in is from my mobile app'," Kuykendall explained. "And they build logic into the mobile client"—building queries to be sent to the backend systems and processing raw data sent back. But requests from the app can easily be "sniffed" by someone who has the application on a device of their own, by malicious software on the device that might monitor outbound traffic, or by someone maliciously monitoring what comes off mobile devices. "You don't want the app passing SQL statements back to the backend," Kuykendall said. "That's crazy." But as he says, that's also all too common.
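Kuykendall's point can be made concrete. A backend that accepts only an opaque identifier and builds its own parameterized query never exposes SQL to the client at all. Here's a minimal sketch in Python; the `students` table and handler name are hypothetical, not from any app mentioned above:

```python
import sqlite3

def get_student_record(db: sqlite3.Connection, student_id: int):
    """Server-side handler: the client sends only an ID, never SQL."""
    # The query text lives on the backend, and the untrusted value is
    # bound as a parameter, so it cannot change the query's structure.
    cur = db.execute(
        "SELECT name, email FROM students WHERE id = ?",
        (student_id,),
    )
    return cur.fetchone()

# By contrast, executing SQL received from the device --
# db.execute(sql_from_client) -- lets anyone who sniffs or scripts
# the app's traffic rewrite the query at will.
```

The same principle applies whatever the backend stack: the device supplies data, and only the server decides what queries that data may drive.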

The most basic bit of hardening required for mobile applications is to encrypt traffic to the backend—at a minimum, by using Secure Sockets Layer (SSL) encryption. But SSL by itself isn't enough because of how mobile devices connect. Many smartphones will automatically connect to open Wi-Fi networks they remember, making it relatively easy to get them to connect to a rogue device that acts as an SSL proxy, decrypting and re-encrypting traffic while recording everything that passes through. While SSL is usually a defense against attacks on browser-based sessions on PCs, some mobile apps are vulnerable because they rely on WebKit to handle SSL. WebKit doesn't fail by default with bad certificates like those used in "man-in-the-middle" (MitM) attacks—it sends an error message to the app that a cert is bad and lets the code decide what to do about it. In some cases, to get around those errors, apps are set to accept any cert, leaving them vulnerable to MitM attacks.
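The "accept any cert" failure mode is easy to illustrate outside of WebKit as well. In Python's standard `ssl` module, for instance, the default client context verifies the server's certificate chain and hostname; an app only becomes exposed to a rogue SSL proxy if someone deliberately turns that verification off to silence errors:

```python
import ssl

# Safe default: the standard client context verifies the server's
# certificate chain and hostname, so a rogue Wi-Fi proxy presenting
# a bad cert makes the connection fail closed.
safe_ctx = ssl.create_default_context()
assert safe_ctx.verify_mode == ssl.CERT_REQUIRED
assert safe_ctx.check_hostname is True

# The dangerous "make the error go away" pattern: any certificate,
# including one minted on the spot by an SSL proxy, is now accepted.
unsafe_ctx = ssl.create_default_context()
unsafe_ctx.check_hostname = False   # must be disabled before CERT_NONE
unsafe_ctx.verify_mode = ssl.CERT_NONE
```

The mobile equivalent is an app that catches WebKit's bad-certificate error and proceeds anyway; the safe pattern is the first context, where a verification failure kills the connection.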

"I can sit in a public place, like the mall, with a Wi-Fi Pineapple and my laptop," Kuykendall said, "and deliver real Internet access with me as a 'man in middle', and see the traffic coming from people's smartphones without them knowing their smartphone is connected to me. And when apps fetch updates, I see that." Since many mobile apps fetch updates without user interaction, "the users aren’t instigating the connection—it just happens." If data pulled from a man-in-the-middle attack doesn't have additional sorts of controls and protection, it could then be used to attack the backend systems.

Another vulnerability caused by putting too much reliance on the client is that it requires more data to be stored on the client—data that could be exploited. Even ephemeral data (information stored locally to be processed for display or to be sent to the backend and then be disposed of) is vulnerable. "It's not so easy to get into a running app and steal stuff," Nickels said. "It's more of an issue with a data cache or on-phone storage, using databases like SQLite. You need to obfuscate that data as best as you can, encrypt it at rest, and store things that are not easy to associate with each other."
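Nickels' advice about local caches can be sketched in miniature. One common approach to storing "things that are not easy to associate with each other" is to never write raw identifiers at all, deriving an opaque key with an HMAC instead. The secret below would come from the platform keystore in a real app; the hard-coded placeholder is for illustration only:

```python
import hashlib
import hmac

def opaque_cache_key(secret: bytes, user_id: str) -> str:
    """Derive a non-reversible key for rows in an on-device cache."""
    # Without the secret (ideally held in the OS keystore, not in the
    # app binary), someone who copies the SQLite file off the phone
    # cannot map cached rows back to real account identifiers.
    return hmac.new(secret, user_id.encode(), hashlib.sha256).hexdigest()

secret = b"demo-secret-from-keystore"  # placeholder, NOT a real practice
key = opaque_cache_key(secret, "student-4121")
# Store `key` alongside encrypted payloads instead of the raw ID.
```

This only covers the "hard to associate" part of the advice; encrypting the cached payloads at rest would additionally lean on the platform's own facilities (iOS Keychain, Android Keystore).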

The only issue with this: if you can manage the app from a website, anyone who has access to the website can see how to simulate it in an unauthorized app. That has always been the problem for me. Even with a token that is used once, a user could code a script to scrape a token and then send their malicious call with that token. How can you stop that?

This is bang on. At Layer 7, we've been preaching this aspect of mobile security for a while: http://api.co/VJXFKz. We do have plans and pricing to fit any user profile, so I hope API owners out there looking for a secure solution don't forego the goodness of a gateway/proxy assuming they couldn't afford it :-)

I'm someone who has yet to switch to a smartphone mainly due to security concerns, could you do a story on what a *user* can do to keep the device secure? I'm thinking about things like device encryption, secure logon, vpn, etc while still keeping the device convenient to use (is it even possible?).


Short answer: not a lot. Opening a VPN to your home server opens several holes and is easy to get wrong. There are *lots of things* a smartphone is useful for without needing to access banking websites. Email is probably OK if you can set it up securely, the same way as desktop email.

Treat wifi outside your home or work with suspicion, if you can be bothered. I don't generally bother, but then I rarely access my banking websites outside my home. I do have a couple of banking apps which are *slightly* higher security, mainly in that it's easier to point the finger at the bank if they get compromised.

1password (don't know about others) has a 'secure browser' built into the app, which they recommend you use if you want a bit more security.

At the end of the day, it comes down to making reasonable tradeoffs: covering your ass with yourself, your boss, and your bank, and not using a particular service if you feel it's not worth the risk. Smartphones are fantastic boxes of tools, and you'll love yours even if you decide not to use it for anything that needs higher security.

"I can sit in a public place, like the mall, with a Wi-Fi Pineapple and my laptop," Kuykendall said, "and deliver real Internet access with me as a 'man in middle', and see the traffic coming from people's smartphones without them knowing their smartphone is connected to me. And when apps fetch updates, I see that." Since many mobile apps fetch updates without user interaction, "the users aren’t instigating the connection—it just happens." If data pulled from a man-in-the-middle attack doesn't have additional sorts of controls and protection, it could then be used to attack the backend systems.

Does anyone know more specifically how this works? Is he acting as a wifi hotspot and expecting phones to automatically use his wifi connection, or is he monitoring air signals remotely with a listening device? For example, would turning off my phone's ability to connect to new wifi hotspots without permission protect against this or would it sniff 3G and LTE connections as well?


Phones - and other devices - are constantly looking for "remembered" networks. Devices like the Pineapple spoof these networks and trick your device into connecting to it, after which of course they can monitor your traffic.

Question: would using TOR prevent that data from being readable/useful?

"Speed (to market) kills." Bingo! That's all they care about. Get it done now; who cares how insecure it is or whose personal data gets leaked? We can't leave this money on the table. Grab it, grab it now!


I would bet people are automatically connecting to wireless access points. If you have a VPN (hopefully cert-based), this wouldn't be a problem.

The only way you can do a MiTM attack against SSL is if you can get your cert onto the user's device or you have some crazy wildcard cert that never should have been issued. Neither option is very likely. The best you can realistically hope for is an XSS hole, but if the entire conversation is happening over SSL, then even that is unlikely. That's why Google allows you to set your preference to always use SSL and has enabled it as the default on Gmail and other services with personal data.

I don't get it, most of these "web security" articles on Ars recently seem like little else than an opportunity for various snake-oil peddlers to shove their services down the reader's throats - indirectly, of course. Then people can't make out anything written in there, but do still wanna sound smart on the internet, so they come to the forums with a "great article, thank you so much" comment


Interesting reaction. I don't see that they are attempting to sell anything, let alone shove anything down someone's throat. Directly or indirectly. This is a VERY informative article on a complex subject. If people can't make out what is written then they need to study more and get up to speed.

I for one would LOVE to see more articles of this kind. Especially when they are so well written. (Except for the 'Architected' part of course....)


Bleh, you have a link to one open authentication framework and a bunch of links to various middleware providers. The experts usually point out how nothing that people are doing currently is right, and a lot of it is pure unabated technobabble meant to scare people into looking at "enterprise-level solutions," because you "get what you pay for."

As for people needing to get up to speed on this complex subject, I think I'll just lol a little.

Most computers search for their home networks. Mobile devices now do this as well. These systems poll for "home," "linksys," or "My_Very_Private_Home_Network." Whatever the remembered network is, they are always pinging the airwaves to look for it. That's how they autoconnect when you get home from work at night.

Years ago, tools like cowpatty and such would listen to the SSIDs requested over the air and reply. It didn't matter what network your device was searching for; they replied that they were it. Then they accepted whatever authentication your device offered and set up the connection.

At that point, you were routing all your traffic through a malicious source. SSL helps. Mostly. If your app is designed to fail closed when the SSL cert cannot be properly verified, then the app will just break. If not, sslstrip will just MitM the protocol. The attacker can then script it to watch the traffic and modify it at will.

This is an interesting article, but I wonder about the points on encrypting / obfuscating an app's local data. Sure, that makes it harder to get at the data, but why is that such a concern to begin with? If the concern is that other local apps can read the data without the user's permission, then there's a problem with the OS. If the concern is that the user can do something malicious with the information, then there's a problem with the app / service.

Also, a good VPN implementation can most certainly be configured to send only specific traffic through the tunnel. I don't see the problem with using one on an employee's device, though requiring one on a customer's device certainly doesn't seem like a good idea.


Also, WebKit, which gets used in a lot of mobile apps to handle SSL, doesn't fail by default with bad certificates. It sends an error message to the app that a cert is bad, and lets the code decide what to do about it. In some cases, to get around errors, apps get set to accept any cert, so they're vulnerable to MIM attacks.


You really should update the article to include this. As the text currently stands, it sounds like ignorant fear-mongering, claiming that SSL is always vulnerable to MiTM.

So I notice this is sponsored by Symantec. Is there an explanation for what this means exactly? Did they write it? Did they pay you to write a general series about security with their name near it? Did they design each of the stories and then get you to write them?

I'm not knocking sponsored stories, people gotta eat. Just wondering how Symantec fit in to the series.


Sponsored stories need to be marked as such in advance. This is just your average 'security' scammer wanting to have their name next to every story that mentions how nothing is saf... this is Symantec wanting their name associated with a complex and informative series such as this one.

Better question: why are the background pics on either side of the article actually hidden links to Symantec? Am I the only one who typically uses blank areas like that to click to give the window focus before scrolling, or does whoever is doing layout think that making me see a brief flash of the Symantec site before I WTF and click the "x" is worth the rage it causes me?


It is not immediately obvious that this is a sponsored story. I had to recheck once I saw these comments.

Ars, if you are going to have "sponsored" stories, please make it clearer up-front. In addition, it would be nice to see the Ars Technica editorial guidelines for sponsored stories. They are clearly not the same as "Dealmaster" stuff, which is an infomercial, but where do these sit editorially and how much control does the sponsor have over the content? Is the sponsor able to say "you can't mention our competitor", for instance?


In case you weren't aware then, the series about the future of smart phones was also sponsored content. In that case, the sponsor was Qualcomm.


Quoting the Editor in Chief here:

Ken Fisher wrote:

Kwpolska wrote:

"presented by Symantec"

Oh. Note to self: stuff in this series MAY be far away from truth. Validate everything, as Symantec and Norton are shit.

Neither had any involvement in the writing and editing of these articles. Nor did they see drafts. I think it's rather unfair to suggest otherwise without some evidence of BS.


I don't recall stating that the article was biased, so don't start finger-pointing just yet. I was pointing out the perception created if Ars Technica is not totally up front in:

1. Disclosing very clearly when a story is sponsored, and by whom; and
2. Detailing Ars Technica's editorial policy in relation to sponsored stories.

The latter may just be a link to said editorial policy, but if it isn't there then the reader has no idea what the policy is and how the story has been produced.


There is a difference between sponsored content and companies sponsoring content. The former is, to describe it in the simplest way, an article written by a PR team promoting some services or products. By general consensus (often even by law; I'm not sure how it is in the US), those articles are marked by a subheadline titled "Sponsored content" or "Sponsored story." The latter is simple advertising, which is the case here.

Then there's product placement, which is murkier. You know, like all of ZDNet's content. Finally, in tech circles, there's authors hopelessly biased towards certain brands, like The Verge is with Apple.

This is merely an editorial mistake. The author attempted to cover a topic, and wanted to turn to experts. This being the security scam ring, every expert you turn to is peddling some bullshitware. It would have been much better had the story been researched more in depth, and offered some really pertinent information, instead of just quoting the scammers, but the deal nowadays is "get it out fast, make it look good, forget everything else".

Sean Gallagher / Sean is Ars Technica's IT Editor. A former Navy officer, systems administrator, and network systems integrator with 20 years of IT journalism experience, he lives and works in Baltimore, Maryland.