
How long do you think it'll take them to come back with feedback? They'll need to work out whose fault it was, who they can blame, what they're going to do about it, the impact of blaming the people whose fault it wasn't, and all the time looking good to upper management. Lessons will be learnt, and this will definitely not happen again, just like always.

I would bet that 80% of these 60 banks are buying the same moderately customized app(s) from the same vendors. I would also suspect there will be a similar flaw in the Android versions.

Given that most banks don't have any in-house mobile development, they are probably all descending on the few vendors that wrote and customized these apps, and they will all get fixed at about the same time.

I'm responsible for the Android offering of one such vendor. We currently have about 140 small banks running some version of our app. We try to follow most of the security guidelines outlined in this article, but to give our customers added assurance we pay a security company to analyze the most current version of our app (and our back-end services) every six months or so. Not the one responsible for this article, though I imagine they're a competitor of the one we use. Was a good read. I forwarded it to my boss and the coworkers responsible for our iOS app.

On this security issue, I have had several discussions with the financial institutions holding my retirement savings regarding their websites requiring me to enable pop-ups and JavaScript.

I have at numerous times been subject to deliberately crafted malware that often delays its mischief until I leave the site that gave it to me, showing up later. Some of it has been so robust that it survives reboots (the "S.M.A.R.T. HDD" virus was the last one I had that did this) and required going back to a re

The question that you should be asking is what happens if the browser is compromised. It doesn't matter if JavaScript is enabled; if some malware controlling your browser lets the attacker make arbitrary payments, then your bank is doing it wrong. To pay anyone I've not paid before (and saved the credentials for) via Internet banking, my bank requires me to enter a code that they provide, the recipient's account number and the amount in either a mobile phone app or a separate device, which then generates

You do run on a bit, but the point(s) are well taken. When I was in China I had a chance to hook up with one of the largest banks through their "internet banking."

First, it required IE6. Yes, required; nothing else would work. Second, it required pop-ups, because your user name and password had to be input in a pop-up. Third, if you tried to use something like Firefox, you would get a notification that the certificate was invalid and had been revoked.

... to bank from your cellphone. Call me paranoid and old-fashioned (I admit to being both), but if I do on-line banking at all I do it from my own home computer on a wired LAN. OK, so I can't do all the wild-and-crazy things these mobile banking apps allow, but I also am likely to have my money in my bank in my account at the end of the day and not in a bank account in Siberia somewhere.

I'd argue that on a non-jailbroken iOS device you might be more secure than on your home computer and wired LAN. Your home computer is far more likely to be infected with keylogging malware or similar.

The government already has access to my bank account. They don't need to break into my computer to get it.

They'd be interested in your password though. Either in case you re-use it elsewhere, or to help them guess the type of passwords you'd use for other accounts.

Why would they need a password? Judging from what we have learned about NSA standard practice, all they have to do is show up at your bank, twist some arms, drop the words "We're post-9/11 here, are you telling us you are refusing to contribute to national security?" and your bank will set up a dedicated back-door that allows them to access any data they want.

I'd argue that on a non-jailbroken iOS device you might be more secure than on your home computer and wired LAN. Your home computer is far more likely to be infected with keylogging malware or similar.

You'd argue that, but according to this article you'd be dead wrong.

Really, how many people do you have running through your house that you need to worry about a key-logger?

Not necessarily. Most USB keyboards have firmware stored on a flash chip that has some spare capacity, and a lot have built-in USB hubs. There was at least one proof of concept for a keylogger that would record things to the on-board flash and then dump them to a specific USB device when it was inserted, then erase the on-board flash (rewriting the bit that contained some of the firmware) ready to start again.

The idea that jailbreaking makes a device less secure seems rather silly. The vulnerabilities are there, either way. It comes down to what you, the user, do with the device - and that's true regardless of its jailbroken status.

Also, the argument from the article that not detecting jailbroken devices is bad is also silly - it's not like that's particularly hard to circumvent. All it would accomplish is to inconvenience legitimate customers.

If I handed you my phone w/ the app loaded and me logged in there's still not much damage you could do. You could transfer money between my accounts. You could deposit checks into my accounts. You could potentially pay my bills if I had payees already configured. (Typically you can't configure new payees via the app.) So you could inconvenience me, but you couldn't take any of my money for yourself or even get my full account number(s) since those are masked prior to being sent to mobile clients. Cert

That is exactly what I do. If there is no money to steal, the bad guys cannot get it. Only twice in 2013 was there more than $100 in the account that I use online. Most of the time, there is only about $10 in that account. I put money in when I intend to spend it, I spend it, and the account is nearly empty again. No hacker anywhere has had an opportunity to steal $5,000 from that account.

If Mom keeps a cookie jar on the counter, and only ever puts two cookies at a time in it, then you can't steal more

Banks are normally quite process oriented, so in this case I imagine the problem is that the technology is too new for the banks to have a good enough process to cope with the changes, and the banks are very rigid about their process when it comes to allowing in new specialist vendors. I am dealing with this on a daily basis; for a small company, dealing with banks is extremely difficult. I am not even blaming anybody; it's the management necks that are on the line, and more often than not, management is not

What surprises me is that TFA mentioned multiple cases of things like failure to validate SSL certs, use of unencrypted assets rendered by the app in ways that could be spoofed dangerously, and similar stuff that wouldn't have gotten past their web people; but apparently are A-OK because it isn't a web browser, it's an 'app' wrapped around the UIWebView class!

The other things they mention, assorted attacks or failures to mitigate an attacker with privileged access to the system, aren't good; but they are both less dangerous (at least to people running stock iOS) and more novel and platform-specific. The first class of bugs, though, should have been solved a decade or more ago when banks started dabbling in this 'web' stuff.
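Failing to validate the server certificate is the textbook version of that first class of bug. As a rough illustration (in Python rather than the apps' actual Objective-C, with a made-up hostname), the broken and correct TLS clients differ by only a couple of lines:

```python
import socket
import ssl

HOST = "bank.example.com"  # hypothetical server

# Broken: disables both hostname checking and certificate
# verification, so any man-in-the-middle presenting a
# self-signed certificate is silently accepted.
insecure = ssl.create_default_context()
insecure.check_hostname = False
insecure.verify_mode = ssl.CERT_NONE

# Correct: the default context verifies the certificate chain
# against the system trust store and checks the hostname.
secure = ssl.create_default_context()

def connect(ctx: ssl.SSLContext) -> ssl.SSLSocket:
    """Open a TLS connection using the given context."""
    raw = socket.create_connection((HOST, 443), timeout=5)
    return ctx.wrap_socket(raw, server_hostname=HOST)
```

With the `secure` context, a spoofed certificate raises `ssl.SSLCertVerificationError` instead of handing the session to the attacker.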

It is surprising only if you don't look at the way banks implement processes. What this tells me is that to the banks this technology is so cutting edge that they have no idea how to deal with it at all, so they are just throwing a bunch of stuff together without a second thought, really, until there is a disaster.

It IS surprising that nobody in a team raises these questions, though. What exactly does it mean? It may mean that the vendors the banks do have are mobile app vendors and are not at all qualified to wo

So, are these banks' websites just as bad, or did they actually manage to re-implement something worse than just wrapping their site in a suitable stylesheet and calling that 'an app'? If the latter, how do they look themselves in the mirror every morning?

So, are these banks' websites just as bad, or did they actually manage to re-implement something worse than just wrapping their site in a suitable stylesheet and calling that 'an app'? If the latter, how do they look themselves in the mirror every morning?

Web group is probably internal while the iOS dev was probably shopped out to Rent-a-Coder, so the web app is probably safe. I should say that RaC was used as a generic example. Folks have gotten good work out of them. But do notice the number of times I used "probably".

I thought the 4-digit PIN was designed strictly for use with a physical key, i.e. my bank card.

Sure, it's easy to have a computer brute force the 10,000 possible 4-digit strings... but doing so while standing in front of an ATM might be a little more difficult, and look a bit suspicious, not to mention getting a copy of the physical key and using it before its owner realizes it's missing.
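The keyspace point is easy to make concrete; a quick sketch (the guess rate is an arbitrary assumption):

```python
# A 4-digit PIN has exactly 10**4 possible values.
pins = [f"{n:04d}" for n in range(10_000)]

# Against an online service with no rate limiting, even a
# modest 100 guesses per second exhausts the space quickly.
GUESSES_PER_SECOND = 100  # assumed attacker rate
seconds_to_exhaust = len(pins) / GUESSES_PER_SECOND
print(seconds_to_exhaust)  # 100.0 seconds, worst case
```

Standing at an ATM, by contrast, each guess takes seconds and the card is swallowed after a few failures, which is the whole point of tying the PIN to a physical token.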

As AC pointed out, the magnetic strip can be copied... very easily. I know someone this happened to. Once they have that, they as good as have your PIN, which is why your card should never, ever leave your line of sight. Copying the key is as fast as swiping the card.

I also have chip-and-PIN, but it is interesting to note that most places here in South Africa will fall back to the magnetic strip if the chip doesn't read properly. The magnetic strip should go altogether; it is a horrible technology.

As an additional note, the fact that it authenticates with a non-case-sensitive password means that they aren't hashing the passwords either... it's either plain text or reversibly encrypted... God forbid someone runs a brute force attack, because it's going to be pretty damn easy.
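The inference is easy to demonstrate: any standard hash changes completely with letter case, so a server that accepts either case cannot be comparing hashes of the password as typed. A minimal illustration (SHA-256 stands in for whatever a real site should use, such as bcrypt or scrypt):

```python
import hashlib

# Hashes of the same password in two different cases share nothing.
h1 = hashlib.sha256(b"Hunter2").hexdigest()
h2 = hashlib.sha256(b"hunter2").hexdigest()
assert h1 != h2

# To accept both, the server must either lowercase the password
# before hashing (throwing away entropy) or store it in a
# recoverable form (plain text or reversible encryption).
```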

Passwords are not case sensitive. Passwords must:
- be 8 to 16 characters long
- use at least one number and one letter
- not include spaces or special characters (e.g., #, %, etc.)

Every time I see a website that won't allow special characters in passwords, I immediately assume that it's because they're using JavaScript to cover up lack of proper encoding on the way to a SQL database, and I treat the website accordingly, with the appropriate level of distrust. Just saying.
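The defense against that, of course, is parameterized queries rather than character blacklists. A small sketch with sqlite3 (table and values invented) showing that no character is "special" once it is bound as a parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")

# Quotes, #, %, and SQL comment markers are all inert when
# bound as parameters instead of spliced into the SQL string.
pw = "p#ss%'--word"
conn.execute("INSERT INTO users VALUES (?, ?)", ("alice", pw))

row = conn.execute(
    "SELECT password FROM users WHERE name = ?", ("alice",)
).fetchone()
assert row[0] == pw
```

A site that parameterizes everywhere has no reason to forbid # or % in passwords.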

20 years ago I got a C rather than an A on an assignment during my computing systems degree because I failed to fully validate security in a 'secure' chat program (I did successfully encrypt and purge memory data, including not leaving page-file data readable after an unforeseen system power-off, but certificate-wise I only ensured compliance rather than checking integrity, IIRC). That was 20 years ago, and I'm not a programmer.

Security is layers. For all our firewalls, IDS sensors, SIEM correlation, and other efforts, it was the lowly endpoint security package and its alerts in its console that got our attention the last time we had an unannounced pen test.

A/V might not be the sexiest thing in computer security today, and it might not even be very effective overall, but it's one more shot at detecting and stopping the bad guys, and it can be a shot worth taking.

While I agree a list would be nice, please don't spread lies that this is "basic" programming. If it were, there wouldn't be so many issues.

Hardening and securing an application against sophisticated attacks (yes, I know not all of the attacks are 'sophisticated') is a non-trivial piece of work requiring expert knowledge and experience in security programming. I doubt you could do it. I doubt most people here could do it. I consider myself an expert software developer and I doubt I could do it.

More to the point, spreading the myth that this is "basic" is exactly the sort of attitude that allows these practices to continue. When Joe Graduate hears how "basic" and "easy" this securing software stuff is, from people like you that have no clue, they go off and do it themselves. It's easy, right? Rather than respecting this field for what it is - highly specialized and difficult work - the exact problem that needs solving is perpetuated by your snarky and uninformed attitude.

When Joe Graduate hears how "basic" and "easy" this securing software stuff is, from people like you that have no clue, they go off and do it themselves

No, that is not even close to a major problem. The big problem with software security is that it is usually an afterthought. Poor security does not impede the normal operation of software, so it is extremely common for management to de-emphasize or even ignore it completely. And then once the software is up and running, retrofitting security into a system is super-expensive, so the mindset becomes something like, "why fix a leaky roof if it isn't raining?"

Which banks, please? Can we please have a list of which banks fail basic programming???


So for everybody's sake, just cut the condescending attitude. Thanks.

Plus, let's not make life any easier for thieves than it already is by providing them with a list of targets. The banks that have such crappy apps may deserve to be taught a lesson, but the customers whose bank accounts end up being raided don't, since they can't be expected to have every bank they do business with vetted by a team of security and cryptographic experts.

I'm sorry, but 30% of the apps they tested HARDCODED credentials - in some cases BANK ADMINISTRATIVE CREDENTIALS - into the app.

Sure, it's sloppy, but if, as the summary implies, those development credentials are for a sandbox server (presumably without any real financial or personal info on it), then it isn't nearly as bad as it sounds.

On the other hand, if there are administrative credentials for the production server....
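Either way, anything hardcoded into a shipped binary should be treated as public: recovering it takes nothing more than the Unix `strings` tool. A minimal sketch of that extraction in Python (the binary bytes and the embedded credential are invented):

```python
import re

# Stand-in for a compiled app binary with an embedded credential.
binary = b"\x00\x7fELF\x01\x02\x00admin:S3cretPass\x00\xff\xfe"

# Pull out printable ASCII runs of 6+ bytes, like `strings` does.
found = re.findall(rb"[ -~]{6,}", binary)
print(found)  # [b'admin:S3cretPass']
```

No jailbreak, debugger, or network access is needed; anyone who downloads the app can do this.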

Maybe it's just me, but the article seems a little light on who they are referring to, aside from a vague reference to the countries of origin. While there's all sorts of legitimate ass-covering reasons not to mention any bank specifically, it makes it useless as a starting point for how we would do anything about it, such as demand improvements of these institutions.

At the least, I hope some private communication to the banks has taken place, though I'd understand if that hasn't happened. Some organization

- Improve additional checks to detect jailbroken devices
- Obfuscate the assembly code and use anti-debugging tricks to slow the progress of attackers when they try to reverse engineer the binary

These two will be useless, and easily defeated. "Slowing the progress of attackers" is a meaningless statement in this context. Jailbreak detection is easily tricked, or removed from the code by a jailbroken phone.

Aside from that, if you do all of the other things they suggest correctly (as should have been suggested to the programmers in CS 101), you shouldn't need these two.

If you and your buddy are being chased by a bear you don't have to outrun the bear; you just have to outrun your buddy. Which is to say sometimes it's helpful to make it a sufficiently big PITA for a malicious party to hack your app relative to the effort required to hack someone else's. Someone who really wants to rob me will get past my locked door, but I still lock the doors to my house.

It's still better to avoid the bear, and not think about your friend getting killed.

That's exactly the GP's complaint. They are recommending that a bank outrun the others (through procedures that'll reduce the overall security of the app's users, be assured of that), instead of avoiding the bear.

I'm not arguing that obfuscation and anti-debug techniques are sufficient; I'm arguing that they aren't completely useless. Take whatever other security measures make sense and then turn on obfuscation and anti-debug on top of that just to dissuade "casual" (read: lazy) attackers.

Developers who can't make the application really secure also can't apply those harder techniques in a way that does not create more security flaws. And even if you are able to use proper security techniques, there's still no evidence that you'll be able to use the harder ones correctly. At the end of the day, those techniques cannot add any real security.

As an iOS programmer (not at a financial company but we do ecommerce) I would be surprised that the banks did not use Veracode to analyze their binaries. Veracode isn't perfect but even for us it finds a number of these issues. But statically analyzed security issues found by a researcher are not always exploitable in real life. It's very likely that the bank could have security on the API side that would validate anything the client did that would not be visible on a client only analysis. As with Veracode

Can someone please explain to me why someone needs a separate app to do their banking? As a matter of fact, can anyone explain why we need most of the apps that are just poor rewrites of web sites? Why not make a good mobile version of the web site that users can bookmark as icons on their home screen and call it a day?

My bank (in Netherlands) requires a chip card and card reader for logging in and transactions (challenge/response system). That would be a pain to use with mobile banking; instead, they store the credentials in the phone, locked with a separate PIN and tied to the phone.

There are various security measures to reduce the chance of fraud, such as autologout upon switching to a different app (royal PITA if you need to copy/paste the account number, by the
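A challenge/response login of the kind described can be sketched as an HMAC over a bank-issued nonce. This is illustrative only, not the actual Dutch banks' protocol; every name and parameter here is an assumption:

```python
import hashlib
import hmac
import secrets

# Shared secret provisioned onto the chip card (illustrative).
card_key = secrets.token_bytes(32)

def bank_challenge() -> bytes:
    """Bank issues a fresh random challenge for each login."""
    return secrets.token_bytes(16)

def card_response(key: bytes, challenge: bytes) -> str:
    """Card/reader derives a short, typeable response code."""
    digest = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return digest[:8]

challenge = bank_challenge()
response = card_response(card_key, challenge)

# The bank verifies by recomputing with its copy of the key; a
# replayed response fails because each challenge is fresh.
assert hmac.compare_digest(response, card_response(card_key, challenge))
```

Storing credentials on the phone, as the parent describes, trades this second factor for convenience, which is why the separate PIN and the binding to the device matter so much.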

That's terrible: mobile banking apps for iOS are woefully insecure, yet you folks are making fun of them. Poor little things, you're gonna make 'em cry. Is that really what you want? Can't you just leave 'em alone, you big bullies...?

The shit some alleged jour^h^h^h^h resear^h^h^h^h^h^h overpriced snake-oil salesmen and consultants keep spreading about the "risks" of allowing banking apps to run on jailbroken devices is getting old.

It's wrong, it's a lie, AND it's actively harmful to the ultimate goal of banking security (preventing fraud and losses).

There are exactly two things that would happen almost immediately if any major bank in the US with millions of customers tried to prevent customers from running its consumer banking app on

I'm sorry but you clearly have no idea what you're talking about. I'm going to talk about iOS jailbreak because that's what's interesting, Android devices are inherently less secure than iOS out of the gate so the conversation there is different.

The jailbreak defeats two primary security measures - the barriers protecting one app from another and the signature checking on the binary to confirm it hasn't been tampered with. If you are running on a jailbroken device it's trivially easy to hook the binary and

If you're capable of inserting code to intercept credentials and email them somewhere then why can't you just excise the jail break detection code? Seems like this probably isn't the sort of attack jailbreak detection is designed to prevent. I'm instead imagining a scenario where a user's OS has been modified w/o his or her knowledge in such a way that it snoops on legitimate unmodified apps. Maybe the user bought the device used from "some guy at the car wash". He then proceeds to install his banking

My employer is considering offering our customers (banks) the option of turning on code in our apps that attempts to detect a jailbroken device and causes the app not to run. Our customers are all small, regional outfits, though; probably not big enough to merit much outrage.

I learned this lesson the hard way, back a couple of revisions with the iPhone. I downloaded PayPal and logged in once, logged out. The very next day, someone stole a couple hundred $$. Clearly, one of the apps I had on the phone had a clever keylogger or other monitoring scheme that was running. Apple did everything to divest themselves of any liability or interest. So we have to be concerned about other apps' behavior and have "faith" (in the case of Apple) in the ability of organizations to pro