Monday, May 29, 2017

Seriously, this was never intended to be a banking technology blog.

In the process of gathering evidence for part two of my series on how to fix mobile banking in Canada, I needed a solid example or two of a particular type of problem I was writing about. This was supposed to be a quick 60-second job, as the nature of the problem is such that if you know what you're looking for, you can find examples very easily. Long story short, something familiar from Toronto caught my eye, and the next thing I knew I was looking at a multinational bank breach involving at least five banks in Canada, the USA and Japan, a multibillion-dollar software corporation, and the leaker at the centre of it all.

You seriously couldn't make this stuff up.

Wednesday, May 24, 2017

So, lots of good news in the past 24 hours.

First, I spoke to the bank that was leaking their source code. They're now investigating fixing things up. That's good news.

Second, the "how do you fix mobile banking" article has accomplished what it set out to do, and lots more. My original intent was to kick-start a conversation on the mobile banking problem in Canada. I was told by sources last night that two conversations happened within Scotiabank yesterday: one at the executive level on King Street, and one lower down in Scarborough. By having customers show the banks a solution to their cyber problems, it would appear this has started conversations on how to deal with it. Another thing that happened is that I ended up with a little army of peeved bank customers, who suddenly sent me new information. I will go through it all in the coming days and work out what to do with it.

Finally, the article was picked up by a lot of people. Everyone from the US Tenth Fleet to Microsoft and Oracle is all over it. So the good news there is that word is getting out, and people are taking notice of the problem.

Tuesday, May 23, 2017

On Saturday, I posted an article that had been brewing in my head for a long time. The response to it has been unbelievably positive, and the feedback and ongoing dialog about this type of mobile banking problem is great. The general response has been this:

What I said is basic common sense to other programmers and security people, yet many people disbelieved what I was saying until they went and checked it for themselves - and then disbelief turned to shock.

Some people claim that what I posted is a good start, but it's not enough.

First off, I know this is not enough. This was just a start.

Just in case it's not obvious: I don't work for the banks. I have a full-time CTO commitment to something totally unrelated to banking. I'm just a customer of the banks who's trying to protect the general public from the awful reality that is Canadian mobile banking. I also don't make a habit of giving the banks free security consulting, so I don't tell all. If something's serious enough, it goes to the CCIRC. What I pointed out on Saturday was a way to highlight and address one massive problem, but it's not the only problem, and it's still up to each individual bank to take what I said and implement it properly - not just copy/paste from this blog. I also wasn't setting out to show banks how to pin certificates, or how to check an app hasn't been altered from the version delivered to the app store. That's down to the banks to do. Maybe if they don't, we can cover that in a later post down the road, but arguably the next major issue to address is the shoddy server security.

Another great thing that happened as a result of Saturday's post was that the usual readers (the big six banks in Canada and various individual readers across the world) were suddenly joined by other big banks across the USA and Europe, cybersecurity people from other banks and telcos, Canadian law enforcement, and a tonne of government viewers in Ottawa. This is great news. I welcome these new readers, and I hope you all stick around.

Finally, a quick update on last week's breach of bank source code: a full responsible disclosure report was submitted to the CCIRC over the long weekend. After the CCIRC has looked it over and digested it (it's significantly smaller than my usual tomes to them, weighing in at a paltry six pages), it will likely be disseminated down to the affected big bank later this week. The virtual banks this big bank runs have been made aware of the problem, too. So, in short, everyone who needs to know will know.

What's next? I'm going to wait and see what other dialog arises from Saturday's article. There are a lot of good points being made on social media, and I'd like to hear all of that before doing anything else. I also have to wait for the CCIRC to do their thing, and measure how long it takes for the bank to plug the holes.

Saturday, May 20, 2017

This is a post that has been brewing for a long time. Depending on the reaction to it, it may become part of a series. I've got a spleen-full that needs to be vented on this subject. I'm going to try kicking off a conversation that many people think just doesn't need to happen, but that I really think is long overdue. It's going to get technical, because that's where my gripe is and where the possible solutions lie.

So here goes...

Contrary to the slick marketing and web pages on various bank websites telling gullible people that things are really safe, digital banking is a minefield full of scammers, bad programmers, incompetent security people and dodgy delivery platforms - and that's before we get to the customer, their choice of wifi access points, and so on.

I bank with CIBC and Scotiabank, and I don't allow either bank's apps on my mobile phones. I've worked with many banks over the years (the first in Canada was RBC in 2000, when I was on the Palm Pilot banking app; the last in Canada was TD in 2015, working on iOS). I don't give my wife any stick over her decision to use the TD app on her iPhone, because I've personally seen that code, worked on it, know the people who worked on it, seen how it's built, and sat with the QA team talking to the people actually testing it; their cybersecurity people have bought me beers, and I have a good ongoing relationship with everyone. I feel it's reasonably secure as long as she is equally careful with her phone. However, both my wife and I have Scotiabank accounts, and the Scotiabank app is banned in the house, just like the CIBC apps were years ago.

You're probably wondering why I'm so against Canadian banking apps. It comes down to two reasons:

The evidence usually shows me that I cannot trust the banks.

My view is the apps the banks put out are clearly in contradiction to the marketing message about safety.

To back up what I'm saying about trust: you may remember Scotiabank had a problem with unauthorized code making it into public releases, leaving over a million people walking around with "Fuck kony" (sic) in their pockets. It's not just Scotiabank, either. My other bank, CIBC, has for years been feeding me a line, telling me that as a customer I'm appreciated, whilst simultaneously having this nugget in their Ember.js system, where the English translation for "idiot" is the French for "you" ("toi")...

Example of a CIBC insult in a customer-facing system, where "idiot" in English is "toi" (you) in French.

Both of these unauthorized additions prove that basic crap slips past the very people who are supposed to be looking out for this stuff. Whether that's laziness or carelessness, these things do happen, and they eroded my trust a long time ago.

I mean, seriously: ask yourself whether you would use a system where the programmers evidently have contempt for the customer.

Now onto the second point above... The public is told things are all nice and secure, but in an age where crap design or lazy programming puts millions of people at risk very quickly, I find myself shaking my head in disbelief on a regular basis.

One example (and this is where this could turn into a series, because there's a lot of this crap going on) is security around URLs in apps. Basically, there's none. Nobody ever thinks to bury that shit somewhere safe, so API endpoints and hosts for banks are thrown around like confetti at a wedding. It's a totally nuts situation when you think about it.

To back up what I'm talking about, here's code from a very well-known payment wallet system in Canada that is live right now. Any script kiddie can upload the APK to an online decompiler, or shove it through a ClassyShark string dump, and get every endpoint to start poking about.
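To show just how low the bar is, a strings-style dump is only a few lines of code. The sketch below is illustrative only (real tools like ClassyShark also parse the dex format properly): it scans a blob for runs of printable characters, which is all it takes for fully assembled URLs to fall out.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Minimal "strings"-style scan: collect runs of printable ASCII of at
// least minLen characters from an arbitrary binary blob. Point something
// like this at a classes.dex or a bundled .so and every hardcoded
// endpoint string is sitting there in the output.
std::vector<std::string> extractStrings(const std::vector<unsigned char>& blob,
                                        std::size_t minLen = 8) {
    std::vector<std::string> found;
    std::string run;
    for (unsigned char b : blob) {
        if (b >= 0x20 && b < 0x7F) {
            // Printable ASCII keeps the current run going.
            run.push_back(static_cast<char>(b));
        } else {
            // Non-printable byte ends the run; keep it if long enough.
            if (run.size() >= minLen) found.push_back(run);
            run.clear();
        }
    }
    if (run.size() >= minLen) found.push_back(run);
    return found;
}
```

That's the entire skill level required to harvest endpoints from an app that stores its URLs as plain string literals.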


Really.... what the hell is with the suspicious non-secure IP address pointing at India? Did they hire an Indian outsourcing company to write this wallet and now it's phoning home to the mothership?


If you're wondering why that Android wallet app has strings just sitting around in a class where they can be read: this is STANDARD PRACTICE in Canadian Android banking apps. Nobody ever designs them to be secure, or puts any effort into hiding things like URLs.

No, really... "industry standard" or "bank grade" security in mobile apps in Canada is like this. If all the banks have the same non-security, your bank can also do nothing security-wise and then claim in its $5m advertising campaign that its app is secured to an "industry standard".

That's not security. That's theatre.

Another example of this stupidity is from this past week, when Scotiabank pushed out another Android banking app (v17.4.0). I've been harping on about this particular app for over a year. My personal opinion is it's downright dangerous.


As you can see in the above image, even though as programmers they should know Java bytecode is trivial to decompile, they are still just dumping everything into a simple class as fully assembled strings. There's so much wrongness here, I don't know where to start: the lack of obfuscation, or the fact that the strings are stored in their entirety, so you can simply string-dump the app to get the URLs. I'm not even getting into bank server-side security in this article.

My brain can't comprehend a) why anyone approves this type of dangerous code, or b) why security people in the bank aren't thumping the team that puts this out over the head with Android tablets. As you can see, though, Scotiabank is doing nothing different security-wise to what the payment wallet system is doing.

Now, this app has another security problem, which the CIBC payment app that got pulled in April used to have as well: they use the EnStream framework, which communicates with Bell Canada over HTTP, not HTTPS. Yes, the same Bell Canada that just leaked 1.9 million customers' info.

Now, I can point at other banks (and even into Bell Canada) and you'll frequently run across the same bad practices. In fact, many programmers and managers just bounce from bank to bank spreading their bad practices and picking up new bad habits - which introduces a kind of predictability over time.

So now I'll go back to the question in the title - how do you fix mobile banking in Canada?

Well, the first problem is communicating with them about problems. When I started looking at the mobile problem and tried communicating with the banks, I found a minority who would entertain listening to me, while the majority would talk to me like, "We're a bank, we know what we're doing. We don't need to listen to you." Contrary to the usual cool Canadian reaction, I've had Americans stick a guy on a plane immediately, and that night I'm in their hotel bar in Toronto explaining what I see. Same with the British authorities after the Tesco Bank hack - they also had open arms and were ready to discuss things. Whilst CIBC has partially changed their tune with me (things no longer just go into a black hole; they now listen), we don't talk until we really have to, because it's a shaky customer relationship (see the "idiot" thing above, for example).

At one end of the spectrum I can openly communicate with a bank like TD, or the Loblaw people at PC Financial. At the other end of the spectrum is Scotiabank, who told me in December that their VP of Security was looking for my phone number to urgently discuss a code leak - and it's now May. When customer service is just fobbing people off with platitudes and lies, there's obviously a snowball's chance in hell of a conversation.

The next problem is working out how this type of problem keeps coming up. Why would Scotiabank, for instance, keep screwing up security this badly? It's obviously not money, because they're a bank in Canada. It's not time, as they've had something like ten years to fix this.

The only reasonable conclusion I can arrive at is they are clueless, and don't know what they're doing.

Now, I believe that if you're not part of the solution, you have to have a good hard think about what you are. To that end, I'm now going to explain to every Canadian bank that repeatedly puts out Android apps with URLs on full display what I mean when I say "bury that shit in C/C++".

The following is a lesson in which I show you how to put your server address somewhere that isn't on full display to every 12-year-old script kiddie.

Step 1. Create your project.

Create a project with the correct app name, and check the "Include C++ support" box. Specify your platform, and leave C++ support at the toolchain default.

Step 2. Customise the app to your needs.

Rename the default cpp file to "security-lib.cpp". Note that after renaming the file, it will disappear from the project view. This is normal.

Open the CMakeLists.txt file. Change line 14, line 20 and line 40, then refresh your cpp projects. Now your security-lib.cpp file shows up properly.

Step 3. Customise your C++ file.

First, create a class called ScotiaNativeBridge. You need this because native sandboxing will force you to create a new C++ method for every class that accesses it, so we'll make every Android activity call this one class, giving a single entry point. Throw some code in it.

Important: look at the package name, then look at the class name. Shoving them together, you have com.scotiabank.banking.ScotiaNativeBridge... remember that, as it comes up in the next step.

Now throw code into the cpp file. Take that really long name I said to note, prefix it with "Java" and suffix it with a function name. That naming is part of the security that makes sure only your ScotiaNativeBridge class can access this. The function is on line 9 of that ScotiaNativeBridge class.

Now, look at the C code above. The entire URL string doesn't exist anywhere in the code, and we have decoy values too. Depending on which build we're in, we assemble the right components at the moment we need them - which is the only time the full URL exists. This means that if a hacker looks in the compiled C code, they're faced with a barrage of real and dummy values. That weeds out more of the amateur hackers, as you're not handing the values over on a plate; they now have to work for it.

Finally, go to the main activity of your app and throw this code in - it shouldn't need explaining. Now run it. The first time you do this, you may get a dialog box - just hit Yes.

So, do we get the correct server URL if we compile in production mode? Yes, we do.

End of lesson one...
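To make the idea from the lesson concrete without the screenshots, here's a minimal standalone sketch of what the C++ side is doing. Every fragment value and the resulting URL below are invented for illustration; in a real app, this logic would sit inside the JNI entry point (the function whose name starts with Java_com_scotiabank_banking_ScotiaNativeBridge_, per the naming rule above) and hand a jstring back to the single bridge class.

```cpp
#include <string>

// Standalone sketch of the fragment-and-decoy technique. The point is the
// shape: the full URL never exists as one string literal in the binary.
// All values here are invented for illustration.
namespace {
    // Real and decoy fragments sit side by side; a string dump of the
    // compiled library shows only this soup of short pieces.
    const char* kFragA = "ht";              // real
    const char* kFragB = "xx://decoy.";     // decoy, deliberately never used
    const char* kFragC = "tps://";          // real
    const char* kFragD = "api.";            // real
    const char* kFragE = "test.bank";       // real, but only for debug builds
    const char* kFragF = "prod.bank";       // real production host fragment
    const char* kFragG = ".example.com/v1"; // real
}

// Assemble the endpoint only at the moment it's asked for. In a real app,
// "production" would come from the build configuration rather than a
// runtime flag.
std::string buildServerUrl(bool production) {
    std::string url;
    url += kFragA;
    url += kFragC;
    url += kFragD;
    url += production ? kFragF : kFragE;
    url += kFragG;
    return url; // "https://api.prod.bank.example.com/v1" in production
}
```

This won't stop a determined reverse engineer with a disassembler, but it does take the URLs out of reach of a casual string dump - which is exactly the bar the lesson is aiming at.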

So, there you go. In Canada, things have gotten so ludicrous that customers now need to show the banks (for free) what the damn solution is to the Android problems the banks pretend don't exist. Given that I know all of Canada's major banks now read this blog, there's no longer any reason for anyone to push a single Android app with exposed URLs to millions of customers. Though personally, my view is that any Canadian bank that makes $2bn a quarter in profits and has had nearly a decade to address this problem has no excuse to continue foisting insecure crap on the public anyway.

I'll wait to see what the reaction is to this blog post, before tackling some of the bigger stuff on the pile like obfuscation or stopping code leaks.

Thursday, May 18, 2017

Yesterday I wrote about a large breach of source code from one of the "Big 6" banks in Canada. I'm still compiling details on it, which will be forwarded to a few separate authorities, but I thought I'd share some new details regarding its scale.

Two things I can say about this particular source code breach:

It is not Scotiabank this time.

It's ongoing, and there's evidence it has been leaking for a number of years.

The server location with the leak is now known, and the vectors by which you can reproduce it have been documented. The cause of the leak (inadequate technical chops combined with relaxed security testing) is also now known.

The problem has turned out to be bigger than originally thought, because when you factor in that this bank has multiple bank brands under the umbrella of the main bank, there are actually multiple bank brands leaking multiple versions of the source code for multiple systems. The problem gets worse still when you realise there's more than just the retail and business banking operations affected.

It's a world-class hole.

I'm still assembling instructions for the authorities and the other affected sub-banks, and just need to finish documenting the things I'm not going to test myself. This is necessary because I can point external investigators and privacy officers to further systems that I can reasonably infer will also be broken, given what can already be proven, without going there myself to prove it. I personally don't want gigabytes of financial software cluttering up the place.

Wednesday, May 17, 2017

Last year, Scotiabank haemorrhaged source code multiple times. Their Android code was available due to poor security practices, their own staff posted server code online (where it spread to multiple sites), and their static web content was being leaked in a way that appears attributable to interns who worked at certain third-party agencies the bank uses. That latter code went on to fuel updates to "Scotty" (the Scotiabank scammers' kit). Whilst all that bolstered what I've been saying for some considerable time about how bad things are at Scotiabank, all this action meant I'd taken my eye off the ball at other banks.

All banks in Canada have leaks. That's just a fact of digital banking. So my interest is simply a question of quantifying how leaky each one is and where those leaks originate. Scotiabank's leaks are normally through its staff, for instance; CIBC's are through its infrastructure; RBC's are through its customers and partially through infrastructure; and so on...

So it was that yesterday some source code for one of Canada's big six banks landed on my desktop… Like nearly 40MB of it. Then another 5MB, then another 7MB, and then more dribs and drabs.

The first step in handling a source code breach like this is identifying where it came from (Bank, department, and system), because obviously the authorities are going to need to know about this as soon as possible.

Bank source code doesn't always have the bank's name plastered all over it, so often you rely on which URLs it talks to. Once you know which bank, you can date the code by which frameworks are used, as banks often flip-flop between competing architectures and systems every few years.

The last step I usually perform is looking for the insults... Insults might sound like a weird thing to identify, but they help refine things even further. For instance, of the two banks in Canada that I use, one is prone to insulting its vendors and the other is prone to insulting its customers over the years. This usually adds to my distrust of banks - it's hard to swallow a bank telling you that as a customer you're important when you're fully aware you're simultaneously being insulted. But I digress... Other banks in Canada usually appear to insult the internal programmers who came before the programmer writing the current insult, or they insult other departments within the same bank (the iOS team insults Android, Android insults iOS, etc.). You can tell a lot about the culture of a bank, its QA processes, and other attributes by who they insult and what is allowed to slip through, as well as further refining how old the code is.

Additionally, the longer an insult remains in the code, the longer it serves as an external canary (when the insult dies, you know someone finally went through that code) for the outside world.

So, back to this leak. Once I had identified the bank and system, it was time to check another important thing: were any virtual banks this bank supports equally in trouble? The answer was yes - there's a replication of failures across virtual brands within the same physical bank. This means that when I go to the authorities with this, they'll also get a list of which other companies outside that bank need to be notified.

So, this week all the information will get packaged up and I'll start compiling a report to get this taken care of and cleaned up.

The biggest question left now is simply who to tell first. There's the big bank at the root of the problem, the brands of the virtual banks affected by the problem, plus the CCIRC (a dept of the RCMP) at the government, and I usually copy the CAFC so they can get a feel for what's going on, given they deal with the members of the public who get compromised when bank security goes wrong. I'll have that worked out in the coming week, and then this latest leak will hopefully get closed up.

Tuesday, May 16, 2017

It's been a while since I've had to roll up my sleeves and get ready to fight with one of my banks, but after the annual "Don't have a computer call me" rule slipped again over the past few weeks, I found myself this week getting tooled up for the concomitant unpleasantness that occurs as the warm weather returns.

Traditionally, springtime is an unpleasant affair for me. Money gets tight, and usually someone within the bank with an unbendingly stiff view of their rules (which is never their problem, but always the doing of someone else) tries to push me into a corner of fines and penalties, whilst lecturing me about that bank's terms and conditions (which are never what I originally agreed to, but what they've introduced in the subsequent years through arbitrary changes you have no choice over). This never usually ends well for either party, so things escalate and get heated, and ultimately we end up in an early-summer showdown.

I don't normally handle being talked down to by anyone very well, or being treated as just a number - especially by banks, given how connected banks are to my personal history in the first place. In the early 2000s (during my 20s) I used to sit back and take it on the chin, but these days, now I'm in my 40s, I fight back when things get unjust - and I nearly always do it on a "you do x, I'll do the same" basis unless I get especially riled up.

And so it is that I’m now sitting here again with a few 800lb gorillas in front of me, where the bank can be shown to have broken its obligations too (effectively I'd be holding a mirror up to the bigger pot calling the smaller kettle black), to make the bank consider coming up with some cool-headed alternatives.

For a number of years now, when a bank customer service rep thinks that because they're a bank and I'm just one customer I'll only bring a knife to a gunfight, they're usually surprised when I turn up with a proverbial platoon of backup. However, a side effect of how I have to plan for defending myself against a bank is that I pick the smallest items first and save the larger items for later gambits (so I always have something bigger on hand to escalate to). As the years have gone by and the smaller items have been used up, these 800lb gorillas became 900lb, then 1000lb, and so on.

Long gone are the minor issues that affect one or two people. Last year we were up to phishing, mobile security failures and the types of issues that might affect a small percentage of maybe a million or two customers (so an effective compromise of, say, a few hundred to a few thousand people). This year, the smallest 800lb gorillas are sized like mosasaurs (basically a bus-sized angry lizard), which makes the old stuff look like child's play.

The nature of one of the smaller issues I pulled out this time is what I want to cover today, though I'm not going to specify which vendor to which bank is the problematic one. That information would go straight to the regulator (for obvious reasons) if I'm forced into a suitable corner.

All the banks leapt head-first into analytics a few years back, especially on web and mobile. It's not uncommon for a bank app or banking website to have three, four or more analytics and marketing systems running simultaneously. Each time a bank pulls in another third party, that's another avenue where potential problems can start. (I covered some of this last summer - article here: http://coulls.blogspot.ca/2016/06/online-banking-and-hosts-file.html)

This time, we're up to the scale of problem where everyone is wrapped up in it. It's no longer confined to just mobile users, or just Android users. As usual, the public is assured this is all perfectly safe, and many people don't give it a second thought as a result. However, most of these systems can identify you across platforms (when you jump from mobile to desktop browser) and can identify you outside of the banking environment. This is commonly called "tracking", and it's the thing people will get upset about, if they get upset about anything. It's also why, despite having built various portions of mobile banking apps (including in Canada), I refuse to use mobile banking apps myself, and always opt for online banking if I have to use something.

What everyone forgets is that a) you can screw about with this tracking once you know its weaknesses (the implications of this are huge), and b) the third parties handling this information are not always as safe as even the banks think.

What all this equates to for a bank is that if they come after me and really try to apply the thumbscrews, I simply drop a compliance deflection on a regulator's desk.

That's an expensive proposition for a bank.

The bank still ultimately gets its money from me as soon as I have it spare, but it has put itself through an unnecessary metric tonne of bad karma and been shown to be hypocritical in the process. That can come with an underlying cost running into thousands of times more than opening up a proper dialog would have cost.