Friday, June 30, 2017

This article has been updated to reflect further communication with the bank, as well as further examination on my part.

So today I had to deal with Scotiabank. The idea was to Interac money from one account to another. Obviously, because this post exists, you can guess this simple process turned into a bit of a train wreck along the way.

If you're not experienced with Canada's "Interac" system, allow me to quickly get you up to speed on this boondoggle. In short, you have banks connected to the Internet who can't send money between bank computers over the Internet. To resolve this silliness, another organisation called Acxsys was created, and it runs a service called Interac, where you send money between bank computers over the Internet and they charge you a fee for the privilege. With me so far? Right... Somehow, instead of these money instructions being sent immediately, like anything else over the Internet, the process takes about 30 minutes. Compare this with the longest known time to get a signal to Mars (24 minutes), and you'll see that instructions can reach Mars about 6 minutes faster than they can reach you in the same city.

First, I sent the money from Scotiabank (yes, it's also going into Scotiabank on a different account, which is likely held inside the very same physical computer). The email arrived about 30 minutes later. So far, so good.

(Click for full-size)

I clicked to deposit it and logged into Scotiabank, at which point the transaction went into some kind of Schrödinger's transaction state by being both "temporarily unavailable" and just plain "unavailable". (Click for full-size)

I interpret that to mean the transaction is unavailable - and as it suggests, I wait for a bit before trying again.

This is where things begin to get weird... The money is debited from the sending account and sits in a pool account at the bank, to be settled between banks tonight, whilst the promise of the money is sent immediately to the depositing bank (the same bank it just left). As a customer, I don't expect the bank software to lose money at any point, or to be unable to tell me where it is, but let's follow this process through to its logical conclusion.

I waited a little while and tried again, just as the previous screenshot suggested... This time, I got an error telling me that the transfer cannot be deposited. (Click for full-size)

So if the transaction was previously unavailable, and is now unavailable forever, you'd think the transaction was unavailable, right? It's pretty clear the bank is trying to tell me "This transaction will never go through", right?

Wrong. I've seen this type of breakdown before, where Interac transactions go into a weird state of quantum superposition. We can see the breakdown of logic here, because this transaction has clearly been communicated to the customer as both a) unavailable previously, and b) no longer available going forward. So, if the bank is saying this is "UNAVAILABLE", you should never then get an email from Interac like this: (Click for full-size)

This email is written confirmation that the "unavailable" transfer has apparently gone through. If you're not confused yet, you soon will be. I gave it 30 more minutes and then checked the receiving account to see if the money ever arrived. Of course, despite having written confirmation the money arrived, the money never actually arrived.

So, what does this mean? It means either:

a) The Scotiabank online banking system was lying when it said the transaction was not available.

- or -

b) The Interac system was lying when it said the money was accepted on the receiving end.

As a programmer, it doesn't take many brain cells to realise that the Interac email couldn't have been sent out without some trigger from the bank telling it the money was received. In the same way that you can't move an object without applying a force, you can't have Interac tell you the money was received without Scotiabank telling Interac the money was received... computers may be many things, but they're not psychic, so this has to have happened. To recap so far: the customer has been led up the garden path, and the bank has effectively lost track of my mortgage payment again.

So, I phoned Scotiabank and determined the following:

1) Apparently the system isn't working too well this morning, and they are aware of this.

2) It's going to take 48 hours to move the money from Scotiabank to Scotiabank to refund it.

Let's back this truck up a bit and look at things logically.

1) Despite apparently knowing that things are broken, they're still allowing customers to initiate new transactions that will never succeed, instead of just being honest and transparent and saying "Hold off whilst we fix this mess". That's wrong right there.

2) If Interac sent out the email saying the amount had been accepted on the receiving end, then Scotiabank told Interac the amount was received - which contradicts the online banking system that doesn't show the amount because it wasn't received. This means that whilst the audit trail can be followed backwards to find out what really happened, they've effectively (if temporarily) lost the transaction and my money in a gumbo of instructional baggage that's piling up somewhere, because they haven't stopped accepting new transactions.

It's like there's some "common sense horizon" which, when crossed, breaks down normal common sense and sensible logic. This should never have been allowed. It doesn't make sense to keep taking instructions if they're never going to work. Of course, now I'm waiting to get screwed when they don't refund the fee I paid to send it.

I will do a second post when I get a resolution to this stupidity.

This time, I’m going to focus on customer security. But I’m not going to waffle on about the usual angles that everyone else talks about. Everyone has heard the well-worn clichés like “At such-n-such bank, customer security is of paramount importance”. Of course they’re going to say that, because it’s their job to say that, but it’s not always 100% true, and that’s what I’m going to demonstrate in this article.

For my first example, we’re going to look at smishing.

Smishing

This is the act of sending SMS messages to people with a call to action, to get the victim to log in with their bank credentials on a phishing site set up to emulate the bank. If a customer is compromised in this manner, a bank has two options: one is to refund the customer any money they lost; the other is to tell the customer that they were at fault for falling for the scam, and that the bank is not going to refund their money. The banks also have teams of people who are supposed to look out for the customer (and the bank itself) and act on threats before they cause a problem, but as you’ll see, there’s an apparent blind spot.

So, how does smishing work?

It’s a bit of a team effort, but it works like this:

Someone spends some time building a list of URLs that haven’t already been registered/blocked elsewhere. They then post these on online clipboard or paste sites for the next person to pick up. You can see examples here, and note that it’s very much in the open:

Someone else then provisions the new website with a suitable clone (in the case of ScotiaBank, usually a tool called “Scotty”).

Finally, the smishing operation starts. You then watch Twitter for the first signs of customers alerting the banks.

(Click for full-res)

This particular URL example had a pretty fast timeline: the site went up about as fast as its registration propagated, within about 24 hours, and the criminals started using it as soon as they could. Often, though, the lead times can be as much as two or three weeks; normally things are a lot slower.

Now, it takes about 30 seconds a day to check a site like Pastebin and get weeks of pre-warning about what’s being planned. However, if you go down those sites and compare the time the lists were posted with when the banks' customers started bleating about it on Twitter, you’ll see the banks never protected the customer in a proactive manner, despite the fact the warning signals were there. I've warned both the CCIRC and the Privacy Commissioner of Ontario about this previously, as it's something that can clearly be seen and measured.
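To make that "30 seconds a day" concrete, here is a minimal sketch of the kind of sweep I mean. The keyword list, the paste content, and the lookalike domains are all invented for illustration; a real sweep would pull recent pastes from a paste site's public feed before scanning them.

```python
import re

# Bank-related keywords to look for in paste dumps (illustrative list).
BANK_KEYWORDS = ["scotiabank", "cibc", "interac"]

def flag_suspicious_lines(paste_text, keywords=BANK_KEYWORDS):
    """Return lines of a paste that mention a bank name inside a
    domain-shaped string -- the pattern a pre-registration URL list
    of phishing lookalikes would show."""
    hits = []
    for line in paste_text.splitlines():
        lowered = line.lower()
        for kw in keywords:
            # Match the keyword embedded in something that looks like a domain.
            if re.search(r"[\w.-]*" + re.escape(kw) + r"[\w.-]*\.(?:com|net|org|info)", lowered):
                hits.append(line.strip())
                break
    return hits

# Illustrative paste content (these domains are made up).
sample_paste = """\
scotiabank-secure-login.info
weather notes
cibc-verify.net
"""

print(flag_suspicious_lines(sample_paste))
```

Running that on weeks of paste history, then lining the post timestamps up against the first customer complaints on Twitter, is exactly the measurement described above.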

However, the thing to bear in mind here is the dynamics at play: you are the product, and the bank is the sleeping security guard. What can happen next is not so obvious to most customers. Where this type of scam gets interesting is with “whaling”. A phishing scam tool like Scotty will usually log victim info in a predictable place if the criminals setting it up were lazy and didn't configure it to log to a custom location. Just like the average customer doesn't change the default installation parameters of their software, neither do criminals. Other criminals know this, so they wait for the log file to get sufficiently large, then try to snaffle it for themselves. Some of these whaling criminals are also lazy, so they sometimes use tools called auto-whalers. As you’ve probably guessed by now, the writers of the auto-whaler wait for the whaler to grab the phishing logs from the phisher, and then auto-upload a copy from the whaler to themselves. Now a compromised bank account sits with the phisher, the whaler, and the writers of the auto-whaler.
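The "predictable place" weakness can be sketched in a few lines. The kit name and artefact paths below are hypothetical, purely for illustration of why unchanged defaults matter; whalers (and, equally, defenders doing takedown triage) probe the same handful of default locations on every freshly spotted phishing site.

```python
# Hypothetical default artefact paths a lazy kit deployment leaves in place.
# Real kits differ; these file names are invented for this example.
DEFAULT_ARTEFACTS = ["logs/victims.txt", "admin/config.php.bak"]

def candidate_artefact_urls(site_root, artefacts=DEFAULT_ARTEFACTS):
    """Build the URLs a whaler (or a defender) would probe first,
    exploiting the fact that nobody changed the kit's defaults."""
    root = site_root.rstrip("/")
    return [f"{root}/{path}" for path in artefacts]

# A made-up phishing site root, for illustration only.
print(candidate_artefact_urls("https://example-phish.test/"))
```

The whole chain of phisher, whaler and auto-whaler works because each layer bets, correctly, that the layer below left these defaults untouched.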

But remember, “At such-n-such bank, customer security is of paramount importance <you can insert here any hornswoggle about firewalls and encryption that doesn’t help this problem either>”.

The next two examples I want to talk about are a bit more complicated.

ATSs, or Automated Transfer Systems

Again, you can go straight back to Pastebin (https://pastebin.com/BMGZmTZt) and see how this is openly bought and sold. In short, code is injected via a bot such as Zeus or SpyEye into the victim's browser. It watches the user legitimately log in through their bank's UI, and the credentials and responses are swizzled off to a separate “Command and Control” (C2) server. This means the user is interacting with their bank’s website UI, but it’s the C2 server logging into the bank’s server using the info provided by the user. The user pays their bills and does whatever online banking they would normally do, and the ATS is draining funds at the same time. To keep the user in the dark, all balances are adjusted accordingly.
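To make that last "balances are adjusted accordingly" step concrete, here is a toy Python sketch of the idea. Real injects are JavaScript running inside the victim's browser; the `<span class="balance">` markup and the amounts here are invented for the example.

```python
import re

def hide_drain(html, drained):
    """Toy illustration of an ATS balance rewrite: bump every displayed
    dollar balance by the drained amount, so the victim sees the figure
    they expect instead of the real (reduced) balance."""
    def bump(match):
        shown = float(match.group(1).replace(",", ""))
        return f'<span class="balance">${shown + drained:,.2f}</span>'
    return re.sub(r'<span class="balance">\$([\d,]+\.\d{2})</span>', bump, html)

# Account really holds $1,200.00 after $500.00 was drained;
# the victim is shown the pre-drain figure of $1,700.00.
page = '<span class="balance">$1,200.00</span>'
print(hide_drain(page, 500.00))
```

The same rewrite-on-render trick is why a victim's own screen is the last place the theft shows up; only a statement fetched through an uncompromised channel reveals the real balance.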

TrickBot is another problem in Canada. Usually installed via a Word document with VBA macros, it installs a bot on the user's computer. Once installed, it calls back to a C2 server for more information. This information includes what to look out for with the banks, and how to handle injections. You can see an example of an injection configuration for TrickBot here (https://pastebin.com/SGvG6aYh).

In the case of Scotiabank customers, this simply targets everything going through any part of Scotia Connect and sometimes the entire scotiabank.com site. Here’s the usual config line showing that:

“<mm>https://scotiaconnect.scotiabank.com*</mm>”

In the case of CIBC, things get a little more interesting. They have a dual login setup, where you can log in through the main cibc.com site or through a second website on the cibconline subdomain. TrickBot never goes for the main domain, and always goes for either that subdomain or the cash management (cmo) subdomain.
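Those `<mm>` entries are simple wildcard URL masks. As a rough sketch of how a bot decides whether a visited page gets the inject treatment, the first mask below is the Scotiabank line quoted above, while the CIBC mask is my own assembly from the subdomain behaviour just described, not a line lifted from a real config:

```python
from fnmatch import fnmatch

# Wildcard masks in the style of TrickBot's <mm>...</mm> config entries.
INJECT_MASKS = [
    "https://scotiaconnect.scotiabank.com*",   # quoted above
    "https://cibconline.cibc.com*",            # hypothetical, per the CIBC note
]

def url_is_targeted(url, masks=INJECT_MASKS):
    """Return True if the URL matches any inject mask; '*' matches
    any trailing characters, including the path."""
    return any(fnmatch(url, mask) for mask in masks)

print(url_is_targeted("https://scotiaconnect.scotiabank.com/login"))  # inject fires
print(url_is_targeted("https://www.cibc.com/"))  # main site left alone
```

The trailing `*` is why a single config line covers every page under Scotia Connect, and why CIBC's main domain escapes while its login subdomains do not.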

So you’d end up having someone from Montenegro VNC’ing into your machine, whilst the Scotiabank inject went via 4allgod.com. This means that if you had all your passwords in a spreadsheet or note and had it on screen, they've potentially screen-captured all that information too.

So, back to mobile…. What about Android.BankBot.136.origin as an example threat?

The banks should all know the software is out there, and know it will impersonate CIBC, RBC, TD, and so on. If you look at the underpinnings of each app the banks put out, they don’t appear to be looking for the BankBot.136 app, even though the app is definitely looking for them. In Canada, you have a massive portion of users running just six banking apps, so you’d think the initiative would have been taken to look for this and other malware like it. However, I just don’t ever see it happening. Again, it's that "capture the flag" mentality, rather than the “scrub the attack surface” mentality that needs to accompany it.

But remember, “At such-n-such bank, customer security is of paramount importance <insert more whitewash about firewalls and encryption or some other tech that just doesn’t address these problems>”.

So, everyone is after your data, via the bank apps, whether it’s criminals, or the bank. That much is clear.

Now let's switch it up a bit; Remember, with banks, their services are no longer the only product. As a customer, you are also the product, though whether that's for criminals or people the bank wants to share your information with, is kind of irrelevant.

My interpretation of the CIBC privacy policy is simple: "if you give us anything, we assume you're fine with that and we can use it however we see fit“. Scotiabank is a bit more complicated, as they try to sound like you can opt out of portions of the data collection. However, the mobile banking app uses the Enstream framework over http (not https) and sends data to Bell, and it logs information through Adobe products such as Omniture, so your private data is obviously being sent to other third-party systems. And whilst the privacy literature online says you can opt out of having data collected, try to find anywhere in the current banking apps to turn that collection off. It looks suspiciously like the apps were written in a totally different, “second rate” spirit of privacy.

When I bank through online banking, I can use Ghostery and AdBlock to chop off browser tracking at its knees, my browser can block third-party cookies, and TunnelBear can say I'm in Hong Kong instead of the UK or Canada... so I have some control over my privacy, and the underlying tracking from Adobe (who have historically had leaks of their own) can't link me to my online shopping, for instance. If I use the mobile app, I've lost that control over my privacy and security.

That's a problem.

Conclusion

I bundled this group of problems under a very big umbrella because I think they’re actually symptoms of a common cause, and that is that mobile app privacy is simply treated in an inferior manner to web browser privacy, even though the customer uses the two interchangeably with the same privacy expectations for each.

In the case of smishing, I’ve been told that often this is a numbers game and that banks just let the customer get inconvenienced and the bank’s insurance will just cover any costs the bank incurs. I can’t imagine that the insurance companies would let that go on forever.

For this group of problems, I think the banks need to get together with the government privacy office and work through this as a collective. From a security standpoint, the banks already talk to each other when it suits them, but they certainly don't talk enough to their customers. For instance, when was the last time your bank talked to you about what (if anything) they’ve done about TrickBot? When did your bank clarify in its privacy policy what the Facebook SDK is doing in your banking app? Is it just liking and sharing with contacts, or is it harvesting your social graph to work out who your friends and family are? Go and look at your bank's privacy policy and see if you can find the answer to those types of questions; you’ll normally come up with no answers.

The privacy policies for online banking don't accurately reflect mobile. At the end of the day, you as a customer are as much a product to the bank (one that can be sold to third parties) as the mortgages and banking services they sell to you. But on the mobile side, things are not clear - and I think that's deliberate. The next time some social media rep, or media wag in the news, is delivering the well-worn clichés about firewalls and other 1990s bafflegab to sound impressive to your grandma, ask yourself how all that affects the fact that your bank is using beacons and the like to track you and gather information.

So, to recap what needs fixing so far in this series of articles.

Part 1 - Don't be lazy with mobile app security, and check the code being pushed into production for unauthorized additions.

Part 2 - Stop people doing dumb things like posting confidential documents in public, by training them with proper rules and protocols.

Part 3 - Stop treating mobile banking as a second-rate privacy area.

Friday, June 16, 2017

Following on from the other day, I did a quick sweep of the UK to renew my 10,000ft understanding of the state of things there, for comparison with what I know (and have previously talked about) in Canada.

By and large, I think things are still done better in the UK than in Canada. If you look at most UK Android apps, they usually have proper obfuscation, which Canadian apps usually don't. They usually use the JNI for sensitive code, where Canadian apps usually don't. They also often run 24-hour social media teams, which Canadian banks don't. The other thing that sticks out, and I hate to keep beating a dead horse over this one, is that attitude makes a massive difference.

As anyone reading this blog or following me on Twitter knows from yesterday, I ran into a snag with Barclays Bank. To paraphrase events, it went like this...

Me: You got a problem.
Barclays: Tell us more.
Me: Explains problem.
Barclays: Thanks, we're fixing this as a matter of priority.

*several hours later*

Me: You got another security issue. We need to take this offline.
Barclays: You got it.
Me: Gets into deeper discussion. Swaps issue, solution, etc.
Barclays: And if we need you again?
Me: Here's my info.

Now, contrast that attitude to, say, Scotiabank.

Me: You got a phishing problem.
Scotiabank: *crickets*
Me: You're leaking server source code.
Scotiabank: *crickets*

Me: You're pushing out sweary apps.
Scotiabank: *crickets*

Me: You got a problem.
Scotiabank: *crickets*

There's a clear difference in attitude, which improves the way things are handled. In every way I can think of, my experience with Barclays yesterday was on par with dealing with an American financial institution: it was quick, with clear communication, so you knew what they were doing about it.

When it's done like this, two things happen:

1) A bank gets secured more quickly, and that's good for the customers.

2) You don't waste precious time and energy fighting against a bank that is in denial that there's even a problem.

As I said, I don't want to beat a dead horse here, as I've mentioned this many times, but I thought I'd throw out this as an up-to-date, real-life comparison from the UK.

Wednesday, June 14, 2017

After the little bit of excitement that happened around me this week, I decided to do a quick refresher on the state of UK banking Android apps, just to get a mental update as to whether things are more or less secure in the UK than here in Canada. (Short answer: for the most part, it's more secure in the UK.)

Regular readers of this blog will remember the unauthorized addition to the Scotiabank Android app, which sat there for 230 known days before I posted how long it'd been there, proving that Scotiabank were not checking what their programmers were pushing to the public. Well, it appears that Barclays has taken a leaf out of Scotiabank's book.

Click for full size

The above was spotted in Barclays Mobile Banking 1.42.

Edit 1:

Barclays are now aware of the problem.

Edit 2:

The problem was there in V1.30, pushed on April 14, 2016. That was a whopping 427 days ago.

As you are no doubt aware, last week’s “part 2” article on how to fix mobile banking in Canada hit a nerve, and there’s been plenty of press about it since. It started in the UK, then went through India, the USA and as far south as Brazil.

There have been a few questions as a result of all this, which I’ve been asked repeatedly, and I’d like to address those here.

Q1. Can I please provide the contents of the GitHub repo?

A1. No. Consider it cleaned up and gone.

Q2. Am I willing to name the specific Financial Institutions involved in the leak?

A2. Not at this time. If a suitable gov authority in one of the affected countries asks me, then obviously I’ll be more than happy to cooperate with them.

Q3. What did I learn that I didn’t already know?

A3. Other than who else TCS has as customers? As mentioned in “part 2” I already knew what my own bank was doing, so nothing new there. However, I did learn what other banks are up to.

Q4. Were there really 6 Canadian banks in this leak?

A4. No. My original blog post says 2 of the big 6 Canadian banks. Throughout the later follow-up articles, things got morphed by other reporters until it was being reported that this was all six big banks. That’s their reporting, not mine.

The Longer answer...

This was a leak containing confidential documents from a number of large financial institutions. Those documents were never intended for public consumption - not when they were written, leaked, reported, or when the leak was cleaned up. The American FI that engaged with me, documented/confirmed the problem, and coordinated the cleanup knows what was there, and both TCS and I know what was there - that is more than enough eyeballs looking at it.

I did discuss the possibility of blogging about the cleanup operation with the American FI that helped clean up this mess, and it was agreed not to name them. When I work with someone, if they want it kept under wraps, then I’m fine with that.

As for what I’ve learned, this is a bit more nuanced; Yet again, I’ve found myself in a bit of a “lightning rod” situation, as people are offering up information left, right and centre, and I’m learning a lot through this new information channel. After “Part 1” it was the banks’ customers passing information to me about their experiences with the bank customer service teams and security concerns, and now after this installment, a number of consultants who have previously worked in the banks, including my own banks, are doing the same.

What I’ve really learned here, though, if I distill it down is as follows:

When I said I wanted to kick-start a discussion about the problem in Canada, there were far more people who share this opinion than I would originally have estimated. I've learned that I’m really not alone in this train of thought.

Tuesday, June 6, 2017

I wrote this, and then a series of security events happened which delayed things whilst that got cleaned up. I then came back to finish off the article, incorporating what happened after I originally wrote it. I apologize, therefore, if this is a bit more disjointed than my normal style.

Cheers

Jase

—

A quick recap of the angle I’m coming from… Whilst everyone thinks mobile banking and digital banking in Canada is fine and dandy, safe and secure, I personally believe it's a bit of a nightmare with lipstick applied. Banks leak on a daily basis, and people frequently appear to do things that leave me scratching my head, so in part 1 of this series I took a look at the ludicrous design all Canadian banks share of shipping apps with the URL endpoints to their backends on full display, as well as dodgy things like IP addresses that resolve to India and clearly unauthorized code that slips through…

After the success of the first article, I wanted to cover a different aspect of digital banking in Canada: the subject of people and security training, and how, in my view, this affects digital security in mobile banking. Naturally, banks will tell you everyone is trained to a high degree in safe programming, etc., and you need not worry about this - but I beg to differ… In the process of researching this article, I found a multi-bank breach backing up precisely the point I was about to write about.

The responses were all styled like canned responses, starting with something along the lines of “At blah blah bank, we take customer security very seriously“, and then they would just deflect the customer to a standard web page with a security guarantee, whilst reiterating to the customer what *their* responsibility is to the bank. None of this addressed whatever the customer was originally asking about, so basically, they were all blowing smoke up the customer’s backside and fobbing them off.

The responses always came from a non-technical customer service rep, and never from someone who actually understood what was being asked by the customer. This is like getting mortgage advice from the bank's electricians, and from a cybersecurity standpoint it has the same effect.

There was never any indication that the concern being raised was going to reach the people capable of fixing the problem. This is something that I know very well from my own experiences over the years. Things appear to go into black holes, and you never hear about them again.

This is all stuff I’ve seen a lot of personally, so where does this strange attitude to customer security come from?

At the time of writing, 9 out of 25 (~40%) of the tested banks designated as “Schedule 1” by the Canadian Bankers Association have a standard phishing problem caused by incorrectly configured security on their websites. The general rule (with the exception of BMO and TD) is that if the average person has heard of the bank, it’s got a phishing problem. Banks that the average person wouldn’t know (ZagBank, VersaBank, B2B Bank, Citizens Bank of Canada, etc.) don’t suffer this problem, despite the fact that the major browsers have had a solution to this for about 8 years. Many of these banks were specifically notified by me that there is a problem, but it was never fixed. The ratio of affected banks drops off somewhat when you look at the Schedule 2 banks, and things get even more secure in Schedule 3.
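I haven't spelled out the exact misconfiguration above, so purely as a hedged illustration, assuming the problem is the absence of long-supported protective response headers (the header list and example values below are my assumptions, not findings from any specific bank), an audit of a login page could start like this:

```python
# Protective headers a banking login page could reasonably be expected
# to send; this list is an assumption for illustration, not a standard.
EXPECTED_HEADERS = ["Strict-Transport-Security", "X-Frame-Options"]

def missing_security_headers(headers):
    """Given a dict of HTTP response headers, list which of the
    expected protective headers are absent (case-insensitively)."""
    present = {name.lower() for name in headers}
    return [h for h in EXPECTED_HEADERS if h.lower() not in present]

# Example response: HSTS is set, but the page can still be framed.
print(missing_security_headers({"Strict-Transport-Security": "max-age=31536000"}))
```

Point a check like this at each Schedule 1 login page and the "has the average person heard of this bank" correlation described above becomes something you can measure rather than assert.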

There’s a clear trend. The major question is where does that trend originate?

You could joke that the more Canadian a bank is, the more likely it is to be open to phishing. However, if we leave banking for a second, we can use another “truly Canadian” behemoth to get a different angle on things.

(Click image for bigger version)

As you can see in the job postings above from Bell Canada, there are differences in how long they think they’ve been around (likely the result of sloppy copy/paste errors), but there’s no doubt about their “Canadian-ness” that they’re trying to put front and centre. What is important here is that Bell Canada is not a bank, but suffers many of the same symptoms as the major Canadian banks.

The first problem is simply an attitude problem. If you question Bell Canada and their security, you’ll get the same type of canned response about how “security of our customers is paramount”, or some equally well-worn cliché. There’s a cultural wall around the company where any challenge to their security is met with a standard response of denial. There's apparently no security problem.

Second, if you look at the company technically, you’ll see the same technical hallmarks you see in a typical big bank: there are insecure mobile apps being pushed to the public, the website where you manage your account security has been open to phishing for years, and customer data moves around on non-https connections. Other than being Canadian and just as "leaky" as a standard bank, what else does an entity like Bell Canada share with 40% of the Schedule 1 banks in Canada?

Obviously, size is a factor. Financial constraints around certain resources are also a factor. In both banks and large non-bank organisations like Bell Canada, customer service is treated as an expense of doing business rather than as an investment to make things better.

In the wake of the latest breach out of Bell Canada, where 1.9 million accounts were compromised, you have to ask whether the concomitant fallout and class-action suits that can arise from this could have been avoided if the customer service desk actually connected concerned customers to internal security people, rather than just declaring “We’re Bell Canada, ergo we’re safe” and carrying on in perpetual denial.

Another problem is a sheep mentality. Large corporations like Bell Canada just follow what the banks do. Thus, if the bar is initially set low by the banks, a corporate entity like Bell Canada is not likely to go much above and beyond what the banks do, right?

The biggest problem, however, is training people.

If you’ve never worked in a bank: you basically go through a few weeks of onboarding when you start. The process involves some basic common-sense training, like how to identify when you’re being bribed, how to identify and report when something looks dodgy, and so forth. In Ontario, this normally comes with an additional course on dealing with people with disabilities, but again, it’s all stuff that should be common sense.

If you were to believe the tone of this training material, you might be fooled into thinking that banks are really safe from a digital standpoint, and that your coworkers are safe too. It's a tone that implies you're working in a kind of digital fortress, if you will. However, there appear to be no insider threat programs, no “how to be a safe programmer” training, and so on, so things can unravel pretty quickly from the inside out.

My Militarily Critical Technical Data Agreement finally expired in Feb 2017, after half a decade. Because I’ve been through the NISPOM, it’s really obvious to me that you don’t move code from one computer to another using USB sticks, because you may forget to properly sanitize the stick afterwards, leading to the potential for a leak if you leave the office at the end of the day with that stick in your pocket. Not to mention that if you’ve previously stuck that USB stick in another external computer at home or a library, you’re at risk of introducing malware inside the network. But I’ve seen that happen in banks on a daily basis.

I'm really not making that up.

My gut feeling is that banks could learn a lot from the spirit of the NISPOM, especially when it comes down to safeguarding confidential information. Unfortunately, I don’t think many people working in bank IT have ever heard of it, let alone looked at it.

Here’s a specific example of what I’m talking about.

In Canada, we see a lot of Indian consulting companies in the banks (the idea being that it’s cheaper). By its very nature, hiring these Indian-headquartered companies comes with an additional security problem: they must exfiltrate a lot of confidential documentation about the inner workings of the banks, either for collaboration with their other regional offices or for approval by senior managers who are not in Canada, or even North America. This problem should be common sense, but the banks still do it anyway. “Ours is not to reason why” and all that jazz, right?

The net result of this forced exfiltration, naturally, is that their version of the aforementioned USB stick problem is orders of magnitude worse. You have not just bad code and bad document-handling skills resulting in stuff being shared with all and sundry outside the bank, but also the protectionist cultures of some of these consultancies, which have always demonstrated to me in the past that they’re more interested in the politics of progressing their consultancies ever deeper into the banks (and covering their backsides along the way when shit goes wrong). Doing the right thing for the bank (who is their customer) comes second to that agenda.

As an aside, I've fallen foul of that personally, because if I see something is clearly wrong, I'll do something about it so it can be fixed. One well-known large IT consultancy I once sub-contracted for was not pleased that, as an IT person, I fixed the customer's IT problem. That bank and I still have an excellent relationship to this day, because I don't do politics, whilst that consultancy has had its numbers drastically reduced.

So, what's the worst possible result of all this politics, bad code and bad data handling? In short, people do some really, really dumb things with confidential documents.

You can’t sugar coat this. It’s stupidity on a massive scale. This is where I believe the crux of the problem is.

Smaller banks and smaller organisations hire local people, so the data naturally never has to travel externally to India. Big banks hire foreign IT consultancies, so confidential data is routinely exfiltrated, is frequently on the move, and often passes between people with inadequate training and no common sense. No amount of IT protection against external threats is going to solve a problem that originates internally. Again, that should be logical common sense, but if you look at the status quo, the evidence says this is frequently not how it's being treated.

Like, I’ve seen some really dumb stuff.

Now, this is the part of the article where my natural assumption was that the Internet is just awash with stuff leaking out of Indian consultancies working in Canadian banks, so I was just going to quickly find a document online that obviously shouldn’t be there, and make a “See! This is what I’m talking about” example of it.

The original plan was this should only take about 5 minutes as one of my banks has a "confidential" API platform review underway inside the bank that is public knowledge to everyone outside the bank. All I needed to do was point to a document with meeting or phone call notes about it and things would get cleared up.

As I said, it should be a quick job.

The reality was that I found the worst example I've seen to date of precisely what I'm talking about. Someone in Kolkata, India, working for Tata Consultancy Services (TCS), was leaking IT documents from one of my banks.

My original plan of spending a few minutes finding a TCS leak or Tech Mahindra leak pointing at one of my banks quickly morphed into a massive operation of trying to work out what to do with a multinational confidentiality breach across multiple banks and financial institutions, originating from one guy in India…

You know how I found this? I've seen clueless people use a free (and therefore wide open, and not private in any way) online repository like GitHub for confidential stuff before. But here was a TCS manager in India, using a free GitHub repo to manage multiple banks and financial institutions around the world, with all the documents for their various projects on full display to the world. Even Google had indexed them. Every migration plan, every estimate, every PowerPoint telling customers how TCS was going to fix or upgrade their systems. Obviously, in a multi-bank breach like this, the first bank to pick up the torch and run with this issue was going to get a high-level technical overview of what everyone else was doing.
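To give a sense of how little effort this kind of discovery takes: GitHub's public code search will happily surface office documents sitting in public repos. Here's a minimal sketch of building such a search query against GitHub's standard code-search endpoint - the search terms and file extensions are illustrative assumptions, not the ones actually used here:

```python
import urllib.parse

def github_code_search_url(terms, filetypes=("pptx", "xlsx", "docx")):
    """Build a GitHub code-search API URL for documents matching the terms.

    Office-format file extensions are a common tell for internal
    documents that have ended up in a public repository.
    """
    query_parts = list(terms) + [f"extension:{ext}" for ext in filetypes]
    query = " ".join(query_parts)
    return "https://api.github.com/search/code?q=" + urllib.parse.quote(query)

# Hypothetical example query - the real search terms are not reproduced here.
url = github_code_search_url(["migration", "plan"])
```

Fetching that URL requires an authenticated GitHub API token, but the point stands: anyone with a browser and a few guessed keywords can stumble onto this material, and once Google indexes it, no GitHub account is needed at all.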

As I said before, people sometimes do some really, really dumb things.

This was a new level of monumental head-scratching, as you could literally fork or clone an entire repository containing architecture details and roadmaps for some of the largest financial institutions in North America.

Here’s a list of recently checked in documents, for example.

(Click image for full-size)

Now, obviously, I approached my bank first, but they confirmed that they still have a policy of not paying for cybersecurity information, and given I have a policy of not working for the banks for free, I simply moved on, and the breach went south.

Literally.

In what is often a stark contrast between Canadian FIs and American FIs, a well-known US financial organisation was more than happy to engage with me. In fact, it was their President & CEO who first made contact, offering me his email address.

That never happens in Canada.

Next, a Senior Vice President went over my backstory and the evidence I was presenting, and confirmed both the symptoms and the source of the problem. We had a quick bit of back and forth, and then he dealt directly with TCS on the matter. It was actually a joy to deal with these people - there was no messing around, and things were always clear.

This morning, I checked to see if TCS had acted as a result, and sure enough, the public Indian GitHub repository has been deleted, along with all the various bank documents. Looking at the LinkedIn page of the leaker, it appears that TCS has not yet fired that individual for being such a monumental tool.

Conclusion

A common reaction from Canadian banks when I ask (or talk) about a particular security problem is to immediately assume that I must have accessed something internal, breaching some external perimeter of theirs to get at something inside the bank.

This is understandable.

However, my never-ending mantra is that Canadian banks are naturally just leaky - so you don't need to go into a bank to find security problems. Some banks are leakier than others, and if you know what the usual causes of their leaks are, you can start guessing where the "puddles" of data or information will appear. Today, I highlighted an age-old problem I've seen for years in Toronto, where the consultancies working on the banks' IT systems are the source of some of the breached information.

In part 1, I highlighted the silly laziness problem where programmers didn't even try to hide URL endpoint strings, allowing script kiddies and amateur hackers to quickly work out the back-end endpoints at many banks. In part 2, I've now highlighted another big problem that originates inside the banks - training people to not do stupid stuff.

Training people and having proper protocols and policies in place is key to securing the banks. I personally believe banks should take a look at the NISPOM. Look at the spirit of what it’s trying to achieve, and think about how that applies to a bank.

Banks also need to stop with the chorus of denial. There should be meaningful collaboration programs with the public. Or do something like ABN AMRO did and join HackerOne to accelerate the process of securing things.

Banks need to train people to not do stupid things like uploading confidential information to public GitHub repos, pastebin, and so on.

Now, you'd think I wouldn't have to point this stuff out in 2017, but obviously I do, because this is still a problem. QED.

So, to recap what needs fixing so far in this series of articles.

Part 1 - Don't be lazy with mobile app security, and check the code being pushed into production for unauthorized additions.

Part 2 - Stop people doing dumb things like posting confidential documents in public by training them with proper rules and protocols.