Wednesday, December 21, 2016

As everyone knows, there are a few things about bank security in Canada that get my goat. I've documented a few in a little video, and explained why they're a problem. Here's that video:

What I hope to achieve from this is that it kickstarts a proper dialogue about what's continually going wrong. Traditionally, this bank and I don't have proper communication - just the usual platitudes and rhetoric about security being paramount - so hopefully someone will see this and start taking seriously the points I raise with the bank, instead of them disappearing into the perpetual customer service black hole.

Friday, December 9, 2016

Update - This turned out to be busted, but not for the reasons I thought it would be. See here.

This week, one of the big five banks in Canada rolled out an update to support cloud-based HCE (Host Card Emulation). Specifically, it was the Rambus “Bell ID” system - which they call “Secure Element In The Cloud” or “SEITC” - though everyone else has known this for years as plain old “cloud based HCE”.

Whilst it’s always interesting to see technological changes, it’s equally important to think about the ramifications of such changes.

Just rewinding for a second for some quick history: first we had Google Wallet v1.0. This tried to use a hardware secure element to hold encrypted data, but the network operators had started their own ISIS system (once at www.paywithisis.com), which was renamed for obvious reasons to Softcard (at gosoftcard.com). Simultaneously, smartphone manufacturers started adding their own secure hardware - Apple, for instance, ships a dedicated Secure Element for payments, alongside its Secure Enclave.

Google Wallet V3 is radically different. It uses a technology called host-based card emulation (HCE) instead, where card emulation and the Secure Element are separated into different areas. In HCE mode, when an NFC-enabled Android phone is tapped against a contactless terminal, the NFC controller inside the phone redirects communication from the terminal to the host operating system. Google Wallet picks up the request from the host operating system and responds with a virtual card number, using industry-standard contactless protocols to complete the transaction. This is the card-emulation part. The transaction proceeds and reaches the Google cloud servers, where the virtual card number is replaced with real card data and authorized with the real issuer. Since the real card data is securely stored in Google's cloud servers, the cloud represents the Secure Element part. In general, this approach is considered less secure than the embedded SE approach.
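To make that flow concrete, here's a minimal sketch of the token-substitution idea behind HCE. Everything here - names, numbers, functions - is a hypothetical stand-in, not any vendor's actual API:

```python
# Illustrative sketch of the HCE flow described above. All identifiers
# and card numbers are made up for illustration.

TOKEN_VAULT = {"4111-TOKEN-0001": "4111-REAL-CARD-9999"}  # held server-side only

def phone_respond_to_terminal(tap_request):
    """Card-emulation half: the host OS hands the NFC request to the
    wallet app, which answers with a *virtual* card number only."""
    return {"pan": "4111-TOKEN-0001", "cryptogram": tap_request["nonce"]}

def issuer_authorize(real_pan, payment_msg):
    """Stand-in for the real issuer's authorization decision."""
    return "APPROVED"

def cloud_authorize(payment_msg):
    """Secure-element-in-the-cloud half: the server swaps the virtual
    number for the real one before forwarding to the issuer."""
    real_pan = TOKEN_VAULT.get(payment_msg["pan"])
    if real_pan is None:
        return "DECLINED"
    return issuer_authorize(real_pan, payment_msg)

msg = phone_respond_to_terminal({"nonce": "a1b2c3"})
print(cloud_authorize(msg))  # APPROVED
```

Note that the phone never holds the real card number - which is exactly why the server-side vault becomes the interesting target.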

The problem the banks are hitting is that many people have devices without a hardware enclave, and the banks want to be seen to be accommodating those users. In this case, they've gone the Bell ID route.

When you consider that a major part of the security is that the secret sauce is stored in a secure part of the hardware that the OS generally has no access to, the idea of lifting this up and sticking it in the cloud immediately raises the question: what happens if that back-end is compromised?

There is less privacy with cloud-based HCE. The mobile payment provider can already see who uses a given credit card number, and can choose to share that data with merchants or other companies for commercial and advertising purposes - something Google has already done with Google Wallet.

When you consider the pros and cons, it is hard not to feel like the banks have opted to put security in second place behind the optics of convenience for what could be inherently insecure devices anyway.

Wednesday, December 7, 2016

Regular readers of this blog will know that when a bank tells me how safe I'm "supposed to be", I will largely view anything I'm told as hornswoggle. All my adult life, I've listened to people telling me how much effort, technology and protocol is in place to protect me, yet it can always be demonstrated that things are nowhere near as safe as people would have you believe.

Recently, I've been working on the hypothesis that Canadian banks spray source code around like some people spray air-freshener... that it's just flying about and nobody cleans up when it lands somewhere it shouldn't. This hypothesis may initially sound absurd in the face of conventional wisdom, but then again, conventional wisdom assumes that the banks are actually safe - even though you can pick that assumption apart and peel back the layers.

Banks obviously say that source code is kept suitably safe - after all, they have to say that to keep up confidence - but today, during my lunch break, I decided to do something different. Very different. Instead of looking for an accidental source code leak like I usually do, I assumed this time that I was looking for code put somewhere by a programmer who really doesn't give a crap about what they're doing, and generally has no regard for customers or the bank. This meant that not only did I have to look somewhere outside of the banks, but it had to be somewhere that bordered on maniacal to think that someone would even conceive of putting code there.

I found what I was looking for. Yes, I was surprised, too. Most surprisingly, this was my first run-in with code that handles SWIFT transactions. You may remember news stories about how the SWIFT system was compromised earlier this year; any code that interfaces with that system, or with the data going through it, should definitely not be lying around outside of a bank - that's just asking for trouble... However, real life is often stranger than fiction, and that's what happened.
This code is from one of the core financial services at the centre of one of the South American subsidiaries, which runs through all transaction types; reading through it, we can see it's processing mortgages, Forex, SWIFT, drafts, deposits, and so on. It also gives insights into how the overall service was built and what components it comprises (a task for another lunch break, perhaps).

I won't say where I found this or which bank it is for until I've worked out what to do with it. Canadian banks don't always cooperate with me anyway, and given its nature, I may have to report this directly to SWIFT to deal with.

Monday, December 5, 2016

In April of 2016, I found myself talking to a lady at the Office of the President at Scotiabank. I knew something that Scotiabank might want to know about with regards to a cybersecurity problem it didn’t know it had, and we were trying to explore the next steps to exchange information.

The outcome of that call was I would send Scotiabank an email laying out some background information, and they'd pass it to the most appropriate person in the bank to get the next steps in progress. I work in technology and I definitely don’t work for free, especially for banks, and Canadian banks generally don’t pay the public for cybersecurity advice - which traditionally means that nobody tells the banks what they need to know in the first place. However, I sent them an email that explained that the bank had a big cybersecurity problem and I tabled a simple barter; as a bank they could make a phone call for me which I didn’t have the power to do, and in return they would get the information that they needed. It’s a simple “You help me, and I’ll help you” arrangement and no money has to change hands.

A day or two later, a senior cybersecurity person at Scotiabank called Rob Knoblauch took a look at my LinkedIn profile, and that was the last observable action taken by Scotiabank on the matter that I could record. Given the choice between acting on the fact that someone is telling you that you have a cybersecurity issue, or not acting on it, the issue disappeared into a black hole, and nobody at the bank ever contacted me again. Exactly 120 days after that, I sent a follow-up email to the Office of the President, explaining that I was sending information to the CCIRC. No response came from that message...

So, what precisely was at stake?

The bank had been observably slipping in its cybersecurity efforts for some time, and by April 2016 it was showing serious signs that an internal cyber-shambles was in full effect. Not only had the bank forgotten to protect its Android source code (meaning every time it published a new app, everyone from white-hats to criminals could see how the app works and could compromise it, patch it, repurpose and repackage it, etc.), but it still allowed phishing on its Internet banking website because it hadn't patched a simple click-jacking attack vector. It was also known that cybersecurity policies either were not being followed or didn't exist, as popular credential-sharing sites still contained ScotiaBank's domain.
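For reference, the standard fix for that click-jacking vector is for the banking site to refuse to be loaded inside a third-party frame. The header names below are the real, standard ones; the tiny WSGI app is just a stand-in for a banking site:

```python
# Minimal sketch of the click-jacking fix: serve every page with headers
# that forbid framing, so an attacker's page can't overlay invisible
# frames on top of the real site.

def banking_app(environ, start_response):
    """Stand-in for the bank's web application."""
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<html>online banking</html>"]

def anti_clickjacking(app):
    """WSGI middleware adding the two standard framing-protection headers."""
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            headers = headers + [
                ("X-Frame-Options", "DENY"),
                ("Content-Security-Policy", "frame-ancestors 'none'"),
            ]
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped

protected = anti_clickjacking(banking_app)
```

It's a small, well-understood patch, which is why leaving it unapplied on a live banking site is hard to excuse.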

Meanwhile, in the US, a Mobile Application Development Platform (MADP) vendor, Kony Inc, who makes the tools that ScotiaBank uses, was the subject of ire from a frustrated Scotiabank programmer, who inserted a message on a test screen in the Android app with the words "Fuck kony" (sic) in it. The programmer probably thought that nobody would ever see this unauthorized addition to the app, unaware that the release team at Scotiabank was failing to obfuscate the app properly before sending it out to customers, and also unaware that nobody appeared to test the security of the final product. As a result of Scotiabank turning off its code obfuscation on its Android app that same month, anyone who knew what had happened was now crawling through the bank's mobile source code. It was apparent that any rogue programmer within the bank inserting unauthorized changes would be able to get away with it, because nobody had caught this one - and now over a million Canadians were walking around with expletive-laden apps on their phones. The CCIRC were notified that the source code was available to all and sundry, but the rogue-programmer problem was left in place as a warning canary, to see whether the bank would do proper code reviews and to time how long it would take them to catch it. Besides, if anyone inside the bank did anything worse to the app, it would be caught outside the bank and the alarm raised.
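To illustrate why missing obfuscation matters: in an unobfuscated app, string literals and class names survive verbatim in the shipped binary, so anyone can pull them straight out. This toy scan over hypothetical bytes does roughly what the Unix `strings` tool does against a classes.dex file:

```python
# Pull printable runs out of raw bytes - a crude version of `strings`.
# The "dex" bytes here are fabricated for illustration only.
import re

def readable_strings(blob: bytes, min_len: int = 6):
    """Return all runs of at least min_len printable ASCII bytes."""
    return re.findall(rb"[ -~]{%d,}" % min_len, blob)

fake_dex = b"\x00\x01com/bank/LoginActivity\x00\x02Fuck kony\x00\xff\x03"
for s in readable_strings(fake_dex):
    print(s.decode())
```

With obfuscation turned on, the class names would be mangled to meaningless short identifiers; with it off, the app narrates its own internals - including any unauthorized additions.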

October was National Cybersecurity Awareness Month (NCAM), and Scotiabank was as vocal as any of Canada's big banks with its platitudes about how it takes security "very seriously", peddling the well-worn rhetoric that "security is of paramount importance". Each time, the focus was on making sure the customer did not compromise themselves and the bank with them. Meanwhile, in spectacular fashion, Scotiabank kicked off NCAM with two more mobile source code breaches in as many days, as it pushed more updates to its app, still with no protection on its source code.

It also came to light that Scotiabank's programmers had posted crash stacks to the public paste site pastebin.com for internal iPad kiosk projects within the bank. During NCAM, Scotiabank had more leaks than a sanitary towel advertisement with blue water demonstrations. This blog, which many banks in Toronto read, tipped everyone off on November 15th that Scotiabank had an unauthorized code addition in its app. By November 16th, a new app was being pushed to Canadians that, whilst still exposing much of its source code, was at least being polite again to its MADP vendor. As ever, ScotiaBank said nothing about the matter.

The exact time that the programmer slipped in the vulgarity is unknown, but it is proven to have been visible to those outside the bank for at least 230 days, during which time the bank never caught it using its own policies and practices.

Whilst Canadians spent much of 2016 walking around with swearing aimed at the bank's vendor in their pockets, they were simultaneously very lucky that this programmer had only done what he or she did, and had not planted a few lines of unauthorized code to exfiltrate credentials instead. As the bank was repeatedly shipping an unauthorized change in its apps, Canada was dodging a serious chance of a very large insider-job bank heist.

Friday, November 18, 2016

If you're reading this, it's likely I've just told you via email to come here for answers to the question(s) you just asked me. It's been a strange time since the ScotiaBank incident went public. Many people have asked me the same questions over and over, and I continue to get asked about it. It's a serious time drain right now.

So, here are a few of the answers to the more common questions:

How long was this going on for?

The earliest I can confirm it was a problem was March 31st. Given it was corrected on November 16, that's 230 days.
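For anyone who wants to check my arithmetic, the day count is easy to verify:

```python
# March 31 to November 16, 2016 - the confirmed exposure window.
from datetime import date

exposed = date(2016, 11, 16) - date(2016, 3, 31)
print(exposed.days)  # 230
```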

Did anyone else know about this?

Yes, I had previously communicated the problem to Kony's Chairman & CEO. I'd been tracking this hidden insult over numerous releases of the app. Mr Hogan is also aware that after the story broke, ScotiaBank quickly issued an emergency patch. I've no idea whether ScotiaBank has apologized to him or Kony directly, though I do doubt it.

What's your take on events?

I think Canada was lucky not to have its first serious "inside job" bank cyber-heist. When Scotiabank showed that a rogue programmer could contribute unauthorized code to an app, and that nobody did adequate code reviews to catch these unauthorized additions, we should be thankful that this rogue programmer only inserted f-bombs, when they could just as easily have put in a few lines of code to exfiltrate credentials en masse.

What has ScotiaBank said to you?

Nothing. They're Scotiabank and I'm just a customer. They don't listen to me, and unless they are chasing money, they won't call me either.

Did you notify them?

No. I used to help Scotiabank because I thought it was the right thing to do, but I publicly withdrew my support a long time ago after the customer/bank relationship broke down. These days, if it's just a regular vulnerability, I leave it as a warning "canary" to see how long the bank takes to spot it. If it's a big issue that could impact millions of people, I might document and send to the CCIRC. At that point, it's down to the authorities to deal with the bank directly.

How did you find this?

Do you know of other issues?

Yes. I'm aware of a number of them.

Should we be worried?

Personally, I banned my family from using ScotiaBank digital products, and recommended to friends (after the second October breach) that they avoid its digital products. I'm the only one in our family to use online banking (out of necessity), and this is only done on a designated Mac with additional precautions specifically implemented for dealing with ScotiaBank. I don't allow the mobile apps on our devices (I saw what happened in April with the porn problem), as I believe the bank is allowing itself to become the target for a massive breach.

So, there you go. Those are the answers to the common questions I keep getting asked.

Over the years, I've learned that an unhappy programmer is a bad thing. What ultimately happens is either the programmer does something bad, or does something stupid - and in some unfortunate cases, both. Here's an example from Scotiabank that revealed an unhappy programmer, and it's actually quite embarrassing for the bank.

ScotiaBank built its current Android app using the Kony system, as outlined on Kony's website. (Click for full resolution)

However, the unhappy developer left a "F**k kony" message in the app and then shipped it to over a million of the bank's customers... Here's the figure backing that up, as shown on the Google Play store. (Click image for full resolution)

Here's the offending message pulled from Scotiabank's Android 16.9.1 app (it was also there going back to April at least). (Click image for full resolution)

This is the type of thing that can make or break the reputation of an institution. You need to keep your developers happy and address the issues they have, otherwise things slip, and what we're seeing out of ScotiaBank is the result.

Wednesday, October 19, 2016

I was mulling over a tweet this morning about how Canada is going to be helping other G7 nations with financial cyber security (Link). I found this a bit ironic, as financial security in Canada is usually quite atrocious. I've spent a while now collecting proof of how bad it is, and there are definite trends I've noticed.

I’ve been trying to work out for a while what the root cause of the problem is. Usually, I can simply correlate a symptom to a cause; yesterday, for instance, I pointed out to ScotiaBank that they’re allowing customers to be phished again.

Whilst that’s the symptom, the underlying cause is one of these three things:

* The bank doesn’t check for this.

* The bank does check for this, but failed to check properly.

* The bank did test properly, but someone thought it was OK to publish regardless.

The problem is simply that the aforementioned symptom is just the tip of the iceberg. Elsewhere, I see way bigger issues. My thoughts turned to trying to work out why the bank security keeps failing - something I usually blame on policy, because if the people writing the rules for “what to check” know what they’re doing, and other people following those procedures do it properly, you wouldn’t have these problems.

And then the idea occurred to me today that there’s a bigger fundamental issue…

Anyone who has followed military tactics will know how the current Russian/Surkov non-linear warfare model is bamboozling lots of people; well, the banks face a similar problem, and it’s bamboozling them, too. In the old days, you had the bank and the bank robber. The linear aim was for the robber to get the money in the vault - so it was the bank’s job to stop that happening.

Fast forward to 2016 and we have this triangle, where if you compromise one side of the triangle, you can get to the other two.

In this model, we have:

1) The bank. This is the bank itself and its infrastructure: online banking, virtual vaults, payment messaging systems, etc.

2) The customer. This is your average Joe on the street. He/she can be socially engineered.

3) The shared environment. This is where the bank interacts with customers’ hardware.

In a non-linear attack, an attacker can go for any side of this triangle, any combination of two sides, or the hat-trick of all three sides. That means the bank cannot easily anticipate how to out-fox a would be attacker - and sometimes the attack on the bank means the bank isn't directly attacked in any detectable way.
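The "any side, any combination of two, or all three" point can be counted out directly - three sides give seven possible attack combinations a bank has to anticipate:

```python
# Enumerate every non-empty combination of the three triangle sides.
from itertools import combinations

sides = ["bank", "customer", "shared environment"]
attacks = [c for r in range(1, 4) for c in combinations(sides, r)]

for a in attacks:
    print(" + ".join(a))
print(len(attacks))  # 7
```

Seven distinct attack surfaces instead of one is exactly what makes the non-linear model so much harder to defend than the old bank-versus-robber picture.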

The modern bank has to be on guard on all three sides and protect itself from a non-linear threat, and that simply doesn’t always happen. Any bank that gets sloppy with its procedures, or allows customer phishing on its own site, is inviting trouble. If a bank leaks data, has incomplete security procedures or leaks source code, then it’s inviting really big trouble.

I’m not a security expert by trade, but I am observant and I track what I see. When I see banks suffering these symptoms, I see the potential for really big trouble.

Monday, October 17, 2016

I've been programming since I was about 8. Trouble was for 3 years I did it on paper as I didn't get an actual computer until 1984.

Over the years, I've programmed a lot of things. Notable items include working on the first Palm Pilot banking app in Canada, mobile advertising on buses, writing several versions (single-handedly until I was given help) of iHeartRadio for iOS, and most recently the app for Tellspec.

That quick run-down skips a lot of Windows, Mac, iOS, Palm, BlackBerry, and so on, jumping about between development environments and platforms. I've literally spent the past 9 years up to my eyeballs in iOS, with some runs into C# for FEMA EAS-related work, and other languages for banking or manufacturing.

So it was that I found myself installing Visual Studio last week. This is something I was first introduced to in the late 1990s, when VB6 merged with Visual C++ (along with Visual InterDev for web development). (I'd been with VB since v3 before that.)

I may have started programming on a 48k Spectrum, and I may have had millions of people running my code on iOS, but Windows is like my spiritual home. I've spent decades there. My first professional Windows app (the "Memory Compactor") worked on Windows 3.11 and played on a mechanism of Windows that I could use to the user's advantage to free up memory.

Now I found myself "coming home" for a new side project, where I was tinkering with an idea. The outcome of the project has no bearing on the company I work for - the only thing at stake was whether the idea would fly or not. If it flies, I present it to the boss; if it fails, then I've spent a bit of time keeping abreast. At worst, if someone asks me the latest version of Visual Studio I've used because they're trying to trip me up, I can answer 2015.

So I installed the Community Edition to get re-acquainted with it. (yes, this hardcore Apple Dev was back in Microsoft land)

On the whole, I was very pleasantly surprised. There was some familiarity that made me happy - and there was something new, which made me really happy.... the MFC "Visual Studio" style template.

The standard templates for SDI and MDI MFC apps have been around 20+ years and are documented to the nth degree. If you have a problem, 20 seconds in your favourite search engine will show you the answer. This new Visual Studio template, however, is new, undocumented, and doesn't have a ton of Q&A behind it.

On the flip side, I posted a comment on Twitter that I'd spent a day in VS (which for a person who spends most of his time in Xcode, makes for a massive change), and was surprised to hear from the Visual Studio Twitter team.

They were proactive, but given my circumstances (I work on a Mac in a virtual machine), their enthusiasm to share keyboard shortcuts didn't hit a bullseye with me - but that's not their fault, as I'm on a Mac right now.

So I let them know that unfortunately I was a slight edge case, and they took it gracefully, extending the offer that if I needed future help, I should shout...

And this is where this post comes into play.

I started Windows programming in VB3 - back before the data access control showed up and revolutionized things. I've grown up through VB4, 5, 5 Control Edition, 6, Fred, and C#, and simultaneously gone through MSVC 1, 2, 3, 6, (skip a lot) and now land at (14?).....

... and I hit a problem.

Rather than complain about the lack of docs, I spent several consecutive evenings (I have a day job to attend to) trying to find a solution, and have now decided to take up the Twitter team's offer of help.

What I want to know is how this VS template works. I can put properties in the Property Sheet view, but if I ask for the current document, it is always null, meaning I cannot persist those changes.

I normally don't consider myself an idiot, but if I'm using a VS2015 template, surely there is some documentation somewhere explaining how this is supposed to work?

It's not the end of the world for me (I can just drop the idea that we ship a windows app), but I'm kinda feeling that I should have a solution.

Thursday, October 6, 2016

Toronto's Financial Crimes Unit (FCU), in partnership with other community and government stakeholders, has a Twitter chat each month called #FraudChat. I usually try to listen in on it, and most months I have no comment. But not this month.

I was particularly looking forward to this month's chat, as right now it's Cyber Security Awareness Month in Canada, meaning we were more likely to be in for some special guests. As always, it was an informative event to follow along with. The topic was identity theft/fraud. Some guests concentrated on property/title fraud, but I was interested in hearing what one particular guest had to say - the Canadian Bankers Association (hereafter the "CBA").

The entire chat covered many angles, from physical issues like people dumpster-diving for mail, to hacking and trojans, credit reports, scams, property title fraud, etc. However, given my knowledge of Toronto, I was looking for signs of something specific to come up in conversation. Diving in a dumpster might reasonably reveal information on between 1 and 5-6 people. A trojan on your phone might slurp the contact details of 1,000 people. When you have 20 million people doing online banking on just a handful of websites, that's where I'm interested.

Now, the CBA is obviously going to be biased towards pushing all the security onus onto the customer. In this chat, however, all they brought to the table was a series of tweets pointing to pre-existing articles on their website, all of which were exactly as biased as you would expect them to be (how to spot a phishing email, don't give out your personal details, etc.). I feel like this was a lost opportunity on the part of the CBA.
Whilst there was none of the usual "we take security very seriously" that you'd expect to hear from any bank or banking-related organisation, there was also zero mention of what their members were doing that was new and would tackle the existing security deficiencies that Canadian banks have.

However, every cloud has a silver lining. The CBA website gave me something that I can use to determine what I've suspected for years but have never been able to prove about bank cyber security. So, as soon as I've had some spare time, I will be back with the answer to the burning question of the past five years.

Thursday, September 22, 2016

It's been a busy year. In my day job working in food security, I've been to Taiwan, Arizona and South Korea. I've met a lot of people who want to help solve some really, really big problems that literally affect billions of people. It's been rewarding to see this year unfold, if a little challenging.

In my spare time, I've also had some rewards. As most people know, I've had nearly two decades of challenges with one of my banks. Well, something interesting happened.

Traditionally, the customer/bank relationship looks like this:

It's not a productive loop, and it's prone to issues. For instance, I've experienced "We're looking into your problem" when I've not actually stated what the problem is yet. (In programming, we call this a "race condition".) Another problem is that if you are the one reporting, there's the sensation that things disappear into a black hole, as you never get feedback.

But, that was all I had for 18 years.

About a month ago, something I saw emanating from the CIBC twitter team that was obviously "incorrect from a technical standpoint" annoyed me. On that day, I was going to be downtown, so I thought I might as well just break the "Report" -> "Thanks" cycle and walk into the bank with the solution to the issue. Long story short, my "let's just cut to the chase" style wasn't met with the same enthusiasm.

I have no idea what switch flipped that day within the bank, but after 18 years, we changed to this method of communication:

Now, instead of the uncertainty of whether technical messages are getting through, or being distorted in a game of "broken telephone", the bank was asking what I needed.

That's a very simple question for me. If someone gives me a room full of technical and policy people that I can speak to unimpeded and natively, in order to explain a) what I can see and b) where I can see it, then I can get effective feedback as to whether what I'm saying is even being understood (anyone who's ever worked with me knows I don't like red tape or unnecessary delays), with the added bonus that you can hold a Q&A session to clear up any loose ends. I think that in 30 minutes, I offloaded more information about customer optics and technical issues than I have since the "triple-Interac-debit whilst saying money was not debited" debacle of 2013.

Another good thing that came out of this is that the customer service person I normally deal with (and probably frustrate to no end) was also in on the conversation. In an age where KYC ("Know Your Customer") is a buzzword in banks and large organisations, yet doesn't actually mean that they "know" who you are, I think it was good that the frontline customer service person who normally has to deal with me could see me in my native habitat. Instead of the keyboard warrior on Twitter that they know and recognise, they could see the bigger picture when this wound-up spring was finally let loose.

We didn't agree on everything (for instance, 9:59am to 1:49pm is still basically 4 hours in my book), but I think it was a productive use of my time and their time. I was told new things that nobody had explained before, and they were told things that I see as problematic.

Time will tell if things actually get cleaned up security wise, but my guess is it will because now people know the extent of what can be inferred outside the bank, and most crucially, know that people outside the bank know.

Thursday, August 25, 2016

I read an interesting article today, about how the news outlets are getting targeted by hackers.

This doesn't surprise me for a number of reasons:

Most news today is not an unbiased account of fact, but rather it takes sides and therefore it becomes a polarizing factor that can raise ire amongst some people.

News media outlets today are not used to securing news, so something that is unprotected can be adulterated and changed into something it was never intended to be.

In Canada, we have two major corporations that control most of the news and media in general.

Bell Media

Rogers Media

Regular readers of this blog will realise something: these are Bell Canada and Rogers, the two monolithic cellphone and internet providers in Canada. These two organisations are traditionally not good at security in their own customer-facing systems, so you can imagine what type of security I have noted in their media divisions.

I did a quick 2 minute scan to see what I could find security wise....

What I found was this: the media divisions are open to the exact same methods of compromise as each parent company is currently known to be. This percolates down through each media property, such as a radio/TV station website or newspaper site. So, in short, it's possible to adulterate the news in Canada.

We all know the phrase about those who control the message control the people - and that's what makes the media a target for hackers.

Wednesday, July 6, 2016

As most of my regular readers know, I've spent a bit of time over recent years fussing over my banks and Canada's financial institutions in general. For those that don't know what's going on, a quick recap...

Things started off with me gradually losing faith in my primary bank's ability to maintain a secure banking experience, because of a series of events spanning a few years that highlighted to me that something was awfully awry. Things degraded slowly into more problems that were also spotted in my secondary bank. Later, I found similar failures across many banks in Canada, and eventually found the government at risk, too. This culminated in the Spring of 2016, when I coordinated with one bank on the issue, and then successfully raised the alarm that basically the entire country of Canada was at risk - and the RCMP leapt into action.

Whilst all this was going on, I had to make sure never to do anything that amounted to, or could be construed as, hacking. This is actually very easy - and here's how it happened.

This is a Bentley convertible.

(Click for bigger image)

I've never been in this particular car shown in the picture. I can probably guess accurately that neither have you. However, both you and I can probably agree that the roof is down on this car, and that if (hypothetically) we were in this car with this roof position and it started raining, we'd get wet. The reason we know this without ever entering the car is simply because we understand what we're looking at.

Same applies to the banks and the Canadian government. I can look at CIBC or ScotiaBank and without even logging into them, plainly see how they can be compromised because I understand what I'm looking at. Same thing at the Canadian Government...

Often hackers are caught because having breached a system, they bang about inside, tripping monitors as they scan ports, probe and push systems trying to fumble about looking for the proverbial pot of gold.

We see banks respond on social media about this type of threat, such as shown here:

The problem with this, as you can guess, is these measures only apply to hackers that break into a bank. In my case, there was no hacking into any banks, no entry to any bank systems, and yet everyone at the law enforcement level is onboard with me because they understand what they're looking at.

Understanding technology like this is a variant of the "Low and Slow" method of hacking. I say "variant" because whilst it shares all the traits of the "Low and Slow" method of hacking, there is no "hacking" here.

Additionally, it has to be pointed out that operating outside of a bank or government in this manner shows up something else: it's not security. It's "theatre". If you watch the show long enough, you start to see the props and the set moving about. That needs to change.

I'll leave you with one last thought: I'm just one guy who only wants his bank to not put him at risk, and with limited time on my hands, I figured out something that affects the entire country. There are likely cadres of criminals out there figuring this out on a daily basis and, logically, they must go undetected as the banks cannot see them.

You normally see about 2 or 3 of these a week - and they all originate at the same site, about 2FA. As you can see, after the request that the bank consider adopting 2FA, there's a canned response that goes like this: "We take security very seriously, so we have two step verification".

Of course, that irks me, and here's why: the customer is talking about authentication (i.e. making sure the person accessing the bank account is the correct, authorized person who should be accessing it), and the bank is responding on the subject of verification. In the case of a bank sending a code to a phone number on file, all the bank is verifying is that the person trying to access the account - authorized or not - also has the phone belonging to the person whose account is being accessed.

That's a fundamental flaw in security.

If you don't know the difference, two step verification is where you supply a password, and the bank sends you a code which you type in as well. So, imagine you're in the middle of a messy breakup, and your other half has your phone and knows your password: the bank sends a code to your phone in their hands, and voila!

There's a really obvious problem here, and anyone with an ounce of security savvy will tell you, physical access is 9/10ths of the problem. This is why people are asking for 2FA.

With 2FA, you have to supply something in addition to the password. This usually means two items out of this list of three:

Something you know (eg a password, or your maternal grandmother's maiden name)

Something you have (eg a phone or a hardware token)

Something you are (eg a biometric)

It doesn't have to be those three, but they are the most common.

As you can see, the response from the banks totally undermines any confidence that they even understand what's being asked, because in the situation pointed out above, the bank is handing the attacker the tools to complete the compromise of the customer.

Of course, the access agreement is written with a totally one-sided assumption that the customer is the only person who could ever put the bank or the customer into jeopardy.

(click for bigger)

The part that says "Without limiting the generality of the first sentence in this Section 9," makes me shake my head, because of course, the first sentence says that you are on the hook for "any losses", whilst ignoring the fact that, as often happens, the bank has set the customer up for failure in the first place.

In a nutshell, the security situation is analogous to going into the sea to scuba dive, and the dive master saying "There are sharks here, and we take your security seriously, so here's some fresh raw beef steak to hit the sharks with" - and then, having set the divers up with the tools to be eaten, having them also sign an agreement which is totally one-sided and places all blame on the customers.

Tuesday, June 7, 2016

In Canada, we have our fair share of bank account phishing. Primarily, these scams originate from two distinct teams, and each team has its own trademark way of operating.

In one corner, we have Team Asia.

They register a proper domain.

They set up their own DNS and make the site look like a proper clone of the real bank site.

They are sometimes able to operate for a few months before they get taken down.

They send invites from the SMS code 7000 (formatted as 700-0).

In the opposite corner, we have Team Russia.

They don't register a proper site, preferring to hang off the back of an existing site.

They don't set up their own DNS, preferring to use the short.cm service.

They get taken down quickly, so are much more prolific.

They send text messages from full phone numbers, usually in Alberta, Ontario or British Columbia.

There are a few stragglers that I haven't assigned to one group or the other, but one particular code base does show up in a number of these, which means they're either the same person/group, or they're buying templates from the same source.

To give you an example of Team Russia:

Here's the SMS from area code 250.

As you can see, it's rather sloppy in comparison to Team Asia. The final link hangs off a Brazilian site, seen here:

URL aside, the site looks real, until you try to navigate, at which point you run into this:

And for anyone interested in the data, here that is (click for bigger version).

As you can see, this isn't complicated at all - and that's all it needs to be, because most of Canada's banks run online banking with holes that aren't secure enough to guard against anything much more aggressive than this.

So, there you go. The state of Canadian bank phishing in one quick post.

Wednesday, June 1, 2016

Over the years, I've had my fair share of runaway data. This is usually caused by Bell Canada, as they resell your data to third parties (if you want privacy, you have to pay Bell an extra fee of $2 per month), and once that data has left Bell, it's going to run and run as it passes from marketing company to aggregator to directory service to marketing again.

As a result of Bell Canada and the three-year battle to get a data noose around them, we have a strange win-win situation: Bell Canada sells my data over and over to the marketing people, so they get their money, and the buyers (marketing people, directory services, etc) then scrub my details from the incoming data. That stopped the runaway data in its tracks. As a bonus, I left my old residential data that Bell leaked some years ago online, so now it optically looks like Bell puts out stale data about me. It's a wonderful system and works really well.

For this post, I'm going to cover how I tidied up a similar problem a while back with online banking. When I log in to my two banks, I want to be dealing with the bank and the bank alone - not sharing my purchasing habits with the bank through a third party, or even having a third party track me at the bank and then report that to some computer hardware store 30 minutes later.

Both my banks use Omniture/Adobe Analytics. Right there, you've got the holy trinity of data sharing going on. Between the two banks and Apple (all the iTunes and Apple Store purchases run through the same system), the amount of back and forth of data would be astonishing. So, a while ago, I did something about it, as a result of trying to work out why a password issue (unrelated) was giving me so much hassle.

I ended up just driving all the junk requests for tracking, analytics, and marketing to localhost (127.0.0.1). Yes, I could opt out of some of this stuff (the banks don't offer you the ability to opt out, but if you track where the banks send this stuff, you eventually end up at Adobe, and THEY have a link that allows you to opt out), but that just sets a cookie in your browser - and as a developer, I reset my browser cookies and environment more frequently than the average person, which would opt me back in again. So, my solution was just to hack this stuff off at the knees by permanently editing my hosts file. If any page asks my browser to go talk to Adobe Analytics so it can generate another damn survey or indirect piece of targeted marketing, it now goes into a dark hole and never gets seen again.
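To illustrate the shape of such entries (the domain names below are typical-looking examples, not my actual block list), hosts-file lines like these resolve the tracking hosts to localhost, so the requests go nowhere:

```
# Illustrative hosts entries - send tracking lookups to localhost.
# Domain names are examples only, not a definitive block list.
127.0.0.1  metrics.example-bank.com
127.0.0.1  smetrics.example-bank.com
127.0.0.1  example-bank.d1.sc.omtrdc.net
```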

NOTE: Only change your hosts file if you know what you're doing. I'm not going to tell you how to change the file, as I don't want to be responsible for what you might break. I'm just telling you what I did. YMMV.

Thursday, May 26, 2016

I ran across an anomaly on the City of Toronto website today...

They go to pains to point out that you're on a secure site, in a weird "in-ya-face" way. This is normally a red flag for me. People make a point of pointing out something when they believe your mind needs to be changed. For instance, I'm told my burgers are 100% beef because someone in marketing thinks we suspect maybe they're not. I'm told the latest cars are much more fuel efficient, because someone thinks I might have other opinions on this. Now I'm being told in big message boxes that something is secure. Mental red flag.

OK, let's suspend disbelief for a second and imagine that it actually is secure - where is the browser padlock? (Click for bigger version)

I clicked the "Close" button and then got to the next screen, where again it points out on the first line that you are in a secure site. Again, there's still no browser padlock. I also noted the usual bank-trade trick of proclaiming things are secure and placing the onus on the user to make sure they've done their bit for security... even though the padlock is still missing. (Click for larger version)

So, I took a quick look at the certificate. The first thing you notice, other than that the City of Toronto is apparently in Macau, is the verification error showing this is a self-signed certificate - so it's no more valid than if I generated one here on my computer and delivered it on a USB stick to City Hall. I've highlighted both items in red.
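For anyone curious what "self-signed" actually means: it's a certificate whose issuer is its own subject - nobody in the browser's trust store vouched for it, which is why the verification error appears. A minimal sketch of that check in Python (the dict below mimics the shape Python's `ssl` module returns from `getpeercert()`; the values are made up for illustration, not the actual City of Toronto certificate, and real validators check the signature chain, not just the names):

```python
def looks_self_signed(cert):
    """Heuristic: a certificate whose issuer equals its subject signed itself.
    `cert` uses the nested-tuple shape of ssl.SSLSocket.getpeercert()."""
    return bool(cert) and cert.get("issuer") == cert.get("subject")

# Illustrative certificate dict (values invented for this example).
suspect_cert = {
    "subject": ((("organizationName", "Example Org"),), (("localityName", "Macau"),)),
    "issuer": ((("organizationName", "Example Org"),), (("localityName", "Macau"),)),
}

print(looks_self_signed(suspect_cert))  # → True: issuer == subject, no third-party CA
```

A certificate like this is trivial to generate on any machine, which is exactly why browsers withhold the padlock for it: the encryption may work, but there's no proof you're talking to who you think you are.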