New Bank-Fraud Trojan

The German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short) recently warned consumers about a new Windows malware strain that waits until the victim logs in to his bank account. The malware then presents the customer with a message stating that a credit has been made to his account by mistake, and that the account has been frozen until the errant payment is transferred back.

When the unwitting user views his account balance, the malware modifies the amounts displayed in his browser; it appears that he has recently received a large transfer into his account. The victim is told to immediately make a transfer to return the funds and unlock his account. The malicious software presents an already filled-in online transfer form, with the account and routing numbers for a bank account the attacker controls.

Comments

It doesn't make the transfer for you? The bank-fraud ones I'm worried about are the ones that cleverly re-allocate your two-factor authentications, so that they can use one to transfer a ton of money out of your account.

On the upside... while reading this, did anyone else suddenly think of "Bank error in your favor, collect $200?"

That one's pretty clever. Online banking security is fairly tight in Germany -- when I want to transfer funds, it will actually send an SMS to my phone with the code to confirm it, containing both the target account and the amount (two-factor done right, basically). So if I actually check that message (most people probably don't), there's no easy way to steal any money, even if the attacker has complete control over the computer. Thus it makes perfect sense to instead attack the weakest link and try some good old social engineering...

This is actually more dangerous (IMHO). The bank I use has two-factor authentication online. When I want to transfer money, I have to put my debit card in a reader, enter my PIN, then enter an 8-digit number the banking site gives me, and then I have to enter the total amount of the money transfer (rounded down).

If the total I see is different from the total that's being transferred, the resulting answer I get is different from the one the banking software expects and the transfer doesn't happen.
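The property described here (the confirmation code changes if the amount changes) can be sketched as a keyed hash over both the challenge and the amount. This is a purely illustrative construction; the real chipTAN-style algorithms, secret handling, and code formats differ:

```python
import hashlib
import hmac

def tan_response(card_secret: bytes, challenge: str, amount_cents: int) -> str:
    """Derive a 6-digit confirmation code from the challenge AND the amount,
    so a manipulated amount yields a different code (hypothetical scheme)."""
    msg = f"{challenge}:{amount_cents}".encode()
    digest = hmac.new(card_secret, msg, hashlib.sha256).digest()
    # Truncate to a short code the user can type back into the banking site.
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

secret = b"card-secret-demo"   # held on the chip card, never on the PC
challenge = "83741920"         # the 8-digit number from the banking site

user_code = tan_response(secret, challenge, 15000)   # user confirms EUR 150.00
tampered  = tan_response(secret, challenge, 999900)  # attacker-altered amount
# The two codes differ, so a transfer with a tampered amount is rejected.
```

Because the amount is bound into the code, malware that silently changes the transfer amount produces a response the bank will not accept.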

This malware tricks you into thinking that you actually have to transfer that amount and the messages seem to come from a trusted source.

> If it's an offshore account, it may be
> difficult or even impossible to get the
> foreign authorities to cooperate in
> monitoring the account.

If it were an offshore account, it wouldn't be an account with the same bank as the victim, so how would the victim transfer money to the account in question? I know that works in the movies, but in the real world things are a bit different. The user would presumably have to get a cashier's check and physically go to an office of the offshore bank with it. Not only does that make it harder for the victim to cooperate, but it also introduces a chance for the victim's bank to catch what's going on, when he comes in for the cashier's check. Even if the bank doesn't realize anything is wrong, they'd certainly inform the user of his (remaining) balance when he gets the cashier's check, so most folks would realize something was wrong.

Far more likely the criminals just use patsies to make the withdrawals and then replace the patsy (and the account) whenever the thing goes south. At a large, big-city type of bank, this would be relatively easy. A certain percentage of the withdrawals will be lost this way, but so what? As long as none of the patsies ever know anything of consequence about who's running the scam, and you have them make the drops at locations you can easily keep under surveillance without giving yourself away, the risk of anyone who matters getting caught is no worse than with illegal drug distribution -- i.e., low enough risk to be acceptable to a criminal organization.

> Obviously the attack appears from
> the description to be new, but is it?

The new part is making it so easy to comply that all the victim has to do is click one "yes" button, in the comfort of their own home.

The basic idea of the attack (sending someone a communication that _supposedly_ comes from their bank, instructing them that their account has received money it should not have, and asking them to send part of it "back") is of course older than computers.

> "Is Internet banking dead?"

All most folks really want out of internet banking is the ability to find out what their balance is at any given moment. (In principle nobody *should* ever need this, of course, because they should have it already in their checkbook ledger -- which is more reliable since it takes into account any checks that haven't got to the bank yet for whatever reason. In practice, however, people are lazy.)

> Are people really this stupid?

Yes. (The answer to this question will always be "yes", no matter what context you can think of to ask it in.) Well, some people are, anyway. If the scheme relied on *everyone* being this stupid, then it would fail -- but it only relies on *some* people being this stupid, and that's never a problem. The world's population is unimaginably vast, so it only takes a very, VERY small percentage of complete and total idiots to keep criminals of this sort endlessly supplied with victims.

One noteworthy thing about malware of this type is that its mechanism for getting on the user's computer in the first place (email-borne trojan, most likely) inherently selects for people who are especially likely to be, as you say, this stupid -- and especially unlikely to know how to quickly get the bank people the information they need to immediately get the police involved and promptly shut the thing down.

@jonadab
"If it were an offshore account, it wouldn't be an account with the same bank as the victim, so how would the victim transfer money to the account in question? I know that works in the movies, but in the real world things are a bit different."

I don't know where you are but, in advanced countries, you can make an online transfer to accounts in other banks including those in other countries.

Okay, I was forgetting about some of the differences between American and European banking. Nonetheless, it seems rather unlikely that a German bank would make it easy for the user to transfer money to an account at an offshore bank in an uncooperative jurisdiction where the recipients would be effectively shielded from criminal investigation. That would be foolhardy above and beyond all expectations, in addition to serving no legitimate purpose.

@Jonadab: "Nonetheless, it seems rather unlikely that a German bank would make it easy for the user to transfer money to an account at an offshore bank"

Non-US banks commonly don't feel that they have to arbitrarily block money transfers to unwelcome customers/countries.
It is their job to process payments and not to play world police.
There can *always* be a legitimate purpose and it is not up to the banks to decide.

Anyway, the relative simplicity of international payments is most probably not an issue here, since the common practice is to transfer the stolen money to the accounts of some hired stupid people who will then use services like Western Union to forward the money anonymously.

This operates like a man-in-the-browser (MTB) attack, but instead of trying to manipulate your transaction it manipulates you. It defeats many protection measures against MTB. In fact, if you have MTB protection you may be even more susceptible, as you may be lulled into a false sense of security. Of course you should always be suspicious of someone asking you to transfer money into their account, especially if it comes in a “message on the screen”.

However we probably will have to add secure balance checking to MTB protection measures.

Can't banks provide some extra protection against such threats? If such pages pop up, everyone would definitely go with the flow as you mentioned above, because it isn't like other malware; it works differently.

Sure, until a class-action lawsuit comes along. Do you not have those in Europe?

Why on earth would a bank deliberately make a cooperative account-to-account money-transfer agreement with an unreliable business partner whom they can't effectively pursue legally if anything goes wrong? They wouldn't, unless they're grade-AAA dumb.

> the common practice is to transfer
> the stolen money to the accounts
> of some hired stupid people

Yes, that's what I said in the first place. They probably just use infinitely-replaceable patsies to make the withdrawals. It's far more reliable than limiting yourself to victims who use a bank willing to support one-click online fund transfers to some seedy black hole run by guerrillas in the Caribbean. Sure, you have to pay the mules, but with a much larger pool of potential victims you still rake in more net profit.

I cannot speak for the whole of Europe (it consists of several distinct nations with significantly different legal systems) and IANAL, but there is no such thing in Germany (there are some very very limited variants).

"Why on earth would a bank deliberately make a cooperative account-to-account money-transfer agreement"

AFAIK such agreements are the 'default case' between any and all banks here. A bank would have to deliberately refuse to handle such payments, and they could easily find themselves in court for such a refusal (PayPal is currently being sued for not handling payments to European shops that offer Cuban goods to European customers).

I mean, why would a bank refuse its customer when he orders "Send EUR X,- from my account to account 4242 at Elbonia National Bank!"? It is not the bank's money they are sending, after all.

@Jonadab: "Nonetheless, it seems rather unlikely that a German bank would make it easy for the user to transfer money to an account at an offshore bank"

To understand this you need to understand that, within the EU, Germany is more like a state (e.g. New Jersey) than a country (e.g. the US). It is not really very big, not big enough to have a self-sustaining economy, and does very significant international trade at the individual and small-business level. When I was in Germany, a common bitter complaint was the fees on international transfers. Imagine if you had to pay a $10.00 fee on an internet purchase from Florida. Now imagine a German wanting to buy something from an Italian web site. International banking in Europe is easy because they are small countries; international there is more like interstate here.

“Can't banks provide some extra protection against such threats? If such pages pop up, everyone would definitely go with the flow as you mentioned above, because it isn't like other malware; it works differently.”

The problem here is that you supposedly know what you are doing – you actually want to pay beneficiary X the sum of $X.00. So even if the bank tried to get you to double check the transaction via a second channel, you would still agree to go ahead with the transaction.

I said earlier that the malware is not manipulating the transaction, but it is manipulating your balance amount. Therefore if you had a security gadget that performs the SSL encryption, then it could be used to display your balance. Critical data fields should be clearly designated in the web page HTML so that a security device can detect and display them directly to the user. That way no malware can interfere with it.

My first thought was "Hey bank; you made the mistake, you fix the problem." Then I remembered that I avoided a credit card confirmation scam at a hotel, because I was too annoyed with the hotel to be cooperative.

My conclusion is that one can avoid many scams just by being a cranky and uncooperative customer.

This hasn't happened to my bank account but I did have a water dept payment credited to my account by mistake. Suddenly I have a $12,000.00 credit. I called the water department to inform them that there was a mistake; the dept told me I needn't do anything, they would reverse it. (To this day I wonder who has a $12,000 monthly water bill.)

My own bank, Stellar One, in USA, has been warning about just this for a few months now. You have to click through a page of "what we'll never ever do and what the bad guys might try on you" to even get through login. They also do watch for unusual events and do actually call if they detect such, which I think might have been prompted by some hacking that's occurred in the past. They take reasonable care of me, as I'm a relatively large fish in a very small pond. They've recovered my money on a debit card hack in the past.

I haven't seen this exploit yet, but evidently they had, else why the detailed warning about it?

In the US banks aren't liable for corporate account takeover fraud. As Krebs has documented, lots of businesses have had their accounts compromised and significant amounts of money stolen. Provided the bank provided authentication along the lines of the 2005 FFIEC guidelines then a business might not have much recourse, especially if their own security practices were less than stellar. A number of lawsuits are working their way through the courts at the moment. FFIEC has recently issued updated guidelines. Many of the larger banks now offer products like Trusteer Rapport that provide protection against financial malware like ZeuS and SpyEye.

FBI/IC3/FS‐ISAC issued a detailed advisory on the issue last year (they've been issuing advisories on this issue since at least 2009): "Fraud Advisory for Businesses: Corporate Account Take Over" (http://www.ic3.gov/media/2010/corporateaccounttakeover.pdf) that has lots of advice including: "A workstation used for online banking should not be used for general web browsing, e-mailing, and social networking. Conduct online banking and payments activity from at least one dedicated computer that is not used for other online activity." Krebs, SANS and others have recommended use of LiveCDs and ROBAM (read-only bootable alternative media).

"A fool and his money were lucky to get together in the first place - Bill Cosby"

Some fools inherit it. Some hit the Lottery. Some get elected to Congress. :)

@ Tim#3:

"Such stories as this make me curious as to what proportion of security aware people actually trust online banking enought to use it for their primary account."

Security-aware people may use it in a more security-aware manner than the rest of the populace.

Examples:

I do *not* authorize external bill-pay, so that no one can forge a request or authorization to draw $ from my account. The bank would require me to change that setting in person, in writing, with proper ID if the local branch didn't know me by face.

The only exceptions are the power company, which is heavily regulated by the State, and hence responsible for errors, and the local telephone landline provider, ditto.

At times, I do need to transfer funds to a business partner, and vice versa. I had to (again) go into the local branch in person, and sign a form authorizing them to honor online requests to xfr $ to *that specific account and recipient only*. (and my associate had to do the same.) What's the worst that could happen? A crook fraudulently moves $ from my acct to my associate's? Easily traceable; I trust my associate to return it. No gain for the crook.

My online-*only* bank has no authority to move funds *anywhere* except to my local bank, which they have verified by multiple means: voided copy of a blank check, preprinted with my name/address, on the local account; two random deposits of less than a dollar each into the local account, which I have to log in to online bank and correctly report the amounts, and others.
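The micro-deposit step of that verification can be sketched as follows (the amounts, ranges, and function names here are invented for illustration):

```python
import secrets

def make_micro_deposits() -> tuple:
    """Two random deposits of 1-99 cents into the account being linked."""
    return secrets.randbelow(99) + 1, secrets.randbelow(99) + 1

def verify_link(expected, reported) -> bool:
    """The customer must log in and report both amounts; only someone who
    can actually read the target account's statement knows them."""
    return sorted(expected) == sorted(reported)

deposits = (7, 42)                     # cents, as they appear on the statement
print(verify_link(deposits, (42, 7)))  # True (order doesn't matter)
print(verify_link(deposits, (7, 41)))  # False
```

The security here comes from the out-of-band channel: the random amounts are only visible on the statement of the account being linked, so knowing them proves control of that account.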

Again, no benefit to a crook. He can't get any money. He could move it from online to my local. So what? He could inconvenience me by drawing down the local checking account and move it to the online acct, which might result in low-balance fees, bounced checks, etc. If crooks with that level of skill are content with mere mischief... why, when they could actually *steal* money from less security-aware users?

There's a name for all this, but I confess it isn't original: "Principle of Least Privilege". Give each account the least privileges needed for my convenience without creating additional risks.

In Germany, banks can not cancel or revert wire transfers after they have been committed. Also, wire transfers and direct deposits are quite common in Germany, cheques are virtually unheard of. So the only way to get your money back if you entered the wrong account information or the wrong amount in a direct deposit form is to ask the receiver (not the bank!) to transfer the money back.

@Dave F:
"Germany is more like a state (e.g. New Jersey) than a country (e.g. the US) within the EU. It is not really very big, not big enough to have a self-sustaining economy"

Germany's GDP is more than double that of Russia. Only the US, China and Japan are bigger than Germany (economically speaking).

@tommy:
I have been using online banking as my primary account for 9 years now, since last year as my one and only account. I feel perfectly safe doing so, and I'm pretty sure that is not just a feeling :) I'm security-aware (even so hardcore security-aware that I read security blogs like Bruce's here), but I have never heard of any fraud or other malicious activity which does *not* rely on human stupidity. I never have to go to the local branch in person, I never rely on office hours, and, unlike non-internet banking, it's free (including brokerage and Visa/Mastercard/etc.). Stealing money from my account is certainly possible somehow, but I consider someone stealing my car from my garage or beating me to death for my shoes much more likely.

@Mark: "Critical data fields should be clearly designated in the web page HTML so that a security device can detect and display them directly to the user. That way no malware can interfere with it."

Isn't that the problem? That even if it is in the HTML stream, malware and even JavaScript can rewrite them? A secure appliance would have to be robust, with preferably a high assurance OS and the most minimal software to do the job. I believe there was a side discussion on this here a while ago. Even an HTML browser, with no JavaScript, would be very complex. But there may not be very many options, especially if a standard is desired. There would have to be no channels for unauthorized software to install, otherwise back to the same malware issue.

Commodity PCs have numerous problems. One is that if the bad guy can get his software onto your system (even via least privilege), then it's not your computer anymore. (I believe someone at Microsoft formalized this.) Especially since the OS is no longer the target, but rather the user account and browser. We know many users will run whatever gets sent to them, so even if your OS has no exploits that allow remote code execution, you are still vulnerable to PEBKAC. Once malicious code is run, your browser, no matter how secure or with NoScript, is fully compromised. Maybe deeper app sandboxing, using a hypervisor to set up different OS domains for each app, will help with this angle.
An even worse problem is that by default, most browsers WILL run the bad guy's code, in the form of JavaScript. That JavaScript can even change the display in a browser. You can tout NoScript as much as you'd like, but the fundamental model is still broken, and so many sites depend on JavaScript, even scripts loaded from OTHER domains (looking at you, Facebook)! Unfortunately, NoScript so often fails the wife, or mom, or pops test. Therefore, it is forever relegated to the hands of just a few, especially when you have to explain the process of whitelisting 10-20 scripts just to get their favorite site to work. There is no such thing as a secure display in a browser; this will probably go down as one of the greatest security design failures in history.

"Critical data fields should be clearly designated in the web page HTML so that a security device can detect and display them directly to the user."

How do you propose to make that work?

"That way no malware can interfere with it"

For this to be true, your "security device" needs to be an entirely separate device from the PC (otherwise I/O driver shims could do an end run around it). It would further have to be "immutable" and at the very least "tamper evident".

Then there is the question of its input channel: this needs to be low-bandwidth and of a form where an unassisted human can verify what goes in, to prevent there being "side channels" due to attackers getting into the supply chain of the tokens to either the bank or its customers (remember, EPOS terminals have been got at this way, and likewise Apple sold media players with PC malware on them for the same reason).

Conceptually, security is easy; in practice it is very, very hard, because it has to start before the product is even thought of and remain in place through each and every step until after the product is retired.

Can it be done? Yes (have a look at the history of very-high-security computer systems used by the US and other governments). Can it be done at a cheap enough price for mass-market giveaways by banks? I think the answer is yes, provided they take suitable care with security along the way and use probabilistic testing methods.

I will assume for argument's sake that they have good controls in place up to the point of dispatch. This leaves the path to the customer and the time the customer has it in their possession. The bank cannot mandate how the customer will use it, only how they should use it. Thus the need to ensure that the customer can easily visually identify that the device has not been tampered with, by the use of good, reliable tamper-evident seals on the device itself.

“Isn't that the problem? That even if it is in the HTML stream, malware and even JavaScript can rewrite them?”

No. The appliance is doing the SSL decryption, so it gets to see the HTML/JavaScript first, before the client PC. The only HTML/JavaScript it sees is therefore from a trusted source, i.e. the bank’s website, which it has already authenticated since it has the bank’s cert/public key.

“A secure appliance would have to be robust, with preferably a high assurance OS and the most minimal software to do the job.”

Robust – yes, OS – no. This is similar to the IBM ZTIC device but it only really needs a small core of the SSL necessary to keep the secrets, the rest is provided in the supporting interface software on the user’s PC.

“I believe there was a side discussion on this here a while ago”

Yes it was I that was involved in that discussion.

My concept device was implemented on an ARM SOC in less than 256KB of on-board flash. There are only around 5 distinct commands that are sent through the USB interface, so it is not difficult to really analyse and test those commands.

If you add more certs/public keys you might need a bit more flash space, and it might require a bit more functionality in a final version. However, this is orders of magnitude less complex than an OS + browser etc.

The code is in fact small enough to be easily analysed using formal modelling techniques i.e. it could go through the Common Criteria validation process. It could also be implemented in read-only code such that it cannot be modified by any malware attacking the interface.

What I mentioned earlier about the critical data fields needing to be designated in the HTML could simply amount to duplicating these fields in an HTML comment together with a searchable string. However, the ideal would be standardisation through the RFC process. Just like HTML has a standard form field for passwords (which I make use of), it would be ideal if there were a standard “confirmation field”. The bank could then put critical data (e.g. beneficiary account, balance amount, etc.) into these fields, and these would be easily searchable by a crypto device which could display them directly to the user.
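A toy illustration of that idea, assuming the bank duplicated the critical values in HTML comments behind an agreed marker (the `X-CONFIRM` string and field syntax here are invented, not a real standard):

```python
import re

# Hypothetical marker; a real scheme would be standardised, e.g. via an RFC.
CONFIRM_RE = re.compile(r"<!--\s*X-CONFIRM\s+(\w+)=(\S+?)\s*-->")

def extract_confirmation_fields(html: str) -> dict:
    """Pull out the designated critical fields so a trusted device can
    display them to the user, independently of what the browser renders."""
    return dict(CONFIRM_RE.findall(html))

# Raw HTML as received from the bank over SSL. Malware can rewrite what
# the browser *renders*, but the device reads this stream first.
page = """
<p>Balance: <span id="bal">EUR 1,234.56</span></p>
<!-- X-CONFIRM balance=EUR_1,234.56 -->
<!-- X-CONFIRM beneficiary=DE89370400440532013000 -->
"""

fields = extract_confirmation_fields(page)
print(fields["balance"])      # EUR_1,234.56
print(fields["beneficiary"])  # DE89370400440532013000
```

The point is that the device extracts these values from the authenticated stream before any malware on the PC can touch them, so whatever the browser later displays, the device's screen shows the bank's real figures.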

@Clive Robinson,

Saw your post just before I sent this. Yes I know about the EFTPOS terminals that you refer to. The supply-chain security problem affects all security devices though. This is an entire discussion on its own. I have some ideas on that topic too but I need time to formulate those so I will post this off now.

"... topic too but need time to formulate those so I will post this off now."

It's a fun problem to solve, but luckily there is a reasonable solution: the bank puts a card reader in the device, puts all the security stuff onto the bank card, and routes comms through the bank card; the security device is made as passive as possible and limited to basic terminal functions. There are still a few gotchas, but these can be partially resolved in other ways.

Back to your description of your device: yes, I like the extension of the comms channel out into the security device, but I don't like the idea of USB, because it's not including the human in the comms channel.

The real problem is that each and every transaction (not connection) needs to be authenticated by both the bank and the user. Whilst the bank can and will have the ability to protect their end of things, it needs to be taken for granted that the user cannot, and worse, the human cannot do authentication of any great bit strength (maybe 16 bits max).

To ensure end-to-end authentication, the user actually needs to be "in the comms channel" to prevent end runs in the computer or device.

If you hunt far enough back on this blog you will find several posts I made where I went through the rationale and why I decided it had to be that way.

A bank card has no user interface. It relies on a trusted terminal, which is why the hacked EFTPOS terminals you mentioned were effective. But I know that you know this, so perhaps I haven’t explained my gadget clearly enough. The trusted terminal problem is exactly what my device is intended to solve. It is like having the bank card and terminal in one (although for Internet banking there is no real need for the bank card).

In a nutshell:

The device is a small USB gadget that effectively acts as an SSL proxy, allowing it to have direct and first access to the clear-text data to and from the host. It has a small user interface, so it can accept (and optionally store) specific user input (PINs, passwords, credit card numbers, etc.) and display specific user/server output data fields. This allows it to insert either pre-stored or on-demand passwords etc. into the outgoing stream. It also allows specific confirmation data fields from the server to be displayed, e.g. a beneficiary account number, balance amount, etc.

So the actual passwords are only available in the comms path and PC-side in SSL-encrypted form and any confirmation fields (beneficiary account no., balance amount) are confirmed to the user via the integrated display just before being SSL-encrypted, so they cannot be manipulated before going to the server.

In terms of inserting a password, it automatically detects the standard HTML password form field and the corresponding response message (used by all secure web servers), and substitutes the dummy password with the real one. I have confirmed that this function works on all the major web sites, e.g. PayPal, Amazon, eBay, Google, LinkedIn, etc., and at least four banks that I have checked (probably most).
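The substitution step might look roughly like this (a sketch only; the field names, the dummy token, and working on a parsed form body rather than the raw SSL stream are all my assumptions):

```python
from urllib.parse import parse_qs, urlencode

DUMMY = "XXXX-DEVICE-XXXX"        # what the user types into the browser
REAL = "s3cret-stored-on-device"  # the real credential, held only in the gadget

def substitute_password(post_body: str) -> str:
    """Replace the dummy value of the password form field with the real
    credential, just before SSL encryption inside the device."""
    fields = parse_qs(post_body, keep_blank_values=True)
    out = {}
    for key, values in fields.items():
        out[key] = [REAL if v == DUMMY else v for v in values]
    return urlencode(out, doseq=True)

body = urlencode({"user": "alice", "password": DUMMY})
print(substitute_password(body))
# user=alice&password=s3cret-stored-on-device
```

The real password never exists on the PC in the clear: the browser only ever sees the dummy, and the substituted form travels onward only inside the SSL stream.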

For displaying server confirmations, it might require a little help from the host server in terms of detecting the HTML display field, or else it would need to store a profile for each website (not ideal). However, it costs the host service very little to tweak a bit of HTML.

I agree with you whole-heartedly about the user being in the loop. That is what my aim was with this device – The gadget directly authenticates the server through normal SSL and displays the server validity directly to the user via the integrated display. In Internet banking the main threat is the beneficiary account number. As long as the user is satisfied that the account number displayed on the device is correct then it cannot be manipulated other than by breaking the SSL crypto. In this it is almost identical to the IBM ZTIC. The difference is that it also allows the user’s password to be sent securely to the bank and therefore does not require the use of a client cert (so no private key needed).

In terms of this thread/topic of discussion, we would have to add the user’s balance as an additional critical field to be displayed on the device.

Good comment, but I'm afraid the point was missed. In setting up the ability to move funds online from my account to another person's account, I'm glad that local bank required in-person authorization. Otherwise, anyone who could get into the account could set up an "authorization" to xfr funds to their own account. (Numbered account in Switzerland? Shell account in the US, using forged ID, and quickly closed after the ripoffs? etc.)

I think I didn't miss your point. For you, local bank in-person authorization is a good thing. For me, it is a bad thing. As I stated above, I think internet banking (here in Germany at least) is safe enough to fully rely on it.

One has to keep in mind that there are differences between the US and Europe. Credit cards are just now starting to become a little popular (often offered for free by internet banking companies). Before that, bank transfer was practically the only online payment method (except cash on delivery which is another 10€/$ fee). So in a way, one could compare internet banking in Europe to credit cards in the US. Additionally, cheques are never used in Europe. So bank transfers are probably much more popular and much more widely used than in the US. Doing all that in person, I'd be at my local bank (which is closed when I'm not working anyways) twice a week.

As an aside on bank security, I recall reading many years ago a book about a group of gold smugglers back in the '60's, I think it was. They acquired gold in the West and smuggled it to Asia where it was worth a lot more, IIRC.

Anyway, in the process of their business, they ran into a variety of people in Asia with various shady scams - it seems to be quite common over there.

One of them was a guy who worked at one of the major international banks. He arranged for the gold smugglers to actually enter one of the branches of the bank (in Hong Kong, I'm thinking, but might have been South Korea or elsewhere) and acquire all the codes used to authenticate wire transfers anywhere in the world.

The gold smugglers actually had the codes and were implementing a plan to transfer a large sum of money to various accounts under their control when, in some manner I forget, they got caught. In fact, several of them were caught first and the author of the book was still at large and intended to threaten the use of the codes in order to get his associates released (again IIRC). For some reason, this fell through and he got caught. It was a fascinating book.

The point it raises is the security of banks. The case of the programmer who learned the wire transfer authorization codes for Security Pacific and used them to wire $8 million to himself is another case in point. Allegedly he only got caught because he ratted himself out to his attorney.

If banks make wire transfers completely secure on the client side and on the bank side, it's only a matter of time before the banks entire authorization system becomes the target.

I might also point out that third parties who are trusted by the corporation may also have access to this information. Besides the programmer mentioned above, when I was at Bank of America, I did customer support for the bank's Microstar cash management application which the bank sold to Fortune 1000 corporation treasury departments. This consisted of a number of spreadsheets and ancillary programs to download bank balances from bank cash reporting systems and analyze them for treasury management purposes.

These systems were specifically developed for each client by cash management analysts working for BofA. They were run by treasury people in the client corporation. But if a problem developed, the spreadsheets and other programs were debugged and fixed by support personnel at BofA, including myself.

In the process of fixing these spreadsheets and downloading bank balance statements to test them, we used live client corporate bank accounts with live passwords. I routinely saw account numbers for accounts with scores and even hundreds of millions of dollars in them at multiple banks for a given individual client.

We also had a wire transfer program which at some point IIRC our support department was going to be doing support for. This would have meant that the same support people who saw corporate account numbers and passwords would also be working on the wire transfer capabilities of those corporations.

My point is this: given the number of people at a corporation and its bank (multiple banks, for large corporations) who may have access to wire transfer authorization data, including third parties who may not normally be directly involved in handling wire transfers but who nonetheless have routine access to the data, it may well turn out that the biggest vulnerability in the banking system is these people.

Add in the possibility of someone on the inside of the bank's IT department with access to the wire transfer production software itself and the problem magnifies.

The scenario of a large wire transfer from a client corporate account that then proceeds to disappear from the bank's records (and possibly even from the corporate client's records!) altogether - a la the Zeus trojan manipulating the client's records not to show it while the bank side is similarly manipulated - becomes a real possibility.

While such cases may be rare - and with proper procedural controls in place can be made rarer still - when it does happen, it could happen big.

I have also seen very dodgy security on the bank's side, and as you point out, this is not a new problem. I agree that problems with bank insiders (and consultants) will escalate, but I am more concerned about personal liability.

However, many countries have in place, or are implementing laws that enforce liability on companies that keep personal information on their clients. So at least you have some recourse if you lose money due to the bank’s weaknesses.

The problem is that banks have disclaimers protecting them against losses that you might incur through Internet banking i.e. you bank online at your own risk.

Sure, they might compensate small losses in order to keep it out of the press, but there are many instances where they do not. I have seen at least three local newspaper articles about losses involving people in my own city where compensation was not forthcoming. On a world scale it must be pretty huge.

Internet banking and payments is highly convenient and worth trying to save. That’s why I am interested in finding client-oriented solutions, independent of the bank’s highly limited offerings. Strong client security has the potential to shift the security problem back to the bank, where they are liable for losses.

@ jonadb "(In principle nobody *should* ever need this, of course, because they should have it already in their checkbook ledger -- which is more reliable since it takes into account any checks that haven't got to the bank yet for whatever reason. In practice, however, people are lazy.)"

The checkbook ledger tells you what YOU have spent, but won't catch fraudulent transactions or bank errors. I'd rather catch that sort of thing sooner than later, and generally log in online about once a week, rather than waiting for the monthly statement.

In ten years I've had two instances of fraudulent charges, both caught within a couple of days -- one even before the charge cleared. Much easier to deal with early, particularly since the first transaction is often a small test before they try for larger amounts, than after your account is empty and other transactions start bouncing.

Yes, that’s the IBM device. However, their device does not submit the password via the gadget. The PIN/password still gets typed in on the computer. They compensate for this by using SSL in client mode, i.e. the gadget has a private key and cert.

My gadget works on a slightly different principle, but I think that the difference is important. Their model is a relationship between the bank and the device whereas mine is between the bank and the person.

IBM plays down the client-side SSL by saying that this is a standard part of SSL anyway. However in practice there is a huge difference.

Firstly, not many implementations of SSL have been tested thoroughly in client mode. Client mode hasn’t had the years of hammering that server mode has.

Secondly, in client mode the bank has to certify each client’s certificate and possibly take the responsibility of generating their private keys too. This ties the bank into the gadget supply stream which then results in large non-core infrastructure investment, training of staff, help desk, etc. The bank becomes a CA and all that goes with that (bunker facility, disaster recovery, etc.).

With my model, the status quo prevails (much more attractive to service providers). It is also more generic, i.e. it can work with other HTTPS services too. So one gadget serves many services: PayPal, cloud services, etc.

Losing the ZTIC is a big deal since it has a private key and no user authentication. Yes there is still the password entered on the PC, but this is entered on an untrusted platform.

I cover some of these arguments in a paper on the subject. If you are interested you can google “In-the-wire authentication”.

"Yes that’s the IBM device. However their device does not submit the password via the gadget."

It was Edrik that mentioned the IBM device, not I.

Not that I am unaware of it; I just never could be bothered to chase IBM for all the details when it first surfaced (there was another device as well back then, I think from Chronos, which used a diamond of coloured dots to transfer data from the web browser into the device).

With regard to your device and the IBM device, they both have a common failing: an electrical connection of unknown bandwidth, and thus side-channel capability.

But actually the primary failing is the electrical connector.

USB connectors are often only good for between 50 and 250 operations (an operation being an insertion or removal), like nearly all mass-produced electrical connectors of any type. It does not matter if you use a high-quality electrical connector in the design, because the chances are the PC uses a sub-10-cent connector on the motherboard.

So you would have to leave the token/device virtually permanently connected to the PC to get more than a hundred or so reliable uses out of it (it is this failing which is the root cause of the "contactless" technology that is giving us security nightmares).

Now call me old fashioned, but if the token has to remain connected to the PC, then unless you "lockup the whole PC" it's actually not a lot of use as one authentication token, let alone the one on which the bulk of the security rests.

"USB connectors are often only good for between 50 and 250 operations (an operation being an insertion or removal)..."

I have only laptops, which tend to have even shorter component lives due to less ventilation, need for compactness, etc. For various reasons, I often insert/remove a USB flash drive several times a day. Surely more than 50 times a month, and probably 250 times in 3-6 months.

I had one \flash drive/ fail after a few years of this (trying to avoid unintentional double-entendres here), when the insertion part started to have too much play within its casing, but the USB connectors themselves are still fine after 6 years of very heavy "operations".

Wow, I can't believe I didn't look at this article sooner. Yes, to the earlier poster, we've had this discussion before. And Clive & I originally worked out most of the principles over a year ago, especially ensuring almost total risk mitigation against software threats. The most recent conversation, which mainly applied to payroll, was here.

I do like Mark's scheme. I think it can be made arbitrarily more or less secure than it is in concept. I just disagree about form factor and features. I think the security device must have a decent-sized screen or be connected with a KVM switch to the monitor. The reason is that the user needs to see a good deal of information & scrolling through lots of info on a tiny LCD screen isn't very pleasant.

My scheme would have a screen at least as big as most POS card swiping devices. It could be connected via USB, but the protocol must be customized. I simply DO NOT TRUST a vanilla USB stack. Side-channel attacks that software can see can be mitigated by processing data in fixed-size blocks & at fixed time intervals for various functions. Alternatively, the system can inject bogus data and operations that decrease the probability that the eavesdropper can figure anything out.
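
A minimal sketch of the fixed-size-block idea (all names here are mine, illustrative only): every message is padded to one constant-length record, so an eavesdropper watching the link learns nothing from record lengths. Timing equalization would have to be handled separately.

```python
BLOCK = 64  # fixed record size in bytes (an illustrative choice)

def pad_to_block(data: bytes) -> bytes:
    """Length-prefix the payload and zero-pad to a constant record size,
    so record length leaks nothing about content length."""
    if len(data) > BLOCK - 2:
        raise ValueError("payload too large for one record")
    return len(data).to_bytes(2, "big") + data + b"\x00" * (BLOCK - 2 - len(data))

def unpad(record: bytes) -> bytes:
    """Recover the original payload from the 2-byte length prefix."""
    n = int.from_bytes(record[:2], "big")
    return record[2:2 + n]

record = pad_to_block(b"transfer EUR 100")
assert len(record) == BLOCK
assert unpad(record) == b"transfer EUR 100"
```

Larger messages would simply span several such records, each still indistinguishable by size.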

Note to Clive: I agree with tommy about USB connector reliability & mine that don't stay in still work if i hold them in. (Or prop them in. ;)

Got great news for you, Mark. I've told you that I prefer simplified hardware architectures & OS's with security/quality baked in. I also thought we could use one of the safety-critical RTOS's to implement your device. Well, Green Hills has partnered with a few new companies that make PowerPC chips, cryptochips, tamper-resistant chips, and microcontrollers.

So, we could get a cheap PPC SOC, disable anything unnecessary, increase the assurance of the firmware, use the Chromebook BIOS/boot strategy, use Integrity/Integrity-178B as the RTOS, customize some of their middleware, have Praxis develop the software using their Correct by Construction technique, rigorously test it at interface level, test for fail-safe due to hardware issues, apply formal covert channel analysis at certain levels, and compile any C code with the CompCert C compiler. The result should be certifiable to Common Criteria EAL5+. I chose that target because it's more cost- and time-effective for a product than EAL6-7 & that's the level that requires NSA pentesting.

Look at Green Hills' website for information on their INTEGRITY RTOS, their pre-made platform packages (like Cryptography), and the press releases showing what hardware they've supported. I don't like promoting one company over others, but they've cranked out more products in the past year or two than anyone else in the high-assurance software industry. They've also completed an EAL6+ evaluation with EAL7-like development requirements. I figure they would be the ideal partner & solution provider should you decide to implement your device.

Mark Currie:
you can enter a PIN on the ZTIC, there's a rollerwheel on the right of the device (below the buttons, it's visible in the video), although applications are not forced to use that. Entering a PIN with this and the two buttons is a bit clumsy, pretty similar to unlocking a bicycle chain lock, but it's completely out of the host PC's reach.

Apologies to Clive and Edrik for my blindness. Apologies also for this rather long post. I hope I can keep your interest.

@Edrik,

Yes, the combo-lock type PIN entry mechanism formed part of a later patent. The original device did not have that. However, the wheel was said to be used for entering a local PIN, not a server PIN. They have introduced this probably because they realised that their device could be defeated by a divide-and-conquer attack, i.e. first capture the PC-entered server PIN, then borrow/steal the device. They are starting to overlap my design quite a bit now with the PIN entry mechanism. What I can say, though, is that at least my design is protected by my prior art, since my provisional was lodged before even their first one.

The core idea of my concept is really the protected insertion of the user-to-server password into the authenticated server encryption stream. This is what turns the procurement model on its head, from a service-centric to a client-centric model. With no prior crypto relationship required between a client and service other than the password, the solution can be sold directly to the public.

@ tommy and Nick P
Thanks for the rescue, but Clive does have a point on the USB too. As you might appreciate, developing code on the gadget means that I have had a LOT of experience with USB connectors and I have had mixed results, some good, some bad (mostly good though).

Perhaps it would be better to consider this concept based on the core idea since it is really comms-agnostic. In fact perhaps I should start with the blue-sky option and then scale back accordingly. I am starting to learn not to constrain ideas to the limits of today’s costs and technologies.

Consider the following:

The gadget could in fact be hardware partitioned to include a WiFi hotspot and link to 3G. This configuration allows it to be used with both desktops and mobiles.

The device is hardware-partitioned into a comms processor and a crypto coprocessor such that only the crypto processor has access to the UI. In this configuration we can also have Nick P’s large screen if we like, touch pad too, why not.

In addition to communications, the comms processor also handles some of the SSL proxy functionality. It is responsible for feeding the crypto processor with the data to and from the service provider (bank etc.), as well as to and from the client platform (PC/laptop/mobile). So what we have is an SSL-encrypted comms session between the gadget and the service provider, and another SSL-encrypted session between the gadget and the client platform. The comms processor can eventually be hardened to run from read-only memory since it only executes well-defined functions. This will preclude hacking and modification by external malware.

The link between the comms processor and the crypto processor is an internal link (whatever technology – SPI, PCIe, etc.). The command set on this link is constrained and designed for high assurance.

The crypto processor can have whatever high security assurance level is within budget. It allows the user to log on to it using whatever means: biometric, graphical, or text passwords, etc. Only after successful login will the user have access to the stored passwords. The stored passwords are linked to a corresponding server public key. The user’s passwords etc. could even reside on a removable MicroSD card (only accessible to the crypto processor) so that they can be backed up.

The user can also opt to only enter the server PIN/password when prompted to by the gadget after the HTTPS connection is made. This can in fact be split into a gadget-stored portion (maximum entropy) and a user-entered PIN/password.

So what we have is a single-sign-on device, but a rather secure one, since each service has its own unique maximum-entropy password, protected end-to-end, and with no reliance on a trusted delegation service.
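
The split between a gadget-stored maximum-entropy portion and a user-entered PIN could be realized with a standard KDF. A sketch, with function and parameter names of my own choosing (not from Mark's actual design):

```python
import hashlib

def derive_service_password(device_secret: bytes, user_pin: str,
                            service_id: str, length: int = 32) -> str:
    """Derive a unique, high-entropy password per service from a
    device-stored secret plus a user-entered PIN. Neither half alone
    is enough to reconstruct the password. (Illustrative names only.)"""
    material = hashlib.pbkdf2_hmac(
        "sha256",
        user_pin.encode() + service_id.encode(),  # user-entered portion + service binding
        device_secret,                            # gadget-stored high-entropy portion, as salt
        100_000,                                  # iterations: slows offline guessing of the PIN
    )
    return material.hex()[:length]

# Same PIN, different services: completely unrelated passwords.
pw_bank = derive_service_password(b"\x01" * 32, "4711", "bank.example")
pw_mail = derive_service_password(b"\x01" * 32, "4711", "mail.example")
assert pw_bank != pw_mail
```

Binding the service identity into the derivation is what gives each service its own password without the device having to store one secret per service.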

As discussed earlier, one would need to consult with banks to try and get some consistency in the means of detecting critical data fields for direct display to the user. This is really the only disadvantage of this solution, but when you consider the alternatives facing the bank, it is a trivial burden by comparison. The other disadvantage is the cost, but in its cheapest configuration (USB) it would cost less than the cheapest of cell phones, and bear in mind that it can work with more than one service.

@ Clive Robinson

Thanks for your inputs. Tempest, electrical/timing side-channels are always an issue. These issues will of course have to be considered carefully in any implementation. However, the concept in essence does not expose any new vulnerable channel. As discussed above, the external comms interface can also be just a normal SSL channel. BTW here are some refs to the IBM ZTIC:
For overviews you can google:
“br-sec-ibm-zone-trusted-information-channel-en.pdf”
Or “Secure_Internet_Transactions_0.pdf”
The IBM academic paper was available for free online some time back (and is referenced in my paper). You can google:
“ZTIC-Trust-2008-final.pdf”

I looked into the problem of unreliable connectors back in the mid-1990s, and the solution I came up with then was a pair of IR diodes and a simple logic circuit implementing an IR serial comms channel, similar to I2C but using an open standard [AX25]. I still refuse to design around a royalty-based standard, or one controlled by a paid-up membership association etc., because it really serves no user's interests; they get "tied in", and in the end it does not actually serve the designers and controllers of the standard either.

The modern equivalent could be a USB serial dongle that uses an IR comms channel and say PPP and an open IP stack over it etc. That way you have an open standard bridge over the tied standard of USB.

Thus the IR dongle can remain permanently plugged into the laptop, just like the ultra-miniature Bluetooth and WiFi USB dongles. The advantage of IR is that it is much, much easier to secure from eavesdropping or fault-injection attacks etc. And the security token can be locked in a safe (or whatever the user deems secure storage) when not in use.

"The device is hardware-partitioned into a comms processor and a crypto coprocessor such that only the crypto processor has access to the UI."

I see you were paying attention in our last discussion. ;) Hardware partitioning, complex code isolation, trusted path... If only I could convince mainstream security device developers to use these techniques.

I like a lot of what I'm reading in your post. Particularly the removable MicroSD card & boosting the entropy of user passwords. Most password managers already implement this feature, so it will be a necessity in any competing product. The problem is that it's complicated by web sites that have restrictions on the password. For instance, many sites have a low maximum length, ban characters like -, or only allow numbers. Even major banks do this. *cough* Chase *cough* We'll have to account for it if the device is generating the passwords. Might have a UI on the untrusted PC that lets a user tell the device to generate a password with certain constraints; then the device shows the request on its screen, the user clicks OK, and the password is generated. Seems workable.
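
The "generate under site constraints" step might look like this on the device side. A sketch; the constraint parameters are hypothetical, not from any shipping product:

```python
import secrets
import string

def generate_password(length: int = 16,
                      alphabet: str = string.ascii_letters + string.digits,
                      banned: str = "") -> str:
    """Generate a random password honoring a site's constraints
    (length cap, allowed alphabet, banned characters).
    Parameter names are illustrative only."""
    allowed = [c for c in alphabet if c not in banned]
    # secrets (not random) gives cryptographic-quality choices
    return "".join(secrets.choice(allowed) for _ in range(length))

# A site that only accepts an 8-digit numeric password:
print(generate_password(length=8, alphabet=string.digits))
# A site that bans '-' but allows some punctuation:
print(generate_password(length=12,
                        alphabet=string.ascii_letters + string.digits + "!#$%",
                        banned="-"))
```

The untrusted PC would only transmit the constraint set; the randomness and the resulting password never leave the device except inside the SSL stream to the service.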

Also, I don't know if I mentioned it previously, but I already have a simple way to make it bigger & stuff. We could build it to be like one of the old "electronic organizers." They were small enough to put in a pocket, had a large LCD screen, QWERTY keyboard, and it flipped shut when not in use to save space. This plus a small board w/ SOC & comms functionality and we're set. :)

"So what we have is a single-sign-on device, but a rather secure one, since each service has its own unique maximum-entropy password, protected end-to-end, and with no reliance on a trusted delegation service."

Sums it up nicely.

"As discussed earlier, one would need to consult with banks to try and get some consistency in the means of detecting critical data fields for direct display to the user. This is really the only disadvantage of this solution, but when you consider the alternatives facing the bank, it is a trivial burden by comparison."

Yeah. That's why I like your concept better than mine. With all these sad court rulings, banks aren't going to implement something like this unless it's nearly free for them & cheap for the customer. I also like that you have prior art over the ZTIC.

"I looked into the problem of unreliable connectors back in the mid-1990s, and the solution I came up with then was a pair of IR diodes and a simple logic circuit implementing an IR serial comms channel, similar to I2C but using an open standard [AX25]."

It's funny you mentioned IR. Last week, I was looking at IR Free Space Optics as a partial solution to covert, long-distance networking. I was also looking at vanilla IR for various tamper-resistance & stego applications. I never thought about using it to ditch connectors in a high robustness product. Hmm. I think I'll have to go back over some old designs. ;)

"I still refuse to design around a royalty-based standard, or one controlled by a paid-up membership association etc., because it really serves no user's interests; they get "tied in""

I totally agree. It's why I wouldn't use OLE Automation or I2C unless I'm forced to. Same goes for MP3 and similar compression standards.

The figures I quoted were from various manufacturers' reliability data.

One area of security you very, very rarely hear talked about is exploiting replays due to link unreliability, and I've never seen anyone refer to it with regard to connector reliability, which is a shame.

Put simply, a lot of protocols across connectors have a "repeat on detected error" mechanism incorporated, either deliberately or by accident.

An example would be using a serial line with SLIP (who remembers it?) or PPP. The physical layer of the connector, and everything above it, are just considered "unreliable" as a matter of course, and the IP layers above (i.e. TCP, or the application level with UDP) take appropriate action to convert the "unreliable" into a "reliable" connection.

Now, as a developer of an application, you either get a reliable connection or no connection, so yippee: all the hard work gets done for you and you don't need to think about "reliability".

However, what about "confidentiality" and "authentication" of the data? Well, authentication is usually done with a plaintext MAC above the ciphertext encryption layer that provides "confidentiality".

BUT what "cipher mode" is used at the encryption layer?

And how does it deal with replay attacks?

And how does it deal with "error correction"?

And how does it deal with "substitution" either at the datagram or bit level?

For instance, let us assume (for simplicity) you are using a stream cipher. Any error that causes data loss causes data to be resent; the question is, is it encoded under the same key or a different one?

That is, depending on the level at which the error is detected, the encryption layer may send the plaintext under two different parts of the keystream. This may be advantageous to an attacker, who can then strip off a useful stretch of keystream to re-use or analyse.

For instance, let us assume a simple parity code is used at the physical comms layer. If, as an attacker, you flip two bits in a byte, the error is not picked up at the parity layer, which means it may not get discovered until much further up the stack.

Similar knowledge of the protocol in use may allow an attacker to flip various bit patterns to exercise different "errors" at different layers, to their advantage.
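
The two-bit-flip point is easy to demonstrate: even parity is just the popcount mod 2, so flipping any two bits leaves the parity bit valid.

```python
def even_parity_bit(byte: int) -> int:
    """Parity bit for even parity over 8 data bits: popcount mod 2."""
    return bin(byte & 0xFF).count("1") % 2

original = 0b01010101
tampered = original ^ 0b00000011   # attacker flips two adjacent bits
# The parity check passes even though the data changed:
assert even_parity_bit(original) == even_parity_bit(tampered)
# A single-bit flip, by contrast, is caught:
assert even_parity_bit(original) != even_parity_bit(original ^ 0b00000100)
```

Any even number of flipped bits survives a simple parity check, which is why the error can travel unnoticed up the stack.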

It is one reason why you should always encrypt data at the application layer, in such a way that all errors, irrespective of where in the stack they occur, cause the same ciphertext to be resent; but also in such a way that multiple repeats of the ciphertext do not cause other errors to occur. This is actually quite difficult to do, especially when "error window" protocols are in use to ensure link efficiency on high-bandwidth, long-latency networks.
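
One standard way to get this property, where tampering or replay at any lower layer surfaces as a single authentication failure at the application layer, is encrypt-then-MAC with an explicit sequence number. A toy sketch, using a hash-derived XOR keystream purely for illustration (real systems would use an authenticated cipher such as AES-GCM):

```python
import hmac
import hashlib

def protect(seq: int, plaintext: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    """Encrypt-then-MAC one record. The sequence number is bound into both
    the keystream and the MAC, so replayed or reordered records fail to verify."""
    assert len(plaintext) <= 32          # toy keystream is one SHA-256 output
    header = seq.to_bytes(8, "big")
    keystream = hashlib.sha256(enc_key + header).digest()
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream))
    tag = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    return header + ct + tag

def verify(record: bytes, mac_key: bytes) -> bool:
    """Check the MAC over sequence number plus ciphertext, in constant time."""
    header, ct, tag = record[:8], record[8:-32], record[-32:]
    expected = hmac.new(mac_key, header + ct, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

rec = protect(1, b"PAY ALICE 100", b"enc-key", b"mac-key")
assert verify(rec, b"mac-key")
# Any bit flip, at whatever layer it happened, fails authentication:
tampered = rec[:8] + bytes([rec[8] ^ 0x01]) + rec[9:]
assert not verify(tampered, b"mac-key")
```

A genuine resend of the same record reproduces the identical ciphertext byte for byte, while a replay under a new sequence number is rejected by the receiver's sequence check, which is exactly the "same ciphertext resent, no new information leaked" behaviour argued for above.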

Ahh, manipulating the lower layers to escalate the error: very devious, Clive.

In my experience the crypto layer generally doesn't take corrective measures when there is a "reliable" comms stack underneath. It simply fails, and the application has to decide what to do. Usually the app simply reports the error, escalating it to the user level.

If the session terminated, the user may decide to re-connect. This means a different key. However depending on the app or type of data, it may or may not result in the same data encrypted under a new key.

In the SSL case the entities would most likely have session caching enabled and simply resume with the same keys and IVs. So the likely outcome would be that the same data is sent encrypted using the same key and IV, resulting in the same ciphertext. If new keys are generated, these would be randomly generated, so there should not be a problem.

However, with stream ciphers like RC4 still in use, there may be circumstances where this kind of attack could have some merit. RC4's internal state is often saved across different messages, so one would have to look at the impact of this on SSL session resume scenarios. If the RC4 internal state is also cached, then there might be an issue here. I will have to think a bit more about that.
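
The keystream-reuse risk is easy to show concretely. If a resend (or a resumed session with cached cipher state) encrypts different data over the same RC4 keystream positions, an attacker who XORs the two ciphertexts obtains the XOR of the plaintexts, with the key cancelled out entirely. A sketch using textbook RC4, for illustration only:

```python
def rc4_keystream(key: bytes, n: int) -> bytes:
    """Textbook RC4: key-scheduling (KSA), then n bytes of keystream (PRGA)."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # KSA
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for _ in range(n):                        # PRGA
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Two different records sent over the SAME keystream positions:
m1 = b"PAY ALICE $100"
m2 = b"PAY MALET $999"
ks = rc4_keystream(b"cached-session-key", len(m1))
c1, c2 = xor(m1, ks), xor(m2, ks)
# The eavesdropper never needs the key: c1 XOR c2 == m1 XOR m2.
assert xor(c1, c2) == xor(m1, m2)
```

With known or guessable plaintext in one record (bank pages are highly structured), that XOR immediately yields the other plaintext, which is why cipher-state caching across a resume would be so dangerous.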

BTW, on the IR issue, the problem I have with IR and the new near-field technology is bandwidth. In the full-blown proxy scenario, we have to pass two full-duplex web sessions through the link.

If you have a crypto service provider (CSP) linked into the user browser, then you don't have to have a proxy scenario and you only have to deal with one full-duplex session. However, users are now used to high speed broadband Internet, so you don't want to slow them down too much.