
The NY Times has a story and a blog backgrounder focusing on a weapon now being wielded by bad guys (most likely in Eastern Europe, according to the Times): Trojan horse keyloggers that report back in real-time. The capability came to light in a court filing (PDF) by Project Honey Pot against "John Doe" thieves. The case was filed in order to compel the banks — which are almost as secretive as the cyber-crooks — to reveal information such as IP addresses that could lead back to the miscreants. Or at least allow victims to be notified. Real-time keyloggers were first discovered in the wild last year, but the court filing and the Times article should bring new attention to the threat. The technique menaces the 2-factor authentication that some banks have instituted: "By going real time, hackers now can get around some of the roadblocks that companies have put in their way. Most significantly, they are now undeterred by systems that create temporary passwords, such as RSA's SecurID system, which involves a small gadget that displays a six-digit number that changes every minute based on a complex formula. If [your] computer is infected, the Trojan zaps your temporary password back to the waiting hacker who immediately uses it to log onto your account. Sometimes, the hacker logs on from his own computer, probably using tricks to hide its location. Other times, the Trojan allows the hacker to control your computer, opening a browser session that you can't see."
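For readers unfamiliar with how these temporary-password gadgets work: RSA SecurID's algorithm is proprietary, but the openly specified time-based OTP construction (RFC 6238) illustrates the idea. The 60-second interval and the shared secret below are illustrative assumptions, not SecurID's actual parameters.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, interval: int = 60, digits: int = 6) -> str:
    """Time-based one-time code in the style of RFC 6238.

    NOTE: RSA SecurID uses its own proprietary algorithm; this standard
    HMAC construction only illustrates how a code can change every
    interval yet still be verifiable server-side from a shared secret.
    """
    counter = int(time.time()) // interval           # same on client and server
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides derive the same six digits for the current minute.
print(totp(b"shared-secret"))
```

The point the article makes is that none of this helps once a trojan relays the freshly typed code to an attacker within the same interval.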

- a one-time pad for logging on
- another set of codes, from which one is picked randomly, to confirm transfers

The one time pad means they can't open a second session. Even if they could hijack the session I've opened they can't transfer money without my explicitly authorizing each transfer by entering the second code.

Technically, it's possible to modify the browser itself so it inserts unwanted transactions into the list, but hides them from view for the user, and then just waits for them to get confirmed in conjunction with some other transaction made by the user. Don't know if it's worth the trouble though.

A good solution (read as "implementation") would consist of a challenge that the user can verify corresponds to the transaction he wishes to make. The first four digits of the challenge are the last four digits of the sum. The last six digits of the challenge are the first six digits of the target bank account. Etc.
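The digit scheme above can be sketched as follows; the exact positions and encoding are the commenter's hypothetical example, not any real bank's protocol.

```python
def transaction_challenge(amount_cents: int, target_account: str) -> str:
    """Build a challenge the user can check against the transfer form:
    the first four digits come from the last four digits of the amount,
    the last six from the first six digits of the target account.
    Purely illustrative; real banks define their own encodings."""
    amount_part = str(amount_cents).zfill(4)[-4:]
    account_part = "".join(ch for ch in target_account if ch.isdigit())[:6]
    return amount_part + account_part

# Paying 1234.56 to account 9876543210: the user can eyeball the
# challenge against the amount and account before keying it in.
assert transaction_challenge(123456, "9876543210") == "3456987654"
```

Because the challenge is derived from the transaction details, a trojan that silently swaps in a different target account would produce a challenge the attentive user can spot as wrong.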

Nobody can expect good security if the user doesn't watch out and double-check what's happening. The attack you're talking about could very well be pulled off against a poor old lady paying her bills for the month.

For starters, I don't think they roll on success (how would the device know, by the way?). -- Disclaimer: I'm holding one in my hand right now, so I'm pretty sure. ;-)

But even if they did: the legitimate user would not be able to tell the difference between a failure due to making a typo and a failure due to some hacker beating him to the line. So he'd assume the former and simply try again, not understanding that someone else is active at the same time. Providing such a false sense of security doesn't help.

For starters, I don't think they roll on success (how would the device know, by the way?).

The server enforces it. You can't authenticate multiple times with the same tokencode. The server returns an "already used" code if it was recently used. I know this because I've written software that uses RSA's SecurID toolkit.
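A minimal sketch of such server-side enforcement, assuming a simple in-memory cache; RSA's server keeps equivalent state internally, but this is not its implementation.

```python
import time

class OtpReplayGuard:
    """Server-side guard that rejects a passcode already accepted within
    its validity window, as described above. A sketch with an in-memory
    cache; RSA's actual server logic is proprietary."""

    def __init__(self, window_seconds: int = 60):
        self.window = window_seconds
        self.used = {}  # (username, code) -> time the code was accepted

    def try_authenticate(self, username, code, verify):
        now = time.time()
        # Forget codes older than the validity window.
        self.used = {k: t for k, t in self.used.items() if now - t < self.window}
        if (username, code) in self.used:
            return "ALREADY_USED"   # second use of the same passcode
        if not verify(username, code):
            return "REJECTED"       # bad code: typo, expired, etc.
        self.used[(username, code)] = now
        return "OK"

guard = OtpReplayGuard()
always_ok = lambda user, code: True   # stand-in for the real token check
assert guard.try_authenticate("alice", "123456", always_ok) == "OK"
assert guard.try_authenticate("alice", "123456", always_ok) == "ALREADY_USED"
```

The distinct "already used" result is what lets an application tell a race apart from an ordinary typo, which is exactly the point argued below.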

But even if they did: the legitimate user would not be able to tell the difference between a failure due to making a typo and a failure due to some hacker beating him to the line.

Again, see the point above about return values from the server side. The application may choose to report this information directly to the user or simply flag it for the security team to investigate further. I prefer the latter because false positives are going to be pretty rare unless the client software is broken in other ways.

I work for RSA and you are absolutely correct. Attempting to authenticate twice with the same tokencode will automatically yield a rejection.

I believe the idea of this "real-time application" is that they see you typing in your passcode and zap that code into the authentication system before you do. The success of this hack is predicated on the notion that they are watching with bated anticipation, ready to spring into action the exact moment you sign into your online bank.

The chance of this actually occurring is highly remote, to say the least. The technique of racing ahead of a potential 2-factor authentication is compelling in theory, but of little practical use. If they're going to get into your bank, it has nothing to do with "defeating" Securid (or any other one-time display mechanism).

The success of this hack is predicated on the notion that they are watching with bated anticipation, ready to spring into action the exact moment you sign into your online bank. The chance of this actually occurring is highly remote, to say the least.

(Emphasis mine).

Well, if a background process were waiting with bated anticipation, created a valid login, and then sat back, the hacker would have 20 minutes (or whatever the server-side session timeout is) to get to his terminal and use the open, authenticated session.

Where I think this totally fails, is that my bank uses two-factor authentication for logging in as well as for doing an actual transfer. This is where the hack fails for such systems: it depends on letting the user creat

An alternative used by at least one bank in Australia is that when you request a transaction they send an SMS to your pre-authenticated mobile number detailing the transaction, i.e. who to and how much, and giving an authorisation code that you then enter. That code only authorises that specific transaction. No need to carry a one-time pad around or a special code generator.
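One way such a transaction-bound code could be derived is an HMAC over the transaction details. The construction, field layout, and secret below are hypothetical illustrations, not any bank's actual scheme.

```python
import hashlib
import hmac

def sms_auth_code(secret: bytes, payee: str, amount_cents: int, nonce: str) -> str:
    """Derive a six-digit code bound to one specific transaction.
    The HMAC construction and field layout are hypothetical, shown
    only to illustrate binding a code to a single transfer."""
    msg = f"{payee}|{amount_cents}|{nonce}".encode()
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return str(int.from_bytes(digest[:4], "big") % 1_000_000).zfill(6)

# The bank texts the code alongside the payee and amount; the user
# types it back, and it verifies only against that exact transaction.
code = sms_auth_code(b"bank-secret", "Acme Pty Ltd", 5000, "txn-42")
assert code == sms_auth_code(b"bank-secret", "Acme Pty Ltd", 5000, "txn-42")
assert len(code) == 6
```

Because the code is a function of the payee and amount, an attacker who intercepts it cannot reuse it to authorise a different transfer.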

An alternative used by at least one bank in Australia is that when you request a transaction they send an SMS to your pre-authenticated mobile number detailing the transaction, i.e. who to and how much, and giving an authorisation code that you then enter. That code only authorises that specific transaction.

That's common in Europe too. But the result has been that hacking SMS in various [softpedia.com] ways has become of great interest to thieves. If they don't already exist, you can count on seeing Java trojans for cell phones that silently forward SMS messages too.

If they don't already exist, you can count on seeing Java trojans for cell phones that silently forward SMS messages too.

Not that easy to do silently, as in Australia and Europe SMSes cost the sender, not the receiver. At AU$0.25 per SMS this will be noticed easily by even the dumbest of phone users. It will take one case in front of the TIO (Telecommunications Industry Ombudsman) for telcos to block SMS forwarding altogether, despite the fact the telco will likely win in front of the TIO (virus is on the client

A properly designed security system fails gracefully by limiting the knowledge available at *every* step of the game.

Let's make a few assumptions:

1) The bank has a password generator. It's a simple key/value randomizer. It's very, very secure.

2) The end user has a cell phone. It may or may not be hacked.

3) The end user is attempting to get money or do something with the bank. It might be on a computer, or it might be a credit payment machine at a grocery store. The device can be reliably tracked (e.g. IP add

The article focuses on RSA's SecurID, but one of the main drawbacks of RSA's SecurID is that it is only time based. Other companies also use event-counters, which means that you can't actually replay the attack.
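Event-counter tokens typically follow HOTP (RFC 4226), where the counter advances on each use rather than with the clock. A minimal sketch, checked against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Event-counter OTP (RFC 4226): the counter advances on each use,
    so even two codes generated in the same second differ, and a
    captured code cannot be replayed once the counter moves on."""
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the secret "12345678901234567890".
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

The server tracks the expected counter per user, so even within the same minute each accepted code is consumed and the next login needs a fresh one.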

The parent is right (and I should know, I deploy these solutions): most serious banks will use OTPs (One-Time Passwords) for the initial log-on, but then require challenge-responses to sign the transactions (the website provides a challenge, which can be a completely random number, or based on a number of variables: amount, target account, etc.; this challenge is fed into the token (stupidly named "gadget" in the summary), and it spits out a response, which can be verified by the server).

OTPs have always had this flaw, and this really isn't any news. I've heard of attacks where real-time keyloggers would interrupt the network connection (wifi, ethernet, whatever) at the software/OS level temporarily (I assume by refreshing the DHCP bumf) so as to allow the attacker to use the OTP.

However, this can be easily thwarted.

Any good Authentication Server will provide the option to use seeded authentication, and even though this doesn't apply to OTPs (most OTP algorithms actually include clock-counter information (and event-counter information if implemented, which is not RSA's case) in the OTP, hence the whole OTP is required for authentication), it does apply to memorable data. For example, the 2nd and 8th characters of your secret passcode. Or, even better: multiply the 4th digit of your OTP by the 6th digit of your secret passcode (the OTP is still required to be input completely). Yeah sure, given sufficient time, the attacker would be able to work out what your passcode is, but heck, that's going to require quite some effort.
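The "memorable data" idea above, prompting for a random subset of passcode positions so a single capture never reveals the whole secret, can be sketched as follows (the prompt format and helper names are assumptions for illustration):

```python
import secrets

def partial_passcode_prompt(passcode: str, count: int = 2):
    """Pick random 1-based positions of the memorable passcode to ask
    for, so a keylogger capturing one login never sees the whole secret.
    The prompt style ("2nd and 8th character") is an assumption."""
    rng = secrets.SystemRandom()
    positions = sorted(rng.sample(range(1, len(passcode) + 1), count))
    expected = [passcode[p - 1] for p in positions]
    return positions, expected

def verify_partial(passcode: str, positions, answers) -> bool:
    """Server-side check of the answered characters only."""
    return [passcode[p - 1] for p in positions] == list(answers)

positions, expected = partial_passcode_prompt("hunter2secret")
assert verify_partial("hunter2secret", positions, expected)
```

Each login leaks at most two characters, so the attacker needs many captures, and luck with the random positions, before the full passcode is known.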

Wikipedia has a bit of a section about the MITM vulnerabilities of OTPs (even though it is in SecurID's article [wikipedia.org], it doesn't apply to SecurID alone, but to the concept as a whole). The main issue, however, with RSA's implementation isn't necessarily the MITM attack, but quite simply, stealing the token. It doesn't have a PIN code; heck, it even just shows the code the whole time (the last one I checked did), and I could read the number right off my friend's keychain.

Also, let us not forget that a one-time attack (which again, shouldn't be much of an issue if banks have a good solution that requires CRs for each transaction) on an account really isn't a big deal. It's a One-Time Password. It's only valid once. After he's visited the account, and seen the balance, that's about as far as he's going to go.

Nothing to see here, please move along. If anything, this is just going to drive our business a bit.

The article focuses on RSA's SecurID, but one of the main drawbacks of RSA's SecurID is that it is only time based.

I can only speak to the RSA authentication I use, but once a 6-digit password has been used, it cannot be used a second time. This feature is enforced server-side and is especially annoying if you need to authenticate multiple times because each remote application (email, timecard, etc.) requires a separate authentication.

Moreover, at least in this instance, the SecurID password must be combin

Actually, my point was that other vendors provide tokens that require a PIN to be input into the device, rather than to the server. The device can be locked if an incorrect PIN is entered, etc.

Also, I never intended to say that Authentication Servers implementing SecurID weren't able to counter replay attacks (this is base functionality); I was merely stating that it didn't use event-counters to calculate the OTP. Other vendors provide this functionality, and this enhances security, as instead of havi

It demands a password. It demands the code from a one-time pad. It demands a confirmation of the full detailed transaction. As the transaction surpasses a certain amount, it asks you to physically go to the bank. You then get to the bank to assure the bank director that you do want to make the payment.

From that point, the information required depends on your skill convincing the bank director that you actually do want to buy diamonds through "THA INTARWEBS!" and that

The technique menaces the 2-factor authentication that some banks have instituted:

Sure, they could intercept my login, but that would get them nothing. A new token is required for each and every transaction once logged in. I suppose they could try to add an emulation layer of sorts for the entire bank site, but that starts to become a lot of work with a lot of opportunity to notice something strange going on.

Does it really matter? If they have access to your PC, why on Earth is this an issue anyway? Two-factor authentication or not, they have *ACCESS* to your Visa numbers, Amazon account, bank details (if you pay some bills online by direct transfer etc.). What the things *do* once they are on your machine is irrelevant. How they got there and finding them is infinitely more important.

I wonder if the next step will be a dedicated hardware device such as IBM's ZTIC, where one confirms transactions on a closed, secure device. This way, even though the consumer's PC may be compromised, an attacker trying to run transactions would be stopped when there is no device confirming the transaction.

Of course, there are always issues like spamming the user with bogus transactions, or compromising the hardware device. However, it is a lot harder to compromise a hardware device than a generic PC which has to parse/execute/render untrusted code from the Internet on a regular basis.

I already do this, basically. I have an encrypted OS on a USB key that I boot from when I want to do online banking, and in that OS image I do ONLY banking, no other websites of any kind. It's Linux and its firewall is on; auto-updates etc. are off. Nothing short of a full BIOS virus running a VM emulator can get at me, that or a hardware keylogger. And that's unlikely, because I generally use a disused PC at work that has no hdd/os (spare in the corner of the equipment room), or a spare system at home

Long term, what comes to my mind for secure transactions would be placing a hypervisor at the BIOS level, and having a hardened OS dedicated to banking and other items. Then having an OS in another VM for general stuff (gaming, /., etc.)

Of course, there are five issues with putting hypervisors in every PC out there:

1: The hypervisor needs to be hardened. By default, these have a smaller attack surface than an OS, but there are ways to get around its protection. If malware in an untrusted partition is a

What's changed in that? If a Trojan can get into your host machine, it can get into your emulated machine (since it obviously has Internet connectivity), and vice versa. Doesn't really matter if it catches real or emulated key presses.

A virtual keyboard (preferably a browser-based one) is a better defence, though still poor compared to stopping malware at the gateway before you infect your machine. If you don't trust/have the virtual keyboard, just make one by writing out A-Z, 0-9 and all the special characters into a text editor and copy and paste each one as you need it. Yes, this takes time, but it is less vulnerable to keyloggers.

... that I'm still a Bank of America customer. I've grown to like their 2-factor authentication mechanism. You can set up your account so that whenever you try to log in they send a random 6-digit number to you via a text message to your phone. You then enter that number into the website as you're logging in. Since it's truly a one-time-use number sent out of band from the way you're logging in it's about as secure as you can get.

I remember suggesting this years ago, and the responses I got at the time were "but I/my mother/granddad/aging relative doesn't have a mobile phone", or "I don't want to have to carry around my mobile to use my online banking" - all very strange retorts. Glad to see a bank using its noddle.

Bank of America used to have a good system for authenticating their site. At login, you input your ID, and the B of A site gave you back a photo of your own choosing to tell you that you were on the real Bank of America site. Only then did you input your password.

Last Friday, B of A broke this feature. I'm now getting a password prompt without seeing the photo I'd chosen. My first thought was that there was a security problem. I checked the SSL cert info, which looked OK. I reinstalled Firefox. No change. I called Bank of America. They wanted me to remove Flash, which I did. No change. They advised me not to log in. Then they passed me off to tech support, which hasn't called back yet.

Then I took out a Linux-based Eee PC 2G Surf that had been unused for months, powered it up, plugged in an Ethernet cable, and saw the site doing exactly the same thing. So it's probably not a client side problem.

What I think happened is that someone at B of A did a partial site redesign and broke something. They introduced some Flash (something called "/sas/sas-docs/html/pmfso.swf") on the password page (a terrible idea, given Flash's history of security vulnerabilities) and along with that, broke some part of the login process.

If, in fact, they've had a break-in on the server side, the main login of Bank of America has been compromised for at least three days now. I'm not seeing any indication of that, though; just general ineptitude.

(The page HTML is awful. It's clearly been modified over and over for years without a cleanup. It has Flash, Javascript, CSS, single-pixel GIFs for formatting, and comments like "July maintenance OLB timeout inactivity update starts". The "enter password" page has 966 lines of HTML and JavaScript, not including external files. That's too much flaky machinery for such a security-critical function.)

Bank of America used to have a good system for authenticating their site. At login, you input your ID, and the B of A site gave you back a photo of your own choosing to tell you that you were on the real Bank of America site. Only then did you input your password.

My credit union used this for a while, but stopped recently (or maybe not! *eerie music*). I don't see how it helps me verify that I'm really connecting to their site, though, since a middleman site can just as easily act as a proxy to the real si

How does this provide any security? All the fake site needs to do is get the picture from the BoA site. (Heck, a well-written script could cause your machine to do it for them.) Once that happens you are no better off than you were before, and likely worse (since you are training people to assume that "picture means legit", instead of other more secure methods).

When the first part of the authentication is done by a Greasemonkey script, keyloggers don't see that. Or do they?

This may sound like a joke, but in fact I do have one part of the authentication scripted in Greasemonkey. That gets me directly to the next step with some sort of challenge-response system involving a calculator-like gadget with my bank card inserted in it.

Of course, if your bank requires nothing else than an account number and a password which you have in a GM script, I would be glad to borrow

Two-factor authentication is when authentication requires two different factors of authentication. Some possible factors of authentication are something you know (PINs, passwords, usernames, secret answers to questions arranged in advance), something you have (smart card, key fob, pass-card, a special piece of hardware, an SSL certificate loaded on a device that you can't read), something you are (biometric identification: facial, voice, fingerprint recognition,

TFA says nothing about the OS involved, which usually means a Microsoft Windows PC. I suppose the NYT is able to sell more advertising if they keep it ambiguous.

Now, to be fair, Linux recently patched a root-privilege bug that went unnoticed for EIGHT years. But, to be just as fair, there are several orders of magnitude more compromises available courtesy of Redmond, due in large part (as Dijkstra quipped...) to their poor reinvention of UNIX.

I have family that use Windows. What am I supposed to do? This is getting ridiculous. Sure, they get the OS they deserve. Sure, my employer gets the security compromises they deserve. But some part of the blame has to be shared by the company which made all of this possible.

Programmers have always written buggy software. But it took Microsoft to create security flaws *by design* - that is, to deliberately architect software in an insecure and unreliable manner. It took Microsoft to disregard the lessons learned in UNIX, (as Dijkstra would say) "to reinvent it poorly."

I know, I know, /.ers will say, "Don't use Windows". Okay, I don't. But you have to understand that not everyone is a geek. The folks at corporate *BUY* Windows licenses because they don't know any better. My relatives use it because it came with their computer, or their department at the university uses Word, or they want to play games, or they want something familiar.

What about them?

Is it really acceptable for us to ignore the needs of the average user? Is it really acceptable to blame the victims?

Or, should we hold Microsoft accountable to the same standards adhered to by everyone else in the industry?

due in large part (as Dijkstra quipped...) to their poor reinvention of UNIX.

That's a very odd spelling of Henry Spencer.

Is it really acceptable for us to ignore the needs of the average user? Is it really acceptable to blame the victims?

In this case, no. Let Microsoft clean up their own mess. The approach that Microsoft took to the internet in Windows 95 ("ActiveX" and auto-executing stuff from across a wire or from removable media) had already been discredited for a decade.

If you really wish to reinvent something, you can at least do a decent job of it.

When you authenticate successfully with a passcode the passcode is immediately invalidated and cannot be used again. You cannot complete a login then use the same passcode again. At my old company we had to request special 30-second fobs for this reason. People would connect to a machine using their passcode and then need to su to root, but had to wait for the code on the token to change before they could authenticate again. If an attacker captures your passcode after you use it to successfully log in it's not going to do them any good at all.
I feel like I'm missing something, because none of the comments that I read above mention this fact. Pretty basic stuff to anyone who has administered the system before.

If an attacker captures your passcode after you use it to successfully log in

That's the point of it being in real-time. The person on the other end of the keylogger has already logged in by the time your mom has gotten her hand back on the mouse, wiggled it around to find where the pointer is on the screen, moved the pointer to the login button and clicked on it. No, not that mouse button, the other mouse button.

She gets the usual useless error message and decides she must have mistyped something.

That doesn't stop them from blocking your login such that they are the only ones using the password/id. They log the keystrokes prior to it being sent over the wire to the bank, block the post to login.cgi, and login for themselves.

They log the keystrokes prior to it being sent over the wire to the bank, block the post to login.cgi, and login for themselves.

If they are smart they can even provide a fake error page once they've acquired the credentials that tells the user that the site is "experiencing technical difficulties" and that they should please try again in 15 minutes. 99.99% of users won't think a thing of it.

That's probably a really hard hack to pull off. But I doubt most users would notice anything if they got an RSA SecurID password wrong once -- they'd assume it's a typo.

(By the way, I don't see any information saying RSA SecurID only lets you use the token once. Sure it changes every 60 seconds, so that's as good as "once", but if two people happened to be racing to type in the same code at the same time, I don't see anything saying it would deny access.)

>(By the way, I don't see any information saying RSA SecurID only lets you use the token once. Sure it changes every 60 seconds, so that's as good as "once", but if two people happened to be racing to type in the same code at the same time, I don't see anything saying it would deny access.)

That feature is set on the RSA server. The first device to present your username and passcode gets the green light. The second device (VPN appliance, webserver, whatever) to present that same username and passcode

First of all, RSA SecurID has nothing to do with the algorithm RSA (besides being created by the same people).

Second, biometrics won't help at all since they can simply transmit the biometric data back and have *permanent* access to whatever system uses it.

Finally, RSA SecurID is actually *not* vulnerable because the passwords it generates are *one time* passwords. If the hacker tries to log in to the system using the same password the victim just did, he will be rejected since that password was already used.

Finally, RSA SecurID is actually *not* vulnerable because the passwords it generates are *one time* passwords.

If the attacker has trojaned your machine, he just needs to arrange for his software to block your submission of the one-time password so that he can use it. If he gives you an error page, or even what looks like a functional page, then he can proceed to drain your bank account and leave you completely unsuspecting.

The calculator won't give you a new token for another 30-60 seconds (depending on configuration).

Of course, one could argue that people that won't notice anything odd with a forged site, also won't mind the usually instant "eeer, wrong!" taking a whole minute. But nothing will save the idiot from the persistent phisher, so at some point the line between security and convenience needs to be drawn.

Umm.. it's a banking website.. I dunno about your bank, but my bank takes 30+ seconds to log me in on a good day.

Oh, and blaming the user for a failure of technology is classic geek arrogance. The simple fact is, these token devices are a part of the arms race, and if you want to keep ahead, you've got to keep innovating. For example, most users don't even *need* wire transfer capabilities, so they should be disabled by default; when they ask for it to be enabled, the bank gets the opportunity to educate users t

You're not thinking out of the box. Sure SecurID is a one-time password system, but that doesn't mean it still can't be exploited. If the keylogger is sophisticated enough to be able to pick out the username, pin, and tokencode, it is sophisticated enough to send the real tokencode to the hacker, in real time, while fudging it up for the user. Passwords are usually masked anyway, so the user would never know that the keylogger changed the tokencode. The hacker logs in, and the user tries again, possibly

No need to execute them. No need to punish them severely at all. We just need to catch them. Given a 50% risk of being caught a one year prison sentence would provide more than adequate deterrence. Given the present one in 100 million risk of being caught an 18th century hanging would offer no significant deterrence.

And since our lazy leaders, who don't even bother to read the bills they pass, are unlikely to change this statistic, I'm going to go close my online bank account right now. The last thing I need is some asshole swiping my half-million life savings. I'll just drive to the bank instead.

And since our lazy leaders, who don't even bother to read the bills they pass

We could do real reform to the whole system if we sunset every law in effect now and require new laws to be read aloud in full before they are allowed to be voted on. That's supposed to be the law (at least in the Senate)...

We just need to catch them. Given a 50% risk of being caught a one year prison sentence would provide more than adequate deterrence.

Your post displays a lack of understanding of the criminal mind. Don't feel too bad though, because most people (especially lawmakers) have the same lack of understanding.

The thing about criminal sentences is that they don't work as deterrents - because criminals don't believe they'll be caught. Career criminals believe that only idiots get caught, and since they're smarter than everyone else (thanks to the Dunning-Kruger effect), they won't be caught.

Some, those who protest against governments in violation of the law or who steal from the rich to give to the poor, do so for a real or imagined higher purpose.

Others are aware of the consequences but get some benefit out of it, such as the thrill of "getting away with it," the thrill of showing they are, at least this time, more powerful than their victim or

How else can you explain an engineering report that lists 120 mph as the designed maximum limit for an interstate, and an 85 mph recommended limit for travel, but somehow gets signed at 65? The only reason I can see for politicians ignoring engineers' recommendations is that they view the twenty-mph gap as an opportunity - to increase tax revenue.

And of course the Bernie Madoff-like scammers we call insurance companies also benefit because they

I'm not saying you are wrong about the ads, I am saying the official reason for the change was to save energy. I am also saying that if some Wikipedia article is claiming otherwise, it needs to be reconciled with the two articles I mentioned above. Happy edit

By that reasoning the national speed limit should be set to 40 mph, which is the *most* efficient speed for most cars (1900-2000 rpm is the engine's sweet spot). Obviously I think the "saves oil" argument is flawed, because while it may save oil, it defeats the purpose of having a car in the first place (to travel long distances in as short a period as possible). Now maybe for you an extra 15-minute-per-day commute is no big deal,

>>>Speed limits need to be set on a case-by-case basis for each road segment, taking into account typical actual traffic patterns including typical actual speeds

Which is not what happens. The state legislatures set an arbitrary maximum limit. Even if the engineers designed a new strip of road for 120 mph (max) and 85 (recommended), the signs would still read 65 due to an arbitrary decision by out-of-touch politicians that 65 will be the max allowed across the whole state.

How else can you explain an engineering report that lists 120 mph as the designed maximum limit for an interstate, and an 85 mph recommended limit for travel, but somehow gets signed at 65? The only reason I can see for politicians ignoring engineers' recommendations is that they view the twenty-mph gap as an opportunity - to increase tax revenue.

Something like that. For those of you young'uns who don't remember Dick, his administration flooded TV with advertisements that said "55 saves lives", then violated the 10th amendment to force states to comply with it.

Lowered speed limits had *nothing* to do with fuel efficiency. And for those of you who think that is the case... get off my lawn!

Uh, do you live in the US? Every single person everywhere drives 5 MPH over the limit and that's almost always at least 10% over (40 in a 35 is 14% over). I have never known anyone anywhere to get a speeding ticket for 5 over.

Obviously you have never been to, or driven in California (USA). My home town hired its first motorcycle cop explicitly for ticketing things like this.

See if you can find some old ca.driving Usenet archives. That's probably the most central place you can go for details.

It's hard to justify to your voters why you need to spend huge amounts of tax money chasing down cyber criminals that mostly operate abroad, thus not affecting your country in the slightest, when that money could go to catching criminals that do, or to education, health care, whatever.

Voters are generally emotionally biased toward fighting crime even when it isn't very useful - there was an experiment where people were asked to choose between spending money to combat some cause A or to protect national parks being destroyed by [deer/poachers]. The group that was told poachers was much more likely to choose the parks over cause A than the group that was told deer.

>>>The douchebags stealing info from banks aren't hackers... they are thieves and crackers.

You don't know your definitions, son. For as long as I can remember, a hacker was someone who broke into secured computers. I don't see how you can claim there's anything "good" about such a person. (shrug). And a "cracker" is someone who defeats copy protection. Originally that applied to cracking floppies, but now it also applies to CDs, DVDs and downloaded media like MP3/AAC files.

I've been using computers since the early 80s, and hacking very specifically meant someone doing things that the "authorities" would consider crimes - like phreaking to get free phone calls. Or wardialing to find computers to break into. Or just guessing people's passwords on BBSes so you can raise havoc. And of course cracking software so it could be copied freely amongst friends (aka piracy).

In other words they commit acts that the authorities consider crimes, like breaking into secure computers, making free phone calls, copying software without permission, et cetera. Just like I said previously. (Also, it's worth noting that the Wikipedia article is marked "unverified claims", so it's basically an invalid reference and proves nothing.)

Also, banks should be on the lookout for things like "he used his ATM card at home yesterday, he's in Eastern Europe today" and react accordingly.

This is what my bank does, and it annoys the hell out of me. I do a lot of foreign travel, and I also mainly live outside the country where my bank is based.

If my bank sees overseas transactions (including internet transactions with a source IP outside the bank's country), then they block the transaction and the card, until I call them to have the block re