Man-in-the-middle attacks divert data on a scale never before seen in the wild.

Huge chunks of Internet traffic belonging to financial institutions, government agencies, and network service providers have repeatedly been diverted to distant locations under unexplained circumstances that are stoking suspicions the traffic may be surreptitiously monitored or modified before being passed along to its final destination.

Researchers from network intelligence firm Renesys made that sobering assessment in a blog post published Tuesday. Since February, they have observed 38 distinct events in which large blocks of traffic have been improperly redirected to routers at Belarusian or Icelandic service providers. The hacks, which exploit the implicit trust placed in the Border Gateway Protocol (BGP) used to exchange routing information between large service providers, affected "major financial institutions, governments, and network service providers" in the US, South Korea, Germany, the Czech Republic, Lithuania, Libya, and Iran.

The ease of altering or deleting authorized BGP routes, or of creating new ones, has long been considered a potential Achilles' heel for the Internet. Indeed, in 2008, YouTube became unreachable for virtually all Internet users after a Pakistani ISP altered a route in a ham-fisted attempt to block the service in just that country. Later that year, researchers at the Defcon hacker conference showed how BGP routes could be manipulated to redirect huge swaths of Internet traffic. By diverting traffic to unauthorized routers under their control, attackers were free to monitor or tamper with any unencrypted data before passing it along to its intended recipient, with little sign of what had just taken place.
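
To see why a bogus route wins, consider how routers pick among overlapping prefixes: the longest (most specific) match is preferred, regardless of who announced it. The sketch below illustrates the idea with Python's standard ipaddress module; the prefixes and AS numbers are invented for illustration, not taken from Renesys' report.

```python
# A minimal sketch (invented prefixes and AS numbers, not from the Renesys
# report): routers prefer the longest matching prefix, so an attacker who
# announces a more-specific /17 inside a victim's /16 attracts the traffic.
import ipaddress

# (prefix, origin) -- the /16 is the legitimate route, the /17 the hijack
routes = [
    (ipaddress.ip_network("10.10.0.0/16"), "AS64500 (legitimate)"),
    (ipaddress.ip_network("10.10.0.0/17"), "AS64666 (hijacker)"),
]

def best_route(dst):
    """Longest-prefix match, as a router's forwarding lookup would do."""
    matches = [(net, origin) for net, origin in routes if dst in net]
    return max(matches, key=lambda r: r[0].prefixlen)

dst = ipaddress.ip_address("10.10.5.25")
net, origin = best_route(dst)
print(f"{dst} -> {net} via {origin}")  # the hijacker's more-specific /17 wins
```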

"This year, that potential has become reality," Renesys researcher Jim Cowie wrote. "We have actually observed live man-in-the-middle (MitM) hijacks on more than 60 days so far this year. About 1,500 individual IP blocks have been hijacked, in events lasting from minutes to days, by attackers working from various countries."

At least one unidentified voice-over-IP provider has also been targeted. In all, data destined for 150 cities has been intercepted. The attacks are serious because they affect the Internet equivalent of a US interstate highway, one that can carry data for hundreds of thousands or even millions of people. And unlike the typical BGP glitches that arise from time to time, the attacks observed by Renesys provide few outward signs to users that anything is amiss.

"The recipient, perhaps sitting at home in a pleasant Virginia suburb drinking his morning coffee, has no idea that someone in Minsk has the ability to watch him surf the Web," Cowie wrote. "Even if he ran his own traceroute to verify connectivity to the world, the paths he'd see would be the usual ones. The reverse path, carrying content back to him from all over the world, has been invisibly tampered with."

Guadalajara to Washington via Belarus

Renesys observed the first route hijacking in February when various routes across the globe were mysteriously funneled through Belarusian ISP GlobalOneBel before being delivered to their final destination. One trace, traveling from Guadalajara, Mexico, to Washington, DC, normally would have been handed from Mexican provider Alestra to US provider PCCW in Laredo, Texas, and from there to the DC metro area and then, finally, delivered to users through the Qwest/Centurylink service provider. According to Cowie:

Instead, however, PCCW gives it to Level3 (previously Global Crossing), who is advertising a false Belarus route, having heard it from Russia’s TransTelecom, who heard it from their customer, Belarus Telecom. Level3 carries the traffic to London, where it delivers it to Transtelecom, who takes it to Moscow and on to Belarus. Beltelecom has a chance to examine the traffic and then sends it back out on the “clean path” through Russian provider ReTN (recently acquired by Rostelecom). ReTN delivers it to Frankfurt and hands it to NTT, who takes it to New York. Finally, NTT hands it off to Qwest/Centurylink in Washington DC, and the traffic is delivered.

Such redirections occurred on an almost daily basis throughout February, with the set of affected networks changing every 24 hours or so. The diversions stopped in March. When they resumed in May, they used a different customer of Bel Telecom as the source. In all, Renesys researchers saw 21 Belarusian redirections. Then, also during May, they saw something completely new: a hijack lasting only five minutes that diverted traffic to Nyherji hf (also known as AS29689, short for autonomous system 29689), a small provider based in Iceland.

Renesys didn't see anything more until July 31 when redirections through Iceland began in earnest. When they first resumed, the source was provider Opin Kerfi (AS48685).

Cowie continued:

In fact, this was one of seventeen Icelandic events, spread over the period July 31 to August 19. And Opin Kerfi was not the only Icelandic company that appeared to announce international IP address space: in all, we saw traffic redirections from nine different Icelandic autonomous systems, all customers of (or belonging to) the national incumbent Síminn. Hijacks affected victims in several different countries during these events, following the same pattern: false routes sent to Síminn's peers in London, leaving 'clean paths' to North America to carry the redirected traffic back to its intended destination.

In all, Renesys observed 17 redirections to Iceland. To appreciate how circuitous some of the routes were, consider the case of traffic passing between two locations in Denver. As the graphic below traces, it traveled all the way to Iceland through a series of hops before finally reaching its intended destination.

Cowie said Renesys' researchers still don't know who is carrying out the attacks, what their motivation is, or exactly how they're pulling them off. Members of Icelandic telecommunications company Síminn, which provides Internet backbone services in that country, told Renesys the redirections to Iceland were the result of a software bug and that the problem had gone away once it was patched. They told the researchers they didn't believe the diversions had a malicious origin.

Cowie said that explanation is "unlikely." He went on to say that even if it does prove correct, it's nonetheless highly troubling.

"If this is a bug, it's a dangerous one, capable of simulating an extremely subtle traffic redirection/interception attack that plays out in multiple episodes, with varying targets, over a period of weeks," he wrote. "If it's a bug that can be exploited remotely, it needs to be discussed more widely within the global networking community and eradicated."

Promoted Comments

This is all the more reason that every packet should be encrypted. This should have been done with IPSEC or similar in IPv6 (which, believe it or not, is really growing, even in the US, as more and more clients use 4G as their most frequent internet connection), but it is an optional feature, not a requirement. Instead, we have to individually change dozens of protocols, like the recent discussion of HTTP/2 going to SSL/TLS.

It seems this would also break the laws of various nations that require certain types of private data not to be sent beyond national borders, or not to be processed in another nation with more lax privacy regulation.

So, for example, a hospital takes precautions to make sure their medical data processor isn't sending their data overseas. But somehow data on what should be a cross-town trip gets routed through Russia or even the USA. Clearly a breach of the law. Is it a criminal breach? Who's liable? How best to prevent further breaches?

Microsoft took the easy route and figured the network was a friendly neighborhood. Unix took the hard path and figured it was a crime infested slum. Guess which one I trust with my data?

I agree with your original sentiment, but this statement is a bit anachronistic. UNIX was originally not secure at all. The standard mode of remote access was Telnet. The communication protocol was SMTP. The way to post one's contact information was via a public "finger" service. The only "firewall" was the /etc/hosts.deny file, which assumes that your applications themselves are bug-free. The BIND DNS system and Sendmail and Apache (A patchy webserver) were originally completely riddled with security bugs.

Gradually, protocols like SSH ascended, as UNIX's other properties made it popular in early university, scientific, and business environments, and the system had to adapt to the slums around it. This is not really an "original sin" situation, but an adaptation-through-natural-selection situation.

How is the traffic path hidden from a traceroute? If it is hidden, how did Renesys determine the traffic was going to Russia or Iceland?

This I'd like to know. It'd be nice to have a way of determining whether or not my traffic is being redirected in such a manner.

It isn't so much hidden from traceroute as it is that traceroute is the wrong tool to determine if your own IP block is being re-routed. Essentially, traceroute is used to determine the hops from you to your destination. It tells you nothing about the route from your destination to you. And if it's traffic to your IP space that's been packetjacked like this, then it's not going to help.

The solution is to have some other location on the net trace its route to you, e.g., a buddy in another country/ISP/whatever does a traceroute to you. Note that asymmetric routing is relatively common for a variety of reasons and isn't immediately an indicator of issues.

For a more formal solution, it's pretty easy to query major ISPs for the routes they see through their Looking Glass servers (e.g., Level 3, Cogent, Sprint, etc.). Renesys similarly measures these routes from a wide variety of points and can notice when things change.

(I worked at UUNET from about six months before it was acquired by WorldCom until shortly after WorldCom's collapse, and there was a huge disaster when a client incorrectly announced a route via BGP. That was nearly 15 years ago!)
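
Building on the looking-glass suggestion above, here is a rough sketch of how one might ask a public route collector what AS paths it currently sees for a prefix. It assumes RIPEstat's public looking-glass endpoint and its response layout; treat the URL and field names as assumptions to verify against the current API documentation before relying on them.

```python
# A rough sketch of asking an outside vantage point how your prefix is
# routed. It assumes RIPEstat's public looking-glass endpoint and its
# response layout (URL and field names are assumptions; verify against
# the current API documentation before relying on this).
import json
import urllib.request

PREFIX = "192.0.2.0/24"  # placeholder: substitute a prefix you care about
URL = f"https://stat.ripe.net/data/looking-glass/data.json?resource={PREFIX}"

with urllib.request.urlopen(URL, timeout=30) as resp:
    data = json.load(resp)

# Print each route collector's view; an unexpected origin AS, or a path
# through a network you have no relationship with, is worth investigating.
for rrc in data.get("data", {}).get("rrcs", []):
    for peer in rrc.get("peers", []):
        print(rrc.get("rrc"), peer.get("as_path"))
```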

I'm confused. This doesn't sound like full MiTM. The eavesdropper is only seeing half the traffic, if I understood correctly.

If that understanding is correct, then this wouldn't let an eavesdropper do real MiTM kind of stuff (inserting itself into non-authenticated key exchange for example). It would allow modification of unencrypted traffic, though. But only in one direction.

I can't say this kind of service-provider-to-service-provider communication is a big surprise. About a year ago, I think it was Dodo (an Australian ISP) that had an issue with their international connection.

(Of course, even if you encrypt all your packets for security, this sucks for TCP latency.)

While reading the article it occurred to me that such circuitous redirection probably impairs latency. Surely, strong encryption would help remove the motive for malicious redirection?

Your comment reminds me of an article on Ars Technica last year, explaining that Google tried to fix this problem by improving the protocol specification. They reportedly hit a brick wall because some router manufacturers didn't cooperate. Time for Google to have another go at promoting this?

Am I right in understanding that the people part way along the route also gain access to the traffic?

It would be interesting to review how often these excursions lead to traffic going through a party outside the US that is friendly to the US but not bound by the same rules on reading the data of US citizens.

Time to open a pass-book bank account, just like my mother-in-law still uses ;-)

Even this can have its down-sides. High resolution webcam? Social engineering —"I'm from the bank, and we're doing some technical upgrade work. We need your next code to help us make sure we've done this right..."

It's interesting that both examples end up routing traffic through London. I'd love to see a map of the other hijacks that they saw. I checked the source blog post, but no additional routes were provided.

It is very possible that most of the traffic that goes between the US and Europe, Africa, or western Asia travels through London. If that would be the case, then the common hop of London is to be expected. However, given the partnership between NSA and GCHQ, such a reroute could give access to traffic that the NSA couldn't otherwise tap while being arguably just the result of a bug or random internet attackers and not an intelligence agency.

Well, it's comforting to know that the US is in a good position internationally to work with our friends and allies in sorting this out.

Hey, if your government will at some future time be willing to reach out and collaboratively address issues of snooping on internet traffic and man in the middle attacks, we'll be all ears. Let us know.

It seems this would also break the laws of various nations that require certain types of private data not to be sent beyond national borders, or not to be processed in another nation with more lax privacy regulation.

So, for example, a hospital takes precautions to make sure their medical data processor isn't sending their data overseas. But somehow data on what should be a cross-town trip gets routed through Russia or even the USA. Clearly a breach of the law. Is it a criminal breach? Who's liable? How best to prevent further breaches?

Or software could be engineered to assume an untrusted network. Most of the time the gut reaction is "we have to make it illegal," without thinking about what that really means. What if, instead of trying to pass laws forbidding such behavior (which criminal elements will ignore, especially if there's money to be made) we simply design protocols that assume anyone with a PC and time will figure out a way to read your data, and just do the best we can at keeping ahead of them.
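
As a minimal sketch of "assume an untrusted network," the snippet below refuses to talk to a server unless the channel is encrypted and the certificate verifies, using only Python's standard library. The host name is a placeholder.

```python
# A minimal sketch of "assume the network is hostile" using only Python's
# standard library: refuse to talk unless the channel is encrypted and the
# server's certificate verifies. The host name is a placeholder.
import socket
import ssl

host = "example.com"
context = ssl.create_default_context()  # verifies certificates and host names

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # A hijacker on the path sees only ciphertext, and an impersonated
        # endpoint fails certificate verification before any data is sent.
        print(tls.version())
        print(tls.getpeercert()["subject"])
```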

How is the traffic path hidden from a traceroute? If it is hidden, how did Renesys determine the traffic was going to Russia or Iceland?

From the original blog post on which this article is based:

"At Renesys, we watch the Internet 24/7 for our enterprise customers, to help them understand and respond to Internet impairment before it affects their businesses. Many of those impairments are the result of someone else’s well-intended Internet traffic engineering. Some are accidents, like cable cuts or natural disasters, and that’s what you typically see us blog about. [...] Renesys maintains a realtime view of the Internet from hundreds of independent BGP vantage points. We have to, because that’s how we can detect evidence of Internet impairment worldwide, even when that impairment is localized. We also maintain an active measurement infrastructure that sends out billions of measurement packets each day, crisscrossing the Internet in search of impaired or unusual paths like these. Finally, we have a distributed realtime-taskable measurement system that allows us to trigger quick measurements from all over the planet when trouble is detected in a region, so that we can immediately evaluate its significance."

So can't web application and/or application developers help stem these issues by having some sanity checks on the connections? This may be too costly for some applications, but it would seem to make sense for critical services...

How does "huge chunks" of traffic get redirected and no one notice any load differences on their routers?

Should latency be monitored? Most routes have similar latency all the time. Taking on of those scenic routes should be setting off bells and whistles that something is amiss.

I don't know the exactness of BGP, but if route costs are always zero or greater, and every router adds a higher weight, how does such a round-about route get selected? Shouldn't the total weight be greater?

So can't web application and/or application developers help stem these issues by having some sanity checks on the connections? This may be too costly for some applications, but it would seem to make sense for critical services...

Not really. That breaks the layering of the network protocols, which is good engineering practice. If the application is interacting with the link and protocol layers, something is horribly wrong with the system design. The correct solution is to fix the protocol layer.

I understand that, but shouldn't these applications be aware if their connections are routing to places they shouldn't?

How do they know where they're routing? Latency? That can change quite a bit on a residential connection. Example: my ping from the Midwest USA to London is about the same as my mom's DSL ping to her ISP down the road.

You typically get the IP addresses, but unless they also include geolocation info in DNS, there isn't a nice way to look up their location. Even if DNS did, imagine the extra load on DNS servers if every user on the Internet was constantly doing reverse look-ups on every connection they make every few seconds.

Then, to top it off, how do you decide when a route is "clearly wrong"? There's a lot of grey area.

The traceroute from my work, which I live only half a mile from, took a 2,500-mile route for the past few years, and only just recently switched to using Level 3 Communications, which routes through Chicago for a route of only 400 miles.

Well, I would think a cross-continental route for a service that resides in your own country would be telling of an issue. These application providers should be monitoring this traffic to some degree, and if they see a massive change in traffic patterns, then something is going on. It could be a legit BGP change, but it could also be something different. In the end, I would think these web service providers would be looking at this high-level data...

Could a time-related packet lock be designed? If there is a few milliseconds' difference between what your average latency or loop time should be and what it is, and you got a warning, at least that's better than nothing.

Jitter on residential lines can be greater than the time it takes to route a packet around the world. Low-quality ISPs abound.

Anyway, how does one differentiate between latency caused by a bad route and latency caused by a congested connection, like someone running BitTorrent?
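
One way to split that difference is to alert only on a sustained shift rather than a single slow sample, since congestion tends to be bursty while a rerouted path stays slow. The sketch below keeps a rolling baseline of RTTs; the thresholds are arbitrary guesses, not tuned values.

```python
# A sketch of the commenters' latency idea: alert only on a *sustained*
# shift above a rolling baseline, since congestion tends to be bursty while
# a rerouted path stays slow. Thresholds are arbitrary guesses, not tuned.
from collections import deque
from statistics import median

BASELINE_LEN = 100  # how many "normal" samples to remember
SUSTAINED = 10      # consecutive anomalous samples required to alert
FACTOR = 2.0        # multiple of the baseline median that counts as anomalous

baseline = deque(maxlen=BASELINE_LEN)
streak = 0

def observe(rtt_ms):
    """Feed one round-trip-time sample; return True when an alert should fire."""
    global streak
    if len(baseline) == BASELINE_LEN and rtt_ms > FACTOR * median(baseline):
        streak += 1  # anomalous sample; keep it out of the baseline
    else:
        streak = 0
        baseline.append(rtt_ms)
    return streak >= SUSTAINED

# Example: 100 normal samples around 30 ms, then a sustained jump to 120 ms.
samples = [30.0] * 100 + [120.0] * 12
for i, rtt in enumerate(samples):
    if observe(rtt):
        print(f"sample {i}: sustained latency shift; possible reroute")
```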