(I worked at UUNET from about six months before it was acquired by WorldCom until shortly after WorldCom's collapse, and there was a huge disaster when a client incorrectly announced a route via BGP. That was nearly 15 years ago!)

I'm confused. This doesn't sound like a full MITM. The eavesdropper is only seeing half the traffic, if I understood correctly.

If that understanding is correct, then this wouldn't let an eavesdropper do real MITM-style attacks (inserting itself into an unauthenticated key exchange, for example). It would allow modification of unencrypted traffic, though. But only in one direction.

This is all the more reason that every packet should be encrypted. This should have been done with IPsec or similar in IPv6 (which, believe it or not, is really growing, even in the US, as more and more clients use 4G as their most frequent internet connection), but it is an optional feature, not a requirement. Instead, we have to individually change dozens of protocols, like the recent discussion of HTTP/2 moving to SSL/TLS.

I can't say this kind of provider-to-provider routing mishap is a big surprise. About a year ago, I think it was Dodo (an Australian ISP) that had an issue with their international connection.

(Of course, even if you encrypt all your packets for security, this sucks for TCP latency.)

While reading the article it occurred to me that such circuitous redirection probably increases latency. Surely strong encryption would help remove the motive for malicious redirection?

Your comment reminds me of an article on Ars Technica last year, explaining that Google tried to fix this problem by improving the protocol specification. They reportedly hit a brick wall because some router manufacturers didn't cooperate. Time for Google to have another go at promoting this?

Am I right in understanding that the people part way along the route also gain access to the traffic?

It would be interesting to review how often these excursions lead to traffic going through a party outside the US that is friendly to the US but not bound by the same rules on reading the data of US citizens.

Time to open a pass-book bank account, just like my mother-in-law still uses ;-)

Even this can have its downsides. High resolution webcam? Social engineering: "I'm from the bank, and we're doing some technical upgrade work. We need your next code to help us make sure we've done this right..."

It seems to also break the laws of various nations regarding certain types of private data not being sent beyond national borders; or not allowed to be processed in another nation with more lax privacy regulation.

So, for example, a hospital takes precautions to make sure their medical data processor isn't sending their data overseas. But somehow data on what should be a cross-town trip gets routed through Russia or even the USA. Clearly a breach of the law. Is it a criminal breach? Who's liable? How to best prevent further breaches?

It's interesting that both examples end up routing traffic through London. I'd love to see a map of the other hijacks that they saw. I checked the source blog post, but no additional routes were provided.

It is very possible that most of the traffic that goes between the US and Europe, Africa, or western Asia travels through London. If that is the case, then the common hop through London is to be expected. However, given the partnership between the NSA and GCHQ, such a reroute could give access to traffic that the NSA couldn't otherwise tap, while being arguably just the result of a bug or random internet attackers and not an intelligence agency.

Well, it's comforting to know that the US is in a good position internationally to work with our friends and allies in sorting this out.

Hey, if your government will at some future time be willing to reach out and collaboratively address issues of snooping on internet traffic and man in the middle attacks, we'll be all ears. Let us know.

Or software could be engineered to assume an untrusted network. Most of the time the gut reaction is "we have to make it illegal," without thinking about what that really means. What if, instead of trying to pass laws forbidding such behavior (which criminal elements will ignore, especially if there's money to be made), we simply designed protocols that assume anyone with a PC and time will figure out a way to read your data, and just did the best we can at keeping ahead of them?

Microsoft took the easy route and figured the network was a friendly neighborhood. Unix took the hard path and figured it was a crime infested slum. Guess which one I trust with my data?

I agree with your original sentiment, but this statement is a bit anachronistic. UNIX was originally not secure at all. The standard mode of remote access was Telnet. The mail protocol was SMTP. The way to post one's contact information was via a public "finger" service. The only "firewall" was the /etc/hosts.deny file, which assumes that your applications themselves are bug-free. BIND, Sendmail, and Apache ("a patchy" web server) were all originally riddled with security bugs.

Gradually, protocols like SSH ascended, as UNIX's other properties made it popular in early university, scientific, and business environments, and the system had to adapt to the slums around it. This is not really an "original sin" situation, but an adaptation-through-natural-selection situation.

How is the traffic path hidden from a traceroute? If it is hidden, how did Renesys determine the traffic was going to Russia or Iceland?

From the original blog post on which this article is based:

"At Renesys, we watch the Internet 24/7 for our enterprise customers, to help them understand and respond to Internet impairment before it affects their businesses. Many of those impairments are the result of someone else’s well-intended Internet traffic engineering. Some are accidents, like cable cuts or natural disasters, and that’s what you typically see us blog about. [...] Renesys maintains a realtime view of the Internet from hundreds of independent BGP vantage points. We have to, because that’s how we can detect evidence of Internet impairment worldwide, even when that impairment is localized. We also maintain an active measurement infrastructure that sends out billions of measurement packets each day, crisscrossing the Internet in search of impaired or unusual paths like these. Finally, we have a distributed realtime-taskable measurement system that allows us to trigger quick measurements from all over the planet when trouble is detected in a region, so that we can immediately evaluate its significance."
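As a rough illustration of how monitoring from many BGP vantage points can flag a hijack (a sketch only, not Renesys's actual method; the prefixes and AS numbers below are invented for illustration):

```python
# Toy hijack detector: compare the origin AS and AS-path length seen
# for a prefix against a known baseline, as a vantage point might.
# All AS numbers and prefixes below are invented for illustration.

BASELINE = {
    "203.0.113.0/24": {"origin": 64500, "max_path_len": 5},
}

def check_announcement(prefix, as_path, baseline=BASELINE):
    """Return a list of warnings for one observed BGP announcement.

    as_path is ordered from the vantage point toward the origin,
    so as_path[-1] is the AS claiming to originate the prefix.
    """
    expected = baseline.get(prefix)
    if expected is None:
        return ["unknown prefix: " + prefix]
    warnings = []
    if as_path[-1] != expected["origin"]:
        warnings.append(f"{prefix}: origin changed to AS{as_path[-1]}")
    if len(as_path) > expected["max_path_len"]:
        warnings.append(f"{prefix}: path unusually long ({len(as_path)} hops)")
    return warnings

# A vantage point suddenly sees the prefix originated by a different AS:
print(check_announcement("203.0.113.0/24", [64510, 64520, 64666]))
# → ['203.0.113.0/24: origin changed to AS64666']
```

The real systems are far more elaborate (hundreds of feeds, historical baselines, active traceroute probing), but the core comparison is this kind of "who is originating this prefix, and does the path look plausible" check.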

Could a time-related packet check be designed? If there is a few milliseconds' difference between what your average latency or round-trip time should be and what it actually is, getting a warning would at least be better than nothing.

So can't web application and/or application developers help stem these issues by adding some sanity checks on their connections? This may be too costly for some applications, but it would seem to make sense for critical services...

How do "huge chunks" of traffic get redirected without anyone noticing load differences on their routers?

Should latency be monitored? Most routes have similar latency all the time. Taking one of those scenic routes should set off bells and whistles that something is amiss.

I don't know the exact details of BGP, but if route costs are always zero or greater, and every router adds weight, how does such a roundabout route get selected? Shouldn't the total weight be greater?
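For what it's worth, BGP path selection only compares AS-path lengths between routes for the *same* prefix, and attributes like local preference are per-router policy, not an additive global cost. Hijackers usually sidestep the comparison entirely by announcing a more-specific prefix, since forwarding does longest-prefix match before any path length is considered. A minimal sketch of that lookup (prefixes and labels invented for illustration):

```python
import ipaddress

# Toy longest-prefix-match lookup. Forwarding picks the most specific
# matching prefix FIRST; AS-path length only breaks ties for the same
# prefix. So a hijacker announcing a /25 inside someone else's /24
# attracts that traffic even with a much longer, "scenic" path.
routes = {
    ipaddress.ip_network("198.51.100.0/24"): "legitimate origin (short path)",
    ipaddress.ip_network("198.51.100.0/25"): "hijacker (long, scenic path)",
}

def lookup(dst):
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    # longest (most specific) prefix wins, regardless of path cost
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(lookup("198.51.100.7"))    # falls inside the hijacked /25
print(lookup("198.51.100.200"))  # only the legitimate /24 covers this
```

That's why a roundabout route can win: it isn't competing on total weight at all, it's competing on prefix specificity.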

Not really. That breaks the layering of the network protocols, which is good engineering practice. If the application is interacting with the link and protocol layers, something is horribly wrong with the system design. The correct solution is to fix the protocol layer.

I understand that, but shouldn't these applications be aware when their connections are being routed to places they shouldn't go?

How do they know where they're routing? Latency? That can change quite a bit on a residential connection. Example: My ping from Midwest USA to London is about the same as my mom's DSL ping to her ISP down the road.

You typically get the IP addresses, but unless they also include geolocation info in DNS, there isn't a nice way to look up their location. Even if DNS did, imagine the extra load on DNS servers if every user on the Internet was constantly doing reverse lookups on every connection they make every few seconds.

Then, to top it off, how do you decide when a route is "clearly wrong"? There's a lot of grey area.

The traceroute from my work, which I live only half a mile from, took a 2,500-mile route for the past few years, and only just recently switched to Level 3, which routes through Chicago for a 400-mile route.

Well, I would think a cross-continental route for a service that resides in your own country would be telling of an issue. These application providers should be monitoring this traffic to some degree, and if they see a massive change in traffic patterns then something is going on. It could be a legit BGP change, but it could also be something different. In the end I would think these web service providers would be looking at this high-level data...

Jitter on residential lines can be greater than the time it takes to route a packet around the world. Low quality ISPs abound.

Anyway, how does one differentiate between latency caused by a bad route and latency caused by a congested connection, like someone running BitTorrent?
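One rough heuristic: a route change shifts the RTT *floor* (the minimum), because the propagation delay itself changed, while congestion mostly inflates the average and jitter and the occasional uncongested packet still hits the old minimum. A minimal sketch of that idea (thresholds and sample values invented for illustration):

```python
import statistics

def classify(baseline_rtts, recent_rtts, floor_shift_ms=20):
    """Crude guess at why RTTs (in ms) worsened.

    A sustained jump in the minimum RTT suggests a longer path;
    a jump in jitter with an unchanged floor suggests congestion.
    """
    old_floor = min(baseline_rtts)
    new_floor = min(recent_rtts)
    if new_floor - old_floor > floor_shift_ms:
        return "possible route change (RTT floor moved)"
    if statistics.pstdev(recent_rtts) > 3 * statistics.pstdev(baseline_rtts):
        return "likely congestion (jitter up, floor unchanged)"
    return "normal"

baseline = [30, 31, 30, 32, 30]
print(classify(baseline, [95, 96, 94, 97, 95]))   # floor jumped ~65 ms
print(classify(baseline, [30, 80, 31, 120, 33]))  # floor same, jitter way up
```

It's far from foolproof on a jittery residential line, as the comment above notes, but it's the kind of cheap signal that "better than nothing" monitoring could start from.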