Man-in-the-middle attacks divert data on a scale never before seen in the wild.

Huge chunks of Internet traffic belonging to financial institutions, government agencies, and network service providers have repeatedly been diverted to distant locations under unexplained circumstances that are stoking suspicions the traffic may be surreptitiously monitored or modified before being passed along to its final destination.

Researchers from network intelligence firm Renesys made that sobering assessment in a blog post published Tuesday. Since February, they have observed 38 distinct events in which large blocks of traffic have been improperly redirected to routers at Belarusian or Icelandic service providers. The hacks, which exploit implicit trust placed in the border gateway protocol used to exchange data between large service providers, affected "major financial institutions, governments, and network service providers" in the US, South Korea, Germany, the Czech Republic, Lithuania, Libya, and Iran.

The ease of altering or deleting authorized BGP routes, or of creating new ones, has long been considered a potential Achilles' heel for the Internet. Indeed, in 2008, YouTube became unreachable for virtually all Internet users after a Pakistani ISP altered a route in a ham-fisted attempt to block the service in just that country. Later that year, researchers at the Defcon hacker conference showed how BGP routes could be manipulated to redirect huge swaths of Internet traffic. By diverting traffic to unauthorized routers under their control, attackers would be free to monitor or tamper with any unencrypted data before sending it on to its intended recipient, with little sign of what had just taken place.
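One mechanism behind such hijacks is that routers forward traffic to the most specific matching prefix, so a bogus announcement of a smaller block can outcompete the legitimate route covering the same addresses; the YouTube outage is widely attributed to exactly such a more-specific announcement. The sketch below illustrates the idea in miniature (it is not a real BGP implementation; the prefixes and AS labels are invented):

```python
# Toy illustration of longest-prefix matching: a more-specific bogus
# announcement beats the legitimate covering prefix. Not a real BGP
# implementation; prefixes and AS labels are made up.
import ipaddress

def best_route(routes, dst):
    """Pick the next hop whose prefix matches dst most specifically."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, nexthop) for net, nexthop in routes if dst in net]
    # Longest prefix (largest prefixlen) wins, as in real forwarding tables.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

routes = [
    (ipaddress.ip_network("203.0.113.0/24"), "legitimate AS64500"),
]
print(best_route(routes, "203.0.113.10"))  # legitimate path

# An attacker announces two more-specific halves of the same block:
routes += [
    (ipaddress.ip_network("203.0.113.0/25"), "hijacker AS64666"),
    (ipaddress.ip_network("203.0.113.128/25"), "hijacker AS64666"),
]
print(best_route(routes, "203.0.113.10"))  # traffic now drawn to the hijacker
```

The more-specific routes win without the legitimate announcement ever being withdrawn, which is part of why these events can be so quiet.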

"This year, that potential has become reality," Renesys researcher Jim Cowie wrote. "We have actually observed live man-in-the-middle (MitM) hijacks on more than 60 days so far this year. About 1,500 individual IP blocks have been hijacked, in events lasting from minutes to days, by attackers working from various countries."

At least one unidentified voice-over-IP provider has also been targeted. In all, data destined for 150 cities has been intercepted. The attacks are serious because they affect the Internet equivalent of a US interstate, carrying data for hundreds of thousands or even millions of people. And unlike the typical BGP glitches that arise from time to time, the attacks observed by Renesys provide few outward signs to users that anything is amiss.

"The recipient, perhaps sitting at home in a pleasant Virginia suburb drinking his morning coffee, has no idea that someone in Minsk has the ability to watch him surf the Web," Cowie wrote. "Even if he ran his own traceroute to verify connectivity to the world, the paths he'd see would be the usual ones. The reverse path, carrying content back to him from all over the world, has been invisibly tampered with."

Guadalajara to Washington via Belarus

Renesys observed the first route hijacking in February when various routes across the globe were mysteriously funneled through Belarusian ISP GlobalOneBel before being delivered to their final destination. One trace, traveling from Guadalajara, Mexico, to Washington, DC, normally would have been handed from Mexican provider Alestra to US provider PCCW in Laredo, Texas, and from there to the DC metro area and then, finally, delivered to users through the Qwest/Centurylink service provider. According to Cowie:

Instead, however, PCCW gives it to Level3 (previously Global Crossing), who is advertising a false Belarus route, having heard it from Russia’s TransTelecom, who heard it from their customer, Belarus Telecom. Level3 carries the traffic to London, where it delivers it to Transtelecom, who takes it to Moscow and on to Belarus. Beltelecom has a chance to examine the traffic and then sends it back out on the “clean path” through Russian provider ReTN (recently acquired by Rostelecom). ReTN delivers it to Frankfurt and hands it to NTT, who takes it to New York. Finally, NTT hands it off to Qwest/Centurylink in Washington DC, and the traffic is delivered.

Such redirections occurred on an almost daily basis throughout February, with the set of affected networks changing every 24 hours or so. The diversions stopped in March. When they resumed in May, they used a different customer of Bel Telecom as the source. In all, Renesys researchers saw 21 redirections. Then, also during May, they saw something completely new: a hijack lasting only five minutes that diverted traffic to Nyherji hf (also known as AS29689, short for autonomous system 29689), a small provider based in Iceland.

Renesys didn't see anything more until July 31 when redirections through Iceland began in earnest. When they first resumed, the source was provider Opin Kerfi (AS48685).

Cowie continued:

In fact, this was one of seventeen Icelandic events, spread over the period July 31 to August 19. And Opin Kerfi was not the only Icelandic company that appeared to announce international IP address space: in all, we saw traffic redirections from nine different Icelandic autonomous systems, all customers of (or belonging to) the national incumbent Síminn. Hijacks affected victims in several different countries during these events, following the same pattern: false routes sent to Síminn's peers in London, leaving 'clean paths' to North America to carry the redirected traffic back to its intended destination.

In all, Renesys observed 17 redirections to Iceland. To appreciate how circuitous some of the routes were, consider the case of traffic passing between two locations in Denver. As a graphic published by Renesys traced, it traveled all the way to Iceland through a series of hops before finally reaching its intended destination.

Cowie said Renesys' researchers still don't know who is carrying out the attacks, what their motivation is, or exactly how they're pulling them off. Members of Icelandic telecommunications company Síminn, which provides Internet backbone services in that country, told Renesys the redirections to Iceland were the result of a software bug and that the problem had gone away once it was patched. They told the researchers they didn't believe the diversions had a malicious origin.

Cowie said that explanation is "unlikely." He went on to say that even if it does prove correct, it's nonetheless highly troubling.

"If this is a bug, it's a dangerous one, capable of simulating an extremely subtle traffic redirection/interception attack that plays out in multiple episodes, with varying targets, over a period of weeks," he wrote. "If it's a bug that can be exploited remotely, it needs to be discussed more widely within the global networking community and eradicated."

Promoted Comments

This is all the more reason that every packet should be encrypted. This should have been done with IPsec or similar in IPv6 (which, believe it or not, is really growing, even in the US, as more and more clients use 4G as their most frequent internet connection), but it is an optional feature, not a requirement. Instead, we have to individually change dozens of protocols, like the recent discussion of HTTP/2 going to SSL/TLS.

It seems to also break the laws of various nations regarding certain types of private data not being sent beyond national borders; or not allowed to be processed in another nation with more lax privacy regulation.

So, for example, a hospital takes precautions to make sure their medical data processor isn't sending their data overseas. But somehow data on what should be a cross-town trip gets routed through Russia or even the USA. Clearly a breach of the law. Is it a criminal breach? Who's liable? How to best prevent further breaches?

Microsoft took the easy route and figured the network was a friendly neighborhood. Unix took the hard path and figured it was a crime infested slum. Guess which one I trust with my data?

I agree with your original sentiment, but this statement is a bit anachronistic. UNIX was originally not secure at all. The standard mode of remote access was Telnet. The communication protocol was SMTP. The way to post one's contact information was via a public "finger" service. The only "firewall" was the /etc/hosts.deny file, which assumes that your applications themselves are bug-free. The BIND DNS system and Sendmail and Apache (A patchy webserver) were originally completely riddled with security bugs.

Gradually, protocols like SSH ascended, as UNIX's other properties made it popular in early university, scientific, and business environments, and the system had to adapt to the slums around it. This is not really an "original sin" situation, but an adaptation-through-natural-selection situation.

How is the traffic path hidden from a traceroute? If it is hidden, how did Renesys determine the traffic was going to Russia or Iceland?

This I'd like to know. It'd be nice to have a way of determining whether or not my traffic is being redirected in such a manner.

It isn't so much hidden from traceroute as it is that traceroute is the wrong tool to determine if your own IP block is being re-routed. Essentially, traceroute is used to determine the hops from you to your destination. It tells you nothing about the route from your destination to you. And if it's traffic to your IP space that's been packetjacked like this, then it's not going to help.

The solution is to have some other location on the net trace its route to you. eg: A buddy in another country/ISP/whatever does a traceroute to you. Note that asymmetric routing is relatively common for a variety of reasons, and isn't immediately an indicator of issues.

For a less informal solution, it's pretty easy to query major ISPs for the routes they see, through the use of Looking Glass Servers. (eg: Level 3, Cogent, Sprint, etc) Renesys similarly measures these routes from a wide variety of points, and can notice when things change.

87 Reader Comments

So can't web applications and/or application developers help stem these issues by having some sanity checks on the connections? This may be too costly for some applications, but it would seem to make sense for critical services...

Not really. That breaks the layering of the network protocols, which is good engineering practice. If the application is interacting with the link and protocol layers, something is horribly wrong with the system design. The correct solution is to fix the protocol layer.

I understand that, but shouldn't these applications be aware if their connections are routing to places they shouldn't?

How do they know where they're routing? Latency? That can change quite a bit on a residential connection. Example: My ping from Midwest USA to London is about the same as my mom's DSL ping to her ISP down the road.

You typically get the IP addresses, but unless they also include geolocation info in DNS, there isn't a nice way to look up their location. Even if DNS did, imagine the extra load on DNS servers if every user on the Internet were constantly doing reverse look-ups on every connection they make every few seconds.

Then, to top it off, how do you decide when a route is "clearly wrong"? There's a lot of grey area.
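One heuristic that can shrink the grey area is physics: light in fiber covers roughly 200 km per millisecond, so a detour of known length puts a hard floor on added latency. Here's a hedged sketch under that assumption, with coordinates matching the article's Denver-to-Denver-via-Iceland example (all figures are illustrative, not a real measurement tool):

```python
# Rough physical sanity check: each km of fiber costs about 1/200 ms,
# so a long geographic detour must add a corresponding latency floor.
import math

FIBER_KM_PER_MS = 200.0  # light in fiber travels ~200 km per millisecond

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

denver = (39.74, -104.99)
reykjavik = (64.15, -21.94)

# A Denver-to-Denver flow detouring via Iceland crosses at least
# Denver->Reykjavik->Denver of extra fiber on that leg.
extra_km = haversine_km(denver, reykjavik) * 2
extra_ms = extra_km / FIBER_KM_PER_MS
print(f"detour adds at least {extra_ms:.0f} ms of one-way latency")
```

A sustained latency increase of that size on what should be a cross-town path is the kind of "clearly wrong" signal the thread is asking about, though congestion can mimic it, as later comments note.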

The traceroute from my work, which is only half a mile away, took a 2,500-mile route for the past few years, and only just recently switched to using Level 3, which routes through Chicago for a 400-mile route.

Well, I would think a cross-continental route for a service that resides in your own country would be telling of an issue. These application providers should be monitoring this traffic to some degree, and if they see a massive change in traffic patterns then something is going on. It could be a legit BGP change, but it could also be something different. In the end I would think these web service providers would be looking at this high-level data...

They already do have stuff for this, which is why we're hearing about this story. The discussion is how does the end user figure this out, without access to flow data from the Internet topology?

Well, it's comforting to know that the US is in a good position internationally to work with our friends and allies in sorting this out.

Hey, if your government will at some future time be willing to reach out and collaboratively address issues of snooping on internet traffic and man in the middle attacks, we'll be all ears. Let us know.

Or some other national-level organization that has access to the source code of core internet routers. Anyone know a country offering great deals on telecom equipment? Amazing that the pattern changed after it was reported to the vendor and a software "patch" was issued.

This is the game now. The NSA and other organizations like it have to find loopholes / grey areas in current law, exploit it while they can, then move on to the next exploit.

This to me seems pretty obvious. We have a setup where it's illegal to do X with citizens' data in country A, but not yet illegal in country B (notice many countries are involved, not just the UK)... the countries then cooperate behind closed doors.

Country B does the dirty work for country A, absolving them of any wrong doing on this particular technicality. B then shares the information they find. Country A later reciprocates when necessary, doing country B's dirty work, and then sharing information under "normal channels" that we've always shared information under. Once the cat is out of the bag, it doesn't matter how the information was originally obtained, as long as the NSA (or whoever is doing this -- it could be several actors working together in their own self-interest) didn't break the law or rule they just promised not to break. Someone else broke it for them (legally, in that jurisdiction)... all this stuff has only just begun.

It's a big game and to stay ahead of the curve the NSA will continue to probe the boundaries of internet law to get what they want. You don't think a few angry Congressmen will stop unelected officials from doing what they want, do you? There is no accountability within organizations like the NSA. Once in a while someone gets dragged into a hearing or fired, but on the whole -- zero monitoring and accountability. They are more or less autonomous, and our scumbag internet and telecom providers (interested mainly in their own profits), will continue to give them whatever they ask for behind closed doors. We'll never hear a word of what they're sharing until the next whistle blower comes along.

Nobody is going to fight for your privacy but you (and other citizens) -- get used to that idea.

Yeah, was thinking the same thing: Telnet, open SMTP relays, plaintext passwords, no firewalls. Nothing was secure back then. People were just supposed to be nice to one another.

"we'll be all ears". Seems to be the problem at hand.

Touché! Following some revelations and since the US is doing the whole "try and stop us" thing, I'm now hard pressed not to think it would be good if it weren't just one party screwing with the backbone of the internet for fun and profit, though. Would take all the challenge out of life for you guys if it were that simple. Regardless, if it ain't a bug, I'll be super surprised if this has anything to do with Belarus, Iceland, Russia or the EU. Just because the traffic went through a geographical location...

Suspicions? Haven't we been discussing exactly those sorts of things actually happening since Snowden's whistleblower disclosures?

This sort of hijack seems like an excellent Plan B for the intelligence complex to use against corporations that have otherwise declined to play along, or were insufficiently cooperative to the whims of the security state.

No, that violates the separate layers and leads to the horror that is every single program developer having to design and implement their own low-layer networking implementation. You do not want that to happen. Ever. Just think about what would happen when Zynga had to write their own networking protocol.

Now, you could move for operating systems to monitor connections, since they have fairly low-level control over networking and are generally written by people who know what they're doing, but that's still putting routing in the hands of the wrong people. This is simply a case of old assumptions no longer holding true. It used to be that having your own router was sufficient evidence of trust, since they were pricey and hard to get on the network. This is no longer the case, and changes need to be made by those in charge of routing.

In essence, your proposal is like making every driver be in charge of planning, laying, and maintaining their own roads from scratch because current setups result in traffic jams. You might solve the problem for a few people, but the ungodly chaos that would result would be far, far, worse.

From the original blog post on which this article is based:

"At Renesys, we watch the Internet 24/7 for our enterprise customers, to help them understand and respond to Internet impairment before it affects their businesses. Many of those impairments are the result of someone else’s well-intended Internet traffic engineering. Some are accidents, like cable cuts or natural disasters, and that’s what you typically see us blog about. [...] Renesys maintains a realtime view of the Internet from hundreds of independent BGP vantage points. We have to, because that’s how we can detect evidence of Internet impairment worldwide, even when that impairment is localized. We also maintain an active measurement infrastructure that sends out billions of measurement packets each day, crisscrossing the Internet in search of impaired or unusual paths like these. Finally, we have a distributed realtime-taskable measurement system that allows us to trigger quick measurements from all over the planet when trouble is detected in a region, so that we can immediately evaluate its significance."

Just curious, I know a basic tracert will give me the info on the hops TO the destination, but is there some type of ICMP packet that can be sent which will trace the return journey? If that isn't possible, is there a way to track this as an individual user? I mean there has to be a way since these companies are tracking it, just wondering how they're doing it and if it's doable on a "at home" type of level...

Time to open a pass-book bank account, just like my mother-in-law still uses ;-)

This, combined with all the recent stuff about massive login credential theft, seems to point to a downward spiral for the internet that can affect anyone, at any time.

Am I the only one who is gradually withdrawing as many things as possible from the internet? I am closing out my login memberships in as many different forums and such as possible. Cranking up social media privacy settings to the max. Getting ready to close my personal web pages entirely. I want to eliminate as many passwords as possible and become more aggressive with my current password management (strong passwords, frequent changes, etc).

I am seriously looking at possibilities to go offline with the banking, like your mother-in-law.

Could a time-related packet lock be designed? If there's a few milliseconds' difference between what your average latency or loop time should be and what it is, and you got a warning, at least that's better than nothing.

Jitter on residential lines can be greater than the time it takes to route a packet around the world. Low quality ISPs abound.

Anyway, how does one differentiate between latency caused by a bad route and latency caused by a congested connection, like someone running BitTorrent?

Run some of those Web based VOIP jitter tests. It is only a few hundred microseconds on DSL. Cable modem is another story.

I bet Microsoft has a great deal of in-house knowledge on these routing hacks from their Skype service diagnostics.

On my mom's DSL, her first hop past her router is 80ms +-10ms. Sometimes it gets over 100ms to the first hop during peak hours.

My brother is on fiber through the same ISP, he gets a 60ms ping to his first hop, and over 100ms during peak hours.

To put that in perspective. When he pings his ISP, which is a local only ISP, he gets 60ms and one hop. When I ping that same IP address, my route goes from Central WI, to Chicago, to Minnesota, back to Central WI, in 23ms.

Using the speed test on his ISP (they have their own speedtest page that they run on their servers), I get almost exactly 50/50. When he runs it, he only gets about 3Mb/s of his rated 6Mb/s.

He's got several friends in the area, they all get the same thing. About 60ms to their first hop and sub max speeds.

You can't count on having a good ISP.

I'm not too concerned about line jitter, but effective jitter to the Internet, like an Internet Exchange.
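One way to reconcile the two sides of this thread: single-sample alarms drown in jitter, but a sustained jump in the *minimum* RTT over a window is harder to explain away, since congestion raises the average while the floor tends to stay put. A hedged sketch of that idea, with illustrative thresholds and samples:

```python
# Distinguish jitter spikes from a sustained latency-floor shift:
# congestion produces spikes, but a reroute raises the minimum itself.
# Window size, factor, and samples are all illustrative.
def sustained_shift(rtts, window=5, factor=2.0):
    """Return indexes where the windowed minimum RTT jumps past the baseline."""
    baseline = min(rtts[:window])
    alerts = []
    for i in range(window, len(rtts) - window + 1):
        if min(rtts[i:i + window]) > factor * baseline:
            alerts.append(i)
    return alerts

calm = [20, 22, 90, 21, 23, 20, 85, 22, 21, 88, 20, 23]           # spiky, floor ~20 ms
rerouted = [20, 22, 21, 23, 20, 95, 97, 94, 96, 98, 95, 97]       # floor itself jumped
print(sustained_shift(calm))      # []
print(sustained_shift(rerouted))  # [5, 6, 7]
```

This still only watches one direction of the path, so it complements rather than replaces the looking-glass checks mentioned earlier in the thread.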

Uh - who sends private data over a public network? I guess many do - but it's nonsense.

As for criminal breaches... I think this is in gray area, since there are no specific laws in place to address it. Even forging SSL certificates for man-in-the-middle attacks isn't illegal - given the NSA in the USA does it all the time!

Am I the only one who is gradually withdrawing as many things as possible from the internet?

I got my first email account in 1988. (No, I don't have a beard, nor would it be gray.) During the early Internet/web expansion, new companies were offering accounts and login names like popcorn, and I fell into the trap of trying to maintain my semi-unique login name on any I thought would be important. (That doesn't include my Ars login, though.) At one point I had around thirty accounts/identities on various properties.

Then the spam explosion hit (Thanks, Canter and Siegel) and I started abandoning many of those accounts because the maintenance effort wasn't worth it. Long story short: I'm now down to two email accounts, one private, and one public to hand out to untrusted entities. My list of logins in 1Password is around twenty, with less than ten being accounts with real assets at risk. I will sometimes forego buying from an online vendor simply because I don't want to create yet another account. So you're not alone in this sentiment.

(I also read Database Nation when it first came out, which predicted this mess over a decade ago. It should be permanently linked in any article dealing with privacy or lack thereof.)

In any case, this is the sort of cruft that the NSA and their ilk take advantage of to spy on everyone! I say yes to encrypting EVERYTHING that is sent over the net. Assume that whatever you place in clear text is public knowledge!

Could a time-related packet lock be designed? If there's a few milliseconds' difference between what your average latency or loop time should be and what it is, and you got a warning, at least that's better than nothing.

This. Although packet statistics could be manipulated, latency can't be hidden. If your ping times soar, well...

The real answer is encryption. You don't try to protect your traffic from being viewed by opponents. You assume your traffic is being viewed by opponents, and act accordingly. This is why SSL/TLS/SSH guards against man-in-the-middle attacks. Evidence of an in-the-wild MITM attack doesn't mean you need to invent some new method against it. Just use the stuff that's already there.
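The comment's point can be shown with stock library defaults: Python's `ssl` module, for example, ships a default context that refuses connections whose certificate fails validation or doesn't match the hostname, which is exactly what defeats an on-path attacker who can see traffic but can't impersonate the endpoint:

```python
# Modern TLS defaults already assume an untrusted path: the certificate
# must chain to a trusted CA and must match the hostname we asked for.
import ssl

ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # certificate must validate
print(ctx.check_hostname)                    # and must match the hostname
```

A client would then use `ctx.wrap_socket(sock, server_hostname=...)` so the hostname check has something to compare against; disabling either setting reopens the door to exactly the interception described in the article.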

The NSA likely does not have many friends. What I suspect we're seeing are the footprints left by the competition: China, Russia, perhaps Iran or Israel. The battle has quietly begun.

Packets could be conveniently spoofed to create the illusion of arbitrary paths when viewed from a traceroute. Renesys probably used known and trusted BGP peers and routes to determine the destinations of the traffic at each hop. Also, BGP routes, which are published in the Routing Assets Database (RADb), are static within a time period. By static, I mean within the rate at which route advertisements are sent, so routes may not change across successive advertisements. The researchers could validate the "goodness" of routes using some technique that analyzes the RADb (or even some database produced by Renesys).
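The validation this comment gestures at can be sketched: an IRR database like RADb maps prefixes to the AS numbers registered to originate them, so an observed announcement can be checked against the registry. The registry contents below are invented, and real lookups go through whois or IRR mirrors rather than an in-memory dict:

```python
# Toy IRR-style origin validation: check an announcement's origin AS
# against registered route objects. Registry contents are invented.
import ipaddress

REGISTRY = {  # prefix -> set of origin ASNs registered via route objects
    ipaddress.ip_network("203.0.113.0/24"): {64500},
}

def validate(prefix, origin_as):
    """'valid' if a covering route object lists this origin; else 'invalid'/'unknown'."""
    prefix = ipaddress.ip_network(prefix)
    covering = [asns for net, asns in REGISTRY.items() if prefix.subnet_of(net)]
    if not covering:
        return "unknown"  # nothing registered for this address space
    return "valid" if any(origin_as in asns for asns in covering) else "invalid"

print(validate("203.0.113.0/24", 64500))   # valid
print(validate("203.0.113.0/25", 64666))   # invalid: unregistered origin
print(validate("198.51.100.0/24", 64666))  # unknown
```

The weakness, as the hijacks in the article suggest, is that registration data is only as good as what operators bother to publish and what routers bother to enforce.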

Meh. Personally I saw this coming a while ago, and my web presence is as minimal as I can make it without becoming a complete recluse. Apart from Ars, I am subbed to five other websites (including three gaming websites required to play certain online games) and two e-mail providers (one used to contact family and friends and one used exclusively for registering on websites). Every one uses a different password, each of which is stored in a physical notebook on my desk.

I'd prefer not to use online banking, but there are no banks where I live, and pretty much every bank in NZ charges you extra (up to $15!) for over-the-counter services that could have been done online or via an ATM.

In any case, this is the sort of cruft that the NSA and their ilk take advantage of to spy on everyone! So yes, I say encrypt EVERYTHING that is sent over the net. Assume that whatever you place in clear text is public knowledge!

Encryption only helps to some degree. Example: the NSA forges SSL certs... so your precious secure connections only mean that joe-blow at Starbucks can't get at your data. The NSA, and anyone else with a large pocketbook, on the other hand...
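One partial defence against forged certs is pinning: record the fingerprint of the certificate you expect (on a network you trust), then refuse to talk if a later connection presents a different one. A minimal sketch; the `PINNED` value is a placeholder, not a real fingerprint:

```python
import hashlib
import socket
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def cert_fingerprint(host, port=443):
    """Connect over TLS and fingerprint the certificate the server presents."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return fingerprint(der)

# PINNED would be the fingerprint you recorded out-of-band, on a network
# you trust. This value is a placeholder for illustration only.
PINNED = "0" * 64

def check_pin(host):
    """Raise if the cert seen now differs from the one recorded earlier."""
    seen = cert_fingerprint(host)
    if seen != PINNED:
        raise RuntimeError("certificate for %s changed: %s" % (host, seen))
```

A forged cert from a different key would change the fingerprint even if it chains to a CA your OS trusts, which is exactly the case pinning catches.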

Could a time-related packet check be designed? If there were a few milliseconds' difference between what your average latency or loop time should be and what it is, you could get a warning -- at least that's better than nothing.

This. Although packet statistics could be manipulated, latency can't be hidden. If your ping times soar, well...
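A crude version of that warning is easy to prototype: keep a rolling baseline of round-trip times and flag samples that blow well past it. The window size and threshold factor below are arbitrary choices, and TCP-handshake time is used as a stand-in for ping:

```python
import socket
import time
from collections import deque

class LatencyWatch:
    """Warn when a new RTT sample far exceeds the rolling median baseline."""

    def __init__(self, window=20, factor=3.0):
        self.samples = deque(maxlen=window)  # recent RTTs, seconds
        self.factor = factor                 # how far past baseline = suspect

    def observe(self, rtt):
        """Record one RTT sample; return True if it looks anomalous."""
        anomalous = False
        if len(self.samples) >= 5:  # wait for a minimal baseline
            baseline = sorted(self.samples)[len(self.samples) // 2]  # median
            anomalous = rtt > baseline * self.factor
        self.samples.append(rtt)
        return anomalous

def tcp_rtt(host, port=443):
    """Measure RTT as the time to complete a TCP handshake (needs network)."""
    start = time.monotonic()
    socket.create_connection((host, port), timeout=5).close()
    return time.monotonic() - start
```

This won't catch a well-placed tap on the existing path, but a detour through another continent is hard to hide from it.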

The real answer is encryption. You don't try to protect your traffic from being viewed by opponents. You assume your traffic is being viewed by opponents, and act accordingly. This is why SSL/TLS/SSH guards against man-in-the-middle attacks. Evidence of an in-the-wild MITM attack doesn't mean you need to invent some new method against it. Just use the stuff that's already there.
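"Use the stuff that's already there" can be as simple as making sure your TLS library is actually verifying. In Python's ssl module the defaults do the right thing; the point is that a diverted path sees only ciphertext, and a man-in-the-middle who can't present a valid certificate fails the handshake:

```python
import socket
import ssl

def open_verified(host, port=443):
    """Open a TLS connection that checks the certificate and hostname.

    A man-in-the-middle who cannot present a valid certificate for `host`
    causes the handshake to fail with ssl.SSLCertVerificationError.
    """
    context = ssl.create_default_context()  # CERT_REQUIRED + hostname check
    raw = socket.create_connection((host, port), timeout=10)
    return context.wrap_socket(raw, server_hostname=host)

# Usage (needs network): open_verified("example.com") returns a socket you
# can read/write; everything on the wire between you and the endpoint is
# encrypted, whatever route the packets take.
```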

Well, it's comforting to know that the US is in a good position internationally to work with our friends and allies in sorting this out.

Hey, if your government will at some future time be willing to reach out and collaboratively address issues of snooping on Internet traffic and man-in-the-middle attacks, we'll be all ears. Let us know.

Be careful what you wish for! The NSA has already collaborated with the British government to tap Google's servers.

(Of course, even if you encrypt all your packets for security, this sucks for TCP latency.)

While reading the article it occurred to me that such circuitous redirection probably impairs latency. Surely, strong encryption would help remove the motive for malicious redirection?

Your comment reminds me of an article on Ars Technica last year, explaining that Google tried to fix this problem by improving the protocol specification. They reportedly hit a brick wall because some router manufacturers didn't cooperate. Time for Google to have another go at promoting this?

Quote:

Packets could be conveniently spoofed to create the illusion of arbitrary paths when viewed from a traceroute.

Hijacking a router and having it broadcast bad BGP data is one thing, but hijacking several routers along the path and programming them to falsify ICMP data is a bit more complicated.

Especially if you plan on emulating proper timings. If I see a hop claiming to be in Moscow and a traceroute showing a 20ms response time, I am going to think something is up.

Quote:

Google tried to fix this problem by improving the protocol specification. They reportedly hit a brick wall because some router manufacturers didn't cooperate.

Routers and SSL terminators are very, very different things.

Quote:

"One, fairly major, SSL terminator vendor refused to update to fix their False Start intolerance despite problems that their customers were having," Langley wrote. "I don't believe that this was done in bad faith, but rather a case of something much more mundane along the lines of 'the SSL guy left and nobody touches that code any more.' However, it did mean that there was no good answer for their customers who were experiencing problems."

This is why it's important to select vendors very carefully to reduce the chances of being held hostage like this. Open source is also a big advantage because you always have an additional option to contract someone to fix the code.

Quote:

Chrome will continue to use False Start with websites that have deployed Next Protocol Negotiation, another experimental TLS tweak devised by Google that's already available in NSS, TLSLite, and OpenSSL. NPN is just one of many changes proposed under SPDY, a broad set of experimental protocols intended to reduce the latency of webpages.

The standardised version of NPN is ALPN, which should be supported by TLS implementations soon.
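For what NPN/ALPN actually looks like at the API level: OpenSSL-backed stacks expose it directly. A sketch using Python's ssl bindings; which protocol actually gets chosen depends entirely on what the server offers:

```python
import ssl

def alpn_context(protocols=("h2", "http/1.1")):
    """Build a client TLS context that offers the given ALPN protocol list.

    During the handshake the server picks one entry (or none); after
    connecting, sock.selected_alpn_protocol() reports what was agreed.
    """
    context = ssl.create_default_context()
    context.set_alpn_protocols(list(protocols))
    return context

# Usage: wrap a socket with this context, then check
# tls_sock.selected_alpn_protocol() to see whether, e.g., "h2" was negotiated.
```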

Quote:

Especially if you plan on emulating proper timings. If I see a hop claiming to be from Moscow and trace-route is showing a 20ms response time, I am going to think something is up.

That's a good observation. A theoretical construction for detecting bad routes, roughly in line with your observation, is explained in this paper. However, I don't believe the attackers have to manipulate timings (or purposefully inflate the RTT). If an attacker controls enough border nodes in an AS (recall that intra-AS routes are never advertised externally), the attacker can coordinate those border nodes to deceive the victim about which routes were actually taken. This is not complicated to do, because all the attacker has to do is give all the controlled border nodes the same state (at least from the perspective of a chosen packet origin). That means a packet actually routed through the bad network would, from my perspective, look as if it never went through it but was redirected elsewhere.

I agree with your original sentiment, but this statement is a bit anachronistic. UNIX was originally not secure at all. The standard mode of remote access was Telnet; the mail protocol was SMTP.

Not the Unix I remember. The standard communication was over a serial port. Mail was exchanged by uucp. I don't know how "secure" you would consider them, but Unix itself was very secure - you could connect lots of students to it, and be sure they weren't going to stomp on each other or the rest of the system.

What you are saying is insecure is the newfangled TCP/IP stack that came after Unix. And yes, it was insecure for a while. But then so was everything else. Security in NetBIOS was, if anything, worse. Its only security was that it wasn't routable.

The internet started to take off in 1990 or so - 20 years after Linux was first born. That is when the security issues really began. It is no accident that by 1995 both ssh and SSL were born. In those intervening 5 years the internet was a remarkably civil place, possibly because most people on it knew each other, or had common friends or colleagues. Far more civil than the outside world, in fact. Security could be enforced via social means - like cutting you off from the new toy. Both ssh and SSL were far-sighted developments, back then.

But back to your point. I don't recall a time when Unix was insecure. It was always intended to be a multiuser system, and it was designed to keep those users isolated. Yes, the network stacks added to it did leak. That wasn't only true of TCP/IP. It supported a whole pile of them, and they were all insecure. There was no way to fix this - all Unix (and indeed all other operating systems) did was implement a standard defined by others.

Quote:

The internet started to take off in 1990 or so - 20 years after Linux was first born. ... In those intervening 5 years the internet was a remarkably civil place.

Linux was 1991. BSD was...almost 20 years old in 1990. 1990 to 1995 was as much the proverbial Wild West as it is today, if not more so.

Quote:

How is the traffic path hidden from a traceroute? If it is hidden, how did Renesys determine the traffic was going to Russia or Iceland?

This I'd like to know. It'd be nice to have a way of determining whether or not my traffic is being redirected in such a manner.

It isn't so much hidden from traceroute, as it is that traceroute is the wrong tool to determine if your own IP block is being re-routed. Essentially, traceroute is used to determine the hops from you to your destination. It tells you nothing about the route from your destination to you. And if its traffic to your IP space that's been packetjacked like this, then it's not going to help.

The solution is to have some other location on the net trace its route to you, e.g. a buddy in another country/ISP/whatever does a traceroute to you. Note that asymmetric routing is relatively common for a variety of reasons and isn't immediately an indicator of a problem.

For a more rigorous check, it's pretty easy to query major ISPs for the routes they see, through the use of Looking Glass servers (e.g. Level 3, Cogent, Sprint, etc.). Renesys similarly measures these routes from a wide variety of points and can notice when things change.
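Beyond manual looking-glass queries, public route-measurement APIs let you script the same check. A sketch against RIPEstat's routing-status endpoint -- note the endpoint path and response layout here are assumptions to verify against the RIPEstat documentation before relying on them:

```python
import json
import urllib.request

# Assumed RIPEstat Data API endpoint; check the RIPEstat docs for the
# current path and response schema.
BASE = "https://stat.ripe.net/data/routing-status/data.json"

def routing_status_url(prefix):
    """Build the query URL for a prefix (pure function, easy to test)."""
    return "%s?resource=%s" % (BASE, prefix)

def fetch_routing_status(prefix):
    """Fetch how the prefix is currently seen in global BGP (needs network)."""
    with urllib.request.urlopen(routing_status_url(prefix), timeout=10) as resp:
        return json.load(resp)

# Usage (needs network): fetch_routing_status("8.8.8.0/24") returns JSON
# describing the announced origin; poll it and diff over time to notice
# when your prefix starts being originated from somewhere unexpected.
```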

Quote:

The internet started to take off in 1990 or so - 20 years after Linux was first born.

I have to assume the OP meant Unix was first born 20 years before 1990. Linux wasn't around until 1991 or so...

Quote:

Unix itself was very secure - you could connect lots of students to it, and be sure they weren't going to stomp on each other or the rest of the system.

Unix itself was "secure" from a multi-user perspective but not necessarily from a network perspective or a server-to-server perspective. Servers tended to consider themselves peers with a certain implicit level of trust (rsh and .rhosts-style trusted-host authentication, for instance).