Posted
by
Hemos on Wednesday September 27, 2000 @05:56PM
from the holy-schmoley dept.

A reader writes "Of the schemes being concocted to ease traffic among Internet backbone providers, InterNap Network Services Corp. may have the most ambitious: a setup that bypasses the peering process entirely by scanning the Net for optimal routes.
EEtimes has the full story on their plan."

What's happening is that Internap is paying the providers in question for peering, and is thus being treated as a customer. At most backbone providers, customers' routes are given a higher local preference than peer routes, which causes traffic to traverse the customer network rather than the peer. See the BGP decision-making process if why:

A->C->B is possible even though
A->B exists.

doesn't make sense to you. (See RFC 1771)

Also, by paying, Internap can demand that the backbone carriers honor Internap's MEDs, thus controlling where in Internap's network the backbone carrier's traffic enters. Local preference inbound, hah! That's marketing-speak for having peers or providers honor your MEDs.
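The local-preference behavior described above can be sketched as a toy version of the BGP best-path tie-break (local preference first, then AS-path length, then MED). The route entries and values here are invented for illustration; real BGP has more tie-break steps than this.

```python
# Toy BGP best-path selection: highest local_pref wins before AS-path
# length is even considered; lowest MED breaks remaining ties.
def best_path(routes):
    return min(routes, key=lambda r: (-r["local_pref"], len(r["as_path"]), r["med"]))

routes_to_B = [
    # Direct peer route A->B: shorter AS path, but peer-level local_pref
    {"via": "peer B",     "as_path": ["B"],      "local_pref": 80,  "med": 0},
    # Customer route A->C->B: longer AS path, but customer local_pref
    {"via": "customer C", "as_path": ["C", "B"], "local_pref": 100, "med": 0},
]

print(best_path(routes_to_B)["via"])  # the customer route wins despite the longer path
```

This is why paying for transit (and thus being a "customer") pulls traffic onto your link even when a shorter peer path exists.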

Another comment worth noting is that very little traffic between large-scale providers traverses public peering points anymore. Most traffic traverses private peering points, which generally speaking are much less congested (thus much less packet loss and added latency, usually none) than public peering points. I'm seeing little/no advantage, and in fact multiple disadvantages, to running your traffic over an additional network (Internap) as long as providers A and B above have sufficient backbone capacity and sufficient peering between them.

The only thing of any real interest in this, besides the author showing that he has no real clue how large-scale internetworking works, is this ASsimilator/Cogitator, which sounds like an engine of some sort that apparently adjusts the local pref and the MEDs dynamically based on some thresholds of packet loss and latency. An interesting concept, but I would have to see real-world deployment before having any real comments on it.

In other words, this is a case of somebody without a clue getting ahold of a marketing document and believing it. Typical. As always, when you are in a position to be buying decent quantities of bandwidth, you need to know your market requirements and what you are trying to accomplish. Buying hype will get you exactly that, and lower performance. On the other hand, if the peering between A&B is inadequate, the peering between C&A and C&B is great, and you need to get traffic between A and B as effectively and efficiently as possible, then purchasing transit from C may be a wise investment.

Huh? Multicast doesn't put stress on border routers. In fact, the beauty of multicast is that it dramatically decreases bandwidth everywhere. Between peers, ISPs to customers, even on the LAN. (Although certain switches tend to treat multicast packets as broadcast... Just need to get those vendors shipping updated code.)

As an employee at a rather large ISP, I'd like to state that at least one backbone provider has no problems with multicast in the network. We're regularly turning up ten or more customers a week to multicast. Granted, that's not nearly the same rate as your standard unicast traffic, but multicast is still a rather new technology (even though it's been around for about seven years now). Vendors are just coming around and starting to produce equipment that can handle multicast. My own company helped to fund the creation of a Linux kernel patch for IGMPv3 (click here [sprintlabs.com]), which is required in order to support PIM-SSM. PIM-SSM is poised to be the prime motivator for the acceptance of multicast.

The biggest problem that we're seeing with multicast right now is not that it stresses our backbone. (The Victoria's Secret Fashion show stressed our backbone...at least, the thousands of 56k streams did. I, however, quite enjoyed the 700k multicast stream coming in over the network. And a single stream from the source covered the entire network. HUGE bandwidth savings for everyone.) The problem is that troubleshooting tools for multicast are rather primitive. Unfortunately, the only way to advance that is to get more people using multicast, so that there is a greater need for those tools.
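The savings the parent describes are easy to put numbers on. A back-of-the-envelope sketch, with the viewer count invented and the stream rate taken from the 700k stream mentioned above:

```python
# Source-side bandwidth for N viewers of one stream:
# unicast sends one copy per viewer, multicast sends one copy total.
def unicast_bw(viewers, stream_kbps):
    return viewers * stream_kbps

def multicast_bw(viewers, stream_kbps):
    return stream_kbps  # a single stream from the source covers the network

viewers = 10_000  # hypothetical audience size
rate = 700        # kbit/s, as in the stream described above

print(unicast_bw(viewers, rate) // 1000, "Mbit/s of unicast streams")  # 7000 Mbit/s
print(multicast_bw(viewers, rate), "kbit/s as one multicast stream")
```

The gap only widens with audience size, which is the "HUGE bandwidth savings for everyone" in the parent post.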

Internap just buys enough transit and sees enough routes to actually make informed decisions on their own routes. They are in a position to send any packet to its destination via the best path, as far as that can be determined via BGP.

Whether they take full advantage of this or not is really the question. Obviously they're trying.

-Peter
(Former internap customer. Quit that job, but really liked their service. They were the most proactive ISP I've ever seen, bar none).

I would argue that it is (in theory) better. Because Internap buys the bandwidth, they can facilitate symmetric routing, which is something a tier 2 cannot provide you. The peering arrangements between providers are hardly ever equal, so level3 might push more traffic to sprint than sprint would push to level3. So Sprint might just not accept some traffic from level3, forcing that traffic to go through a different level3 peering arrangement.

Internap doesn't have that problem. They buy bandwidth, guaranteeing that both there and back your traffic will travel along an optimal route.

However, it's probably better to look into massive distributed caching models for that first, considering how much that sort of bandwidth costs.
--- pb Reply or e-mail; don't vaguely moderate [ncsu.edu].

Their AUP [internap.com] is pretty vague and restrictive. For example: "Customer shall not use, nor shall it permit others to use, the Services for any... immoral... purpose." So porn sites cannot use Internap? ISPs cannot use InterNAP because their users may surf porn?

Also: "Customer shall not use, nor shall it permit others to use, the Services... to alter... [or] disable any security or encryption of any computer file, database or network." Um, huh? Even if it is legal? Be warned, sysadmins: if you use InterNAP, do not ssh over to your remote data center and run crack on one of your machines' password files to check security. Even using tcpdump to troubleshoot network snafus may be a violation of the AUP.

I'm sure all providers have such lame policies, but that doesn't mean InterNAP has to have one too, especially since they seem to claim to be such an enlightened Linux-luvin' company.

Also, check out the new RFC on the design space for multicast protocols. It points out many useful ideas. Find it on www.faqs.org/rfcs; I don't remember the number, but it was published in August or so.
#define X(x,y) x##y

It's not that I find it "revolutionary"; terms like that mean nothing outside of press releases. I do, however, find it the best use of an old and outdated technology known as the TCP/IP suite. And if it isn't "revolutionary", it is certainly A Good Thing for the Internet at large. I think other companies will try to do the same sort of thing, with varying degrees of success.

It's easy to forget that some of what you hear from a company is, in fact, just marketing. But the technical details underneath can still be pretty interesting and somewhat unique...

Surprising that nobody has yet mentioned the location of my favorite kernel mirror: http://ftp-mirror.internap.com/ [internap.com]. They have other interesting stuff there too, are about to upgrade their servers, and are supposedly looking for suggestions as to what else to mirror.

I'm kinda backwoods in terms of routing distance from anywhere, and that site's always responsive, it seems.

You just proved my point. You described hot potato routing, and the reasoning behind it.

What I think you're asking is, why do it any differently?

And the answer is, because you can do it better.

Warning - the analogy approaching you is imperfect, but suited to proving a point.

You send letters through the USPS. Cheap, and they get there, most of the time. But you need something there the next morning. You shell out an order of magnitude more cash to make sure that happens, because it is worth it. The people (FedEx, whoever) have a parallel distribution network to make it happen. It gets there faster, because a different distribution chain was designed for different needs and different cost assessments.

See the difference?

Back to hot Quayle, er, potato, routing. If you just move bits for a living, you optimize for moving most of them, most of the time, as fast as is cost effective. If you run a high performance delivery system, you move most of them, all of the time, as fast as the QOS you signed said you would. There's a big difference there.

So, as Joe Random ISP, I want to minimize costs, so I offload packets as soon as I can. Like a hot potato(e). Make it someone else's responsibility as soon as I can.

As a high-performance ISP with QOS contracts to fulfill, I want to keep traffic on my backbone as long as possible so I can control how fast it gets there.

Economics happens to have a strangely powerful hand in how people do things. Even in business.

Actually, that's what BGP does quite well. Just because local-pref says go here, if "here" is down, reroute. In practice, a change in internet routing (from experience) is reflected across all 85K+ routes within 2 minutes of a change. In the event of a "network-down" failure, the router that is seeing the network-down error will reroute back to the next hop. Also, just for scale, watching route-flaps in a cisco, we see ~20 route changes/second.
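A minimal sketch of that failover behavior, with the class and route values invented for illustration: keep every learned path in the table, use the best one, and fall back the moment the best path is withdrawn.

```python
# Toy BGP-style RIB: all learned paths are retained, so when the
# preferred route goes away the next-best takes over immediately.
class Rib:
    def __init__(self):
        self.paths = []  # (local_pref, as_path_len, next_hop)

    def add(self, local_pref, as_path_len, next_hop):
        self.paths.append((local_pref, as_path_len, next_hop))

    def withdraw(self, next_hop):
        self.paths = [p for p in self.paths if p[2] != next_hop]

    def best(self):
        # highest local_pref first, then shortest AS path
        return min(self.paths, key=lambda p: (-p[0], p[1]))[2] if self.paths else None

rib = Rib()
rib.add(100, 2, "customer-link")
rib.add(80, 1, "peer-link")
assert rib.best() == "customer-link"   # local-pref says go here...
rib.withdraw("customer-link")          # ..."network-down" on that path
assert rib.best() == "peer-link"       # ...so traffic reroutes to the next hop
```

Real convergence additionally involves propagating the withdrawal to every BGP speaker, which is where the "within 2 minutes across 85K+ routes" figure comes from.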

The money issue is important because to date, no company is turning a profit at providing backbone connections. And InterNap itself is still losing money -- the publicly traded company reported losses of $43.4 million on revenues of $22.5 million for the first six months of this year.

Ouch! Too bad they're losing so much money. Is there any way to effectively make money as a backbone service provider?
That has to suck.

I'm in the process of signing up with them right now to get a cabinet to host a bunch of my personal boxes, and a website or two that I run.

The selling point for me (besides their kick-ass network architecture) is they have these pretty oscillating rainbow lights on the floor of their datacenter that light up and guide you to your server... after you scan your hand it turns on the light above your cabinet(s) (the whole place is dark inside), directs the rainbow lights to your machine, and off you go. Very cool.

Plus, the building itself (the new Fischer building, right by the space needle) totally rocks.

It looks as if I (as a European;) have no idea about how bad the public peering points in the U.S. are.

Sorry, but in .de we have one public every-net-peers-with-every-other-net-that-is-connected-to-this-peering-point exchange called "decix", and this seems to work very well. Nearly all big networks (well, except for UUNet and the German Telecom) are connected to it; they mention how good their connection to "decix" is in their advertisements, etc., and are proud of it.

I haven't seen "bigger" (well, that's big for Germany) peering points as a disadvantage so far.

I don't get it: they claim that they invented something much better than peering and what they do is connect all big backbone providers with their "superior" network, effectively peering among them.

Why does traffic now "directly" go to its destination? Let's say you have a packet originating from sprintnet going to abovenet. The packet goes through the sprintlink net to InterNap, InterNap sends it to abovenet, and voila, the packet gets to its destination. I can't see what makes this thing different from a pretty normal, expensive and highly commercial peering point.

Yep, right from assuming that Microsoft Technology (sic) has anything to do with the operation of the Internet. It runs servers, sure, but if it runs a measurable percentage of Internet routing I'd be surprised.

Sportal [sportal.net] and PSINet [psi.com] developed a system like this for the Euro2000 soccer tournament in June this year. It handled something like 70,000,000 hits a day at peak with a distributed architecture across 10 points of presence in the PSI network worldwide. That made it easy: to do it across multiple providers would be a fair bit more work - even for a single provider the team solution involved getting a dedicated AS number to enable peering into the system.

I know I'm coming in rather late, but I just have to object to the abuse of the term "hot potato" to describe the common backbone routing policies. True hot-potato routing is totally naive with regard to network distances from a destination; it just forwards to some node (or in this case network) other than the one the packet came in on - usually the one with the shortest queue length. This will often end up routing a packet away from its destination. If backbone providers truly did things this way, packets would wander around quite aimlessly, often expiring before they reached their destinations. Every backbone provider would get saturated with such "lost" packets, creating more traffic for all of them and leading to a complete collapse of the Internet.

Obviously, that is not what's really happening. I'd call the technique in common use "warm potato" instead. Providers do want to dump the packet on someone else quickly, but subject to the limitation that they dump it at least a tiny bit nearer the destination than when they received it. This is still strongly suboptimal[1] and slightly antisocial, but it's nothing like true hot potato.

[1] It actually approximates hierarchical routing rather closely. The efficiency loss (in terms of hops) of hierarchical routing has been thoroughly studied and quantified by Tanenbaum and many others, but it does have the (sometimes critical) advantage of keeping routing-table sizes under control. As others have pointed out, one of the weaknesses of InterNAP's approach is the potentially very large number of routes involved. None of this is new. It has all been well understood in the networking community for decades.

Secondly, hot potato is indeed a "term of art"...one which has had a clear and well-understood meaning for twenty years and which does not apply to the routing we're talking about. Real network professionals know that; it's only the tyros and dilettantes who slept through their basic networking class (or never took one) who would abuse such a time-honored and universal term. Read any basic networking text; they're all very clear on what "hot potato" routing is and is not.
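The distinction drawn above can be made concrete with a toy sketch (queue lengths and distances invented): true hot potato forwards to whichever neighbor has the shortest queue, ignoring distance to the destination entirely, while "warm potato" restricts the choice to neighbors strictly nearer the destination.

```python
# neighbors: {name: (queue_len, dist_to_dest)}; came_from is excluded
# so the packet is never bounced straight back where it arrived from.

def hot_potato(neighbors, came_from):
    # True hot potato: shortest queue wins, distance to destination ignored.
    candidates = {n: v for n, v in neighbors.items() if n != came_from}
    return min(candidates, key=lambda n: candidates[n][0])

def warm_potato(neighbors, came_from, my_dist):
    # "Warm potato": only neighbors at least a bit nearer the destination
    # are eligible; among those, dump on the one with the shortest queue.
    nearer = {n: v for n, v in neighbors.items()
              if n != came_from and v[1] < my_dist}
    return min(nearer, key=lambda n: nearer[n][0])

neighbors = {"X": (2, 5), "Y": (9, 1), "Z": (1, 8)}
print(hot_potato(neighbors, came_from="X"))              # "Z": short queue, farther away
print(warm_potato(neighbors, came_from="X", my_dist=4))  # "Y": the only nearer neighbor
```

The hot-potato choice of "Z" is exactly the kind of away-from-destination wandering the post describes; warm potato sacrifices queue length for guaranteed progress.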

peering. Essentially all they're doing is taking peering to a grander scale. One massive peering umbrella, so to speak, with routing algorithms deciding the best possible routes to other nodes in their "web." The whole thing makes total sense; I'm just jealous I didn't come up with the business model.

In fact, InterNap's infrastructure uses only OC-3 (622-Mbit/second) or even DS-3 (45-Mbit/s) connections to the various backbones, not the massive OC-192 (10-Gbit/s) routers being installed at the network core.

Yes, considering the fact that computers are not like roads, and that since most run Microsoft Technology, they can perform at Mach 10 at one point in time and be down for 10 months from the very next moment on. So how can we expect anybody to map out "reliable" routes?
They'll most probably end up with so many "bad" routes that used to function well when they tested them that they'll do the survey all over again.

Still, I wish the company a lot of good luck.

Whatever happens, one site won't go down. That's iotaspace [hypermart.net] - No routes required, coz all roads lead to this place, at least eventually....

Maybe not that revolutionary, but still, it works very nicely. We happen to have lots of boxes hosted all over, and a big chunk of them are at InterNap. It is amazing to see how dog slow some of the traffic gets when it goes over a particular saturated backbone.

I have to feel sorry for the folks who just get thrown onto those backbones 'cause somebody is playing hot potato with their packet.

It would be even nicer if people like Nos didn't have to have their packets play intercontinental hot potato, ping pong!

For what it is worth, check out
http://www.patents.ibm.com/cgi-bin/viewpat.cmd/US06009081__ [ibm.com]
This should better explain how InterNap has massaged their routing tables in the past.
With the release of ASsimilator 3.0(?), they will be performing real-time data manipulation. Prior to this revision of their software, updates to the routing tables were occurring once every twelve hours, I believe.

WOW! I'm slightly disturbed about the control that InterNAP would have over access points! Good gawd... here's a company that wants to control access to all 11 major backbones?!?! Wow... I don't think they'd really be helping anything by providing an additional bottleneck, a la "P-NAPs". Alas, the major providers make money by the volume of information that is transmitted... in their minds, saturated lines aren't a problem. Trust me! Idle lines cost money and don't generate revenue. I don't think there will be full agreement about "P-NAPs", because there wouldn't be any room for competition if every provider carries an equal amount of distributed traffic.

Majors will get peering right when there is a compelling business model to do so. I've worked in this space for 6 years, and I've seen every pointy-haired boss look this over and say "and we give this away?" without anyone being able to come up with a workable alternative.
The current sender-keeps-all (SKA) model of interconnection will always lead to tragedy-of-the-commons-type congestion.
InterNAP is at least a stab at a business model that tries to solve this; time will tell if they get it right.

Umm, no. "Hot potato" is the term of art used by everyone to describe the routing of packets between two ISPs. If you and I interconnect at the east coast and west coast and one of my east coast customers wants to talk to one of your west coast customers, I will hand you the traffic on the east coast and you will hand me the traffic on the west coast.

My employer looked at their colo services. In a few words, they don't exist. Currently, they're providing ISP services at dedicated colo facilities that aren't theirs (Level 3, for instance). I think eventually that they will have to get into the game directly. I can say that we were using Internap through a different ISP (we're now using Exodus through their colo). Our subsequent connectivity has been somewhat flakier, albeit faster.

Just think about it for an instant. InterNap... InterNapster... who says these guys aren't just trying to find optimal routes so the MP3s will transfer more efficiently? I'm sure the RIAA will come up with something similar to force-feed the judge, just to get even more media attention.

In fact, this is pathetic....
InterNAP has no backbone, so they'd better make sure that every packet sent to one of their customers hits the right P-NAP... I don't know about you, but I would rather use an ISP that uses private peering and has a real backbone connecting the peering points. I know of at least one...

I guess you missed the other thread...
Multicasting is no good for file transfer - imagine streaming simultaneously from a single source to 2 users, one with a 56K modem, the other with a T3. Doesn't work, does it?
Multicasting is generally used for streaming video/audio, where you don't expect to catch every packet - video/audio streams don't have to have 100% integrity to be legible. Data does.

Well, yes, the way you describe it is nothing new, and actually quite greedy.

One of the problems is that you can't reasonably perform least-path planning with a Web browser. I mean, when you click on a link, you are getting routed from one site to another, but it's generally along the same routers, unless something really weird is going on (servers down, heavily allocated, etc.). So trying to find a dynamic "path of least resistance" across the Internet and trying to decrease connection time work against each other. Because everyone wants to connect to the web sites faster, and they generally don't care how it is done.

Perhaps not the best attitude to take as a customer, but a lot of the typical web surfers are barely computer-literate.

Internap's chief software engineer calls their sub-optimal engineering solution to an NP-hard problem a "new area of computer science". Haw Haw. The depths to which corporations will stoop to market their ideas:)
Most network/routing problems with multiple QoS goals are NP-hard or even NP-complete. It must be obvious to even a beginner in computer-networking science that in practice one has to seek some sub-optimal, heuristic solution that works well in most practically occurring scenarios. Hardly a "new area of computer science".
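To illustrate the kind of sub-optimal heuristic meant here, a toy two-constraint path search (graph, latencies, and loss thresholds all invented): prune any partial path that violates a loss bound, and keep the lowest-latency survivor. This is a sketch of the general technique, not Internap's actual algorithm.

```python
# Heuristic search for a two-constraint routing problem (NP-hard in
# general): lowest-latency path whose cumulative loss stays under a bound.
def constrained_path(graph, src, dst, max_loss):
    best = (float("inf"), None)
    stack = [(src, 0.0, 0.0, [src])]  # node, latency so far, loss so far, path
    while stack:
        node, lat, loss, path = stack.pop()
        if loss > max_loss or lat >= best[0]:
            continue  # prune: constraint violated, or already worse than best
        if node == dst:
            best = (lat, path)
            continue
        for nxt, (l, p) in graph.get(node, {}).items():
            if nxt not in path:  # avoid loops
                stack.append((nxt, lat + l, loss + p, path + [nxt]))
    return best

# edges: neighbor -> (latency_ms, loss_percent); toy topology
graph = {
    "A": {"B": (10, 2.0), "C": (5, 0.1)},
    "C": {"B": (6, 0.1)},
}
print(constrained_path(graph, "A", "B", max_loss=1.0))  # the A-C-B detour wins
```

The direct A-B link is faster but too lossy, so the heuristic takes the slightly slower clean path, which is roughly what an engine tuning local-prefs on loss/latency thresholds would do.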

This sorta defeats the purpose of the Internet being so robust. If their server goes down, they won't get a reply with the fastest route, so wouldn't that cause a major timeout, where you have to wait 60 seconds or so before the server realizes "oh, they're down, let's search the old-fashioned way"? Which, according to this article, supposedly takes even more time. I dunno, but from what I pulled from the article it seems a little sketchy.

If they actually go through with this, and it catches on, this will lead to the end of the internet as we know it. In place of the well interlaced internet we know today, we will have numerous, smaller networks, connected only peripherally.

Today, the only time we lose service to Slashdot is when Exodus's subnets start dropping packets. But imagine how it would be if major Internet backbone providers went through with their scheme to bypass peering and chose to route traffic as they see fit.

I foresee a day when Europeans, or those on the west coast of the US, won't even have access to Slashdot during high-traffic times. If the backbone providers route IP traffic based on how convenient it is for them, we can never be guaranteed continuous service.

So, there are some of the many reasons why this is bad news. And I didn't even discuss the unwholesome possibilities this technology would provide if the government gets involved.

A couple of things worth mentioning: what they do is not private peering, it's private unidirectional peering, dynamic routing, and a bunch of other very clever technologies. The proof is in the pudding: they've definitely (from experience) got the best connectivity out there.

They peer with all the big networks, but don't allow the big networks to route traffic back through them. They're also not like a typical colocation facility in that they've got a large number (or were planning a large number, at least) of PNAP locations, and they provided mostly leased lines except for a couple of larger data centers. They're really expensive, but you get what you pay for.

I couldn't read the article because the link seemed to be broken, so it may have mentioned this, but last I knew, their technology, which maps network connectivity and dynamically modifies their packet routing by adjusting router tables via BGP within the networks they peer with, is all Linux-based, and has been from day one.

I wouldn't claim it is revolutionary, but it is a bit more interesting than laying fiber.

Most folks running networks employ "hot potato(e)" routing methods - the idea is to get it off your network at the earliest possible time.

Internap, instead, attempts to minimize transport time, which usually means reducing the number of hops as much as possible. In practice, this means modifying BGP to approximate solving the Travelling Salesman problem. You can't, but you can make a good guess. So, if your ISP uses them, traffic to a server on the other end of the country will probably not have to pass through the major hubs. And if you are communicating with another Internap customer, you bypass the public net entirely. They will sell you SLAs with pretty low maximum latencies.

Get a modem like me. Things have to be pretty far gone before you notice that the rest of the internet is moving less than 56K. It works in real life too. That guy going 45 MPH in the right lane of the highway doesn't wait for ANYBODY.

Because let's say that goodnet doesn't peer directly with sprintnet. Well, guess what? That means you go through a public interexchange, i.e. MAE-East, -West, etc., and they're rather congested, and tend to drop packets whenever it's convenient for them. By providing a direct peering relationship for these big providers, you cut out the public interexchange. Also, InterNAP pays its providers to carry its traffic for as *long* as possible, versus the traditional method where providers attempt to dump traffic at the first public peering point. Trust me, there is a *huge* difference between an Internap DS3 and a UUNet DS3. You pay a lot more, but if you have traffic/applications that need bandwidth that's low latency and low jitter, use Internap. (No, I don't work for them. I'm a network engineer for a growing DSL/VoIP provider, and they're a godsend.)

I am tired of this hot potato fantasy. Everyone uses BGP4 to determine which route should be taken by a packet. BGP4 only considers how many ASes are in the route; it doesn't know anything about hops, latency, etc. If I'm an ISP and I have two routes for a packet with the same AS-path length, of course I'm going to "get it off my network at the earliest possible time." Would you rather I ran it back and forth across the country a few times? If the two paths are the same length, you should get the packet to whichever exit point is closest in your network.
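That tie-break, equal AS-path lengths resolved by handing off at the nearest exit, can be sketched as code. The exit names and IGP metrics here are invented:

```python
# When BGP routes tie on AS-path length, pick the exit whose next hop is
# cheapest to reach inside your own network (lowest IGP metric).
def pick_exit(routes):
    shortest = min(len(r["as_path"]) for r in routes)
    candidates = [r for r in routes if len(r["as_path"]) == shortest]
    return min(candidates, key=lambda r: r["igp_metric"])["exit"]

routes = [
    {"exit": "east-coast-peering", "as_path": ["X", "Y"], "igp_metric": 5},
    {"exit": "west-coast-peering", "as_path": ["X", "Y"], "igp_metric": 40},
]
print(pick_exit(routes))  # east-coast-peering: the closest exit point
```

Note the AS-path comparison happens first; the "closest exit" rule only ever decides between otherwise-equal routes, which is the poster's point.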

Or do you think that the backbone ISPs are sending packets out of longer AS-path exit points because those are closer in their networks? I am almost positive that they are not doing this. If everybody chose longer AS paths, your packet would probably never arrive at its destination; it would just run around in loops as everyone shunts it off to someone else, ignoring the AS path.

If you have specific details about what hot potato routing is, and how it differs from the correct routing that everyone does, please inform me. I would love to know. But I think this belongs firmly in the urban legend category.

Exactly. They've got some solid BGP experience. However, they seem to like to apply large negative weights to everyone but Sprint, so most of their traffic goes out through Sprint. How it comes back to you is anyone's guess...

Frankly, I don't think it's anything revolutionary. If I wanted to emulate their setup, I'd fork out a buttload of cash to some large providers to pay for transit; next I'd hire some guys like Avi [freedman.net], or someone of his calibre, to do it right...

IANAL but I am a network engineer at a "Tier 1" ISP. This article is a joke. It's very clear that the author doesn't understand what he's writing and is most likely simply regurgitating marketing materials being fed to him from the company that is the subject of the article. They are bleeding money and would probably like their stock price to go up.

The most obvious error is that OC-3 is not 622Mbps, it is 155Mbps. OC-12 is 622Mbps.

How do you think InterNAP gets the 11 major backbones to honor BGP local prefs? Very simply, InterNAP establishes a BGP peering session between its router and one of the routers of the ISP that it is purchasing service from.

Is the software they use revolutionary? Perhaps, but I also know of a major Tier-1 provider that uses some clever software to re-compute static routes for every router on their network every single night rather than use a proper IGP like OSPF or IS-IS. Unfortunately this software is so clever that no one completely understands how it all works. Except for that guy that did the clever bits, and he's long gone.

In the end, InterNAP is very simply a hosting provider that instead of being multi-homed to a couple of ISPs is multi-homed to 11 ISPs. They are doing nothing different than anyone else on the network. Hell, if they convinced all those backbone providers to use MPLS and used that to shunt the traffic to them, I would be impressed. They're just using the same old BGP4 that everyone else is using (Cisco's).

As I understand it, AOL does something like this. I've seen web server access logs that show a user (logged in with a session) accessing a page from an AOL account. Two pictures and the HTML all went to different IPs, presumably of caching web proxies.

InterNap is an interesting contradiction. On one hand they loooove Linux; they've been a Linux shop since just about day one. On the other hand, they've applied for patents on their special BGP routing algorithms. I think such a patent is barely better than a software patent, because, as you say, anybody who knows how BGP works can figure out the basics of what they've patented...

People have been doing this for years; it's called a Tier 2 provider. The only difference here is most tier twos have 2-3 backbone providers, where they get 11.

Now, would you rather be with a provider who has 11 connections to big players, or would you rather go with a real backbone that peers with 200-400 other providers directly? I mean, it's simple math: if there are 1000 networks on the Internet and they connect to 11, for 989 you will go through a middleman (the backbones they connect to). If you go with a peering provider you might get direct access to 400, and have a middleman for the other 600.

Here's the other issue. Large providers generally share costs when they peer, making it relatively cheap. InterNap takes a solid stance of buying all their bandwidth. So, if you're a customer and you use 10 meg more, InterNap will have to pay for 10 meg more... where a real backbone will simply share costs with their peering partners. Who will be able to upgrade first? Not InterNap.

Bottom line: it's not better, and it costs more. It can be made to look better while they are small, though.

If IP multicasting were implemented not only for video/audio applications, but also for, say, downloads of large popular files (e.g. Red Hat ISO images, etc.), then I would imagine that Internet traffic across backbones would decrease. Imagine it! "To download the latest version of yourdistro-ver.iso, tune in on the half-hour."
---

That's what I was thinking.
For example, I'm in Regina, Sask. (Canada), which is roughly above the North Dakota/Montana border. There are only two real ISPs here. One is Rogers/AT&T and the other is Bell Canada. To do a traceroute from one to the other involves a tour of Canada and the US. Unfortunately I can't do a traceroute from here or I'd show you, but I hit both coasts. Now, if someone were to run 100' of fibre between the two, the delays would decrease, usage on the big lines out of the city would decrease, etc., etc.

It doesn't take any genius to figure this out. A little more complicated are the shortest (weighted) path algorithms, but I really don't see the big deal here. This isn't some huge new great idea. Unless I'm way out to lunch here, in which case instead of just flaming me and mod'ing as flamebait, why not post something useful, like an explanation of what I'm missing.
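For what it's worth, the shortest (weighted) path algorithm in question is just Dijkstra's. A minimal sketch, with a made-up topology echoing the Regina example (node names and millisecond weights invented):

```python
import heapq

# Dijkstra's shortest weighted path: the textbook algorithm behind
# "find the lowest-latency route". Weights are hypothetical ms values.
def dijkstra(graph, src, dst):
    heap = [(0, src, [src])]
    seen = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return None

# Two ISPs in the same city, reachable only via long national detours
graph = {
    "regina-ispA": {"toronto": 40},
    "toronto": {"seattle": 60, "regina-ispB": 40},
    "seattle": {"regina-ispB": 35},
}
print(dijkstra(graph, "regina-ispA", "regina-ispB"))  # the shorter detour via Toronto
```

Add that hypothetical 100' of fibre as a cheap edge between the two Regina nodes and the algorithm immediately prefers it, which is the poster's whole point.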

The value of InterNap is directly related to the poor quality of peering between the major (tier 1) carriers. Content providers use InterNap because they feel that they can bypass this peering (private and public) and reduce congestion / latency. This will go away if/when the majors get peering right.

So the question is: will the majors peer with sufficient bandwidth, and keep upgrading as traffic increases, or will they intentionally keep peering poor to sell their own backbone connections as the best way to reach their "eyeballs"? If you believe the former, InterNap and competitors are dead. If you believe the latter, they'll probably still lose money for a while, but this business will have a niche.

It's only NP for sufficiently large numbers of nodes. The whole universe isn't wired yet, so we're mostly still talking about cleaning up the congestion between a few major sites in the US (one of the few places with the capital to fund one of these ventures, besides). As with much of CS, real systems rarely match the models with sufficient accuracy to produce all those hairy results we've come to fear and loathe.

internally for optimal routes in their cache hierarchy, and also as a hook into their modified bind (or whatever named they're using) so that www.ak.customer.com always points to the "closest" ghost server to the end-user.

Akamai has many more data points from which to deduce traffic-flow information, but InterNap has higher-quality ones.

Of course the services you can get are different, but I wouldn't be surprised if InterNap started offering services akin to what Akamai currently does.
-o

All they really are is one big colocation site/ISP. They just happen to have lines from all the major backbone providers. Their service is great for those heavily hit sites that want to make sure they get the best connection possible.

1). They create large private peering points which are in general overutilized and badly managed. Individual private peers provide just as much bandwidth without concentrating routes into a single facility, which also gives you more redundancy.

2). Huh? The Tier 1 ISPs (which InterNap is _not_; the Tier 1 ISP I am employed by does not consider InterNap a peer, but a customer) all have meshed BGP backbones these days and diverse paths on their backbone trunks. Network redundancy is a simple matter of planning, and nothing revolutionary.

3). Actually, it's called peering. InterNap has to pay for half of these peers with other Tier 2 and smaller-scale Tier 1 carriers which consider them a peer, and they have to pay for bandwidth from the top ISPs who consider them a customer.

The ISP world is much, much different behind the scenes than it is in the ISP's marketing materials. They in NO way portray a truthful picture of the workings of the Internet backbone.

....and what would they be using Linux for? Routers? I sure hope not. Certainly not switches. How would the desktop machine they use in their NOC or as a statistics monitor affect their backbone performance in any way?

This is close to an idea I had. Place web caches/proxies close to major ISPs in the network and serve content out of them instead. The Netscape and MSN homepages must be the most heavily hit pages; why not use local caches that update every 2 minutes, co-located at major ISPs like Earthlink? More advanced caches could be used for dynamic content like eBay and the various stock services. The end result is faster response time (less traffic and less distance traveled, though it's not like anyone will notice the 50 ms saved) and a lower network load.
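The 2-minute cache described above is essentially a TTL cache in front of an origin fetch; a minimal sketch in Python, with the `fetch` callable standing in for a hypothetical HTTP GET to the origin server:

```python
import time

class TTLCache:
    """Serve cached copies of hot pages, refreshing after ttl seconds."""

    def __init__(self, fetch, ttl=120.0):
        self.fetch = fetch          # origin fetcher, e.g. an HTTP GET (assumption)
        self.ttl = ttl              # 120 s = the "update every 2 minutes" idea
        self.store = {}             # url -> (expiry_time, body)

    def get(self, url, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(url)
        if entry is None or now >= entry[0]:
            body = self.fetch(url)                  # miss or stale: hit the origin
            self.store[url] = (now + self.ttl, body)
            return body
        return entry[1]             # fresh: serve locally, no backbone round trip
```

Every hit served out of `store` is a request that never crosses the backbone, which is where the network-load saving comes from.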

I've seen work on solving NP-complete problems in polynomial time, but it is at the very basic stages. The best I've seen is, in the worst case, cubic time, but it still can't be proven. (That, and most of the NP theory behind the cubic solution is heavily beyond my understanding.)

What you really have to look at is that while the computer is solving this shortest path, it is not loading the page. It has to find the path before it can even load the first graphic or bit of text. And while it is not loading that web page, the user is sitting there waiting. Maybe on a 1 GHz machine it takes a lot less time than on my 'old' 166 MHz, but depending on how the algorithm is coded, you could still have a lot of time where the browser is just sitting there, apparently doing nothing (at least from the user's standpoint).

And I think I can speak for a large chunk of the online populace when I say I find that waiting for a web page to load is one of the more boring things I can think of doing.

I took two tours with them a while back and had the process explained to me...

They buy pipes from anyone carrying more than 1% of the global routing table. They put all of these pipes into a PNAP at each location, and they provide full redundancy on all of the links and equipment.

They pull in all of the routes, shoot them to a Linux box that massages the routing tables so that if a customer packet is destined for Alter.net, it will only travel down Alter.net's network, thus bypassing clogged peering edge routers. It doesn't rely on AS-PATH decisions at that point.
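A toy model of that "only travel down Alter.net's network" behavior, with made-up AS numbers (this is a guess at the mechanics, not InterNap's actual algorithm): among the candidate paths for a prefix, give a higher local-pref to the pipe whose provider originates the prefix itself, ahead of BGP's usual AS_PATH-length comparison:

```python
def pick_provider(routes, dest_origin_as):
    """routes maps a provider's AS number to its AS_PATH for the prefix
    (origin AS last). Prefer the pipe whose provider originates the prefix
    itself (local-pref 200 vs. 100); tie-break on AS_PATH length, mimicking
    the order of the BGP decision process."""
    def key(provider_as):
        local_pref = 200 if provider_as == dest_origin_as else 100
        return (-local_pref, len(routes[provider_as]))
    return min(routes, key=key)

# Invented AS numbers: the destination prefix is originated by AS 64500.
routes = {
    64500: [64500],                  # the destination's own backbone
    64501: [64501, 64500],           # via a (possibly clogged) peering edge
    64502: [64502, 64510, 64500],    # a longer transit path
}
```

With the direct pipe present, traffic goes straight onto the destination's backbone and never touches a peering edge router; remove it and the selection falls back to the shortest remaining AS_PATH.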

The edge peering routers are, traditionally, the most clogged/slow links on a provider's network. Think about it: are you going to spend more money on the core routers that support YOUR network, or on routers that pass global Internet traffic to other networks? BBN Planet was having these problems at some of their peering routers just this week, in fact. It was all broken. :)

It is really quite an original idea. Very expensive to maintain all of the different links to all of the providers, but they only accept DS3 customers and higher, and you do get VERY good performance.

To me, this is the most interesting point in the whole article: "The money issue is important because to date, no company is turning a profit at providing backbone connections. And InterNap itself is still losing money -- the publicly traded company reported losses of $43.4 million on revenues of $22.5 million for the first six months of this year."

As the backbone providers ratchet rates up to alleviate this red ink, InterNap will start to make more money as demand rises for their colo service (since this means less traffic over the backbones). But I'm most curious how this will play out when a business realizes that 90% of its customers are all on one node: why should it pay for backbone traffic at all if it can serve most of its customers without it?

1) They don't "lay connections" between web sites. They pay for peering with large backbone providers.

2) They do some really funky stuff to BGP to make things more efficient and redundant. But it's a secret;)

3) "Forcing people to pay"? Uhh, it's called selling something, and you study it in econ.

Why is it that every gee-whiz article these days has 50 people sign on immediately and say "whoopdeedoo"? I understand being a jaded technologist, but sometimes someone does something cool, and not EVERYONE on the planet knows about it. Don't dig it, don't read techie news sites...

This is nothing new. InterNAP has been doing this for years now. Which is why they're so goddamn expensive. But I must say that they offer the *best* data pipes you can possibly get. They peer with 8-9 of the largest providers in each PNAP and your traffic goes to the provider that has the best route. They do an exhaustive systematic search through the global BGP routing table and pick and choose their routes individually. I would assume their route-maps are freaking gigantic. Their technology is unfortunately not real time... (yet.;) Anyone who knows how BGP works can figure out how they do this.. it seems rather simple (I deal with them on a regular basis) but they came up with it first.
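That "exhaustive systematic search" plausibly boils down to something like the following: offline measurements per (provider, prefix), distilled into a huge per-prefix preference table, i.e. the "freaking gigantic" route-maps. A hedged sketch with invented provider names and metrics:

```python
def build_route_map(measurements):
    """measurements maps (provider, prefix) -> (loss_pct, latency_ms).
    For each prefix, pick the provider with the least loss, breaking ties
    on latency (tuple comparison handles both at once)."""
    best = {}   # prefix -> (metric, provider)
    for (provider, prefix), metric in measurements.items():
        if prefix not in best or metric < best[prefix][0]:
            best[prefix] = (metric, provider)
    return {prefix: provider for prefix, (metric, provider) in best.items()}
```

Because the table is rebuilt from batch measurements rather than live probes, the result is exactly what the poster describes: very good routes, but not real-time ones.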