Along with the new infrastructure, how about a new browser and a different protocol? Seems like HTTP and webpages as we know them could be made so much better if you had an HTML-type language that was more of an application toolkit/RAD deal. So I could write a GUI that is as nice as a local one and doesn't have to be installed on your computer... I guess this is what XUL is supposed to do.....

I don't think there is that much demand for it. In ancient times the Internet served purely academic purposes and was used for sharing information (in post-military, pre-commercialism days). I believe the same is true of Internet2 now.

I don't need no fancy GUI in a webpage, I don't need fancy movies, I don't need shock-the-monkey etc.

If it can shoot down the idea of leaving WIMP, it can shoot down this idea: what works doesn't need to be changed. I like being able to make a small page with an effort proportional to the amount of formatting. I know when you're trying to design a site that looks like it belongs on an interactive CD-ROM it's not so fun, but if you want to do that there are other ways. As for HTTP, I wouldn't even think of replacing it. It's a simple text protocol that gives easy anonymous access to files. Maybe if you're using it for remote administration (as in SSH, not webmin) it's not so good, but using the wrong tool never justifies changing or eliminating that tool.

IMHO, i'd rather have service that is stable where the provider doesn't play any tricks (ahem, Cox@home blocking port 80, ahem), etc.

I kinda view this the same way as i view the "3G" cell phones. I don't care if Joe the businessman can video conference, i just want to have decent voice quality. Same goes for the net... i don't care if Joe the net surfer can browse his pr0n ten times faster, i just want it to work well!

I'm ignorant on this subject, but I've read the article and it seems like this Internet2 thing is just around the corner, so if I wanted to put some money in and hopefully make a profit down the line, what companies could I invest in? Does anybody know?

It's not just around the corner; it's been around for a few years now. It's purely research, though, as it says in the article. You can invest all you want, but most of the benefit of the network isn't going to be tangible. I'm sure Cisco, IBM and the other companies mentioned will get to try out some next-generation technology on this that will eventually make it to the regular internet, but commercial research isn't the point. In fact, anything "commercial" is quite contrary to the point.

As an aside, we had a connection to I2 at the school I graduated from this past spring, and from talking with professors, some of the projects on it revolved around distributed computing, as well as some work on new network protocols.

OK, let me rephrase. Internet2 speeds and bandwidth being applied to commercial and every day internet is an idea that is just around the corner. As it is, Internet2 is only used for education and research, but it, or something like it, will hopefully be hitting the every day person sometime soon, so how can we jump on the bandwagon and rake in some bucks?

well basically any company not leaving out the "dot-com" part when they answer the phone.

i would also try to get ownership of some leet names under the new TLDs .com2, .net2, .org2, .biz2, .info2 and .name2 that you... well, first have to get ICANN (or some local court in Alaska) to approve... but then you can cash in big time when you sell the popular names to real companies!!!

It was interesting to read in "The Lexus and the Olive Tree" how Europe fumbled with computer manufacturing to the point that the industry is non-existent. Hopefully the latest round of deregulation will help their telecom companies compete. Whether or not they can efficiently transfer technology from the lab to the market as well as the US or Asia remains to be seen.

Having looked into this a fair bit... a lot of these comments are irrelevant.

Commercialism: The Internet(1) is plenty good for this. Internet2 is for research. In fact, "Internet2" isn't a network but a group of people, much like "open source" isn't a company but a movement. Abilene is the network that Internet2 members connect to, and it carries pretty much only research traffic. You can connect to the internet (commodity traffic) over your I2/Abilene link, but that is not routed over the Abilene backbone; you have to pay an ISP for that as well.

Controlled? Not really. Once you are in, you can do just about whatever you want with other orgs that are in as well.

Get this in your room? Not likely, unless you can get through the application process (stating that you are a research university, nonprofit research group, or a corporation doing research, with at least $25,000 to spare for dues). You then have to buy an OC3 to a local "gigapop", who will connect you (and then you'll probably pay more if you want to use any of this bandwidth to get to the Internet(1)). I would skip all the paperwork and extra fees and just get the OC3 if I were considering this. I mean, $2000/month is better than $4000/month and a lot of red tape.

What they are getting in return? They are doing research in middleware, networks, video, you name it...that is on the leading edge. Enough research in these areas and you're going to come up with some pretty neat ideas: product ideas, service ideas, etc.

Ya, I have Internet2 in my room. Well, my dorm room. Here at the University of Wyoming we have been a part of Internet2 for a while. We connect to our local gigapop [frgp.net], which then connects us to Abilene. I am fortunate enough to work for the UW network group and get to listen in on changes and such. Quite exciting.

IMO the only way to get v6 adopted is (was?) to build a new internet. One of those chicken-and-egg problems: no incentive to upgrade the routers because the endpoints don't use it yet, and endpoints don't use it because the routers can't route it.

Mainstream IPv6 use will only come once we have either a new internet or the one killer application that can't work without IPv6 features. Many people have been saying that mobile networks, and in particular 3G phone networks, would be this killer application, but i'm not so sure....

The problems arise because it is still easier to design a new hack for IPv4 to overcome its limitations than it is to move to IPv6. Until this becomes mandatory, i feel that people will keep applying the band-aid approach, which will get us nowhere.

As a researcher in QoS protocols and network services i have to say that the idea of starting again, without the requirement of having to support the legacy internet is probably the best way to go. It allows research into next-gen network technologies to continue and, when the time is right, the rest of the commercial Net can join in.

I believe Internet2 is being used to test a core DiffServ implementation, where QoS is built in from the start. I'm not sure about the state of IPv6 in Internet2 or Geant, but if they are not using these technologies then i can only mourn the loss of the potential to move us beyond today's Internet.

It supports both IPv4 (for the huge existing installed base) and IPv6. Having the network do IPv6 is quite far from actually being able to use it meaningfully. While standard Unix stuff, especially under BSD, mostly supports IPv6 now, how about a file sharing application? A hardware video conferencing box?

The right question to ask oneself: Does your computer support IPv6 today? Internet2 does.
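For what it's worth, the OS-level half of that question is quick to probe. A minimal Python sketch (this only checks the local stack, not whether any application or your ISP can actually carry IPv6 traffic):

```python
import socket

# Was the interpreter/OS built with IPv6 support at all?
print("IPv6 compiled in:", socket.has_ipv6)

# Parsing an IPv6 literal exercises the address family without
# touching the network; a 128-bit address packs to 16 bytes.
packed = socket.inet_pton(socket.AF_INET6, "::1")
print("::1 packs to", len(packed), "bytes")
```

A True here just means the stack is ready; it says nothing about whether your file sharing app or video box will ever speak it.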

I just don't get this whole internet 2 thing. You know, if we kicked all of those annoying users off of the internet 1, then we would have a superfast and reasonably stable internet to use. What's the big deal about making a whole new internet if you can't get to slashdot?

You may want to learn what Internet2 actually is and why it exists. As stated several times by Stas Shalunov in these threads, Internet2 is dedicated to advancing core Internet technologies, while providing optimized, unsaturated bandwidth for demanding educational and research applications. The engineers of Internet2 are researchers, and one day those same "annoying users" you talk about will be using technologies developed at I2.

(warning to moderators: slightly off topic, but please don't slap me with negative karma. I got the last first post [slashdot.org] on /. before the switchover to the new comment counting system, so I'm special hehe)

My college, Johnson and Wales University (Providence, RI campus), currently has several 1.4Mbit lines going into the separate dorms. Each line is shared by like 350-400 people. Now figure that all of those people are running Morpheus and uploading/downloading like mad fools. Right now I'm only getting 3k/sec through websites, with some bursts to 20k/sec. This is just horrible considering I haven't been near dialup or any slow connection for almost 2 years.

My friend who goes to Drexel has 160mbit I2. Downloading going less than 700k/sec is "too slow" for him.

The rumor on my campus (a credible rumor) is that we're going to get a 45Mbit I2 connection. I know all these Morpheus users would fill it up, but I think there would be significant improvement over the current setup. I'd be very happy with any improvement at the moment...

This is slightly off-topic as well, but just responding to the above post.

I have a friend who goes to Cal Poly San Luis Obispo in CA. He was telling me that so many people are using Morpheus and Kazaa that just like above, the internet connections are TERRIBLY slow.

Apparently what Cal Poly SLO has done to combat this is cut their bandwidth in half. I guess the story goes that someone donated their internet connection (or the money for it) under the terms that Cal Poly could not BLOCK any website. So to get around this file sharing craze, they just go and cut their available bandwidth in half. This makes it even slower/impossible for people to transfer files, so eventually people just give up.

I personally am not sure how effective this is, but my friend was telling me that many people he knows have stopped using Morpheus for exactly this reason: it is so slow.

Johnson and Wales actually does have access to I2. JWU is hooked into Oshean [oshean.org], which links Rhode Island's higher education and related institutions. you can find traffic statistics for various schools' gateways at their NOC Page [oshean.org]. I believe Oshean partners with Qwest to provide Internet1 access to all of us.

Oshean, along with educational institutions from several other New England states, links to Northern Crossroads [nox.org]. NoX colocates at Qwest's PoP in Boston. It is there that they link to Abilene and I2 (an OC-12).

i'm not exactly sure what URI's link to Oshean runs at... the traffic graphs seem to suggest that we have quite a fat pipe :) i don't live in a dorm so i don't know how the net access is there, but i'd imagine it gets pretty saturated at night. i can sorta tell because at about 4pm on weekdays, whatever mp3 stream i'm listening to starts to lag. i'm in an office in Tyler Hall, which is where most of the networking stuff is (i think)...

anyway, i've been wondering... if i play Counter-Strike from my office on a server at Georgia Tech (another I2 school), does it go over Abilene, or does certain traffic get blocked because it's unlikely to be "research-related"? :)

Heh, do you work for J&W or something? Or just know how our system works?

I was told a while back that we are hooked up to Oshean, but nothing detailed. I would love nothing more than a faster connection than what we currently have. At night here at J&W it speeds up (2am-10am is the drunk passed out time on weekends).

And from what I gather, the network here at our school is free for any type of use, entertainment (Tribes2, yeah!) or research (Thank you MLA formatting webpages).

I'm just about to explode from the fact that I'm downloading at the speed of dialup...

heh, actually i work at and attend URI. i'm the systems manager for the computer science department. i learned about Oshean, NoX and Abilene over the course of a few days after one of those campus-wide outages during the summer... pretty interesting stuff.

my pondering at the end of my comment mainly had to do with whether I2 itself had any filtering. if i traceroute from my office to, say, www.gatech.edu, i can see that it goes over I2. but i have no way of knowing if the packets that make up my CounterStrike (Quake3/Tribes2/etc) game take that same route. although, now that i think about it, UNIX traceroute does use high-port UDP connections just like most modern network-enabled games. hmm...

as for dialup, well, i'm stuck with it at home. good thing too, otherwise i'd still be spending 4 hours a day on IRC... :)

Anyone else frightened by the positive and negative potential of the "Grid" referred to towards the bottom of the article? At once, the notion of distributed computing at this level overwhelms the mind in terms of the benefit to research and development. But it also seems to tap into a familiar theme in sci-fi: the interweaving of all the world's computers into some sort of incomprehensible entity that eventually destroys humankind for whatever reason (inefficiency, malaise, etc.).

Another thing: this article doesn't get into any of the specifics of the networking and transport models of Internet2. I have looked on the Internet2 website to no avail for answers to questions of compatibility with the current internet, readiness for IPv6, and built-in security features. Has anyone hacked Internet2? It might end up being the fastest way to get your MS Passport stolen.

You've got to wonder how far this is going to get without commercial support. If the thing remains pure, then that's great-- but there's only so far it can go.

The internet didn't really pick up until businesses got the idea that they could rape it for all it's worth. Of course, this is what left the researchers feeling like they needed something new in 1996, but it's also probably the reason that it's as widespread as it is today. You can't have a revolution these days if somebody's not coughing up the cash.

Not that using this thing to get a nonexistent ping in Quake 3 isn't a bit of a shame. But it's a bit optimistic to think that the future for Internet2 is as rosy as the article implies, I think.

At the very least, getting some corporations involved in something other than a research capacity would allow them to supply some advertising muscle-- I mean, you'd think somebody in the past five years would have been able to come up with a snappier title than "Internet2."

You've got to wonder how far this is going to get without commercial support. If the thing remains pure, then that's great-- but there's only so far it can go.

Ahem, the whole point is that the commercial shit has clogged and ruined the first net. The perversity is that the backbones are NOT being used. The bandwidth has been doled out to a few select companies who are busy trying to control it and rape the public for access. The article claims:
Geant and Internet2 are not separate from the physical network of fibre optic cables and telephone lines that serve today's commercial internet... What both networks do is buy connectivity on the open market from the telecommunications network operators, and then earmark it for research purposes only. They don't lay any new cables and they don't dig up the road.

While I have little faith that it will happen, some government regulation along the lines of electric utilities would be useful here. A guaranteed, but modest, rate of return on investment and regulation in the public interest would be a much nicer and better way to fund and develop the net than the current Rapist Cartel. $160/month for DSL? You heard it here. Give me a freaking break! As things are, the owners of the new media are striving to control it and make it more like cable TV. Adverts and overcharging all parties is not the best way to foster the new media. So what does your piece of the bandwidth look like?

Even companies such as McDonald's, Johnson & Johnson and Ford are keenly watching developments on the new networks. The fast food chain has already shown interest in the tele-immersion experiments being run on Internet2.

Would you like fries with that teleconference? Yeah right. My current ISP won't let me host email, ftp or html. I doubt that they will let me host my own video.

The title Article In The Guardian On Internet2 is erroneous. The article [guardian.co.uk] is actually about Geant [dante.net], "the new pan-European network serving more than 3,000 of the continent's academic and research institutions". Basically, Europe's version of I2.

>Even companies such as McDonald's, Johnson & Johnson and Ford are keenly
>watching developments on the new networks. The fast food chain has already shown interest in
>the tele-immersion experiments being run on Internet2.
>The company envisioned fitting tele-immersion cubicles in its restaurants so people away from
>home - even in separate countries - could have dinner with their family

<vocoded voice>please can I have the crappy, easy-to-choke-on wind-up toy, mommy, please</vocoded voice>

I'm currently a student at the University of Maryland, where we have Internet2 connectivity. Stolen from OIT's network throughput page:

A 75Mbps connection to Qwest Communications for commodity internet traffic.

A Gigabit Ethernet connection to MAX (Mid-Atlantic Crossroads), a consortium of local research institutions, through which we have high-speed connectivity to those institutions as well as the NSF vBNS network and the Internet2 Abilene Network.

An ATM connection to UMATS, the intercampus network of the University System of Maryland for connectivity with other USM schools.

Internet2 gives me downloads very close to the theoretical max of the 10-megabit connection to my room, which is great for being an ultra-low-ping bastard in games. With the gigabit connection, the ping times to basically any location on Internet2 are the same as if it were on your LAN.

To answer the IPv4 vs. IPv6 question: it uses the same IPv4 that the rest of the world uses; it just appears to be more infrastructure to speed things up.

OK folks. I worked in URI's networking department and I'll tell you what I2 is. I2 is a high-speed connection to other universities with I2; it is NOT the 'future', it is NOT anything special. Packets originating inside our school hit the router; if the destination is on I2, the packets go through the I2 pipe, and if not they hit the commercial router. You can get stats on URI's Internet connections at http://zeppole.uri.edu/mrtg/, where you can see that I2 is not heavily used because most people want stuff off the .com TLD.
If I want to download an .iso from redhat.com it goes really slowly; if I get it from rutgers.edu it flies. Nothing 'revolutionary', just the Internet as it was meant to be.
Now if I could only connect URI to the high school hosted in our building (the high school is ten feet above me, but 12 internet hops away!).
If network folks interconnected more, the world would be a lot better.

There is supposedly going to be some sort of dance production to promote Internet2 at the SuperComputing 2001 conference [sc2001.org] in Denver. The performance is going to be done entirely on Internet2, with choreographers, dancers, and a symphony from various locations around the world.

There's an article here [ufl.edu]. The project site is here [ufl.edu].

I didn't see any mention of improving latencies, so I guess they are helping the multimedia and file sharing internet applications. Stuart Cheshire's article It's the Latency, Stupid [stanford.edu] gives a good explanation of why more bandwidth isn't the only thing the industry should quote when selling access.

Latency is a prime concern of the I2 network architects. One of the cooler aspects of the whole I2 thing in hardware is the use of packet switching schemes that resemble circuit switching schemes. Instead of a packet just flying around the network trying to get to the other end by whatever path a router deems viable, I2 routers route all of the data between two sources along the same path. Packet switching was designed to be redundant for lossy networks, not fast for high-availability networks. Routes can easily be recalculated, but having data all go the same way lowers latency dramatically.

I remember reading a bit ago about a realtime video conference between a professor and his class in the US and a professor and his class in Japan. Uncompressed realtime video doesn't work real well on the more traditional packet switched networks of the current internet.

An interesting aspect of this circuit-switching-like packet switching is that specific pathways between a destination and an origin can be planned out ahead of time and even sold. Instead of merely selling X megabits (or maybe gigabits) of bandwidth, a company could sell a direct path or high-priority path between two network nodes. Junior sending a video of his skateboard competition to grandma wouldn't interfere with the realtime traffic of a big Fortune 50 paying beaucoup dollars for their network while Junior and Grandma are paying $50 for their cable modems.

I still don't really see the point. OK, we can deliver 40Gb of bandwidth. Why do we want it?

All of the scenarios in the article seemed dreamed up to show that such a network might eventually be useful. They mention virtual teaching, but as I understand it, the thorniest problems in distance learning have nothing to do with network speed.

I used to work at an education institution which was connected to I2. The network was very fast. I remember downloading entire Mandrake ISO distributions from other I2 sites in around 5 minutes. No problem there.

However, I2 isn't just supposed to set FTP speed records. Connecting educational institutions was designed to advance research in high-speed networking and practical applications. Some mentioned were interactive video applications, multicast HDTV and the like. It will be great when we start to see these apps, but unfortunately this will be some time coming.

While I2 now provides the theoretical playground for researchers and some developers to start generating next-generation applications and protocols, those applications and protocols will most likely depend on the bandwidth of I2. Right now there are around 200 universities on I2. However, the technology they produce will stay theoretical until thousands of companies gain access, and those companies will have to wait until millions of homes are wired before they can ship their products.

I see I2 as being a lot like IPv6: a needed improvement, and a good thing, but something that will take time to permeate into our daily lives. Here's hoping it doesn't actually take that long to hit the market.

However I2 isn't just supposed to set FTP speed records. Connecting educational institutions was designed to advance research in high speed network and practical applications.

Actually, FTP (well, bbcp and the like, to be precise) is a very important application for the high-energy physics community. Wait till you have a petabyte database; you'll appreciate a 100Mb/s transmission.

This is not to say that new kinds of applications aren't important. And significant progress is being made here. Remote musician collaboration (reduces travel), remote control of heavy lifting equipment (reduces injuries while training), etc., have been demonstrated to work. Deploying stuff is harder, because it's driven by demand and demand is determined by users' expectations.

I'm not a tech guru... but why can't we make a new "net" wirelessly, using 802.11b connectors that share information through a Gnutella-like interface with other computers around us? We could have a cloud of computers instead of a net: all the computers within range of a connection, each sharing 10 percent of its resources with all the others, which in turn share with the ones they connect to... this could actually replace the internet.

The 802.11 protocol isn't robust enough to handle a huge network cloud like you're thinking of. It uses many techniques from the wired 802.x schemes to detect nodes on the network; these work fine for wired networks, where it's as simple as detecting a completed circuit, but for wireless networks a lot of overhead is added. Besides the problems in 802.11 for large numbers of nodes, you'd also need to come up with an efficient dynamic routing scheme. Packets can't just propagate over the entire network with no points that sort of direct their flow. If the wired internet worked like that, data would never reach its destination. Wireless clouds are inefficient and messy and not very scalable at all. The bandwidth of 802.11 can't ever exceed B log2(1 + SNR). Wired networks have the same restriction but can just add wires instead of refining their encoding schemes.
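That Shannon limit is easy to evaluate numerically. A hedged sketch: the 22 MHz figure is in the ballpark of an 802.11b channel width, but the SNR values below are purely illustrative assumptions, not measured radio figures:

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon-Hartley capacity ceiling, C = B * log2(1 + SNR), in bits/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

for snr_db in (10, 20, 30):
    snr = 10 ** (snr_db / 10)   # convert dB to a linear power ratio
    c = shannon_capacity(22e6, snr)
    print(f"SNR {snr_db:2d} dB -> ceiling {c / 1e6:6.1f} Mbit/s")
```

No encoding scheme can beat that ceiling for a given band and noise floor, which is the parent's point: a wired plant sidesteps the limit by pulling more strands rather than squeezing more bits per hertz.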

Seti@home was an interesting project because it provided an excellent model for ultra-high-latency supercomputing, i.e. you could send a small amount of data to a computer, let it churn for a while and get the result back much later. Most supercomputing problems that are distributed, though, require much more communication between nodes. What programming frameworks/languages are being used now in low-latency applications for distributed computing?
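The SETI@home model works precisely because its work units are independent, so node-to-node communication is zero. A toy sketch of that "embarrassingly parallel" pattern, using Python's standard library process pool as a stand-in for real distributed nodes (frameworks in the MPI/PVM family are what handle the communication-heavy cases):

```python
from concurrent.futures import ProcessPoolExecutor

def crunch(work_unit):
    """Stand-in for an expensive computation needing no other node's data."""
    return sum(x * x for x in work_unit)

if __name__ == "__main__":
    data = list(range(1_000))
    # Carve the problem into independent chunks, farm them out, gather results.
    chunks = [data[i:i + 100] for i in range(0, len(data), 100)]
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(crunch, chunks))
    print("total:", sum(results))
```

The moment `crunch` needs intermediate results from a neighboring chunk, this dispatch-and-gather shape stops working and inter-node latency starts to dominate, which is exactly the distinction the post is drawing.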

The real question is, will we see IPv6 deployed on Internet2, and will the 'little guy' be able to participate in the core structure of the internet2 (i.e. be assigned routable IP space)?

The internet was supposedly designed to route around points of failure; however, this is now only really true of the 'core' of the internet. As it grows, it becomes less and less robust, since routable IPs are no longer available to people who need less than (or can't afford to pay for) a /19.

Without being able to advertise routes, you are at the mercy of your (necessarily) sole inbound provider, and this is the way that most corporate and government interests would like it to stay.

What Internet2 needs is a new routing protocol, or at least equipment with a decent capacity for route tables.

What is the point of IPv6 (more address space) if you can't route what you currently have without hacks like CIDR?

There is absolutely no point in assigning every device an IP address if the majority of those devices are not accessible directly, or cannot be reached when failures further up the hierarchy cannot be routed around, rendering them unreachable. It's just stupid.

Only with the provision to allow new players to 'compete' on equal technological footing with 'the big boys' will we see meaningful growth in internet technologies past the next 10 years or so.

The internet was supposedly designed to route around points of failure, however this is now only really true of the 'core' of the internet - As it grows, it becomes less and less robust, since routable IPs are no longer available to people who need less than (or can't afford to pay for) a /19.

BGP multihoming with provider independent address space is not the only way to take advantage of the resilience of IP. For one example in one situation, a customer can multihome (with physically diverse leased lines of course, right?) to two different PoPs of the same ISP, and route their PA space appropriately.

The advantage of this is that it doesn't require every core internet router to be aware of the multiple paths to your network; I have multiple paths to your ISP already, and they handle the resilience from there. This speeds convergence time for you (you don't want to wait for routers in US, Europe and Australia to all catch up with your flapping lines, do you?) and reduces the cost to me of maintaining a full routing table.

Address space is allocated based on how many addresses you need. Routability of small allocations isn't some grand conspiracy; it's a decision resting with each network based on just how costly it is to maintain a copy of the global routing table.

And while increasing the available address space is definitely a good thing, extra address space, as you say, won't solve that particular problem. However if the rate of growth of BGP multihomed networks is greater than Moore's law, neither will simply slapping extra RAM and CPU into backbone routers. Eventually the routing table will outstrip affordable router capacity and you will reach the same problem from the other direction.
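That "same problem from the other direction" can be sketched with toy numbers. Every figure below is an assumption for illustration, not a measured growth rate: if the table compounds even slightly faster than router capacity, any fixed headroom is eventually consumed.

```python
# Assumed starting point: 100k routes in the table, routers with 10x headroom.
table = 100_000          # routes carried today (assumed)
capacity = 1_000_000     # routes an affordable router can hold (assumed)
TABLE_GROWTH = 1.6       # assumed 60%/year growth in BGP-multihomed networks
CAP_GROWTH = 1.5         # assumed 50%/year capacity growth (Moore-ish)

years = 0
while table < capacity:
    table *= TABLE_GROWTH
    capacity *= CAP_GROWTH
    years += 1
print(f"headroom exhausted after ~{years} years")
```

The exact crossover depends entirely on the assumed rates; the point is only that a constant multiplicative gap in growth rates beats any constant headroom.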

IPv6 has some neat tricks that might actually reduce the dependence on BGP multihoming. In the meantime, consider the hassle of BGP multihoming for you, and see if there isn't actually another solution that fulfils your requirements better - there might not be, but you might be surprised.

> Address space is allocated based on how many
> addresses you need. Routability of small
> allocations isn't some grand conspiracy; it's a
> decision resting with each network based on just
> how costly it is to maintain a copy of the global
> routing table.

I realise there are extremely good reasons for the adoption of CIDR and the reluctance to hand out small packets of IP space, and to some degree the problem is fundamental - how do you maintain independent centers of global awareness without duplicating the global table?

What I would really like to know is whether there is a better way of handling global routing than BGP, especially with regard to maintaining redundant paths at the edge of the network.

CA*net is Canada's (fairly slow) commercial Internet backbone. What you're referring to is CA*net3, which replaced CA*net2 as the national research/educational backbone in late 1999. At the time, it was the fastest advanced network in the world.

I worked for the National Research Council [imb.nrc.ca] at the time, in Halifax, and we were actually one of the first in Canada to be on CA*net3. My boss there was actually one of the big guys involved with it, so I got to see some of the early network maps, etc. It was quite interesting.

I believe that most of CA*net3 is still dark fiber, though. There's a lot connected to it, but it has the potential for much, much more.

Ok, first of all: Internet2 is slow. Really slow. They are hoping for 10Gigabit in 2003? What? I know people with home networks that fast. That really is bottom of the line when it comes to backhaul networks. Most major telecommunication companies now have backbones in the Tb range because of DWDM technology. I know Time-Warner has 80Tb lit, and will light another 80Tb soon. You used to be able to get around 10Gigabit on a single fiber strand; now it's about 16x that with DWDM technology from Nortel, Lucent, or a host of other companies. That's 160Gigabit on a single strand of fiber. This network really is not fast in any way whatsoever. I have used computers at schools on I2, and it really isn't that fast. In fact, it's so bogged down it's slower than 10Mbit for the most part.
Second, I wouldn't exactly say Europe is lagging behind us in bandwidth. In fact, as far as bandwidth between colleges goes, they kick our ass, and have for many, many years. Have you ever transferred between Euro schools? We are talking about 8-9 megs a second, consistently. I believe there is a network (not sure if it's Janet or what it's called) running at 80Gigabit between a bunch of schools. The bandwidth at utwente.nl is amazing; that's why all the top software piracy sites are located there. US colleges' internet speeds are pathetic, in the range of 1/100th of that of Euro schools. Most Euro schools I have run into have 100Mbit to the dorms, and have for years, while most US schools are still at 10Mbit (I know the University of Washington, where I go, is, and I can name dozens of other major universities that are). In fact, 1Gigabit in dorms is really starting to become popular in Euro schools. Like I said, utwente.nl has quite a few software piracy sites running at 1Gigabit. We can't even hope to catch up until we upgrade to at least 100Mbit in most of our schools. Then we will still be years behind.

People are getting it all wrong. I2 isn't the "next" or "new" Internet, and it wasn't created for brand new applications or new "mindsets" for doing things because it's so blistering fast. It was created because schools can't afford commercial pipes. It's less expensive to connect schools together directly than to connect them to a national, commercial provider at these high speeds.

I2 is primarily fast because it isn't used all that much. You don't have thousands of AOL dialups clogging up the network, @Home/Time Warner boxen downloading pirated movies, or the psychic friends network using it for their VoIP. That all eats bandwidth. Instead, you have the occasional geek downloading a Slackware distribution, or browsing the Computer Science department of another university. If suddenly all the schools allowed traffic over their commercial pipes to access their I2 routers, I'm sure the network would slow down once it became accessible to the public - along with all the abuses and bandwidth-eating applications.

I guess the best analogy would be comparing it to an underground tunnel between schools only for academic use, versus a giant highway for public use. The underground tunnel doesn't have nearly the capacity of the massive highway, but is much faster. So just because something is fast doesn't mean it's on the edge of technology or is, in fact, anything special.

I have used I2 and it is quite fast, but what can you get on it? The latest well-hyperlinked personal page of a student at a nearby school? This makes it lose much of the reason why the real Internet is so popular -- it's a space where you can find anything. I2 defeats this purpose by limiting what the network can connect to, and thus its usefulness. It may be useful for testing new applications, like an HDTV stream, but since you're not doing this on a public network to begin with, its applications are limited to your own, highly restrictive network. You can't say you've done something new when all you've done is create an exclusive network that doesn't address the real problems of networks anyway - like last-mile access and exponential bandwidth increases. IMO, I2 is a way for schools to have a fast link to each other without paying the huge costs associated with a 1Gb link to the national backbone. That's all it is, and that's all it probably will be.