Posted by Hemos on Thursday June 17, 2004 @04:19PM
from the learn-at-the-feet dept.

Lisa Langsdorf writes "Thought you might be interested in this interview between Nicholas Negroponte and BusinessWeek Online's Steven Baker.
In it, asked which new products or services are likely to make the biggest splash, Negroponte picks peer-to-peer:
Peer-to-peer is key. I mean that in every form conceivable: cell phones without towers, sharing leftover food, bartering, etc. Furthermore, you will see micro-wireless networks, where everyday devices become routers of messages that have nothing to do with themselves.
Nature is pretty good at networks, self-organizing systems. By contrast, social systems are top-down and hierarchical, from which we draw the basic assumption that organization and order can only come from centralism.
"

Saying that organization and order can only come from centralism sounds a little, well, ideologically loaded coming from the brother of John Negroponte [disinfopedia.org], the former US Ambassador to Honduras who seems to have formed the opinion that the best way to establish order in fractious Latin countries was to tacitly allow strong men and dictators to terrorise, torture and kill the populace.

And now John Negroponte is Bush's choice for next Ambassador to Iraq, where it seems the current US administration obviously feels a little torture and a few disappeared people is one way to restore "order". How convenient!

Funny... but misses the point. When you hit a web page from home, all the computers (routers, proxy servers, etc) that the data passes through have been built, configured and installed for the central purpose of moving data. In that sense they have *everything* to do with routing your web page.

What Negroponte means is that your phone will pass data for other clients like a router does, but it will also be your mobile phone (a helpful, interactive, personal device). So instead of having a fairly strict division between client, server, and message-passing machines, each device will contain the transport functions and also do something individualistic.
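A minimal sketch of that dual role, assuming a naive flooding scheme (all names here are illustrative, not any real protocol): each device keeps a personal inbox but also forwards messages that have nothing to do with itself.

```python
# Hypothetical sketch: a device that is both a personal client and a
# relay in an ad-hoc mesh. Messages flood hop-by-hop with a TTL and a
# duplicate-suppression set; nothing here models radio or real routing.
class Device:
    def __init__(self, name):
        self.name = name
        self.neighbors = []   # devices currently in radio range
        self.inbox = []       # payloads addressed to this device
        self.seen = set()     # message ids already handled (loop guard)

    def receive(self, msg_id, dest, payload, ttl):
        if msg_id in self.seen or ttl <= 0:
            return                        # drop duplicates and expired msgs
        self.seen.add(msg_id)
        if dest == self.name:
            self.inbox.append(payload)    # message is for us: deliver it
        else:
            for n in self.neighbors:      # not for us: act as a router
                n.receive(msg_id, dest, payload, ttl - 1)

# Three phones in a line, a <-> b <-> c; a cannot reach c directly,
# so b's phone relays a message it has nothing to do with.
a, b, c = Device("a"), Device("b"), Device("c")
a.neighbors = [b]; b.neighbors = [a, c]; c.neighbors = [b]
a.receive("m1", "c", "hello", ttl=3)      # inject at a, flooded toward c
print(c.inbox)                            # -> ['hello']
```

The `seen` set is what keeps the flood from looping forever between neighbors; real mesh protocols replace blind flooding with actual route discovery.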

This architecture, it seems to me, will imply encryption throughout -- somehow, people are more bothered by the idea of their data passing through other individuals' devices (what if they look at it?!) than by sending it through the hands of a few mega-corporations. I would say this is a good thing...

uhhuh "as we all know".. it's a wave of the future that you can transfer data with devices meant for transferring data?

yeah, well, did you read anything about the 'virus'? It was more "hey, it's possible to TRANSFER PROGRAMS WITH BLUETOOTH" than a major concern to anyone -- unless you think it's a major concern that you can transfer a program to your friend if you want to, and your friend can choose to run it if he wants. If the user _wants_ to install something, it doesn't much matter how the program got to him in the first place; the only way to prevent such a thing from spreading would be to take away the user's right to run whatever he wants.

When someone from MIT says peer-to-peer is a good thing, he's talking about peer-to-peer as an architecture. He does not mean "KaZaA 0wnz!! fr33 pr0n = 1337!!!!111oneoneone." People are interested in peer-to-peer for reasons other than file-sharing: these architectures scale well, handle load balancing very well, and have no central point of failure.

Most peer-to-peer research in universities concerns creating better, faster Distributed Hash Tables, or DHTs for short. Typically, for N nodes on an overlay network connected by a DHT, insertions and queries cost O(log N). MIT has one of the best, called Chord [mit.edu]. Some DHTs are fragile: their routing topology can "break" under extreme churn (when a flash of nodes suddenly joins or leaves the network), or when malicious nodes manipulate other nodes' routing tables by creating fake identities (see the Sybil attack [rice.edu]). Chord has been shown to be very resistant to both. Other notables are Kademlia [nyu.edu] from NYU (which is under the hood of eMule), and Pastry [slashdot.org] from Rice (a Microsoft collaboration).
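To make the O(log N) claim concrete, here's a toy Chord-style lookup -- a sketch of the idea, not MIT's implementation, with hand-picked node ids instead of hashed ones. Each node's i-th "finger" points at the first node at or after n + 2^i on the ring, and a lookup greedily jumps to the closest finger preceding the key, roughly halving the remaining distance each hop.

```python
# Toy Chord-style ring: ids live on 0 .. 2**M - 1, the node responsible
# for a key is the key's successor, and greedy finger-jumping finds it
# in O(log N) hops.
M = 6                                        # id space: 0 .. 63
NODES = sorted([1, 8, 14, 21, 32, 38, 42, 48, 51, 56])

def succ_of(k):
    """First node id at or after k on the ring (wrapping around)."""
    k %= 2 ** M
    return next((n for n in NODES if n >= k), NODES[0])

def succ(n):
    return succ_of(n + 1)                    # next node after n

def finger(n, i):
    return succ_of(n + 2 ** i)               # i-th finger table entry

def between(x, a, b):
    """x in the ring interval (a, b]."""
    return a < x <= b if a < b else x > a or x <= b

def between_open(x, a, b):
    """x in the ring interval (a, b)."""
    return a < x < b if a < b else x > a or x < b

def find_successor(start, key):
    n, hops = start, 0
    while not between(key, n, succ(n)):      # until succ(n) owns the key
        for i in reversed(range(M)):         # farthest non-overshooting finger
            if between_open(finger(n, i), n, key):
                n = finger(n, i)
                break
        else:
            n = succ(n)                      # no useful finger: step once
        hops += 1
    return succ(n), hops

print(find_successor(8, 54))  # -> (56, 2): two hops across ten nodes
```

With ten nodes, lookups land in a couple of hops; the point is that doubling the network only adds about one hop.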

MIT has done some pioneering research in DHTs, and they have a lot of great minds on it. I'm writing my own peer-to-peer program (hopefully it will be ready in a few months) and it will incorporate quite a few of the ideas they've developed. One of their ideas that I find particularly interesting (and that I think should be incorporated into BitTorrent, because it seems like the perfect application) is called Vivaldi [mit.edu]. You can read for yourself how it works, but applied to BitTorrent, basically the tracker would give you peers it thinks you have a low ping time to, as opposed to a random list, which may be sub-optimal.
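The core Vivaldi idea fits in a few lines: treat each measured RTT as a spring between two synthetic coordinates and relax the system. A back-of-the-envelope sketch follows -- 2-D coordinates, a fixed timestep, and simulated hosts whose "RTT" is just geometric distance; the real algorithm adapts the timestep with per-node confidence, and all names here are mine.

```python
import math, random

def vivaldi_update(xi, xj, rtt, delta=0.25):
    """Nudge coordinate xi so |xi - xj| better predicts the measured rtt."""
    dx = [a - b for a, b in zip(xi, xj)]
    dist = math.hypot(*dx) or 1e-9           # avoid divide-by-zero
    err = rtt - dist                         # >0: estimate is too close
    return [a + delta * err * d / dist for a, d in zip(xi, dx)]

random.seed(0)
# Simulated "true" host positions; measured RTT = distance between them.
true = {n: (random.uniform(0, 100), random.uniform(0, 100)) for n in range(8)}
rtt = lambda i, j: math.dist(true[i], true[j])
coords = {n: [random.uniform(-1, 1), random.uniform(-1, 1)] for n in true}

def avg_error():
    pairs = [(i, j) for i in true for j in true if i < j]
    return sum(abs(rtt(i, j) - math.dist(coords[i], coords[j]))
               for i, j in pairs) / len(pairs)

before = avg_error()
for _ in range(300):                         # gossip rounds: random pairs ping
    i, j = random.sample(sorted(true), 2)
    coords[i] = vivaldi_update(coords[i], coords[j], rtt(i, j))
print(before, avg_error())                   # prediction error shrinks
```

A tracker holding such coordinates could then sort its peer list by predicted distance to you, instead of handing back a random subset.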

They're also involved in Project IRIS [project-iris.net], which aims to develop a decentralized Internet infrastructure using all the latest DHT technology. It's funded indirectly through -- gasp -- the government via the NSF.

So yeah, don't just think that MIT is jumping on the bandwagon. They've been on the bleeding edge for some time.

Current wireless transmission protocols trade transmission speed against the power required to communicate over increasing distances. What's particularly counter-intuitive, given experience with wired networks, is that multiple hops can often actually be faster in wireless networks because of these speed/distance tradeoffs.
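The arithmetic behind that tradeoff is simple. The numbers below are illustrative only (rate steps loosely modeled on 802.11b, ignoring contention and per-hop overhead): because the link rate falls as radio distance grows, two fast short hops can beat one slow long one.

```python
# Naive store-and-forward model: each hop resends the whole payload,
# so total time is the sum of per-hop transmission times.
def transfer_time(bits, hop_rates_mbps):
    return sum(bits / (rate * 1e6) for rate in hop_rates_mbps)

payload = 8e6                                      # one megabyte, in bits
one_long_hop = transfer_time(payload, [1])         # distant peer: 1 Mbit/s
two_short_hops = transfer_time(payload, [11, 11])  # relay halves the distance
print(one_long_hop, two_short_hops)                # 8.0 s vs about 1.45 s
```

Even paying the relay penalty twice, the short hops win by better than 5x in this made-up case.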

BTW, unlike what the URL in the article implies, you can indeed unburn a monitor if it hasn't been exposed too long (i.e., not years). Make a solid full-intensity white full-screen image, crank up the brightness and contrast of the monitor, and let it sit there. Basically you're burning over the old burn and bringing everything back up (or down, if you prefer). It works most of the time if the burn isn't too severe (I don't think it would work on a 12-year-old ATM monitor, for example).