Original post by gbLinux: it is important to differentiate line speed and bandwidth. even though it is referred to as a "speed", bandwidth is actually not what matters to "PING". it does not measure the speed with which packets travel, or influence how long they take to arrive at the destination... for that we need the distance, and the number of routers and checkpoints the packet must stop and visit on the way.

say, if the speed of water molecules in some water pipe is what packet speed is in network lines, then bandwidth is the width of that pipe, i.e. how many packets can flow through it per second. but the speed, the speed is always about light speed, i suppose, minus all the time lost in routing... and that's pretty much all i know about this. what routers do and how much they slow packets down, that i don't know.

I get the impression you're only skimming people's replies and not really taking it all in. I explained the difference between latency (what you call "speed") and bandwidth way back in my second reply:

Quote:

From my second reply: Bandwidth and latency are usually orthogonal (one is not related to the other). Bandwidth is the amount of data per second that your connection can sustain and is usually measured in bits per second (b/s). Latency is the amount of time a packet sent from one end of the connection takes to reach the other end and is measured in seconds (or milliseconds). For example, a satellite link usually has really high bandwidth, but high latency. Fibre optic connections are typically high bandwidth and low latency, and so on.

Now, bandwidth is technically infinitely expandable - if you want to transfer twice as much data per second, simply install twice as many cables. But latency is limited by the physical properties of the universe we live in - data cannot travel faster than the speed of light, and it takes around 66ms for light to travel from Sydney to LA (for example), meaning the physical minimum round-trip time from Sydney to LA is around 133ms. You cannot improve that (without violating the laws of physics).
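For a rough check on those figures, here is a minimal sketch; the great-circle distance and the fibre refractive index (~1.47) are approximate assumptions, so it lands a little below the post's ~66ms one-way number, which presumably also absorbs some routing overhead:

```python
# Physical lower bound on latency between two points.
# Distance and refractive index are approximate assumptions.
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBRE_INDEX = 1.47        # typical refractive index of optical fibre

def min_rtt_ms(distance_km, refractive_index=1.0):
    """Round trip in ms if the signal travelled the straight-line
    distance at light speed in the given medium."""
    one_way_s = distance_km * refractive_index / C_VACUUM_KM_S
    return 2 * one_way_s * 1000

SYDNEY_LA_KM = 12_050  # great-circle distance, approximate
print(f"vacuum RTT:   {min_rtt_ms(SYDNEY_LA_KM):.0f} ms")               # ~80 ms
print(f"in-fibre RTT: {min_rtt_ms(SYDNEY_LA_KM, FIBRE_INDEX):.0f} ms")  # ~118 ms
```

Real measured pings are higher still, because packets rarely follow the great circle and every router adds delay.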

Quote:

Original post by gbLinux: btw, does anyone know about this Xbox 360 p2p network?

The "P2P" used by most Xbox games is similar to what was described by hplus0603. That is, one of the "peers" is designated the "host" (or "server") and everybody connects to him. In reality, it's a client/server model.

I'm going to leave this discussion with one observation. It is not uncommon for a novice to a particular field to believe he's come up with a novel idea that nobody's ever thought of before. He can't see any problems with his idea, and he gets frustrated because so-called "experts" will dismiss it, almost out-of-hand. This is not because the experts lack imagination, rather it is because the experts can see the inherent flaws in the idea that a novice - from a lack of experience - will miss.

Some people believe that being a novice can be an advantage because you're not hampered with pre-conceived notions of what is and is not possible, but that is not true. Perhaps you can provide one or two examples of a novice who actually has come up with a novel idea that no "expert" would have considered, but for each of those, I can point out tens of thousands of "novice" ideas that fall down in the real world.

Do not be discouraged, however. We were all novices once! (Not that I'm an expert by any stretch of the imagination, of course!) My suggestion would be to keep your idea in the back of your mind as you learn all you can about implementing networked applications in the real world - you will be surprised at how complex it actually is.

Round-trip for P2P should be 40km ((2x30km + 1x60km) / 3), assuming an average is what you computed.

Second, and more importantly, in computers there is no averaging in this stuff. The whole system runs as slow as the slowest component in that system (what is called a bottleneck), not as its average. This means the P2P setup will run as slow as if it were running at 60km, because A has to wait for D. There are some clever tricks (lag compensation) that help A predict what D should have done, and fix the estimates when the data arrives. Nevertheless, in the long term, A will eventually need to stop and wait because D can't keep up (or vice versa). And B and C are caught in A and D's delays, so they have to wait too, to avoid getting too far ahead of A and D in the simulation.
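The bottleneck argument can be sketched numerically; the link latencies below are made-up illustrative values, not measurements, but they show why the average is not what a lockstep simulation experiences:

```python
# Link latencies in ms; made-up illustrative values (60 is the "D is far" link).
latencies_ms = {"A-B": 30, "A-C": 40, "A-D": 60, "B-C": 25, "B-D": 35, "C-D": 30}

average = sum(latencies_ms.values()) / len(latencies_ms)
bottleneck = max(latencies_ms.values())  # a lockstep step can finish no sooner

print(f"average link latency: {average:.1f} ms")
print(f"lockstep step time:   {bottleneck} ms")
```

If every peer must hear from every other peer before advancing, the worst link (60 ms) sets the pace, not the ~36 ms average.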

Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.

gbLinux, I really suggest you go read through the entire Forum FAQ for this forum, including following all the links. Start with question 0, make sure you internalize the science behind it, then go to question 1, make sure you internalize that, ...

Then come back, and we can hold a discussion that makes sense, and where you don't come out looking like a lazy n00b. You've made so many beginner mistakes in your analysis it's not even funny, yet you complain that the experienced answers don't make sense to you. For an example of the latest mistake: you assume that geographic distance equates to network distance. That's not true at all -- when I ping a server in San Francisco from Redwood City (a distance of about 25 miles north), the packet goes through San Mateo, Sacramento (80 miles away), San Jose (30 miles south) and from there finally to San Francisco. If you're not familiar with the SF Bay Area, look it up on a map. Geographic distance has very little to do with network distance at the regional and lower levels. Hence why we talk about "back-haul" and "long-haul" in the discussion.
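The route-vs-geography point can be illustrated with a toy calculation; the city-pair mileages below are rough guesses for the described route, not measured cable lengths:

```python
# Hop-to-hop mileages along the described route; rough guesses, not measured.
route = [
    ("Redwood City", "San Mateo", 10),
    ("San Mateo", "Sacramento", 95),
    ("Sacramento", "San Jose", 115),
    ("San Jose", "San Francisco", 48),
]
network_miles = sum(miles for _, _, miles in route)
straight_line_miles = 25  # approximate geographic distance

print(f"network path: {network_miles} miles vs geographic: {straight_line_miles} miles")
```

Even with generous rounding, the packet travels roughly ten times the straight-line distance.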

Quote:

I'm going to leave this discussion with one observation. It is not uncommon for a novice to a particular field to believe he's come up with a novel idea that nobody's ever thought of before. He can't see any problems with his idea, and he gets frustrated because so-called "experts" will dismiss it, almost out-of-hand. This is not because the experts lack imagination, rather it is because the experts can see the inherent flaws in the idea that a novice - from a lack of experience - will miss.

i'm a novice and i can't see any problems with p2p, so i came here to ask an expert (you) to explain it to me, but you only told me p2p is bad, it has problems, it's this and that, it can't work... but no explanation, no analysis, no numbers... nothing, and now you're going to leave? i suppose you realized you were wrong. after all, what kind of expert does it take to realize the shortest route will yield the fastest path?

Quote:

Some people believe that being a novice can be an advantage because you're not hampered with pre-conceived notions of what is and is not possible, but that is not true. Perhaps you can provide one or two examples of a novice who actually has come up with a novel idea that no "expert" would have considered, but for each of those, I can point out tens of thousands of "novice" ideas that fall down in the real world.

Do not be discouraged, however. We were all novices once! (Not that I'm an expert by any stretch of the imagination, of course!) My suggestion would be to keep your idea in the back of your mind as you learn all you can about implementing networked applications in the real world - you will be surprised at how complex it actually is.

Quote:

Round-trip for P2P should be 40km ((2x30km + 1x60km) / 3), assuming an average is what you computed.

what?? no, take client A for example:

A->B = 30km
A->C = 40km
A->D = 30km

how do you keep coming up with 60km? diagonally opposite clients can talk to each other as well, this is not some RING topology or something.
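The geometry both sides are arguing about can be sketched like this; the peer positions and the central server location are assumptions chosen to match the 30km/40km figures (the true diagonal is ~42.4km, which the post rounds down), and straight-line distance is only a lower bound on any real network route:

```python
import math

# Peers on the corners of a 30km x 30km square, server at the centre.
peers = {"A": (0, 0), "B": (30, 0), "C": (30, 30), "D": (0, 30)}
server = (15, 15)

def dist(p, q):
    """Straight-line distance; a lower bound on the real network route."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

for a, b in [("A", "B"), ("A", "C"), ("A", "D")]:
    direct = dist(peers[a], peers[b])
    via_server = dist(peers[a], server) + dist(server, peers[b])
    print(f"{a}->{b}: direct {direct:.1f} km, via server {via_server:.1f} km")
```

On paper the direct path is never longer than the relayed one; the thread's counterargument is that packets do not travel straight lines in the first place.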

Quote:

Second, and more importantly, in computers there is no averaging in this stuff. The whole system runs as slow as the slowest component in that system (what is called a bottleneck), not as its average.

no. there actually is an average here, especially if we decide to sync all the peers to some time in the past, just like servers do. this system does not work as the slowest component allows, because updates are asynchronous. there is no FAST/SLOW here, no waiting - you only have FURTHER and CLOSER. further is not SLOWER, it is only more behind in the past, but the rate of updates is NON-INTERRUPTED, a constant streaming flow.

theoretically it works at a FULL 60Hz and more, where the frequency only depends on upload bandwidth, packet size and number of peers. you are describing problems the server-based approach has.

Quote:

This means the P2P setup will run as slow as if it were running at 60km, because A has to wait for D. There are some clever tricks (lag compensation) that help A predict what D should have done, and fix the estimates when the data arrives. Nevertheless, in the long term, A will eventually need to stop and wait because D can't keep up (or vice versa). And B and C are caught in A and D's delays, so they have to wait too, to avoid getting too far ahead of A and D in the simulation.

no, no waiting here.

imagine 8 people have radar devices that can read the signal from similar devices and display their locations. all the devices broadcast their location to all other devices, and all the devices update the location of every other device as the signal arrives. now this signal never stops, and the latency here is directly proportional ONLY to distance.
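Taken at face value, the idealized radar picture gives delays proportional to distance at light speed; a sketch (ignoring all routing, queueing and serialization delay, which is exactly what the replies below object to):

```python
C_KM_S = 299_792  # speed of light in vacuum, km/s

def observed_delay_ms(distance_km):
    """How far in the past a peer's broadcast position is when it arrives,
    under ideal straight-line propagation at light speed."""
    return distance_km / C_KM_S * 1000

for d in (30, 40, 12_000):
    print(f"{d} km away -> shown {observed_delay_ms(d):.3f} ms in the past")
```

At 30-40 km the pure propagation delay is around 0.1 ms, i.e. negligible; real pings over such distances are tens of milliseconds, so almost all of the latency comes from the network, not the distance.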

The shortest route between London and New York is a straight line. It would take decades or centuries to dig a tunnel through there. The next shortest route is over the surface; it takes about 3 days of sailing. The longest route is through the air - and only takes 6 hours or so.

Also - stuck in gridlock vs. subway+walking.

And - walking across a mountain in a straight line, vs. driving all the way around.

So I'm going to go with: No.

Quote:

imagine 8 people have radar devices that can read the signal from similar devices and display their locations. all the devices broadcast their location to all other devices, and all the devices update the location of every other device as the signal arrives. now this signal never stops, and the latency here is directly proportional ONLY to distance.

This is not necessarily true. First, it implies stationary observers within the same frame of reference; this can give different results, depending on which terminology is used. In addition, it does not account for the medium: the speed of light inside some media is lower than in vacuum. Gravitational lensing can be used to bend an indirect path through vacuum instead of traveling a shorter path in a straight line but at slower speed. Things are further complicated by tunneling: depending on the distance between observers, the shortest distance might be zero, but with low probability. And then there's string theory...

In other words: it is not proportional to distance. The time needed to travel (from which average speed can be calculated) is the integral of velocity over the path, as physics has defined for a very long time.

And since latency is a direct function of average speed (emphasis on average), it is independent of the topologically (geography, line of sight, network route) shortest path.

Quote:

The shortest route between London and New York is a straight line. It would take decades or centuries to dig a tunnel through there. The next shortest route is over the surface; it takes about 3 days of sailing. The longest route is through the air - and only takes 6 hours or so.

Also - stuck in gridlock vs. subway+walking.

So I'm going to go with: No.

i thought you were referring to yourself as a 'networking expert', and now instead of using the paths of the network infrastructure you would rather dig tunnels?! please, if this is your profession... don't you think it's kind of important to figure this thing out completely? or at least don't try to put it down without good reason, thanks.

it should be perfectly clear to novices and experts alike: p2p has a shorter traversal route, so it simply has to be able to communicate faster, plus it has all the benefits of streamed, non-interrupted, parallel processing.

this will not only allow a far better frequency, but the asynchronous updates will smooth out many visual glitches automatically, and the streaming nature of the incoming data would make the whole experience even more fluid.

Quote:

Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.

gbLinux, I really suggest you go read through the entire Forum FAQ for this forum, including following all the links. Start with question 0, make sure you internalize the science behind it, then go to question 1, make sure you internalize that, ...

Then come back, and we can hold a discussion that makes sense, and where you don't come out looking like a lazy n00b. You've made so many beginner mistakes in your analysis it's not even funny, yet you complain that the experienced answers don't make sense to you. For an example of the latest mistake: You assume that geographic distance equates to network distance. That's not true at all -- when I ping a server in San Francisco from Redwood City (a distance of about 25 miles north), the packet goes through San Mateo, Sacramento (80 miles away), San Jose (30 miles south) and from there finally to San Francisco. If you're not familiar with the SF Bay Area, look it up on a map. Geographic distance has very little to do with network distance at the regional and lower levels. Hence, why we talk about "back-haul" and "long-haul" in the discussion.

huh. why complicate?

YES/NO:

1.) does p2p have a shorter traversal path than the server-based model?

2.) would parallel computing further reduce latency by getting rid of the serial computation the server does?

3.) can p2p run at a much higher frequency (60Hz and more) due to the nature of uninterrupted, streamed, asynchronous updates?

i rest my case... and i will gladly answer any questions and try to explain, if there is still anyone who cannot understand this.

Quote:

Because if World of Warcraft (a single game) were P2P, it alone would consume more than the available bandwidth of the entire Internet.

what are you talking about? we are not talking about 'broadcast packets' any more; i think the conclusion was those packets would be lost on the WWW. are you asserting that all 10 million WoW clients play on one server? how many players, maximum, can one WoW server host?

"bandwidth of the entire Internet", does that even make sense? that has nothing to do with anything. you should only be concerned about upload/download bandwidth per client. -- take 32 clients, take some average packet size, calculate traversals and latency, then tell us what the upload/download bandwidth is for the p2p and the server-based model, can you do that? as long as every peer/client stays within its limits, that's all that matters, and then p2p wins over the server model on sheer SPEED provided by the constant, uninterrupted streaming flow of asynchronous updates, isn't that so?
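That per-client calculation can in fact be done in a few lines; the player count, update rate and packet size below are illustrative assumptions. Note that in a full-mesh p2p model each peer uploads to every other peer, which is the scaling cost the other posters are pointing at:

```python
# 32 clients, 20 updates/s, 100-byte packets; all illustrative assumptions.
N, HZ, PKT_BITS = 32, 20, 100 * 8

p2p_client_upload = (N - 1) * HZ * PKT_BITS   # each peer sends to every other peer
cs_client_upload = 1 * HZ * PKT_BITS          # each client sends only to the server
server_upload = N * (N - 1) * HZ * PKT_BITS   # server relays everyone to everyone

print(f"p2p client upload:           {p2p_client_upload / 1e3:.0f} kbit/s")
print(f"client-server client upload: {cs_client_upload / 1e3:.0f} kbit/s")
print(f"server upload:               {server_upload / 1e6:.1f} Mbit/s")
```

Under these assumptions each p2p peer needs 31 times the upload of a client-server client; a dedicated server shoulders that load instead (and in practice reduces it further by filtering which updates each client needs).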

I now realize you are a troll. However, you've done a great job of skirting the limits of what might be considered a reasonable line of questioning, so I've let the thread go on this long. As far as trolls go, you're really skilled. (Or, as far as normal social humans go, you're very unskilled -- it's hard to tell the difference online)

Because you do not take the advice that's given to you, and do not actually draw the learning from the posts that have been made (including posts with clear numbers, statistics, and technical explanations), this discussion will go no further.