Let's say a server is connected to the internet through limited bandwidth, and more than one user tries to download a file from that server simultaneously.

If we ignore the bandwidth limitation on the user side, how will the server-side bandwidth be allocated among the different users? If there are two concurrent users trying to download the same file, will the bandwidth be divided evenly, so that each user gets half of it?

I've tried the following setup:

I connected two client PCs running Windows XP to a switch. The switch is connected to a server PC over a link with a fixed bandwidth of 2 Mbps. Then I ran iperf on all three PCs at the same time: the client PCs ran iperf in client mode, and the server PC ran iperf in server mode.

Both client PCs send data to the server PC at the same time.

I found that the server PC receives ~500 kbps from client PC1 and ~1450 kbps from client PC2.

Both client PCs are connected to the switch over 1 Gbps Ethernet, using the same type of cable and the same OS. The iperf settings are also identical.
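For reference, a test like this is typically run roughly as follows (the server address and duration here are placeholders, not necessarily my exact settings):

    rem On the server PC (receives the data):
    iperf -s

    rem On each client PC (sends data to the server; <server-ip> is a placeholder):
    iperf -c <server-ip> -t 30 -i 1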

I don't understand why there is such a big difference between the bandwidth obtained by client PC1 and by client PC2. I would like to know how bandwidth is allocated among concurrent users who access the server at the same time.

I'm very sorry, but since there are a lot of programmers on Stack Overflow, can anyone give some comments from a programmer's point of view? Is this problem related to socket programming? How does the server's OS handle the connections from the different client PCs and allocate the limited bandwidth among concurrent users? Thanks.
– kwc1, Aug 28 '10 at 2:58

3 Answers

There is no one answer. For the simplest of TCP services, each client will attempt to grab data as fast as it can, and the server will shovel it to the clients as fast as it is able. Given two clients whose combined bandwidth exceeds the bandwidth of the server, both clients will probably download at roughly half the server's bandwidth.

There are a LOT of variables that make this not quite true in real life. If the TCP/IP stacks of the different clients differ in how well they handle high-throughput streaming connections, that by itself can affect bandwidth even if the server has infinite bandwidth. Different operating systems and server programs handle the ramp-up of streaming speed differently. Latency also affects throughput: high-latency connections can be significantly slower than low-latency connections, even though both can stream (in absolute terms) the same amount of data.
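To put rough numbers on the latency point: a single TCP connection can move at most one receive window of data per round trip (ignoring window scaling and packet loss), so as an illustrative back-of-the-envelope calculation:

    max throughput ≈ TCP receive window / round-trip time
    64 KB window / 100 ms RTT ≈ 5.2 Mbit/s
    64 KB window /  10 ms RTT ≈ 52 Mbit/s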

A case in point: downloading kernel source archives. I've got very fast bandwidth at work; in fact, it exceeds my LAN speed, so I can saturate my local 100 Mb connection if I hit the right server. Watching my network utilization chart while downloading large files, I can see some servers start small (100 Kb/s), slowly ramp up to high values (7 Mb/s), then something happens and it all starts over again. Other servers give me everything immediately when I start downloading.

Anyway, items that can cause actual bandwidth allocation to differ from absolute equality:

TCP/IP stack capabilities of both the client and the server

TCP tuning parameters on either side, not just capabilities

Latency on the line

The application-level transfer protocols being used

The existence of hardware specifically designed for load balancing

Congestion between clients and the server itself

Regarding your test case, what likely happened is that one client established a higher data-stream rate than the other, perhaps simply by getting there first. When the other stream started, it was not allocated sufficient resources to reach full speed parity; the first stream was already there and held most of the resources. If the first stream ended, the second would likely pick up speed. In this case, the speed experienced by the clients was determined by the server OS, the application doing the streaming, and the server's TCP/IP stack, and also by the TCP offload engine of the network card, if present and enabled.

There is no balancing logic on the server PC to equalize the performance of the two connections. In fact, I would say there is no differentiation of 'users' across the two connections on the server either.

This case is similar to two instances of the same application running on the server: all conditions appear to be the same, but one of them might seem to perform better. In short, one of them will, almost 'randomly', perform better.

Likewise, your two iperf test paths seem similar, but one of them performs better (though I am a little surprised that it shows roughly three times the performance of the other).

But tell me, how many times did you run this test? If you run it, say, 10 times, does the same client PC perform better by the same factor, or do you see some randomness across reruns?
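If you want to automate the reruns from one of the client PCs, something along these lines should work from a Windows command prompt (assuming iperf is on the PATH; <server-ip> is a placeholder):

    rem Run the same 30-second upload test 10 times in a row
    for /L %i in (1,1,10) do iperf -c <server-ip> -t 30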

I have run this test about 4-5 times, and the results for client PC1 and client PC2 were roughly the same in every run. I also double-checked the total bandwidth available to each client PC by turning off iperf on one of them. After I turned off iperf on PC1, iperf on PC2 was able to send about ~1960 kbps to the server. The same thing happened with PC1 after I turned off iperf on PC2.
– kwc1, Aug 28 '10 at 2:37

Timing will have more to do with your test results than anything else, due to TCP backoff. What's happening is that the first connection uses all the bandwidth; then the second one shows up and essentially starts competing with the first one for bandwidth. If the transfer goes on long enough, they should eventually both be using the same bandwidth. The details are TCP-stack dependent, based on how the congestion avoidance algorithm is implemented.
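One way to see whether the two flows actually converge is to let the transfers run much longer and watch the per-interval rates; for example, on each client (again, <server-ip> is a placeholder and the duration is arbitrary):

    rem 5-minute transfer with a bandwidth report every 2 seconds
    iperf -c <server-ip> -t 300 -i 2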

At first I had the same hypothesis, but after several tests I found it isn't true. I tried starting iperf on PC1 and PC2 in different orders. When I start iperf on PC1 first, PC1 gets about ~1960 kbps of bandwidth. Then, when I start iperf on PC2, PC2 immediately gets ~1400 kbps, and the bandwidth for PC1 immediately drops to ~500 kbps. It seems that no matter in which order I start iperf on PC1 and PC2, the ratio of bandwidth between PC1 and PC2 is the same.
– kwc1, Aug 28 '10 at 2:51