100 Mb/s speed limit, please help

So we’ve been experiencing an issue where the maximum speed (both download and upload) while imaging is capped at 100 Mb/s, when we should be getting near 1000 Mb/s like we do with any other file transfer. All our infrastructure is sound and gigabit-capable. After a little research we discovered that the Netgear smart switches we had been using gave several people speed issues and fluctuations, so we swapped those out for smaller plain gigabit Cisco switches, but that didn’t solve anything. Is there some obvious bandwidth-limit config we’re missing, or is it something deeper? This only happens when imaging with FOG; any FTP test transfers run very near gigabit speeds. Additionally, when imaging more than one computer at a time, that 100 Mb/s gets split semi-evenly among the computers.
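One thing worth ruling out first is a units mix-up: links are rated in megabits per second (Mb/s), while many tools report megabytes (MB/s). A quick sanity calculation (a minimal sketch, not FOG-specific) shows what ceiling each link speed actually implies:

```python
# Rough sanity math for link speed vs. observed transfer rate.
# Network links are quoted in megabits/second (Mb/s); file transfers
# are often reported in megabytes/second (MB/s). 1 byte = 8 bits.

def link_speed_to_mbytes_per_sec(megabits_per_sec: float) -> float:
    """Theoretical maximum payload rate in MB/s for a link rated in Mb/s."""
    return megabits_per_sec / 8.0

# A Fast Ethernet (100 Mb/s) hop tops out around 12.5 MB/s,
# while gigabit tops out around 125 MB/s.
print(link_speed_to_mbytes_per_sec(100))   # 12.5
print(link_speed_to_mbytes_per_sec(1000))  # 125.0
```

So a transfer pinned at roughly one tenth of gigabit throughput is the classic signature of a 100 Mb/s hop somewhere in the path.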

What does subnetting have to do with speed? I am not convinced that this issue was actually fixed by changing the subnet mask on the FOG server. Well, unless the wrong subnet mask made the server send all the traffic through a gateway to the client(s), which would then have been the bottleneck. But… anyhow, great that you got this fixed.

@Tom-Elliott The issue was with subnetting. We combined two subnets not long ago, and the server still had the old subnet mask. I changed the mask, and it now sees everything on the same network as it should. I am getting 3-4 GB/min now, MUCH faster than before.
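The failure mode described here is easy to reproduce with Python's `ipaddress` module. The addresses below are hypothetical, but the logic is the same: with the stale /24 mask the server considers the client off-net and shoves all traffic through the gateway (the 100 Mb/s hop), while the combined /23 keeps it switch-local:

```python
# Sketch of the stale-subnet-mask mistake (addresses are hypothetical).
import ipaddress

server = ipaddress.ip_address("192.168.0.10")   # FOG server
client = ipaddress.ip_address("192.168.1.25")   # target being imaged

old_net = ipaddress.ip_network("192.168.0.0/24")  # stale mask on the server
new_net = ipaddress.ip_network("192.168.0.0/23")  # the two combined subnets

print(client in old_net)  # False -> server routes via the gateway
print(client in new_net)  # True  -> direct, switch-local traffic
```

With the old mask the imaging traffic is forced through whatever uplink the gateway sits on, which explains both the cap and why it was shared among simultaneous clients.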

Thank you everyone for the help. This was something I should have been able to catch, and was definitely not a direct problem with FOG.

@Tom-Elliott Looking back over this thread, I have a question for Tom; actually, a confirmation is all I need. Does image deployment use NFS to move the file from the server to the client?

@stowtwe You have done tests with ftp and iptraf, which produced the expected results. Did you run those tests between the FOG server and the same network jack where these target devices are connected?
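If you want a quick raw-TCP probe for that jack-to-server path, here is a minimal sketch (a real test should use iperf3; this just pushes bytes over a socket and measures the rate, demonstrated over loopback):

```python
# Minimal TCP throughput probe. Run the receiver side on the FOG server
# and the sender from the same jack the target machines use.
import socket
import threading
import time

PAYLOAD = b"x" * 65536           # 64 KiB chunks
TOTAL_BYTES = 16 * 1024 * 1024   # 16 MiB test transfer

def _receiver(srv: socket.socket, result: dict) -> None:
    conn, _ = srv.accept()
    received = 0
    while received < TOTAL_BYTES:
        data = conn.recv(65536)
        if not data:
            break
        received += len(data)
    conn.close()
    result["received"] = received

def run_probe(host: str = "127.0.0.1") -> float:
    """Send TOTAL_BYTES over TCP and return the measured rate in Mb/s."""
    srv = socket.socket()
    srv.bind((host, 0))          # ephemeral port
    srv.listen(1)
    port = srv.getsockname()[1]
    result: dict = {}
    t = threading.Thread(target=_receiver, args=(srv, result))
    t.start()

    start = time.monotonic()
    cli = socket.create_connection((host, port))
    sent = 0
    while sent < TOTAL_BYTES:
        cli.sendall(PAYLOAD)
        sent += len(PAYLOAD)
    cli.close()
    t.join()
    elapsed = time.monotonic() - start
    srv.close()
    return (result["received"] * 8) / (elapsed * 1_000_000)

if __name__ == "__main__":
    print(f"{run_probe():.0f} Mb/s over loopback")
```

If this probe also pins near 100 Mb/s from that jack, the problem is the cabling or a switch port, not FOG.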

Lastly, we haven’t considered that both the OP and Tom are right. Let’s assume that for some reason the target computers are only negotiating 100 Mb/s instead of GbE. That would make both people correct. With an unmanaged switch it would be difficult to tell; maybe from the link lights on the front of the switch.
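On Linux you don't need the switch lights: the negotiated speed is exposed in sysfs (the same value `ethtool` reports). A small sketch that flags any NIC stuck below gigabit (the sysfs layout is standard; the `sysfs_root` parameter is only there so the function can be exercised against a fake tree):

```python
# Read the negotiated link speed from sysfs, e.g. /sys/class/net/eth0/speed.
from pathlib import Path
from typing import Optional

def link_speed_mbps(iface: str, sysfs_root: str = "/sys/class/net") -> Optional[int]:
    """Return the negotiated speed in Mb/s, or None if unavailable."""
    speed_file = Path(sysfs_root) / iface / "speed"
    try:
        return int(speed_file.read_text().strip())
    except (OSError, ValueError):
        return None  # interface down, virtual, or driver doesn't report

if __name__ == "__main__":
    root = Path("/sys/class/net")
    if root.exists():
        for iface_dir in root.iterdir():
            speed = link_speed_mbps(iface_dir.name)
            if speed is not None and 0 < speed < 1000:
                print(f"{iface_dir.name}: only {speed} Mb/s")
```

Running this on the FOG server (or in a debug shell on a booted target) would confirm or rule out a NIC that fell back to Fast Ethernet.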

Something else to consider is simply swapping the switch for a different model, to see whether the switch itself is the problem.

[Edit] I see that Tom was thinking along the same lines, that it could possibly be the NIC too. It just took me a bit to write my last post. [/Edit]

That’s why we need to narrow down what, when, and where things are happening.

Just because you “know” doesn’t mean that it couldn’t be doing something unexpected.

Seeing as it’s consistently showing the same results, it leads me to believe there is a 10/100 connection somewhere in between. Maybe the imaging computer is on a different subnet than the FOG server? If it is, traffic would have to pass through the router and back just to reach the FOG server in the first place.

All of our connections are gigabit running to the computers we are trying to image. The only 100 Mb/s connection is the line running from the switch to our router, but that should be irrelevant in this case because the packets do not need to go through the router to reach the hosts we are imaging. It WOULD be a bottleneck if we were imaging machines in other rooms of the building.

Here is a picture (There is another switch before the router, but you get the idea):

@george1421 Compression 9 was much slower than the others. I have also just recently tried compression 3, and it yielded similar results to compression 1 or 0. I will be trying 6 next to see how it goes.

Is the speed “limiting” on all systems? Have you attempted imaging in other locations?

From the sounds of things, things are working exactly as intended. That is, if there’s a 10/100 switch between any of the gigabit segments and you’re imaging a system on the other side of that “middleman” switch, it would give you exactly this kind of behavior.

Speed isn’t always related to compression/decompression, though it does play into it. Uploads are the operation most often affected by compression, as the client machine is not only transferring the data but also compressing it before it goes out.

I think we need to trace where the 10/100 link starts. It could even be something as simple as an improper punch-down.

I have an image with compression at 0 or 1 (I can’t recall what it was for sure), and an image with compression at 3, but both have about the same speeds. I have also tried compression 9 just to try the other side of things, and unsurprisingly, things were much slower.

Just to be clear here, an image with compression of 0 or 1 gave the same transfer speeds as a 3 or 9? I just want to make sure we are not chasing the wrong assumption.


Recapturing the image is not a huge deal for me, and I am open to try anything right now.

Compression and decompression happens on the host. The default compression value was changed from 9 to 6 because 6 is faster (in probably 99% of cases).

If the OP can change his compression to 6 and then re-upload the image, he might find that his image deploys much faster afterwards.

@ch3i posted a script that will change the compression of an image without deploying/capturing again. It does it all on the server. It’s in the forums somewhere here. It needs to go in the wiki. Somebody hash tag it if they find it.
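In the same spirit as that script, recompression can be done entirely on the server without a recapture. The sketch below is a hedged illustration, not ch3i's actual script: it assumes the image is a plain gzip stream (FOG's default partclone images are gzip-compressed, but the real on-disk layout and the filename shown are assumptions):

```python
# Server-side recompression sketch: rewrite a gzip-compressed image
# stream at level 6 instead of recapturing it from a client.
# NOTE: the path below is hypothetical; check your /images layout first.
import gzip
import shutil

def recompress(src: str, dst: str, level: int = 6) -> None:
    """Decompress a gzip stream from `src` and rewrite it to `dst` at `level`."""
    with gzip.open(src, "rb") as fin, \
         gzip.open(dst, "wb", compresslevel=level) as fout:
        shutil.copyfileobj(fin, fout)  # streams, so RAM use stays small

# recompress("/images/winimage/d1p2.img", "/images/winimage/d1p2.img.new")
```

Since the data is streamed chunk by chunk, this works on images far larger than available memory; the trade-off is one full read/write pass on the server's disk.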

On my production instance that is on 1.2.0 trunk 5070 the compression is 6

On my dev instance that is on 1.2.0 trunk 5676 the compression is 6 (must be default) because I know I haven’t tweaked this system.

Changing this now shouldn’t have an impact, since the image that’s already captured will remain compressed at the higher value. Something I don’t know is where the decompression happens; I suspect the client computer. If that’s the case, the slower (CPU-wise) the computer is, the slower the transfer rate will be.

@george1421 Yes, I am on 1.2.0 stable. Would it be worthwhile to upgrade to trunk?

I will look into NFS. Currently the images are on a drive in the FOG server tower, and I have that set up as the master node. There are no other nodes. These are the settings.

The switch I am using is an unmanaged Cisco switch. It has no configuration at all. I do not have access to any of the managed switches on the network, so I am trying to work around them the best I can by not running stuff directly through them. I have proxy DHCP configured on the FOG server, among other things.