
Hello everybody. I'm building an iSCSI array for an ESX server setup at home. The iSCSI array is built on Fedora 8. Right now I'm just trying to baseline expected file transfer speeds. Here is what I have so far:

-Each Fedora box has 3 Intel gigabit NICs. Two of the NICs are bonded together in each box (bond0). The third NIC is on another network and used for management only. The idea is to have a private network for the iSCSI traffic using the bonded NICs (bond0).
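For reference, a bonding setup like that on Fedora 8 would typically look something like the following. This is only a sketch; the IP address is made up, and the exact file layout assumes the stock network-scripts convention:

```
# /etc/modprobe.conf -- load the bonding driver for bond0
alias bond0 bonding
options bond0 mode=balance-alb miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
DEVICE=bond0
IPADDR=192.168.10.1        # example address for the private iSCSI network
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- one of the two slaves
# (ifcfg-eth1 would be identical apart from DEVICE)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```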

Now, from the other machine, I mount the iSCSI target through the bonded NICs and perform a file transfer. Midnight commander reports a transfer rate of 35-38 MB/s while copying a 3.5 GB ISO file. What I want to know is whether this is what I should expect. I'm at home, so other traffic is not an issue. I'm thinking I might want to upgrade my switch. Could somebody with a gigabit network transfer a large file and let me know the speed results? I can live with 35 MB/s, but I'd like to make it the best I can.
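To put that number in perspective, here is the arithmetic as a quick shell sketch (the 35 MB/s figure is just the rate reported above):

```shell
# Rough link-utilization arithmetic for the observed copy rate.
rate_mbytes=35                             # MB/s reported by midnight commander
rate_mbits=$((rate_mbytes * 8))            # MB/s -> Mbit/s
link_mbits=1000                            # one gigabit link
util=$((rate_mbits * 100 / link_mbits))    # percent of a single GigE link
echo "${rate_mbits} Mbit/s on the wire, about ${util}% of one gigabit link"
```

So even a single unbonded gigabit link is nowhere near saturated at that rate, which points at the disks (or the iSCSI stack) rather than the network or the switch.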

Considering the speed of most drives is under 70 MB/s, even RAID 0 only gets you roughly 120 MB/s. The theoretical limit for GigE is 125 MB/s. What mode of bonding are you using (0-6)?

If you are running mode 5(?) you can get maximum transfer throughput (250 MB/s in theory), but all the computers connecting to it must also be running mode 5. If you are running mode 1(?) you can serve multiple single NICs, each at a max transfer of 125 MB/s. Is the second machine (or machines) also running RAID 0?

Your transfer rate will be limited by the slowest part in the system. If the receiving drive is not RAID 0, that will be the limiting factor. How much RAM are you running? To buffer things properly at high speed, 4 GB would be a bare minimum. Is the RAID software or hardware? If you try to run software RAID you will run into speed and reliability issues compared with hardware RAID. Are you running 64-bit? Running 32-bit with more than 3 GB of RAM will cause issues.

Considering the speed of most drives is under 70 MB/s, even RAID 0 only gets you roughly 120 MB/s. The theoretical limit for GigE is 125 MB/s. What mode of bonding are you using (0-6)?

I'm using mode ALB. Here is the line from my /etc/modprobe.conf:
options bond0 mode=balance-alb miimon=100
I'm not sure what number that is, but as I understand it balance-alb is adaptive load balancing rather than plain round robin (balance-rr).
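For what it's worth, the bonding driver's mode names map to numbers as follows. This is just a lookup sketch; the mapping comes from the standard Linux bonding driver, so balance-alb is mode 6, not round robin:

```shell
# Map the bonding driver's mode names to their numeric values.
mode_number() {
  case "$1" in
    balance-rr)     echo 0 ;;  # round robin
    active-backup)  echo 1 ;;  # one active slave, others standby
    balance-xor)    echo 2 ;;  # XOR hash policy
    broadcast)      echo 3 ;;  # transmit on all slaves
    802.3ad)        echo 4 ;;  # LACP link aggregation
    balance-tlb)    echo 5 ;;  # transmit load balancing
    balance-alb)    echo 6 ;;  # adaptive load balancing (tx + rx)
  esac
}

mode_number balance-alb    # prints 6
```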

If you are running mode 5(?) you can get maximum transfer throughput (250 MB/s in theory), but all the computers connecting to it must also be running mode 5. If you are running mode 1(?) you can serve multiple single NICs, each at a max transfer of 125 MB/s. Is the second machine (or machines) also running RAID 0?

The second machine is not using RAID at all, just a single SATA drive. Local transfer speeds are about 68 MB/s according to midnight commander. I was hoping to get near that.
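If you want a second opinion on that local figure, a dd-based write test gives a number independent of midnight commander. The file name and size here are arbitrary, and conv=fsync makes dd flush to disk before reporting so the page cache doesn't inflate the result:

```shell
# Write 100 MiB of zeros; dd reports the sustained rate on completion.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=100 conv=fsync

# Confirm the full 100 MiB actually landed on disk, then clean up.
size=$(stat -c %s /tmp/ddtest)
rm -f /tmp/ddtest
echo "wrote ${size} bytes"
```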

Your transfer rate will be limited by the slowest part in the system. If the receiving drive is not RAID 0, that will be the limiting factor. How much RAM are you running?

The systems are pretty lightweight: 2 GB for the machine with the array.

To buffer things properly at high speed, 4 GB would be a bare minimum. Is the RAID software or hardware?

Running software RAID.

If you try to run software RAID you will run into speed and reliability issues compared with hardware RAID. Are you running 64-bit? Running 32-bit with more than 3 GB of RAM will cause issues.

I'm using 32-bit. I know, not the ideal setup. I guess I could step up to 64-bit Fedora with no problems. I really don't know why I didn't do that in the first place. Time to reinstall.
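Before reinstalling, uname will confirm which kernel is actually running (a trivial check, but worth doing):

```shell
# i686 (or i386/i586) means a 32-bit kernel; x86_64 means 64-bit.
arch=$(uname -m)
echo "Running a ${arch} kernel"
```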

How much is your local write speed to the RAID 0? I am not familiar with RAID systems, but would a RAID 10 give better write speeds?

Local write speeds are about 69 MB/s according to midnight commander. That's without RAID 0. I do not have enough drives to do a multiple-RAID 0 setup. I'm looking for bang for the buck right now; RAID 10 would be a little much.