

iSCSI initiator vs. pass-through disk


Hey

I am currently involved in planning the migration of our single-server infrastructure to a clustered virtual environment (Hyper-V).
Taking a file server as an example, should I initiate the iSCSI connection inside the virtual machine using the iSCSI initiator, or expose a volume to the cluster and configure a pass-through disk to store the data?


I would say that it depends on what kind of server it is. If it's a file and print server with data stored in a file store rather than locally within the server, then you should be using an iSCSI initiator to map the LUN to the server. If, however, you are just utilising the server and it's not storing data off to another server/LUN, then the server should be fine with local storage configured. The Hyper-V server should be configured for iSCSI traffic, however. This is of course assuming that the shared storage is based around iSCSI and not FC.

You have to understand what the initiator is there for: it allows the server to connect to an iSCSI target using the iSCSI transport protocol. If the virtual server is only based on a single disk, then it wouldn't need an initiator. You don't actually 'need' an initiator if the server isn't required to connect to an iSCSI data/file store. The only thing the initiator would be required for is assigning new storage to the server from an already existing LUN.

Hey SimonD, thanks for the reply
I should have been more specific by the looks of it.

The idea is to have 2 servers clustered together with iSCSI storage attached to the cluster.
On the storage I will have a witness disk, a cluster shared volume storing VHDs, and some other LUNs.
So up to this point I will initiate connections using the iSCSI initiator within the cluster hosts.
My question is how I should expose the other LUNs (for example the file store, Exchange store, archive, etc.).

Should I expose them to the cluster hosts and set them up as pass-through disks, or should I initiate the connections within the virtual machines using the iSCSI initiator?

Edit: oh, and I will be using Windows Server 2008 R2, and on the server drives (73GB SAS in RAID 1) I will store only the host OS plus some utility software (UPS auto-shutdown, maybe backup software).
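For reference, if you do end up connecting from inside a guest, the built-in Microsoft initiator on Server 2008 R2 can be driven from the command line with iscsicli. A rough sketch of that workflow (the portal address and target IQN below are placeholders, not anything from this thread):

```
rem Register the SAN's target portal with the initiator (placeholder address)
iscsicli QAddTargetPortal 192.168.10.10

rem List the targets the portal exposes, then quick-login to the one you want
iscsicli ListTargets
iscsicli QLoginTarget iqn.2001-05.com.example:filestore
```

After the login, the LUN appears in Disk Management like a local disk; persistence across reboots and CHAP settings are easier to configure through the iSCSI Initiator control panel applet.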

Personally I would go down the route of using the software initiator; whether you use MS's own or go for one of StarWind's offerings is up to you. However, Exchange is not one of my strong points, so I could be wrong with that and it could cause you issues. That said, iSCSI is a thoroughly proven technology, and when I have used it in the past it's been rock solid (the last place I worked had a virtualised Exchange 2007 environment; the servers all ran on ESX, with disks created directly on the ESX server rather than via an iSCSI LUN).

Now, after giving it some more thought, I don't think it can be done anyway (iSCSI within a VM).
Will 2 NICs dedicated to iSCSI traffic at the host level even be accessible within a VM? I don't think they will be.
Also, I think that by initiating the iSCSI connection within a VM I won't be able to benefit from Multipath I/O and load balancing.

The other thing is that by exposing the LUNs to the hosts and setting up pass-through disks I will keep all iSCSI traffic on a separate subnet.

What happens within vSphere is that you have dedicated network switches; they can be set up for the work network and the iSCSI networks. All you would then do, if you needed to access the iSCSI network, is create a NIC on the host for the guest VM and, instead of assigning it to the work network, assign it to the iSCSI network. When the guest is powered up it would have 2 NICs: one on the work network, one on the iSCSI network. Obviously neither network should cross over (i.e. different network addresses etc.).

There is no reason why a VM can't get to an iSCSI network using a dedicated NIC on the iSCSI switch. I can't tell you if that would work on Hyper-V, but it does in vSphere.

You should be able to use iSCSI on the VMs if required. Say, for example, the host has 4 NICs, 2 for network and 2 for iSCSI: just create two virtual NICs for the VM, one for the network and one bound to the iSCSI physical NICs.

I haven't done it yet (I will within the next few weeks), but to get full use of MPIO you may need to create the same number of vNICs as physical NICs for the iSCSI traffic, if each physical NIC is on a different subnet.

However, I wouldn't even bother with iSCSI on the VMs. It creates additional overhead when all you need to do is let the host deal with it and create a VHD or pass-through LUN for each store you require on each VM (drive D, for example), so the VM sees it as locally attached storage. Not sure of the SAN you are using (or Windows Storage Server), but I would try to just use VHD files so you can snapshot etc. if possible.

You should be able to use iSCSI on the VMs if required. Say, for example, the host has 4 NICs, 2 for network and 2 for iSCSI: just create two virtual NICs for the VM, one for the network and one bound to the iSCSI physical NICs.


This sounds OK, but what if I want to host more than one machine using direct iSCSI access for storage, for example a file server, SQL Server and Exchange? I could create 4 virtual networks (2 for iSCSI and 2 for network traffic), then add 4 virtual NICs and bind each to a different virtual network. This would work for storage if I didn't want to access the iSCSI traffic from the hosts (I want to store VHDs there as well). Correct me if I'm wrong.

I haven't done it yet (I will within the next few weeks), but to get full use of MPIO you may need to create the same number of vNICs as physical NICs for the iSCSI traffic, if each physical NIC is on a different subnet.


I will as well. At the planning stage at the moment. With regards to MPIO, see above.

However, I wouldn't even bother with iSCSI on the VMs. It creates additional overhead when all you need to do is let the host deal with it and create a VHD or pass-through LUN for each store you require on each VM (drive D, for example), so the VM sees it as locally attached storage. Not sure of the SAN you are using (or Windows Storage Server), but I would try to just use VHD files so you can snapshot etc. if possible.


I think I'll go this route. At least I'm sure this will work in a clustered setup. I will be using Dell's MD3000i, as they're cheap enough to fit in the project budget and let us spec up the servers a bit more.

Would you store file shares as VHDs? What about Exchange storage or SQL Server data/logs? I would have thought that VHD will add extra (unnecessary) overhead too.

This sounds OK, but what if I want to host more than one machine using direct iSCSI access for storage, for example a file server, SQL Server and Exchange? I could create 4 virtual networks (2 for iSCSI and 2 for network traffic), then add 4 virtual NICs and bind each to a different virtual network. This would work for storage if I didn't want to access the iSCSI traffic from the hosts (I want to store VHDs there as well). Correct me if I'm wrong.

Would you store file shares as VHDs? What about Exchange storage or SQL Server data/logs? I would have thought that VHD will add extra (unnecessary) overhead too.

Thanks


You may not need to create so many virtual NICs. Take the case of normal network traffic: the host has two NICs, but the virtual machine will only need one, as it doesn't require a team; a virtual NIC will (should) not fail so long as all of the network traffic is on the same subnet. The host has two for physical redundancy and load balancing, as it is the host serving a number of machines; the same does not apply to the virtual machines.

However, in the case of iSCSI, if the host has two physical NICs on the same subnet (e.g. 1.1.1.1) then one virtual NIC will be fine, as it will be able to talk to all of the storage controllers. If the physical NICs are multipathed on different subnets (e.g. 1.1.1.1 and 2.2.2.2) then ideally you will need to add two virtual NICs, one for each subnet, so it can talk to both controllers.
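To sketch the two-subnet case from inside a guest with the Microsoft initiator (the addresses and IQN below are placeholders): you register one target portal per iSCSI subnet, so each vNIC gives MPIO a path of its own:

```
rem One target portal per iSCSI subnet (placeholder addresses)
iscsicli QAddTargetPortal 10.1.1.10
iscsicli QAddTargetPortal 10.2.2.10

rem With the MPIO feature and the Microsoft DSM installed, a session
rem through each portal gives two paths to the same LUN
iscsicli QLoginTarget iqn.2001-05.com.example:datastore
```

Note that QLoginTarget is the quick form; pinning each session to a specific portal/vNIC needs the full LoginTarget syntax or the Initiator GUI, where you pick the source IP per session.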

I'm sure VHD will create some overhead, although I doubt it will be noticeable/measurable. From a management perspective, having, say, a 1TB LUN to store your VHD files is a lot easier to manage in terms of snapshots, increasing VHD sizes, cloning etc. on the fly. Having pass-through direct to a LUN means having more fixed-size LUNs to manage and a loss of functionality. It's a trade-off that only you can decide on; maybe consider which will be easier to back up too.

Since R2 can now handle multiple VHD files on the same LUN, this will likely be the way I set up some of my upcoming projects.
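As a sketch of that layout on 2008 R2, fixed-size VHDs can be created on a Cluster Shared Volume with a diskpart script (the path and size here are placeholders; maximum is in MB, so 512000 is roughly 500GB):

```
rem create_vhd.txt - run with: diskpart /s create_vhd.txt
create vdisk file="C:\ClusterStorage\Volume1\FileStore.vhd" maximum=512000 type=fixed
select vdisk file="C:\ClusterStorage\Volume1\FileStore.vhd"
attach vdisk
```

Attaching on the host is only for initial formatting; for the VM itself you detach and add the VHD to the guest's IDE/SCSI controller in Hyper-V Manager so the guest sees it as a local drive.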

Hope this makes sense, and if anyone wants to correct me anywhere, go ahead, as I'm more of a VMware guy, although in a month or two I will be a dirty Hyper-V guy too.

You may not need to create so many virtual NICs. Take the case of normal network traffic: the host has two NICs, but the virtual machine will only need one, as it doesn't require a team; a virtual NIC will (should) not fail so long as all of the network traffic is on the same subnet. The host has two for physical redundancy and load balancing, as it is the host serving a number of machines; the same does not apply to the virtual machines.


That's right. Actually, while writing this I had my planned configuration in mind, where I want to put Exchange traffic through one NIC and the rest of the traffic through the other.

However, in the case of iSCSI, if the host has two physical NICs on the same subnet (e.g. 1.1.1.1) then one virtual NIC will be fine, as it will be able to talk to all of the storage controllers. If the physical NICs are multipathed on different subnets (e.g. 1.1.1.1 and 2.2.2.2) then ideally you will need to add two virtual NICs, one for each subnet, so it can talk to both controllers.


Yeah, this iSCSI traffic will be on different subnets, as I want to use two separate physical NICs for redundancy.

I'm sure VHD will create some overhead, although I doubt it will be noticeable/measurable. From a management perspective, having, say, a 1TB LUN to store your VHD files is a lot easier to manage in terms of snapshots, increasing VHD sizes, cloning etc. on the fly. Having pass-through direct to a LUN means having more fixed-size LUNs to manage and a loss of functionality. It's a trade-off that only you can decide on; maybe consider which will be easier to back up too.


Can you/should you actually do snapshots of Exchange or SQL Server data? The same goes for domain controllers. As far as I know, snapshotting any stateful data volumes is not recommended.

Since R2 can now handle multiple VHD files on the same LUN, this will likely be the way I set up some of my upcoming projects.

Hope this makes sense, and if anyone wants to correct me anywhere, go ahead, as I'm more of a VMware guy, although in a month or two I will be a dirty Hyper-V guy too.


I am still deciding. All the virtual machine boot volumes will be stored in the CSV as VHDs. It would be nice for some Hyper-V guru to shed some light on the topic. As you say, in a month or two I too will have finished this project and will be able to advise some more.

Snapshotting DCs, Exchange etc. will work; it's reverting back to a snapshot where the problem lies.

In the case of a DC, if you revert back to a previous snapshot you get an issue called USN rollback, where other DCs within the domain will not resend replication updates they have already sent, for security reasons. This leaves your domain inconsistent and causes a whole bunch of issues.

However, it's a cheap way to back up, since you can use ntbackup to back up the system state on the machine and save it locally, then snapshot the machine for a backup. When you come to restore, restore the machine off the network and restore the system state from the local backup. (Or, in the case of a DC, you could also just dcpromo down and back up.)
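The system-state step above, sketched with era-appropriate commands (the backup path is a placeholder):

```
rem ntbackup (Server 2003 and earlier): back up system state to a local .bkf
ntbackup backup systemstate /J "SystemState" /F "D:\Backups\systemstate.bkf"

rem On Server 2008/R2 ntbackup is gone; the built-in equivalent is wbadmin
wbadmin start systemstatebackup -backupTarget:D:
```

Either way, the point is that the system-state copy lives inside the machine before the snapshot is taken, so a restored snapshot can be brought up off the network and have a supported system-state restore applied.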

But if you have the funds, back up with agents for the best results and a proper, up-to-date restore.

That said, I would still recommend VHD files over a directly accessed LUN for an easier life. Even within VMware I have never yet bothered with a pass-through LUN over using a VMDK; I'm sure it is required for some instances, like managing a SAN via a virtual machine and some cluster setups, but I've not needed it so far.

Tested on an iSCSI SAN with 2 x 4-port NICs. These four ports were aggregated into a 4Gbps link, which forms one path. Times two gives 2 paths at 4Gbps each.

The host only has 2 x 1Gbps ports, each linked to a single path, giving 2 paths to the storage, active/active on the NICs, round-robin style. 2Gbps total for each host.

I performed the standard tests that I normally do, run from the VM itself with iometer across four set tests. However, I will just give some brief figures from the first test to show how it worked out:

CertForums.com is not sponsored by, endorsed by or affiliated with Cisco Systems, Inc. Cisco®, Cisco Systems®, CCDA™, CCNA™, CCDP™, CCNP™, CCIE™, CCSI™; the Cisco Systems logo and the CCIE logo are trademarks or registered trademarks of Cisco Systems, Inc. All other trademarks, including those of Microsoft, CompTIA, VMware, Juniper ISC(2), and CWNP are trademarks of their respective owners.