'''Network RAID1''' is supported by the {{Target}} and allows for two or more {{T}} systems to become physically redundant in order to mask hardware or storage array failures.

A prototype of a {{Target}}/[[Initiator]] ("T/I") Repeater Node was built with [[DRBD]] volumes as described below. A {{anchor|T/I Repeater Node}}"T/I Repeater Node" is a physical or virtual machine that is running both iSCSI target and Initiator stacks. The DRBD T/I Repeater Node was implemented with [[Open-iSCSI]] and {{Target}}s running in DomUs under Xen. The Xen DomU VMs for {{T}} were used to ease development.

The setup can be ported into [[LIO-VM]] for testing and educational purposes. For the [[Initiator]]s, both [[Open-iSCSI]] and [[Core-iSCSI]] can be used. For a multi-OS T/I repeater node, Host OS local iSCSI storage can be imported through a hypervisor into [[LIO-VM]].

== Setup ==

A Network RAID1 demo setup can be built with virtual machines. In an early example based on Xen, both [[Initiator]] and {{T}} nodes were fully redundant. The example contained four Xen paravirtualized machines (two {{T}} VMs and two Initiator VMs with ext3/[[OCFS2]]) running across two physical Dom0 machines, each a two-socket, dual-core x86_64 system with 8 GB of memory. The two Network RAID1 client VMs had no local storage (other than a Xen block device for the root filesystem), and accessed storage on {{T}} through [[Open-iSCSI]]. On both of the Network RAID1 target nodes, volumes were created on top of available SCSI block devices. On the primary Network RAID1 target node, the RAID1 array was built as shown below.
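The exact command is not preserved here; a minimal ''mdadm'' sketch that is consistent with the ''/proc/mdstat'' output shown below (the element names ''/dev/dm-2'' and ''/dev/dm-3'' are taken from that output) would be:

<pre>
# Illustrative reconstruction: create the mirror with an internal write
# intent bitmap; the remote iSCSI element (the Secondary's local storage)
# is marked write-mostly so frontend READs are served from local storage.
mdadm --create /dev/md0 --level=1 --raid-devices=2 --bitmap=internal \
      /dev/dm-2 --write-mostly /dev/dm-3
</pre>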

In the example, the Network RAID1 volume on {{T}} is constructed with Linux MD RAID1, with an internal write-intent bitmap and the write-mostly element flag. The internal bitmap tracks changed blocks, which allows failed Network RAID1 Primary and Secondary nodes to recover quickly after a node failure. The write-mostly flag is set on the Primary's remote iSCSI volume, which represents the Secondary's local storage. This ensures that READ operations coming from frontend iSCSI initiators are issued to the Primary's local storage.

The resulting Network RAID1 array looks as follows:

<pre>
[root@bbtest2 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 dm-2[0] dm-3[1](W)
      10477504 blocks [2/2] [UU]
      bitmap: 1/160 pages [4KB], 32KB chunk

unused devices: <none>
</pre>

From there, a new volume group (<code>LIO-NR1-VOL</code>) and a new volume (<code>NR1-PRIMARY-VOL</code>) are created on the LIO-NR1 array (''/dev/md0'').
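A minimal LVM2 sketch of this step (the volume size is an assumption; the prototype notes do not record it):

<pre>
# Illustrative: build the volume group and the mirrored volume on /dev/md0
vgcreate LIO-NR1-VOL /dev/md0
lvcreate -n NR1-PRIMARY-VOL -l 100%FREE LIO-NR1-VOL
</pre>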

These [[iSCSI]] volumes and LIO-NR1 volumes need to be accessible on boot by LIO-Primary, and from there, the LVM UUID is passed into a virtual iBlock (BIO Sync Ack) or FILEIO (buffered Ack) object in the {{T}} storage engine.
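How the volume is wired into the target is configuration-specific; as an illustrative step, the LVM UUID referred to above can be read with standard LVM2 tooling:

<pre>
# Illustrative: look up the LV UUID that is handed to the storage engine
lvdisplay /dev/LIO-NR1-VOL/NR1-PRIMARY-VOL | grep 'LV UUID'
</pre>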

For testing purposes, all four VM disk images are located on iSCSI storage on their respective virtualization host machines. This storage comes from one of the {{T}} nodes, and is MD RAID6 SATA with LVM2 on top of the array.

So far, the prototype has proved very stable when testing possible failure scenarios.

For production systems, we'd typically expect people to use software or hardware RAID arrays, or Linux v2.6 LVM2 block devices.

== Production plans ==

The production plans for Linux-iSCSI.org are to run LIO-NR1 on Dom0 on top of software SATA RAID6+LVM, hardware RAID5+LVM, and software SAS RAID10+LVM with Linux-HA. As the prototype has so far proved very stable when testing possible failure scenarios, getting LIO-NR1 into Dom0 testing is the next step.

== Performance ==

=== Throughput ===

Running Network RAID1 on Dom0 increases performance.

Using LVM volume block devices on the DomU Primary and Secondary T/I VMs as elements of ''/dev/md0'' on the LIO-NR1 machines seems to be a bit slower than raw SCSI block devices. We then create an LVM volume (<code>NR1-PRIMARY-VOL</code> in the prototype) on top of ''/dev/md0''; this is the storage object that is exported to frontside iSCSI Initiators.

There is also a concern that using an internal write-intent bitmap (which is pretty much a requirement for production) with MD has performance implications.
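A hedged mitigation, assuming the overhead comes from frequent bitmap updates, is a larger bitmap chunk, which trades fewer bitmap writes for coarser resync granularity (the chunk size below is an assumption):

<pre>
# Illustrative: re-create the internal bitmap with a larger chunk size
mdadm --grow /dev/md0 --bitmap=none
mdadm --grow /dev/md0 --bitmap=internal --bitmap-chunk=65536
</pre>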

=== Latency ===

Having dedicated 1 Gb/sec or 10 Gb/sec ports between Network RAID1 nodes running jumbo frames for dedicated traffic on Dom0 should help improve latency and performance by reducing the number of interrupts produced by networking hardware.
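For instance, a minimal sketch of enabling jumbo frames on a dedicated replication interface (the interface name is an assumption, and the switch must also support the larger MTU):

<pre>
# Illustrative: raise the MTU on the dedicated inter-node port
ip link set eth1 mtu 9000
</pre>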

Also, using dedicated CPU affinity for LIO-Target threads on Dom0 is something that should be considered for production.
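As a sketch, the affinity could be set with standard tooling (the CPU number is an assumption, and the PID lookup is deployment-specific):

<pre>
# Illustrative: pin a LIO-Target thread to CPU 2
taskset -pc 2 $PID   # $PID is the PID of the target thread
</pre>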

== Capacity management ==

The amount of Network RAID1 storage available for frontend iSCSI initiators can be managed (grown) at least as follows:

* Growing an existing LIO-NR1 volume (<code>NR1-PRIMARY-VOL</code> in the prototype) by building a new LIO-NR1 array of local/remote storage objects. The frontend iSCSI initiators will have to rescan the logical unit for capacity (see the sketch after this list), and then expand the partition and filesystem.

* Creating a new LIO-NR1 array and volume and making a new iSCSI LUN available to frontend iSCSI initiators. These initiators can then create new filesystems or extend existing logical volumes.
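For the first option, the initiator-side rescan with [[Open-iSCSI]] could look like the following sketch (the device name is a placeholder):

<pre>
# Illustrative: rescan all Open-iSCSI sessions, then check the new size
iscsiadm -m session -R
blockdev --getsize64 /dev/sdX
</pre>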
