iSCSI-based 7410 unified storage system has some usability and integration issues

By Logan Harbaugh

Network World | May 4, 2009 1:00 AM PT

Sun's latest addition to its high-end enterprise storage repertoire – the iSCSI-based Sun Storage 7410 Unified Storage System – is certainly a high-performance offering, but in our testing we found some annoying usability problems and some potential integration issues.

The system leverages Sun's ZFS file system and uses solid-state disks (SSDs) in place of expensive cache memory to improve both read and write performance without the need for costly 15,000RPM or 10,000RPM hard drives. It uses up to six 100GB SSDs for a read cache, and up to four 18GB SSDs per drive shelf (up to 16 total) for a write cache.
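Taken together, the configuration limits above imply the following maximum cache capacities – simple arithmetic on the figures quoted, not numbers published by Sun:

```python
# Maximum SSD cache capacities implied by the 7410 configuration limits
# described above (derived figures, not Sun-published specifications).

READ_CACHE_SSDS, READ_SSD_GB = 6, 100    # per controller, for read caching
WRITE_CACHE_SSDS, WRITE_SSD_GB = 16, 18  # up to four per shelf, 16 total

print(READ_CACHE_SSDS * READ_SSD_GB)     # 600 GB of read cache
print(WRITE_CACHE_SSDS * WRITE_SSD_GB)   # 288 GB of write cache
```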

Sun claims maximum performance of 288,000 I/Os per second (IOPS) and 1.1GBps throughput for the 7410, and based on our limited testing, we feel it should be able to sustain those kinds of numbers with either four four-port 1Gbps Ethernet adapters or multiple 10 Gigabit Ethernet adapters.

The system consists of one or two Sun Storage 7410 controllers equipped with eight 2.5-inch drive bays, accommodating up to six 100GB SSDs and two 500GB SATA drives for boot purposes. The 7410s connect to as many as 12 J4400 drive shelves, each of which supports up to 24 SATA drive bays, of which up to four can hold 18GB SSDs for write caching. The controllers are connected to the J4400s via external SAS cables.

The system Sun shipped to us to test consisted of two Sun Storage 7410 controllers, each with two 100GB SSDs and two 500GB SATA drives, and one J4400 shelf with four 18GB SSDs and 20 750GB SATA drives. The 7410s each had seven gigabit Ethernet ports, plus an Integrated Lights Out Manager (ILOM) port and a serial management port, as well as KVM connections.

Each 7410 controller had 16 Opteron cores, 128GB of RAM, and two 100GB SSDs set up as read cache. Three of the gigabit Ethernet ports are used for cluster interconnects and four are available for iSCSI traffic. There are three open slots, which can be used for four-port Gigabit network interface cards or dual-port 10G adapters, for a total of 16 gigabit Ethernet ports or eight 10G ports.

Initial setup can be accomplished through the serial port or, if DHCP is enabled, via the management port – you only need to discover the assigned network address and connect to it via a browser or SSH. Once the initial configuration of network interfaces is completed, you can log into the Web console and complete the cluster configuration (if you're using two systems).

Cluster configuration at first glance was very easy: a simple matter of letting the system detect the second connected Sun Storage 7410 and telling it to add the second system to the cluster. The redundant controllers can be set up in active-active or active-passive mode. Active-active provides two separate storage pools, each with its own IP address; if one of the controllers fails, that controller's pool is taken over and served by the other controller. In an active-passive configuration, only one controller is active, serving a single storage pool; if the active controller fails, the passive controller takes over.

An active-passive configuration is less complex to set up and has a faster switch-over time in the event of failure, while an active-active setup keeps each controller less heavily utilized under normal circumstances and provides two storage pools rather than one. Failover takes a little over a minute in active-passive mode and about 30 seconds longer than that in active-active mode. In either case, the iSCSI initiators on our test servers lost their connections and had to be manually reconnected to the iSCSI volumes.

While we could test the failover features of the cluster, testing the controllers' performance in a clustered configuration was not possible because of an unresolved issue with the system Sun sent us; symptoms included a freezing administrative interface, spurious reports of drive failures and failure of the ILOM interface.

We therefore pulled one of the controllers out of the test bed and ran all performance tests on a single controller, since the clustered configuration doesn't add any extra performance considerations. We would have preferred to set up each of the four available iSCSI ports on this single controller as a separate port on the same subnet of our network, but Sun does not currently support that configuration (although Sun told us it is working on such support for a future release). The Sun Storage 7410 requires a different network for each port on the controller – up to 16 ports if you used all available slots for four-port gigabit cards. This is a clumsy and inefficient way to set up storage: if you need to rearrange servers for any reason, and each one has a different subnet configuration, managing the pool of servers becomes more difficult.

Following instructions from the Sun engineers, we then set up all four ports as a single aggregated connection using the Link Aggregation Control Protocol (LACP). However, because the control interface uses one of these same four ports, we had to designate one port as an admin port and three as aggregated iSCSI ports.

Performance details

Performance of the single controller system, as far as our limited test bed could verify, was excellent.

The controller managed an average of 67MBps of throughput per gigabit connection. We did not have enough servers to generate enough traffic to max out the aggregated link. However, extrapolating our base numbers to a fully populated system with 16 gigabit Ethernet connections yields roughly 1,072MBps, which is very close to Sun's throughput claim for the Sun Storage 7410. That assumes you didn't encounter any scalability issues along the way, of course.
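The extrapolation is straightforward multiplication, and worth spelling out because it rests on an optimistic assumption – that per-port throughput scales linearly all the way to a fully populated box:

```python
# Back-of-the-envelope throughput extrapolation for the Sun Storage 7410.
# Assumes per-port throughput scales linearly to a full complement of
# gigabit ports -- an optimistic assumption, as noted in the text.

MEASURED_MBPS_PER_PORT = 67   # average throughput we observed per gigabit link
MAX_GIGABIT_PORTS = 16        # all slots filled with four-port gigabit NICs

projected_mbps = MEASURED_MBPS_PER_PORT * MAX_GIGABIT_PORTS
print(projected_mbps)         # 1072 MBps, close to Sun's 1.1GBps claim
```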

With four gigabit Ethernet connections across our IOmeter-driven tests, we were unable to push CPU utilization on the Sun Storage 7410 above 3% while averaging 1,600 IO/sec on each of our four connections, which bodes well for the system's ability to support the 30 or 40 servers necessary to generate the 288,000 IO/sec maximum Sun advertises.
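As a rough sanity check on that claim, our four-connection test bed generated 6,400 IO/sec; Sun's advertised maximum works out to about 45 loads the size of ours, assuming – as the vendor presumably does – that the box scales linearly:

```python
# Rough scaling of our IOmeter results toward Sun's 288,000 IOPS claim.
# Assumes each additional load generates roughly what our test bed did
# and that the controller scales linearly -- both simplifications.

IOPS_PER_CONNECTION = 1_600   # average we measured per gigabit connection
CONNECTIONS = 4               # connections in our test bed
CLAIMED_MAX_IOPS = 288_000    # Sun's advertised maximum

testbed_iops = IOPS_PER_CONNECTION * CONNECTIONS  # 6,400 IO/sec
loads_needed = CLAIMED_MAX_IOPS / testbed_iops    # 45 test beds like ours
print(testbed_iops, loads_needed)
```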

Management is somewhat complex, with two separate consoles used to run the system: the admin console, accessible through the Web interface on the primary iSCSI port, and the ILOM console, accessible through either SSH or a serial terminal. The ILOM console is theoretically also available through the Web interface, but using it that way is not officially supported by Sun, and it frequently crashed both Internet Explorer and Firefox in our tests.

The ILOM interface is used to make changes to the BIOS, run the initial network configuration, and to perform some manual diagnostic tasks that aren't available through the administrative console. The admin console is a browser-based Java application that enables you to set up volumes, snapshots, replication and all the normal storage-area network (SAN) functions.

The business analytics section of the GUI-based admin interface contains very useful monitoring tools, with the ability to drill down to specific interfaces, network or storage protocols, as long as you're willing to dedicate one of the iSCSI ports to the admin console. Reports are available in a very wide variety of formats, with many variations. For example, you can get network I/O as a raw number, by port, by type of protocol or by source. There are similar reports for disk IO, overall storage utilization, and more. Historical data is available as well, and the amount of storage used for logging can be adjusted to keep data for longer or shorter periods of time.

One management oversight is the lack of an automatic update process. Updating the Sun Storage 7410 controller software required downloading a 487MB file, manually uploading it to each controller and then rebooting (which takes more than three minutes). After updating, all security certificates were invalid, which required several extra steps in either IE or Firefox every time the console was accessed from a new system.

Sun offers a standard, though not exceptional, set of storage features with the Sun Storage 7410, including remote and local replication over synchronous or asynchronous connections, snapshots and mirroring of volumes. While Sun claims support for industry standards, that claim mostly rests on the fact that the system uses industry-standard parts. The Sun Storage 7410 cannot be expanded with parts bought from anyone but Sun without voiding the warranty, and it does not support the Storage Management Initiative Specification (SMI-S), developed by the Storage Networking Industry Association to promote interoperability between SAN products. Sun also says future software features will be available at no additional cost, though that is only true as long as you pay the yearly maintenance fees.

The price for one controller and the storage allotment we tested is $137,790, which is expensive for 20TB of raw capacity compared with other iSCSI or even Fibre Channel systems. The price for a redundant-controller system with the same amount of storage is $192,465.

The Sun Storage 7410 system is clearly positioned – in terms of price, feature set and performance capacity – to go toe-to-toe with big systems from NetApp and EMC that are designed to support dozens of connected servers simultaneously. While we could not push the box to its capacity, we were impressed by what it could handle in our test environment. That said, Sun could improve the overall usability of the product with some upgraded management tools and wider configuration support in its clustered implementation.

Harbaugh is a freelance reviewer and IT consultant in Redding, Calif. He has been working in IT for almost 20 years, and has written two books on networking, as well as articles for most of the major computer publications. He can be reached at logan@lharba.com.