LSI command-line utility for SAS2 non-RAID controllers

If you are using an LSI Logic 92xx (i.e., 9211-8i, 9211-4i, 9200-16e, etc.) SAS2 controller for your OpenSolaris or NexentaStor storage machine (Solaris-based distributions use the mpt_sas driver for these), you may have had trouble finding the utilities to manage it from the command line (useful to force a rescan, reset the bus, blink a drive, and so on). I had the same trouble a while ago, and ran across the utility at SuperMicro’s FAQ site (of all places!). I still haven’t found an official download link (at LSI, or in the ‘downloads’ section at SuperMicro), but if you are looking for the utilities, you can find them at:

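(Once you have sas2ircu in hand, the common invocations look roughly like this; a sketch based on the utility’s built-in help, with controller index 0 and enclosure:bay 2:5 as example values:)

```shell
./sas2ircu LIST              # list all controllers the utility can see
./sas2ircu 0 DISPLAY         # firmware/BIOS versions, attached drives
./sas2ircu 0 LOCATE 2:5 ON   # blink the locate LED on enclosure 2, bay 5
./sas2ircu 0 LOCATE 2:5 OFF  # and turn it off again
```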
Thank you very much for the link. This is a hard one to find. I was on the phone with LSI support this afternoon (on another matter), and the otherwise very knowledgeable tech was not familiar with the utility.

On a related note, the latest firmware for the 9211-8i (and related variants) does not include support for Initiator-Target mode. LSI support is looking into why. But in the meantime, those using these cards with ZFS may want to avoid version 5.00.13.00 dated 9 Feb 2010 (also known as FW P5 or ‘Phase 5’).

The LSI BIOS reports whether you are in IT or IR mode. I have not found anything from the command line with this information.

I am running OpenSolaris b134, and my 9211 cards with IR firmware are using mpt_sas. This is contrary to all reports I have found. And it was true with the original firmware (June 2009) as well as the latest (February 2010).

Just spoke with LSI again. There is no IT-mode version of the latest firmware for the internal 6Gbps cards; it is IR-mode only. Conversely, there is no IR-mode version of the latest firmware for the external 6Gbps cards; they are IT-only. The rumor is IT-mode will be supported in the next release of the firmware.

The ramification of running a JBOD (as ZFS prefers) with the IR-mode firmware is a slight performance hit. Functionally it is fine.

You realize this exchange of ours is a significant contribution to the net-knowledge on using the LSI cards under Solaris – there just is not that much information out there. Which is too bad, as all evidence says they are an excellent choice and are clearly well supported. On a related note I also spoke to our SuperMicro VAR who reports they are moving away from the SuperMicro versions of the LSI cards. While in theory they are supposed to be equivalent their experience is otherwise.

Excellent, thanks for digging that info out! You’ve had much better luck at getting information out of your LSI people than I’ve ever had. ;) Yeah, the functionality of IR mode works as I’d expect from IT: by default, the drives are all exported individually to the OS, and the Solaris kernel has full access to them.. not like the LSI cards that default to the ‘MegaSAS fakeraid’ junk or whatever it’s called, where the drives aren’t exported at all if you don’t make an array. I guess I was mistakenly thinking that was ‘IR’ mode; it must be yet-another-RAID-mode.

That’s a bummer to hear that the SuperMicro versions are different — I’ve also looked at them (because they are less expensive of course), but have always ended up going LSI so far.

And sadly, I agree that what you have contributed to the conversation is ‘new’ news for the ‘net – which is rather sad, but at least the information is available now! Hopefully people will be able to find it here, and discuss further..

Since you guys appear to be the only ones around with some insight into the LSI 9211, SuperMicro and ZFS, here goes. I’m building up a SuperMicro X8DTH-6F for testing ZFS under VMware ESX. I’m using the onboard SAS2008 as RAID-1 for the VM OS, 2 SSDs for the ZIL, and a larger SSD for cache. Only the VM OS is RAID; the other drives are all 3.0Gbps. I added 2 (real) LSI 9211 cards connected to 6.0Gbps HDDs for the tank. At least that’s the idea. I see you suggest not using version 5.00.13.00; why? Is there a “better” version? There seem to be some mix-’n’-match problems having 3 SAS2008 controllers with different versions of firmware, and I’m not sure which BIOS is actually getting loaded, as there seems to be a difference between SuperMicro’s numbering scheme and LSI’s.
I follow the idea of IR and IT mode, but don’t really know what the practical difference is.
I’ve sent email to SuperMicro and got a blank stare followed by “you wanna do what?”; no word from LSI yet.
As an aside (and off topic), anybody used the VT-d with VM and dedicated drives?
Thanks in advance,
Jon

I am actually using 5.00.13.00 with no issues; not sure what issues Mark had with it.. hopefully he’ll chime in. ;)

In my experience, the numbering scheme between SuperMicro and LSI is fairly consistent, but there are sometimes newer versions available from one or the other for their specific hardware. Note that they are *not* cross-compatible – I tried loading the newer firmware from SuperMicro on my LSI cards, and they didn’t work at all after that – had to go back to the newest LSI firmware.

I basically just grabbed the newest BIOS and firmware images from both SuperMicro and LSI, and flashed each vendor’s card with the firmware they provided. After that, I’ve had no issues with compatibility, except that when I enter the LSI BIOS during boot, it prints an error while loading (I’m blanking on what it was).. everything works once the BIOS loads though, and it doesn’t present a problem at the OS level. I did look up the error message and it was supposed to be fixed in an upcoming release of the BIOS (i.e., it’s spurious).

My understanding of IT vs IR is that the current firmware revs only support IR.. I believe IR just additionally lets you set up the “fakeraid” arrays on the card, which are presented to the OS as a single LUN but really use CPU power for the RAIDing. I haven’t tried the RAID features so can’t comment on those.

As far as using VMware – I haven’t tried with a SAS2 controller, but did with an older 3081E controller.. my goal was to have a machine with a single two-port SAS controller and eight drives to run as a “self-hosting” virtualization machine with Nexenta running as a VM and providing storage for the other VMs on the host. I couldn’t get it working (with either VMware or Xen); I think that if I had been able to use separate controllers for the boot drives, and pass through the entire controller to the Nexenta guest, things would have worked ok, but I only had the one controller available, so was trying to pass through individual drives. In your situation, it sounds like it’d probably work to me.. love to hear what your results are. ;)

We have a very similar setup: 2x NexentaStor heads with Crucial read cache, a 45-bay SuperMicro chassis, and 10x 2TB SAS drives in each head with an Intel X25-E for the write cache. Using MPIO from the hypervisor (connected via 2x 1Gbps) to the SAN (connected via 10Gbps), we aren’t able to break 100MB/s of throughput, and even when we put a 10Gbps card in the hypervisor without MPIO we can’t break 300MB/s on reads and 100MB/s on writes. This is all using a dd test with 1MB chunks and 52GB files.

The performance we are seeing is very disheartening. I do, however, have a feeling the SAS cabling is messed up between the redundant Nexenta heads and the JBOD, which is giving us a 3Gbps (300MB/s) limit; but the writes should still be able to max that 3Gbps out.
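For anyone wanting to reproduce the numbers above, the dd test can be sketched like this (GNU dd syntax; TARGET below is a placeholder path, so point it at a file on the pool under test, and bump count up to 53248 for the full ~52GB file):

```shell
# Scaled-down sequential write/read test with 1MB blocks.
# TARGET is a placeholder; use a file on the pool under test.
TARGET=/tmp/ddtest.bin

# Write: 64x 1MiB blocks, fsync'd at the end so the numbers are honest.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fsync

# Read it back; dd prints the throughput on stderr when it finishes.
dd if="$TARGET" of=/dev/null bs=1M

rm -f "$TARGET"
```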

My gosh! I’m so sorry for not ack’ing this comment earlier — I wanted to wait until I could read it, and have just been crazy busy!

In any case, that is very odd! Are you using multiple X25-Es for write cache, or just a single one / single mirror? We can break 100MB/s on writes with a single client on a single 1Gbit link on our setup.

For the writes – you will be lower than the maximum SAS speed (as you also have to take into account the writes for the redundant drives, etc.).. however, you *should* have at least 12Gbit of total SAS bandwidth (assuming SAS1, not SAS2) available (4 channels in the cable, with the port multipliers split up among those).. do you have the drives spread out across different multipliers on the enclosure, to try to get them on separate SAS channels?

Oh, and do you have write cache *disabled* on the luns? In my experience, if write caching is enabled, it actually does not use the log drives; they are only used for sync writes.. so kind of unintuitive, but write cache has to be turned off to use the ultra-fast SSDs to cache up writes! :)
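To make that concrete on a COMSTAR/Nexenta setup, disabling the write cache is a per-LU property; if memory serves it’s `wcd` (write-cache-disable), and the GUID below is just a placeholder, so verify both against your release:

```shell
# Turn the LU-level write cache off so sync writes land on the log SSDs.
# The LU GUID is a placeholder; take a real one from 'stmfadm list-lu'.
stmfadm modify-lu -p wcd=true 600144F00000000000000000DEADBEEF

# Verify: look for the "Writeback Cache" line in the verbose listing.
stmfadm list-lu -v
```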

Also curious if you’ve contacted Nexenta on this?

In any case, best of luck, and I’d be happy to offer any further advice!

I stumbled upon this post trying to find out why my 9211-8i wouldn’t be upgraded from IR to IT. When trying to flash it with P8 IT firmware, it halts the process after “Valid Initialization Image verified.” and “Valid BootLoader Image verified”, and spits out the following error: “ERROR: Cannot Flash IT Firmware over IR Firmware” (screenshot). Have you encountered this before? Using “-o” has no effect.

Woah. Now, after upgrading to P8 IR, it states that I have 8 enclosures (and, in turn, results in 8×6 HDDs being reported by the controller — I have 6 HDDs connected), whereas it only displayed 1 enclosure (the 9211-8i is connected to a HP SAS expander) on the old firmware (P7, I believe).

I’ve now been able to flash it to P7 IT firmware; I had to boot it into native DOS, since I had to use “sas2flsh.exe” (and not “sas2flash.exe”) to be able to erase the flash (to downgrade) and to overwrite the IR firmware with IT. I spent ages getting DOS to find my CD-ROM so that I could access the firmware-files/utilities.

Now the expander shows up as one enclosure. I discussed it with LSI over email, and they could confirm that the HP SAS expander (2.06) seems to be incompatible with the 9211-8i running P8 firmware.
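For others hitting the same wall, the DOS-side crossflash sequence that worked here can be sketched like this (a sketch, not gospel: 2118it.bin / mptsas2.rom are the usual image names in the 9211-8i firmware packages, and the SAS address on the last line is a placeholder for whatever is printed on your card’s sticker):

```
rem Erase the flash first so the IR image is gone (this is what lets
rem the flasher write IT firmware afterwards); -o enables advanced mode.
sas2flsh -o -e 6

rem Write the IT firmware plus the boot BIOS image.
sas2flsh -o -f 2118it.bin -b mptsas2.rom

rem Erasing the flash also wipes the SAS address; restore it from the
rem sticker on the card (the 5006... value here is a placeholder).
sas2flsh -o -sasadd 500605bxxxxxxxxx
```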

Hi, thanks for your help!
I have a question about the sas2ircu command line.
When I type “./sas2ircu list”, it works successfully.
But when I type “./sas2ircu controller0 status”, it returns “invalid controller index specified”, even though controller 0 was found by “./sas2ircu list”.
I’d appreciate your help, thanks.
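If it helps: judging from sas2ircu’s usage text, the controller argument is a bare numeric index, not the word “controller” plus a number, so something like this should work (a sketch, with index 0 as reported by LIST):

```shell
./sas2ircu LIST        # enumerate controllers and their indices
./sas2ircu 0 STATUS    # note: "0", not "controller0"
./sas2ircu 0 DISPLAY   # full controller/drive inventory
```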

The LSI rep I spoke with said that I could run a Phase 12 driver with P11 firmware, but the reason for the concurrent releases (of matching firmware & driver revisions) is that engineering has a set of test suites they run when they release a major revision upgrade.

Keeping firmware and driver in step has the benefit of vendor testing.

I was also told there should be no risk of data loss during a firmware upgrade, though LSI officially recommends a data backup before running a firmware upgrade. Thoughts, experience with this?

I’ve never had a firmware upgrade lose data for me; the only place I’d be concerned is if the controller is doing RAID for you. If the controller’s just an HBA and it bombs during the firmware upgrade, plug the drives into something else and you’re good to go.

However – backups are a good thing, you should have them if the data matters. ;)

..and I was curious what kind of luck you’re having with drive locating these days. On my build described here (4x 9211-8i’s and an onboard controller, with an 846A and an 826A backplane connected), the ‘sas2ircu locate’ function works for the right-hand half of the front backplane (controllers #1/#2), but nothing else.. curious whether it’s better than that for you or not. :)

I think I’ve figured it out – looks like some of the SAS cables have sideband support and some don’t.. as I understand it, that would do it? Yup – confirmed.. the cables that have the sideband are working fine; the ones that don’t aren’t. Obvious, eh? :)

If all cables had sideband, would sas2ircu report the actual slot numbers/etc from the backplane? Or is it always the ‘virtual’ stuff like: