Disclaimer

The individual owning this blog works for Oracle in Germany. The opinions expressed here are his own, are not necessarily reviewed in advance by anyone but the individual author, and neither Oracle nor any other party necessarily agrees with them.

Wednesday, April 26. 2017

Sometimes a simple question leads you deep into very basic discussions: this was something I had to solve two years ago. The hard part was not figuring it out, that was pretty obvious from the start. The problem was explaining it.

Since then I have seen it recur several times, most recently two weeks ago, so I decided to write this down. I will just take the customer's question: "I had a SAN and my tar -x was running fine with a short runtime. Now you gave me a ZFSSA, I'm using NFS, and the times for tar -x are horrible: 61 minutes." I'm writing about this now because I've seen this problem in different guises again and again over the last few months, and I think it's time to write an article I can simply point to the next time I see it.

To start with: this issue is not Solaris specific ... it is more or less an NFS-specific problem. So maybe it's interesting to users of other operating systems as well.

Sunday, March 5. 2017

Recently I had a performance tuning gig at a customer who reported that, despite both systems having the same number of vCPUs configured in the logical domain, their performance was different. A further observation was that a significant number of CPUs weren't being used on that system.

Sunday, March 5. 2017

I just want to share something I learned a few days ago. Let's assume you have a ZFS pool, 1 TB in size, and you want to add some storage. By accident you grab a slice 0 that is 128 MB in size instead of the whole LUN. The obvious question is how to get rid of it. You might get the idea of replacing the 128 MB slice with a 1 TB LUN; we do this kind of replacement all the time to grow rpools. However, for the given situation this is an exceptionally bad idea.
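A minimal sketch of how the situation arises (the pool name and the device name are hypothetical):

    # the pool consists of one whole 1 TB LUN; by mistake slice 0 of the new LUN is added
    zpool add tank c0t600144F0AAAA0001d0s0   # 128 MB slice 0 instead of the whole LUN
    zpool status tank                        # the slice now shows up as its own top-level vdev
    zpool iostat -v tank                     # the tiny vdev sits next to the 1 TB one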

Friday, October 14. 2016

This issue landed on my table quite a number of times in the last four weeks. A customer is trying to use the vHBA feature of Oracle VM Server for SPARC 3.3 and newer. While configuring and testing it, the customer sees an error message like

When you see something like that, the solution is quite simple: you need a LUN 0 that is visible and functional (it has to respond when commands are sent to it) on the target. Or, as the documentation states:

When configuring a virtual SAN, note that only SCSI target devices with a LUN 0 have their physical LUNs visible in the guest domain. This constraint is imposed by an Oracle Solaris OS implementation that requires a target's LUN 0 to respond to the SCSI REPORT LUNS command

I would not call it a constraint; it's more a matter of standard compliance, because by the definition of the SCSI-3 standard LUN 0 is the only LUN you can always assume to be there, as the standard simply mandates it ... if the device is standard compliant. The SCSI-3 specification (available at t10.org) states in section "4.7.2 SCSI target device":

A logical unit is the object to which SCSI commands are addressed. One of the logical units within the SCSI target device shall be accessed using the logical unit number zero. See 4.8 for a description of the logical unit.

And shall is specified as:

A keyword indicating a mandatory requirement. Designers are required to implement all such mandatory requirements to ensure interoperability with other products that conform to this standard.

So as soon as you make a LUN 0 available on the targets that the vHBA sees, you will see the LUNs.
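A quick way to check from the guest domain once a LUN 0 has been mapped on the array might look like this (a sketch using standard Solaris tools; your device names will differ):

    # in the guest domain, after a LUN 0 has been mapped on the target
    devfsadm            # make sure device nodes for the newly visible LUNs exist
    echo | format       # non-interactive listing of the LUNs the guest can now see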

Wednesday, September 28. 2016

One of the usual lines in a customer's /etc/system is the one limiting the size of the ARC. For a long time you used the zfs_arc_max parameter for this. However, with Solaris 11.2 there is a new parameter named user_reserve_hint_pct, which is currently the suggested way to limit the ARC. It works differently from the old parameter, though, and I want to shed some light on this in this blog entry.
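As a rough sketch of the difference between the two approaches (the values are only examples, and you should check the current Oracle documentation for the supported way to set and persist user_reserve_hint_pct):

    * old approach: cap the ARC at an absolute size, here 4 GB
    set zfs:zfs_arc_max=0x100000000

    * new approach since Solaris 11.2: reserve a percentage of memory for applications,
    * which indirectly bounds how far the ARC can grow
    set user_reserve_hint_pct=60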

Tuesday, May 31. 2016

Last night the third update to the Solaris 11.3 cheatsheet went online. I've added live migration for kernel zones, live reconfiguration for kernel zones and non-global zones, and a part about dissecting the version numbers of Solaris 11 packages. It's available at the usual location.

Friday, May 20. 2016

Some typos fixed, some clarifications included, some content added, a layout quirk removed: there is a new version of the Solaris 11.3 cheatsheet. You will find the new cheatsheet (version 19.05.16 22:08 when you look at the last page) at the known location.

PS: I just saw that I forgot to update the contributions list ... dang ... it will be in the next update.

Tuesday, April 19. 2016

As I have been getting some questions about this recently: with a normal SATA/SAS disk you can simply unplug the disk without warning or preparation. NVMe devices are different. For all practical purposes they are PCI devices, and they want to be handled as such. Unplugging an NVMe drive without using the proper procedure is pretty much like yanking out a PCI card without preparation. So when you want to remove an NVMe disk, you have to use the hotplug command in Solaris, as the documentation describes.
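A rough sketch of what that procedure looks like with hotplug(1M) (the device path and connection name below are made up for illustration; take the real ones from the hotplug list output):

    # find the PCIe hotplug connection that belongs to the NVMe device
    hotplug list -lv
    # power the slot off before physically pulling the drive
    hotplug poweroff /pci@0,0/pci8086,2030@1 pcie1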