Category Archives: storage

Note – this process will completely destroy all configuration and data on the ZFS Appliance. I only need to do this when a system is returned to me with an unknown IP and password, but I can get onto the ILOM. Please contact Oracle Support before doing this and truly understand what you are doing.

Normally, if you can login to a system you can issue the command ‘maintenance system factoryreset’ to get this result. DO NOT DO THIS IF YOU HAVE ANY DATA YOU NEED ON THE APPLIANCE.

As the oracle user:

Stop any databases running from the ORACLE_HOME where you want to enable dNFS.
Ensure you can remotely authenticate as SYSDBA, creating a password file using orapwd if required.
Relink the Oracle binaries for dNFS support (a sketch of the commands follows).
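For reference, the relink is done from $ORACLE_HOME/rdbms/lib using the dnfs_on make target, and orapwd only needs a file name and a password. A minimal sketch, run as the oracle user – the password is obviously a placeholder:

cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
orapwd file=$ORACLE_HOME/dbs/orapw${ORACLE_SID} password=MySysPassword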

I was a little uncertain about the oranfstab entries, as most examples relate to a ZFS-BA, which has many IB connections and two active heads, whereas the 7320 in this case was set up Active/Passive. I created $ORACLE_HOME/dbs/oranfstab with entries along the following lines.
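The actual addresses were specific to my network, but a minimal oranfstab for a single active head follows the documented server/path/export/mount layout. The server name, IP, export and mount point below are all placeholders:

server: zfs7320
path: 192.168.28.100
export: /export/project1/oradata mount: /u02/oradata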

I had a situation where I wanted to restrict access to a project on my ZFS storage appliance (7320) to a small list of hosts on a private network. The project needs to be accessible r/w, with root permissions from 4 hosts that I need to specify by IP address.

192.168.28.2
192.168.28.3
192.168.28.6
192.168.28.7

However, other hosts in the 192.168.28.X/22 range must not be able to mount the share.
The way to achieve this is to lock down the permissions and then explicitly grant access to the systems you need. There are three ways of specifying the hosts for exceptions:

Host(FQDN) or Netgroup – This requires you to have your private hostnames registered in DNS, which was not possible in my case. You CANNOT enter an IP address in this field.

DNS Domain – all of my hosts are in the same domain, so this was not fine-grained enough.

Network – counter-intuitively, it is Network that allows me to specify individual IP addresses, using a CIDR netmask that matches only one host (the netmask does not have to match that of the underlying interface).

First thing – set the default NFS share mode to ‘NONE’ so that non-excepted hosts cannot mount the share.

Then add an exception for each host, using a /32 netmask, which limits the entry to a single IP.
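For the record, the same change can be made from the appliance CLI; a sketch with a made-up project name and only two of the four hosts shown, to keep it short – the exceptions are the @CIDR entries in the rw= and root= lists (check your firmware's CLI help for the exact sharenfs syntax):

zfs7320:> shares select project1
zfs7320:shares project1> set sharenfs="sec=sys,rw=@192.168.28.2/32:@192.168.28.3/32,root=@192.168.28.2/32:@192.168.28.3/32"
zfs7320:shares project1> commit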

Way back in the mists of time I used to use port-based zoning on Brocade switches; however, I started having problems using this with newer storage systems (almost certainly pilot error!). I needed to zone some switches for a customer's piece of work, and this time I thought I'd get with the future and use WWN-based zoning.

So, in my setup I have two hosts, each with two connections per switch, and two storage arrays, each with one connection to the switch.
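Before building the config shown below, each WWN gets an alias and the aliases are paired up into zones. The alias names and WWNs here are placeholders – the zones port2, port3 etc. that appear in the cfgcreate were built in the same way:

swd77:admin> alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:01"
swd77:admin> alicreate "array1_ctrl0", "20:02:00:a0:b8:cc:dd:ee"
swd77:admin> zonecreate "port2", "host1_hba0; array1_ctrl0"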

swd77:admin> cfgcreate "customer1","port2; port3; port8; port9"
swd77:admin> cfgsave
You are about to save the Defined zoning configuration. This
action will only save the changes on Defined configuration.
Any changes made on the Effective configuration will not
take effect until it is re-enabled.
Do you want to save Defined zoning configuration only? (yes, y, no, n): [no] yes

When you’re happy with your configuration, enable it.

swd77:admin> cfgenable customer1
You are about to enable a new zoning configuration.
This action will replace the old zoning configuration with the
current configuration selected.
Do you want to enable 'customer1' configuration (yes, y, no, n): [no] y
zone config "customer1" is in effect
Updating flash ...

Check at the OS level to see if you can see all your required volumes.
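On a Solaris host that amounts to something like the following – the redirect just stops format waiting for input:

root@host # format < /dev/null
root@host # luxadm probe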

You know how it is: you get a rack of storage which you've turned into multiple identical LUNs. Then you need to feed these to ASM, which requires your partitions to start at a cylinder greater than 0 (so the disk label in cylinder 0 is not overwritten).

Firstly you need to make sure all of your disks have a valid label. You can do this either by manually selecting each disk you need to label in format and answering ‘yes’ when prompted, or you could create a command file to feed into format.

The file should select each disk by its number (from format) and issue the command to label it:

disk 12
label
disk 14
label
disk 72
label

I have a simple script to generate this file, but it does assume you have something distinctive about the disks so you can grep them out of the format output. In this case it is all the devices on controller c26, but it could be something like the cylinder count or disk manufacturer. A sketch of such a script follows.
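A minimal sketch, assuming 'c26' is the distinctive string and that nawk is available – format prints each disk as '12. c26t0d0 <drive description>', so the trailing dot on the number needs stripping before writing out the disk/label pairs:

format < /dev/null 2>/dev/null | nawk '/c26/ {sub(/\.$/, "", $1); print "disk " $1; print "label"}' > label_disks.cmd
format -f label_disks.cmd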

The sscs commands are asynchronous – they return before the action has completed, so long-running tasks like creating a RAID-5 volume will still be running while you create your volumes on it.

You can simply delete any mix-ups:

root@c14-48 # sscs delete -a esal-2540-2 volume vol3

At this point, you can either map your volumes to the default storage domain, and all hosts connected to the storage will be able to see all the volumes, or you can do LUN mapping and limit which hosts can see which volumes.

Map to the default storage domain

for i in 1 2 3 4 5
do
sscs map -a esal-2540-2 volume vol${i}
done

Create host based mappings

Create your hosts

root@c14-48 # sscs create -a esal-2540-2 host dingo

root@c14-48 # sscs create -a esal-2540-2 host chief

Create the initiators that map to the World Wide Name (WWN) of the Host Bus Adaptor (HBA) port on each machine.

First find your WWN – you can do this either by looking on the storage switch if you have one, or on the hosts that will be accessing the storage.
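On a Brocade switch, switchshow lists what is logged in on each port; the WWN reported against an F-Port is the port WWN of the attached HBA. Trimmed output with a placeholder WWN:

swd77:admin> switchshow
...
  2    2   id    N4   Online      F-Port  10:00:00:00:c9:aa:bb:01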

Looking on the host, you issue the command fcinfo hba-port and look for the HBA port WWN associated with the correct fibre channel devices.
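From memory, the initiator creation then looks something like the following – the WWN is a placeholder, and the exact flags are worth checking against your CAM release's sscs man page:

root@c14-48 # sscs create -a esal-2540-2 -w 210000E08BAABB01 -h dingo initiator dingo_hba0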