Tag Archives: zfs

Note – this process will completely destroy all configuration and data on the ZFS Appliance. I only need to do this when a system is returned to me with an unknown IP and password, but I can still get onto the ILOM. Contact Oracle Support before attempting this, and make sure you truly understand what you are doing.

Normally, if you can log in to a system, you can issue the command ‘maintenance system factoryreset’ to achieve the same result. DO NOT DO THIS IF YOU HAVE ANY DATA YOU NEED ON THE APPLIANCE.
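For reference, on an appliance you can still log in to, the command is run from the appliance CLI (the prompt name here is illustrative):

```
zfssa:> maintenance system factoryreset
```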

Application Zones on a SuperCluster Solaris 11 LDOM are subject to far fewer restrictions than the Exadata Database Zones. This also means that the documentation is less prescriptive and detailed.

This post will show a simple Solaris 11 zone creation; it is meant as an example only, not as a supported procedure. I am going to use a T5 SuperCluster for this walkthrough. The main differences you will need to consider for an M7 SuperCluster are:

both heads of the ZFS-ES are active so you will need to select the correct head and infiniband interface name.

there is only one QGBE card available per PDOM. This means you may need to present vnics from the domain that owns the card if you require management network connectivity.

Considerations

As per note 2041460.1, the best practice for the zone root filesystems is to use 1 LUN per LDOM and create a filesystem on this shared pool for each application zone. Reservations and quotas can be used to prevent a zone from using more than its share.
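As a sketch of that layout (the pool name matches the zonepath used later in this post; quota and reservation sizes are hypothetical):

```
# one filesystem per zone on the shared pool backed by the single LUN
zfs create zoneroots/sc5b01-d4-rpool
zfs set quota=100G zoneroots/sc5b01-d4-rpool
zfs set reservation=50G zoneroots/sc5b01-d4-rpool
```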

You need to make sure you calculate the minimum number of cores required for the global zone, as per note 1625055.1.

You need to make sure that the IPS repos are all available, and that any IDRs you have applied to your global zone are available.

Preparation

Put entries into the global zone’s hosts file for your new zone. I will use three addresses: one for the 1Gbit management network, one for the 10Gbit client network, and one for the InfiniBand network on the storage partition (p8503).
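Something like the following (the addresses and hostnames are illustrative, not from the real system):

```
# /etc/hosts in the global zone
10.10.14.50      sc5b01-d4          # 1Gbit management
10.10.16.50      sc5b01-d4-client   # 10Gbit client
192.168.28.50    sc5b01-d4-stor     # InfiniBand storage partition (p8503)
```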

Create an iSCSI LUN for the zone root filesystem if you do not already have one defined to hold zone roots. I am going to use the iscsi-lun.sh script that is designed for use by the tools which create the Exadata Database Zones. The good thing about using it is that it follows the naming conventions used for the other zones. However, it is not installed by default on Application Zones (it is provided by the system/platform/supercluster/iscsi package in the exa-family repository), and this is not a supported use of the script.

-z is the name of my ZFS-ES.

-i is the 1Gbit hostname of my global zone.

-n and -N are used by the exavm utility to create the LUNs. In our case they will both be set to 1.

-s is the size of the LUN to be created.

-l is the volume block size. I have selected 32K; you may have other performance metrics that lead you to a different block size.

root@sc5bcn01-d3:/opt/oracle.supercluster/bin# ./iscsi-lun.sh create \
-z sc5bsn01 -i sc5bcn01-d3 -n 1 -N 1 -s 500G -l 32K
Verifying sc5bcn01-d3 is an initiator node
The authenticity of host 'sc5bcn01-d3 (10.10.14.14)' can't be established.
RSA key fingerprint is 72:e6:d1:a1:be:a3:b3:d9:96:ea:77:61:bd:c7:f8:de.
Are you sure you want to continue connecting (yes/no)? yes
Password:
Getting IP address of IB interface ipmp1 on sc5bsn01
Password:
Setting up iscsi service on sc5bcn01-d3
Password:
Setting up san object(s) and lun(s) for sc5bcn01-d3 on sc5bsn01
Password:
Setting up iscsi devices on sc5bcn01-d3
Password:
c0t600144F0F0C4EECD00005436848B0001d0 has been formatted and ready to use

Create partitions so your zone can access the IB Storage Network (optional, but nice to have, and my example will include them). First locate the interfaces that have access to the IB Storage Network partition (PKEY=8503) using dladm and then create partitions using these interfaces.
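A sketch of locating and creating the partitions (the underlying link names net7/net8 and the partition link names are hypothetical; yours will differ):

```
dladm show-ib     # lists HCA ports and their PKEYS - look for 8503
# create a partition datalink over each IB interface that carries the pkey
dladm create-part -l net7 -P 0x8503 sc5b01d4_net7_p8503
dladm create-part -l net8 -P 0x8503 sc5b01d4_net8_p8503
```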

Create the Zone

Prepare your zone configuration file; here is mine. Note that I have non-standard link names to make it more readable. You will need to use dladm show-link to determine the lower-link names that match your system.

create -b
set brand=solaris
set zonepath=/zoneroots/sc5b01-d4-rpool
set autoboot=true
set ip-type=exclusive
add net
set configure-allowed-address=true
set physical=sc5b01d4_net7_p8503
end
add net
set configure-allowed-address=true
set physical=sc5b01d4_net8_p8503
end
add anet
set linkname=net0
set lower-link=auto
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=mgmt0
set lower-link=net0
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=mgmt1
set lower-link=net1
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=client0
set lower-link=net2
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end
add anet
set linkname=client1
set lower-link=net5
set configure-allowed-address=true
set link-protection=mac-nospoof
set mac-address=random
end

Implement the zone configuration using your pre-configured file, or type it in manually.

Next you install and boot the zone, then use zlogin -C to log in to the console and answer the usual Solaris configuration questions about root password, timezone and locale. I do not usually configure the networking at this time, preferring to add it later.
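The sequence can be sketched as follows (the zone name and file path are illustrative, derived from the zonepath above):

```
zonecfg -z sc5b01-d4 -f /var/tmp/sc5b01-d4.cfg   # load the configuration file
zoneadm -z sc5b01-d4 install                     # installs from the IPS repos
zoneadm -z sc5b01-d4 boot
zlogin -C sc5b01-d4                              # console login for the config questions
```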

Resource Capping

At the time of writing (20/04/16) virtual and physical memory capping is not supported on SuperCluster. This is mentioned in Oracle Support Document 1452277.1 (SuperCluster Critical Issues) as issue SOL_11_1.

If you have a filesystem containing data that is accessed often, but you do not want to record access-time information because the data is static (e.g. content for a webserver), you can change this with ZFS properties.
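A minimal sketch, assuming a hypothetical pool/filesystem name:

```
# stop recording access times on reads; existing data is unaffected
zfs set atime=off pool1/webcontent
zfs get atime pool1/webcontent    # verify the property took effect
```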

As the oracle user

Stop any databases running from the ORACLE_HOME where you want to enable dNFS.
Ensure you can remotely authenticate as sysdba, creating a password file using orapwd if required.
Relink the oracle binary for dNFS support.
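The steps above can be sketched as commands (the ORACLE_SID in the password file name is hypothetical):

```
# as the oracle user, with the databases from this home shut down
orapwd file=$ORACLE_HOME/dbs/orapwDB1 entries=5   # only if no password file exists
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on                      # relink the oracle binary with dNFS enabled
```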

I was a little uncertain about the oranfstab entries, as most examples relate to a ZFS-BA, which has many IB connections and two active heads, whereas the 7320 in this case was configured Active/Passive. I created $ORACLE_HOME/dbs/oranfstab with the following entries.
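A hedged sketch of what such entries look like for a single active head (the server name, path address and export/mount points are hypothetical):

```
server: sc5bsn01
path: 192.168.28.1
export: /export/dnfs/oradata mount: /u01/oradata
```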

I had a situation where I wanted to restrict access to a project on my ZFS storage appliance (7320) to a small list of hosts on a private network. The project needs to be accessible r/w, with root permissions from 4 hosts that I need to specify by IP address.

192.168.28.2
192.168.28.3
192.168.28.6
192.168.28.7

However, other hosts in the 192.168.28.X/22 range must not be able to mount the share.
The way to achieve this is to lock down the permissions and then explicitly grant access to the systems you need. You have 3 ways of specifying the names of hosts for exceptions:-

Host(FQDN) or Netgroup – This requires you to have your private hostnames registered in DNS, which was not possible in my case. You CANNOT enter an IP address in this field.

DNS Domain – all of my hosts are in the same domain, so this was not fine-grained enough.

Network – Counter-intuitively, it is Network that allows me to specify individual IP addresses, using a CIDR netmask that matches only one host (the netmask does not have to match that of the underlying interface).

First thing – set the default NFS share mode to ‘NONE’ so that non-excepted hosts cannot mount the share.

Then add an exception for each host, using a /32 netmask, which limits it to a single IP.
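From the appliance CLI this can be sketched as follows (the project name is hypothetical; the same exceptions can be entered in the BUI):

```
shares select myproject
set sharenfs="sec=sys,rw=@192.168.28.2/32:@192.168.28.3/32:@192.168.28.6/32:@192.168.28.7/32,root=@192.168.28.2/32:@192.168.28.3/32:@192.168.28.6/32:@192.168.28.7/32"
commit
```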

Usually, to get rid of a defunct ZFS pool, you just import it by id and destroy it. Unfortunately, this pool was created on a newer version of Solaris, so I cannot import it onto my machine.

root@ssccn1 # zpool import
  pool: rpool
    id: 3132242033135066260
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:
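For reference, when the import does work, the usual removal looks like this (the numeric id comes from the zpool import listing; the temporary name is arbitrary):

```
zpool import -f 3132242033135066260 deadpool   # import by id under a temporary name
zpool destroy deadpool
```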

Sometimes I hit a problem where I have downloaded a load of software images and have totally filled my ZFS home directory. Unfortunately, I don’t have root on this system, so I can’t extend my quota and have to find a workaround.

One way to get some space back is to resize/truncate a file using dd. Locate a large file on your disk that you no longer need or can easily replace.
kitty@eedi-sol-desktop2 # ls *iso
OAKFactoryImage_2.6.0.0.0_130423.1.iso sol-11_1-text-sparc.iso
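Truncating one of those images can be sketched like this (the demo uses a throwaway file rather than a real ISO, so it is safe to run anywhere):

```shell
# create a stand-in for a large file you no longer need
dd if=/dev/zero of=bigfile.iso bs=1024 count=1024 2>/dev/null
# dd with /dev/null as input opens the output file, truncates it, and copies
# nothing, so the file shrinks to 0 bytes and its blocks are freed
dd if=/dev/null of=bigfile.iso 2>/dev/null
ls -l bigfile.iso    # size is now 0
rm bigfile.iso       # clean up the demo file
```

The quota relief is immediate because ZFS frees the file’s blocks as soon as it is truncated, without needing to delete the file itself.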