
Ramdisks are a great way to ‘prove’ that it’s not the performance of the underlying disk devices that is stopping a process from writing a file quickly (though it doesn’t prove anything about the filesystem…). Ramdisks are transient: they are lost on system reboot, and they consume memory on your system, so if you make them too large you can cause yourself other problems.

Creating a Ramdisk

The ramdiskadm command is used to create a ramdisk. In this example I am creating a 2G ramdisk called ‘idisk’.
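The commands below sketch the whole process; the mount point is an assumption on my part:

```shell
# Create a 2G ramdisk called 'idisk' -- ramdiskadm prints the new device path
ramdiskadm -a idisk 2g
# Put a UFS filesystem on the raw device (newfs will ask for confirmation),
# then mount the block device
newfs /dev/rramdisk/idisk
mount /dev/ramdisk/idisk /mnt
```

Running ramdiskadm with no arguments lists the ramdisks currently configured, and ramdiskadm -d idisk removes one again when you are done.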

This post is mainly related to SuperCluster configuration, but can be applied to other Solaris-based systems.

In SuperCluster you can run the Oracle RDBMS in zones, and the zones are CPU-capped. You may want to change the number of CPUs assigned to your zone for a couple of reasons:

1) You have changed the number of CPUs that are available in the LDOM supporting this zone using a tool such as setcoremem and want to resize the zone to take this into account.

2) You need to change the zone size because you need more / less capacity.

Determine the number of processors that need to be reserved for the global zone (and all of your other zones!)

For SuperCluster you should reserve a minimum of 2 cores per IB HCA in domains with a single HCA, and a minimum of 4 cores in domains with 2 or more HCAs. You also need to take into account other zones running on the system.

So in my case, there are 512 virtual CPUs, of which I need to reserve at least 32 (4 cores x 8 threads) for my global zone, and as I will also have an app zone on there, possibly a lot more. So let’s say I’m going to keep 16 cores in the global zone; that would leave 384 virtual CPUs for my db zone.
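A quick sanity check of that arithmetic:

```shell
# 512 vCPUs in the LDOM; keep 16 cores (16 * 8 = 128 threads) in the global zone
echo $(( 512 - 16 * 8 ))
```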

Get the name of the pool

The SuperCluster db zone creation tools create the pools with logical names based on the zonename. However, the way to be sure which pool is in use is to look at the zone’s definition.
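In my case (using the zone name from later in this post) that looks like:

```shell
# Print the resource pool property from the zone's configuration
zonecfg -z sc5acn01-d5 info pool
```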

If this doesn’t return the name of the pool, your zone is not using resource pools, and may be using one of the other methods of capping CPU usage, such as allocating dedicated CPU resources (ncpus=X). If so, stop reading here, as this is not the blog post you are looking for.

Find the processor set associated with your pool by either looking in /etc/pooladm.conf or by running pooladm with no parameters to get the current running config. The pooladm output is WAY more readable, so that is my preferred method.
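For reference, the two ways of looking at it:

```shell
# Current running configuration, in human-readable form
pooladm
# Persistent configuration (XML) that survives a reboot
cat /etc/pooladm.conf
```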

It looks to me that the res="pset_1" in the pool definition points to the ref_id="pset_1" in the pset definition.

OK, so now I know my pset is called pset_sc5acn01-d5.blah.uk.mydomain.com_id_6135 and it currently has 8 CPUs. I also know that my running config and my persistent config are synchronised.

Shutdown the database zone.

This may not be necessary, but since Oracle can make assumptions based on CPU count at startup, I think it is safest.
# zoneadm -z sc5acn01-d5 shutdown

Change the pset configuration

I’m going to do this by changing the config file to make it persistent, as there’s nothing more embarrassing than making a change that is lost by a reboot. I set the processor set to have a minimum of 384 CPUs and a maximum of 384 CPUs.
# poolcfg -c 'modify pset pset_sc5acn01-d5.blah.uk.mydomain.com_id_6135 ( uint pset.min=384 ; uint pset.max=384 )' /etc/pooladm.conf

Check that it has applied to your config file:
# grep pset_sc5acn01-d5 /etc/pooladm.conf

Force it to re-read the file and use the new configuration

# pooladm -c
# pooladm -s

Now you can run pooladm without any arguments and get the running config. If it all looks ok, go ahead and boot your zone
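Assuming the output looks right, that last step is just:

```shell
# Boot the zone again now that the pset has been resized
zoneadm -z sc5acn01-d5 boot
```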

Unfortunately Solaris does not come with an FTPS-compatible client by default, but you can get round this by either a) installing another client such as lftp, or b) using curl.

curl is not a super-friendly program, however, so here are some of my notes on using it to access one of my servers.

Get a directory listing of the files in the source directory
curl -k -3 --ftp-ssl -u techie ftp://files.blah.com:990//export/home/sources/* -l

-k says to ignore certificate problems (this may be bad in the outside world – but this is all internal to my network and we don’t have ssl certificates)

-u specifies the username to connect to the server. You can optionally follow this with a colon and a password (-u username:password), but this will be sent in clear text. If you don’t specify a password on the command line, curl will prompt for one. Another option is to use a curl config file (--config, with the username:password specified in it).

-3 says to use SSLv3 (this may not be relevant in your environment)

// in the directory path says that it is an absolute path not a relative one.

This will give you a list of the files inside the directory, which is super handy for scripting purposes.

Then you can issue curl command for each file you need to download like
curl -k -3 --ftp-ssl -u techie ftp://files.blah.com:990//export/home/sources/myfile.txt -O

-O specifies to create the file in your local directory, with the same filename.
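Putting the two together, a minimal download loop might look like this (server, user and path are the ones from the examples above; curl will prompt for the password on each call unless you supply it another way):

```shell
# List the remote directory, then fetch each file into the current directory
for f in $(curl -k -3 --ftp-ssl -u techie -l "ftp://files.blah.com:990//export/home/sources/*"); do
    curl -k -3 --ftp-ssl -u techie -O "ftp://files.blah.com:990//export/home/sources/$f"
done
```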

If you have a filesystem that contains data which is accessed often, but you do not want to record the access time information because it is static data (e.g. content for a webserver) you can change this in zfs properties.
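For example, on a hypothetical dataset holding web content (the dataset name is made up):

```shell
# Stop recording access times on this filesystem
zfs set atime=off rpool/export/webcontent
# Confirm the property took effect
zfs get atime rpool/export/webcontent
```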

Certificates that have been exported by Apache cannot be directly imported into OTD. Apache exports consist of two files, a .pem and a .key, and if you try to load the .pem file directly you will get the error ‘OTD-64112’.
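The usual workaround (the exact commands are my assumption, and the filenames are placeholders) is to bundle the .pem and .key into a PKCS#12 file with openssl, which can then be imported into the NSS certificate database with pk12util:

```shell
# Combine the Apache-exported certificate and key into one PKCS#12 bundle
openssl pkcs12 -export -in server.pem -inkey server.key -out server.p12 -name myserver
# Import the bundle into the NSS database directory used by the OTD instance
pk12util -i server.p12 -d sql:/path/to/otd/config
```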

Then you import them into Oracle Traffic Director. The first time I tried this I had problems: the new certificates were not visible in the GUI, despite it noticing that the files cert8.db/key3.db had changed and prompting for a reload. cert8.db/key3.db are the older Berkeley DB format databases, and OTD uses the newer SQLite-format files cert9.db/key4.db. You can control which format is used by setting the environment variable NSS_DEFAULT_DB_TYPE, or by putting the prefix sql: on the certificate directory path specified by -d.
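A sketch of both approaches (the database directory path is a placeholder):

```shell
# Prefix the directory with sql: to force the SQLite-format cert9.db/key4.db
certutil -L -d sql:/path/to/otd/config

# Or make sql the default database type for every NSS tool in this shell
export NSS_DEFAULT_DB_TYPE=sql
certutil -L -d /path/to/otd/config
```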