The d100 disk is a mirrored disk. Each mirror is made up of three striped disks of one size concatenated with four striped disks of another size. There is also a hot spare disk. This system uses IPI disks (idX). SCSI disks (sdX) are treated identically.

Determine the /dev/dsk entries for each exported file system. Use either the whatdev script to find the instance name (nickname) for the drive, or type ls -lL /dev/dsk/c1t0d0s4 and more /etc/path_to_inst to find the /dev/dsk entries. An explanation of these steps follows.

To determine the /dev/dsk entries for exported file systems with the whatdev script, follow these steps:

Determine the disk number by typing whatdev diskname (the disk name is returned by the df /filesystemname command).

In this example you would type whatdev /dev/dsk/c1t0d0s4. Disk number id8 is returned, which is IPI disk 8.

server% whatdev /dev/dsk/c1t0d0s4
id8

Repeat steps b and c for each file system not stored on a metadisk (/dev/md/dsk).

If the file system is stored on a metadisk (/dev/md/dsk), look at the metastat output and run the whatdev script on each drive included in the metadisk.

In this example type whatdev /dev/dsk/c2t1d0s7.

There are 14 disks in the /export/home file system. Running the whatdev script on the /dev/dsk/c2t1d0s7 disk, one of the 14 disks comprising the
/export/home file system, returns the following display.

server% whatdev /dev/dsk/c2t1d0s7
id17

Note that /dev/dsk/c2t1d0s7 is disk id17; this is IPI disk 17.
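The metadisk step above can be sketched as a loop over the component drives. The metastat output below is a hypothetical fragment standing in for real output, which varies with the metadisk layout; on a live system you would run whatdev itself rather than echo the command.

```shell
# Sketch: find every component drive of a metadisk and run whatdev
# on each one. This metastat output is a hypothetical sample.
metastat_output='d100: Mirror
    Submirror 0: d10
d10: Submirror of d100
    Stripe 0:
        Device      Start Block  Dbase
        c2t1d0s7    0            No
        c2t2d0s7    0            No'

# Pull out anything shaped like a cNtNdNsN device name.
disks=$(echo "$metastat_output" | grep -o 'c[0-9]*t[0-9]*d[0-9]*s[0-9]*')

for disk in $disks; do
    # On a live system: whatdev /dev/dsk/$disk
    echo "whatdev /dev/dsk/$disk"
done
```

Each device name found this way is then mapped to its idX or sdX nickname exactly as in the single-disk case.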

Go to Step 7.

If you did not determine the /dev/dsk entries for exported file systems with the whatdev script, identify them with ls -lL. Follow these steps:

List the drive and its major and minor device numbers by typing ls -lL disknumber.
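The ls -lL and /etc/path_to_inst lookup can be sketched as follows. The symbolic-link target and the path_to_inst line are hypothetical samples of what a Solaris system might show; on a live system the link target comes from ls -l on the /dev/dsk entry.

```shell
# Sketch of the lookup, using hypothetical sample data in place of
# real ls -l and /etc/path_to_inst output on a Solaris system.

# On a live system, ls -l /dev/dsk/c1t0d0s4 shows the symbolic link
# target under /devices. This sample target is hypothetical.
link_target='/devices/iommu@f,e0000000/sbus@f,e0001000/ipi3sc@0,0/id@0,0:e'

# Strip the /devices prefix and the minor-node suffix (":e") to get
# the physical path as it appears in /etc/path_to_inst.
phys=$(echo "$link_target" | sed -e 's|^/devices||' -e 's|:.*$||')

# A hypothetical /etc/path_to_inst line: "physical-path" instance "driver"
path_to_inst='"/iommu@f,e0000000/sbus@f,e0001000/ipi3sc@0,0/id@0,0" 8 "id"'

# Instance 8 of driver "id" is the disk nicknamed id8.
instance=$(echo "$path_to_inst" | grep -F "\"$phys\"" | \
    awk '{gsub(/"/, "", $3); print $3 $2}')
echo "$instance"
```

This is the same mapping the whatdev script performs for you: driver name plus instance number yields the idX or sdX nickname.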

The sar -d option reports the activities of the disk devices. The 15 means that data is collected every 15 seconds. The 1000 means that data is collected 1000 times. The following terms and abbreviations explain the output.

Table 3-6 Output of the sar -d 15 1000 Command on the Server d2fs.server

device    Name of the disk device being monitored
%busy     Percentage of time the device spent servicing a transfer request (same as iostat %b)
avque     Average number of requests outstanding during the monitored period (measured only when the queue was occupied) (same as iostat actv)
r+w/s     Number of read and write transfers to the device, per second (same as iostat r/s + w/s)
blks/s    Number of 512-byte blocks transferred to the device, per second (same as iostat 2*(Kr/s + Kw/s))
avwait    Average time, in milliseconds, that transfer requests wait in the queue (measured only when the queue is occupied) (iostat wait gives the length of this queue)
avserv    Average time, in milliseconds, for a transfer request to be completed by the device (for disks, this includes seek, rotational latency, and data transfer times)

For file systems that are exported via NFS, check the %b/%busy value.

If it is more than 30 percent, check the svc_t value.

The %b value, the percentage of time the disk is busy, is returned by the iostat command. The %busy value, the percentage of time the device spent servicing a transfer request, is returned by the sar command. If the %b and %busy values are greater than 30 percent, go to Step e. Otherwise, go to Step 9.

Calculate the svc_t/(avserv + avwait) value.

The svc_t value, the average service time in milliseconds, is returned by the iostat command. The avserv value, the average time (milliseconds) for a transfer request to be completed by the device, is returned by the sar command. Add the avwait to get the same measure as svc_t.

If the svc_t value, the average total service time in milliseconds, is more than 40 ms, the disk is taking a long time to respond. An NFS request that involves disk I/O will appear slow to the NFS clients. The NFS response time should be less than 50 ms on average, to allow for NFS protocol processing and network transmission time. The disk response should be less than 40 ms.

The average service time in milliseconds is a function of the disk. If you have fast disks, the average service time is lower than if you have slow disks.
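The two thresholds above (30 percent busy, 40 ms total service time) can be sketched as a filter over sar -d output. The sample line and its numbers below are hypothetical; on a live system the line would come from the sar report itself.

```shell
# Sketch: flag a disk when %busy exceeds 30 percent and its total
# service time (avwait + avserv, the sar equivalent of iostat svc_t)
# exceeds 40 ms. The sample sar -d line is hypothetical.
#            device  %busy  avque  r+w/s  blks/s  avwait  avserv
sar_line='   id8     41     1.2    47     1024    22.5    25.1'

verdict=$(echo "$sar_line" | awk '{
    busy  = $2
    svc_t = $6 + $7        # avwait + avserv, in milliseconds
    if (busy > 30 && svc_t > 40)
        print "slow: " $1 " is " busy "% busy, " svc_t " ms per transfer"
    else
        print "ok"
}')
echo "$verdict"
```

A disk flagged this way is a candidate for the load-balancing and configuration remedies described in this chapter.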

Collect data on a regular basis by uncommenting the lines in the crontab file of the user sys so that sar collects the data for one month.

Performance data will be continuously collected to provide a history of sar results.
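On Solaris systems the sys crontab (/var/spool/cron/crontabs/sys) typically ships with the sar collection lines commented out; uncommented, they look roughly like the following, though exact schedules and paths may differ by release:

```
0 * * * 0-6 /usr/lib/sa/sa1
20,40 8-17 * * 1-5 /usr/lib/sa/sa1
5 18 * * 1-5 /usr/lib/sa/sa2 -s 8:00 -e 18:01 -i 1200 -A
```

The sa1 entries sample system activity hourly (and every 20 minutes during business hours), and the sa2 entry writes a daily report; edit the file as root with crontab -e sys so that cron picks up the change.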

The NFS server display shows the number of NFS calls received (calls) and rejected (badcalls), and the counts and percentages for the various calls that were made. The number and percentage of calls returned by the nfsstat
-s command are shown in the following table.

The following terms explain the output of the nfsstat
-s command.

Table 3-7 Description of the Output of the nfsstat -s Command

calls       Total number of RPC calls received
badcalls    Total number of calls rejected by the RPC layer (the sum of badlen and xdrcall)
nullrecv    Number of times an RPC call was not available when it was thought to be received
badlen      Number of RPC calls with a length shorter than a minimum-sized RPC call
xdrcall     Number of RPC calls whose header could not be XDR decoded

Table 3-8 explains the nfsstat
-s command output and what actions to take.

badcalls > 0

Badcalls are calls rejected by the RPC layer (the sum of badlen and xdrcall). The network may be overloaded. Identify an overloaded network using network interface statistics.

readlink > 10% of total lookup calls on NFS servers

NFS clients are making excessive use of symbolic links on the file systems exported by the server. Replace the symbolic link with a directory. Mount both the underlying file system and the symbolic link's target on the NFS client. See Step 11.
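The two checks above can be sketched with hypothetical counter values; the field names follow the nfsstat -s output described earlier, but the numbers are invented for illustration.

```shell
# Hypothetical nfsstat -s counters; field names match the nfsstat -s
# output, values are invented for illustration.
badlen=3; xdrcall=2
badcalls=$((badlen + xdrcall))      # by definition, badlen + xdrcall

lookup=200000; readlink=30000
pct=$((100 * readlink / lookup))    # readlink as a percentage of lookups

[ "$badcalls" -gt 0 ] && \
    echo "badcalls=$badcalls: check network interface statistics"
[ "$pct" -gt 10 ] && \
    echo "readlink is ${pct}% of lookups: replace symbolic links with directories"
```

In this sample, both conditions trigger: badcalls is 5 and readlink is 15 percent of lookups, so both remedies from the table apply.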