This entry is similar in theme to one of my previous posts about verifying your hdisk queue_depth settings with kdb. This time we want to check whether an attribute of a Virtual FC (VFC) adapter has been modified, and whether AIX has been restarted since the change. The attribute I'm interested in is num_cmd_elems. In AIX environments, this value is often changed from its default to improve I/O performance on SAN-attached storage.

From kdb you can identify the VFC adapters configured on an AIX system using the vfcs subcommand. Not only does this tell you what adapters you have, but it also identifies the VIOS each adapter is connected to and the corresponding vfchost adapter. Nice!

(0)> vfcs
NAME   ADDRESS             STATE   HOST  HOST_ADAP  OPENED  NUM_ACTIVE
fcs0   0xF1000A00103D4000  0x0008  vio1  vfchost10  0x01    0x0000
fcs1   0xF1000A00103D6000  0x0008  vio2  vfchost10  0x01    0x0000

You can view the current (running) configuration of a VFC adapter by giving the kdb vfcs subcommand the name of the adapter, for example fcs1. Using the output from this command we can determine the current (running) value of a number of VFC attributes, including num_cmd_elems.

So I start with an adapter whose num_cmd_elems value is 200. Both the lsattr command and kdb report 200 (0xC8 in hex) for num_cmd_elems.

# lsattr -El fcs1 -a num_cmd_elems
num_cmd_elems 200 Maximum Number of COMMAND Elements True

# echo vfcs fcs1 | kdb | grep num_cmd_elems
num_cmd_elems: 0xC8        location_code: U9119.FHA.87654A1-V20-C10-T1
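kdb reports values in hex. If you don't want to convert 0xC8 in your head, any POSIX shell can do it with printf (a quick sketch, not part of the original walkthrough):

```shell
# Convert kdb's hex value to decimal: 0xC8 -> 200
printf '%d\n' 0xC8
```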

I change num_cmd_elems to 400 with chdev -P (remember, the -P flag only updates the AIX ODM, not the running configuration of the device in the AIX kernel; for the change to take effect you must either reboot or take the device offline and online again).

# chdev -l fcs1 -a num_cmd_elems=400 -P
fcs1 changed

Now the lsattr command reports num_cmd_elems is set to 400 in the ODM.

Say you change the queue_depth on an hdisk with chdev -P. This updates the device's ODM information only, not its running configuration. The new value will take effect the next time the system is rebooted. So now I have a different queue_depth in the ODM compared to the device's current running configuration (in the kernel).

What if I forget that I've made this change to the ODM and don't reboot the system for many months? Someone complains of an I/O performance issue... I check the queue_depths and find they appear to be set appropriately, yet I still see queue-full conditions on my hdisks. But have I rebooted since changing the values?

How do I know if the ODM matches the device's running configuration?
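One way is to compare the two values in a script. The sketch below is a hypothetical helper with sample values hard-coded for illustration; on a real system you would fill odm_val from lsattr output and kdb_val from the kdb output shown in this post:

```shell
# Hypothetical check (sample values hard-coded for illustration):
odm_val=256     # decimal, as lsattr (ODM) reports it
kdb_val=0x3     # hex, as kdb reports the running config
if [ "$odm_val" -eq "$((kdb_val))" ]; then
    echo "ODM matches running config"
else
    echo "MISMATCH: ODM=$odm_val running=$((kdb_val)) -- change not yet active"
fi
```

Shell arithmetic expansion, $((...)), accepts hex constants, so no separate conversion step is needed.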

For example, I start with a queue_depth of 3, which is confirmed by looking at the lsattr (ODM) and kdb (running config) output:

# lsattr -El hdisk6 -a queue_depth
queue_depth 3 Queue DEPTH True

# echo scsidisk hdisk6 | kdb | grep queue_depth
ushort queue_depth = 0x3;    < In hex.

Now I change the queue_depth using chdev -P, i.e. only updating the ODM.

# chdev -l hdisk6 -a queue_depth=256 -P
hdisk6 changed

# lsattr -El hdisk6 -a queue_depth
queue_depth 256 Queue DEPTH True

kdb reports that the disk's running configuration still has a queue_depth of 3.

# echo scsidisk hdisk6 | kdb | grep queue_depth
ushort queue_depth = 0x3;
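If you want to script this comparison, the hex value can be pulled out of the kdb line with plain shell parameter expansion. A small sketch (the sample line below is copied verbatim from the output above; a real script would capture it from the kdb pipeline):

```shell
# Parse the running queue_depth out of a kdb output line
line='ushort queue_depth = 0x3;'
hex=${line##*= }    # strip everything up to "= "   -> "0x3;"
hex=${hex%;}        # drop the trailing semicolon   -> "0x3"
echo $((hex))       # arithmetic expansion converts hex to decimal
```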

Now if I vary off the VG and change the disk's queue_depth, both lsattr (ODM) and kdb (the running config) show the same value:

# umount /test
# varyoffvg testvg

# chdev -l hdisk6 -a queue_depth=256
hdisk6 changed

# varyonvg testvg
# mount /test

# lsattr -El hdisk6 -a queue_depth
queue_depth 256 Queue DEPTH True

# echo scsidisk hdisk6 | kdb | grep queue_depth
ushort queue_depth = 0x100;    < In hex = decimal 256.

# echo "ibase=16 ; 100" | bc
256

This is one way of checking that you've rebooted since you changed your queue_depth attributes. I've tried this on AIX 6.1 and 7.1 only.