* * * READ ME * * *
* * * Veritas Volume Manager 6.0.1 * * *
* * * Public Hot Fix 2 * * *
Patch Date: 2012-10-10
This document provides the following information:
* PATCH NAME
* PACKAGES AFFECTED BY THE PATCH
* BASE PRODUCT VERSIONS FOR THE PATCH
* OPERATING SYSTEMS SUPPORTED BY THE PATCH
* INCIDENTS FIXED BY THE PATCH
* INSTALLATION PRE-REQUISITES
* INSTALLING THE PATCH
* REMOVING THE PATCH
* KNOWN ISSUES
* SPECIAL INSTRUCTIONS
* OTHERS
PATCH NAME
----------
Veritas Volume Manager 6.0.1 Public Hot Fix 2
PACKAGES AFFECTED BY THE PATCH
------------------------------
VRTSvxvm
BASE PRODUCT VERSIONS FOR THE PATCH
-----------------------------------
Veritas Volume Manager 6.0.1
OPERATING SYSTEMS SUPPORTED BY THE PATCH
----------------------------------------
Solaris 10 X86
INCIDENTS FIXED BY THE PATCH
----------------------------
This patch fixes the following Symantec incidents:
Patch ID: 148491-01
* 2860207 (Tracking ID: 2859470)
SYMPTOM:
The EMC SRDF-R2 disk may go into the error state when you create an EFI label on
the R1 disk. For example:
R1 site
# vxdisk -eo alldgs list | grep -i srdf
emc0_008c auto:cdsdisk emc0_008c SRDFdg online c1t5006048C5368E580d266 srdf-r1
R2 site
# vxdisk -eo alldgs list | grep -i srdf
emc1_0072 auto - - error c1t5006048C536979A0d65 srdf-r2
DESCRIPTION:
Since R2 disks are in write-protected mode, the default open() call (made in
read-write mode) fails for the R2 disks, and the disk is marked as invalid.
RESOLUTION:
As a fix, DMP was changed to be able to read the EFI label even on a write
protected SRDF-R2 disk.
* 2876865 (Tracking ID: 2510928)
SYMPTOM:
The extended attributes reported by "vxdisk -e list" for the EMC SRDF LUNs show
"tdev mirror" instead of "tdev srdf-r1". For example:
# vxdisk -e list
DEVICE TYPE DISK GROUP STATUS
OS_NATIVE_NAME ATTR
emc0_028b auto:cdsdisk - - online thin
c3t5006048AD5F0E40Ed190s2 tdev mirror
DESCRIPTION:
The attributes of the EMC SRDF LUNs were not extracted properly. Hence, the EMC
SRDF LUNs are erroneously reported as "tdev mirror" instead of "tdev srdf-r1".
RESOLUTION:
Code changes have been made to extract the correct values.
* 2892499 (Tracking ID: 2149922)
SYMPTOM:
The disk group import and deport events are not recorded in
the /var/adm/messages file.
The following type of message should be logged in syslog:
vxvm:vxconfigd: V-5-1-16254 Disk group import of <dgname> succeeded.
DESCRIPTION:
When a disk group is imported or deported, an appropriate success message, or a
failure message with the cause of the failure, should be logged.
RESOLUTION:
Code changes have been made to log disk group import and deport events in
syslog.
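Once the fix is in place, the logged events can be found in /var/adm/messages by filtering on the VxVM message ID. A minimal sketch, run here against a simulated log line rather than a live system (the host name and timestamp are illustrative):

```shell
# Simulated /var/adm/messages entry of the kind vxconfigd now logs
# (host name and timestamp are illustrative, not from a real system).
sample='Oct 10 12:00:00 host vxvm:vxconfigd: V-5-1-16254 Disk group import of mydg succeeded.'

# Filter disk group import events by the VxVM message ID.
echo "$sample" | grep 'V-5-1-16254'
```

On a live system, `grep 'V-5-1-16254' /var/adm/messages` would list the import events directly.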
* 2892621 (Tracking ID: 1903700)
SYMPTOM:
'vxassist remove mirror' does not work if both nmirror and alloc are specified,
and it fails with the error "Cannot remove enough mirrors".
DESCRIPTION:
During the remove mirror operation, VxVM does not perform the correct
analysis of the plexes; hence the failure.
RESOLUTION:
Necessary code changes have been done so that vxassist works properly.
* 2892630 (Tracking ID: 2742706)
SYMPTOM:
A system panic can occur with the following stack when the Oracle 10G Grid
Agent software invokes the command:
# nmhs get_solaris_disks
<leaf trap>unix:lock_try+0x0()
genunix:turnstile_interlock+0x1c()
genunix:turnstile_block+0x1b8()
unix:mutex_vector_enter+0x428()
unix:mutex_enter() - frame recycled
vxlo:vxlo_open+0x2c()
genunix:dev_open() - frame recycled
specfs:spec_open+0x4f4()
genunix:fop_open+0x78()
genunix:vn_openat+0x500()
genunix:copen+0x260()
unix:syscall_trap32+0xcc()
DESCRIPTION:
The open system call code path of the vxlo (Veritas Loopback Driver) is not
releasing the acquired global lock after the work is completed. The panic may
occur when the next open system call tries to acquire the lock.
RESOLUTION:
Code changes have been made to release the global lock appropriately.
* 2892643 (Tracking ID: 2801962)
SYMPTOM:
Operations that grow a volume, such as 'vxresize' and 'vxassist
growby/growto', take significantly longer if the volume has a version 20
DCO (Data Change Object) attached to it than if it does not.
DESCRIPTION:
When a volume with a DCO is grown, VxVM needs to copy the existing map in the DCO
and update the map to track the grown regions. The algorithm searched, for
each region in the map, for the page that contains that region in order to
update the map. The number of regions and the number of pages containing them
are both proportional to the volume size, so the search complexity is amplified;
this is observed primarily when the volume size is on the order of terabytes.
In the reported instance, it took more than 12 minutes to grow a 2.7TB volume by 50G.
RESOLUTION:
The code has been enhanced to find the regions that are contained within a page
and then avoid looking up the page separately for each of those regions.
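As a toy model of the enhancement (not VxVM code), grouping regions by the map page that contains them cuts the number of page look-ups from one per region to one per page; with 4 regions per page, 10 regions need only 3 look-ups:

```shell
# Toy model: 10 regions, 4 regions per map page.  Looking up the page
# once per page group instead of once per region reduces the look-ups
# in proportion to the number of regions per page.
awk 'BEGIN {
    regions_per_page = 4
    pages = 0; last = -1
    for (r = 0; r < 10; r++) {
        p = int(r / regions_per_page)   # page containing region r
        if (p != last) { pages++; last = p }
    }
    print pages " page look-ups instead of 10"
}'
# prints: 3 page look-ups instead of 10
```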
* 2892650 (Tracking ID: 2826125)
SYMPTOM:
The VxVM script daemons are not up after being invoked through the vxvm-recover
script.
DESCRIPTION:
When the VxVM script daemon starts, it terminates any stale instance that may
exist. If the daemon happens to start with exactly the same process ID as its
previous invocation, this stale-instance detection produces a false positive
and the daemon kills itself.
RESOLUTION:
Code changes are made to handle the same process id situation correctly.
* 2892660 (Tracking ID: 2000585)
SYMPTOM:
If 'vxrecover -sn' is run and one volume is removed at the same time, vxrecover
exits with the error 'Cannot refetch volume'; the exit status code is zero, but
no volumes are started.
DESCRIPTION:
vxrecover assumes that the volume is missing because the disk group must have
been deported while vxrecover was in progress. Hence, it exits without starting
the remaining volumes. vxrecover should start the other volumes if the disk
group has not been deported.
RESOLUTION:
Modified the source to skip missing volume and proceed with remaining volumes.
* 2892665 (Tracking ID: 2807158)
SYMPTOM:
During a VM upgrade or patch installation on the Solaris platform,
the system can sometimes hang due to a deadlock, with the following stack:
genunix:cv_wait
genunix:ndi_devi_enter
genunix:devi_config_one
genunix:ndi_devi_config_one
genunix:resolve_pathname
genunix:e_ddi_hold_devi_by_path
vxspec:_init
genunix:modinstall
genunix:mod_hold_installed_mod
genunix:modrload
genunix:modload
genunix:mod_hold_dev_by_major
genunix:ndi_hold_driver
genunix:probe_node
genunix:i_ndi_config_node
genunix:i_ddi_attachchild
DESCRIPTION:
During the upgrade or patch installation, the vxspec module is
unloaded and reloaded. During its initialization, the vxspec module tries to
lock the root node while resolving a pathname, while it already holds the lock
on the subnode /pseudo. Meanwhile, if another process that holds the lock on
the root node tries to acquire the lock on the subnode /pseudo, a deadlock
occurs because each process waits for a lock already held by its peer.
RESOLUTION:
The APIs that introduced the deadlock have been replaced.
* 2892689 (Tracking ID: 2836798)
SYMPTOM:
'vxdisk resize' fails with the following error on a simple format EFI
(Extensible Firmware Interface) disk that was expanded from the array side, and
the system may panic or hang after a few minutes.
# vxdisk resize disk_10
VxVM vxdisk ERROR V-5-1-8643 Device disk_10: resize failed:
Configuration daemon error -1
DESCRIPTION:
Because VxVM does not support Dynamic LUN Expansion on simple/sliced EFI disks,
the last usable LBA (Logical Block Address) in the EFI header is not updated
when the LUN is expanded. Since the header is not updated, the partition end
entry is regarded as illegal and cleared during the partition range check. This
inconsistency in partition information between the kernel and the disk causes
the system panic/hang.
RESOLUTION:
Added checks in VxVM code to prevent DLE on simple/sliced EFI disk.
* 2892702 (Tracking ID: 2567618)
SYMPTOM:
VRTSexplorer dumps core in checkhbaapi/print_target_map_entry with a stack that
looks like:
print_target_map_entry()
check_hbaapi()
main()
_start()
DESCRIPTION:
The checkhbaapi utility uses the HBA_GetFcpTargetMapping() API, which returns
the current set of mappings between operating system and Fibre Channel Protocol
(FCP) devices for a given HBA port. The maximum number of mappings was limited
to 512, and only that much memory was allocated. When the number of mappings
returned was greater than 512, the function that prints this information tried
to access entries beyond that limit, which resulted in core dumps.
RESOLUTION:
The code has been changed to allocate enough memory for all the mappings
returned by HBA_GetFcpTargetMapping().
* 2922770 (Tracking ID: 2866997)
SYMPTOM:
After applying Solaris patch 147440-20, disk initialization using the
vxdisksetup command fails with the following error:
VxVM vxdisksetup ERROR V-5-2-43 <disk>: Invalid disk
device for vxdisksetup
DESCRIPTION:
An uninitialized variable takes a different value after the OS patch
installation, causing the vxparms command to produce incorrect output.
RESOLUTION:
The variable is now initialized with the correct value.
* 2922798 (Tracking ID: 2878876)
SYMPTOM:
vxconfigd, the VxVM configuration daemon, dumps core with the following stack:
vol_cbr_dolog ()
vol_cbr_translog ()
vold_preprocess_request ()
request_loop ()
main ()
DESCRIPTION:
This core dump is the result of a race between two threads processing requests
from the same client. While one thread has completed processing a request and
is releasing the memory it used, the other thread is processing a "DISCONNECT"
request from the same client. Due to the race condition, the second thread
attempts to access memory that is being released, and vxconfigd dumps core.
RESOLUTION:
The issue is resolved by protecting the common data of the client by a mutex.
* 2924117 (Tracking ID: 2911040)
SYMPTOM:
A restore operation from a cascaded snapshot succeeds even when one
of its sources is inaccessible. Subsequently, if the primary volume is made
accessible for operation, I/O operations may fail on the volume because the
source of the volume is inaccessible. Deletion of the snapshots also fails due
to the dependency of the primary volume on the snapshots. In such a case, the
following error is thrown when you try to remove any snapshot using the
'vxedit rm' command:
"VxVM vxedit ERROR V-5-1-XXXX Volume YYYYYY has dependent volumes"
DESCRIPTION:
When a volume is restored from a snapshot, the snapshot becomes
the source of data for the regions on the primary volume that differ between
the two volumes. If the snapshot itself depends on some other volume and that
volume is not accessible, the primary volume effectively becomes inaccessible
after the restore operation. In such a case, the snapshots cannot be deleted
because the primary volume depends on them.
RESOLUTION:
If a snapshot or any later cascaded snapshot is inaccessible,
restore from that snapshot is prevented.
* 2924188 (Tracking ID: 2858853)
SYMPTOM:
In a CVM (Cluster Volume Manager) environment, after a master switch, vxconfigd
dumps core on the slave node (the old master) when a disk is removed from the
disk group, with the following stack:
dbf_fmt_tbl()
voldbf_fmt_tbl()
voldbsup_format_record()
voldb_format_record()
format_write()
ddb_update()
dg_set_copy_state()
dg_offline_copy()
dasup_dg_unjoin()
dapriv_apply()
auto_apply()
da_client_commit()
client_apply()
commit()
dg_trans_commit()
slave_trans_commit()
slave_response()
fillnextreq()
vold_getrequest()
request_loop()
main()
DESCRIPTION:
During the master switch, the disk group configuration copy related flags are
not cleared on the old master; hence, when a disk is removed from a disk group,
vxconfigd dumps core.
RESOLUTION:
Necessary code changes have been made to clear configuration copy related flags
during master switch.
* 2924207 (Tracking ID: 2886402)
SYMPTOM:
When DMP devices are reconfigured, typically with the 'vxdisk scandisks'
command, a vxconfigd hang is observed. Since vxconfigd is hung, no VxVM
(Veritas Volume Manager) commands are able to respond.
The following vxconfigd process stack was observed:
dmp_unregister_disk
dmp_decode_destroy_dmpnode
dmp_decipher_instructions
dmp_process_instruction_buffer
dmp_reconfigure_db
gendmpioctl
dmpioctl
dmp_ioctl
dmp_compat_ioctl
compat_blkdev_ioctl
compat_sys_ioctl
cstar_dispatch
DESCRIPTION:
When a DMP (Dynamic Multipathing) node is about to be destroyed, a flag is set
to hold any I/O (read/write) on it. I/Os that arrive between the setting of the
flag and the actual destruction of the DMP node are placed in the DMP queue and
are never served, so the hang is observed.
RESOLUTION:
An appropriate flag is now set on the node that is to be destroyed so that any
I/O issued after the flag is marked is rejected, avoiding the hang condition.
* 2930399 (Tracking ID: 2930396)
SYMPTOM:
The vxdmpasm/vxdmpraw command does not work on Solaris. For
example:
# vxdmpasm enable user1 group1 600 emc0_02c8
expr: syntax error
/etc/vx/bin/vxdmpasm: test: argument expected
# vxdmpraw enable user1 group1 600 emc0_02c8
expr: syntax error
/etc/vx/bin/vxdmpraw: test: argument expected
DESCRIPTION:
The "length" function of the expr command is not supported on Solaris.
The scripts used this function, which produced the error.
RESOLUTION:
The expr command has been replaced with the awk command.
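The difference can be reproduced with standard tools: GNU expr accepts 'length STRING', but Solaris /usr/bin/expr does not, so the portable way to get a string length in these scripts is awk's built-in length() function. A minimal sketch:

```shell
# Portable string-length computation, of the kind used to replace the
# non-portable 'expr length "$disk"' call that fails on Solaris.
disk="emc0_02c8"
len=$(echo "$disk" | awk '{ print length($0) }')
echo "$len"    # prints 9
```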
* 2933467 (Tracking ID: 2907823)
SYMPTOM:
Devices in the 'failing' or 'unusable' state (as shown by the cfgadm utility)
cannot be unconfigured using the VxVM Dynamic Reconfiguration (DR) Tool.
DESCRIPTION:
If devices are not removed properly, they can be left in the 'failing' or
'unusable' state, as shown below:
c1::5006048c5368e580, 255 disk connected configured failing
c1::5006048c5368e580, 326 disk connected configured unusable
Such devices are ignored by the DR Tool and have to be unconfigured manually
using the cfgadm utility.
RESOLUTION:
To fix this, code changes have been made so that the DR Tool asks the user
whether to unconfigure 'failing' or 'unusable' devices and takes action
accordingly.
* 2933468 (Tracking ID: 2916094)
SYMPTOM:
These are the issues for which enhancements have been made:
1. All DR operation logs accumulate in one log file, 'dmpdr.log', and
this file grows very large.
2. If a command takes a long time, the user may think the DR operation is stuck.
3. Devices controlled by TPD are shown in the list of LUNs that can be removed
in the 'Remove Luns' operation.
DESCRIPTION:
1. The logs of all DR operations accumulate in one large log file, which makes
it difficult for the user to find the logs of the current DR operation.
2. If a command takes a long time, the user has no way to know whether the
command is stuck.
3. Devices controlled by TPD are visible to the user, which may lead the user
to think they can be removed without first removing them from TPD control.
RESOLUTION:
1. Now, every time the user opens the DR Tool, a new log file of the form
dmpdr_yyyymmdd_HHMM.log is generated.
2. A message is displayed to inform the user when a command takes longer than
expected.
3. Changes have been made so that devices controlled by TPD are not visible
during DR operations.
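The per-invocation log naming described in item 1 can be sketched in shell; the exact prefix and directory used by the DR Tool may differ:

```shell
# Generate a per-invocation log file name of the form
# dmpdr_yyyymmdd_HHMM.log (the prefix and location are assumptions
# for illustration; the DR Tool's actual path may differ).
logfile="dmpdr_$(date +%Y%m%d_%H%M).log"
echo "$logfile"
```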
* 2933469 (Tracking ID: 2919627)
SYMPTOM:
While performing the 'Remove Luns' operation of the Dynamic Reconfiguration
Tool, there is no feasible way to remove a large number of LUNs, since the only
way to do so is to enter all the LUN names separated by commas.
DESCRIPTION:
When removing LUNs in bulk during the 'Remove Luns' option of the Dynamic
Reconfiguration Tool, it is not feasible to enter all the LUNs separated
by commas.
RESOLUTION:
Code changes have been made in the Dynamic Reconfiguration scripts to accept a
file containing the LUNs to be removed as input.
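A sketch of the file-based input, assuming one LUN name per line (the exact file format expected by the DR scripts may differ); the lines can be joined back into the comma-separated form the tool previously required:

```shell
# Hypothetical input file listing LUNs to remove, one per line
# (file name, location, and format are assumptions for illustration).
cat > /tmp/luns_to_remove.txt <<'EOF'
emc0_008c
emc0_0072
emc0_02c8
EOF

# Join the lines into a comma-separated list.
luns=$(paste -sd, /tmp/luns_to_remove.txt)
echo "$luns"    # prints emc0_008c,emc0_0072,emc0_02c8
```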
* 2934259 (Tracking ID: 2930569)
SYMPTOM:
LUNs in the 'error' state in the output of 'vxdisk list' cannot be removed
through the DR (Dynamic Reconfiguration) Tool.
DESCRIPTION:
LUNs seen in the 'error' state in the VM (Volume Manager) tree are not listed
by the DR (Dynamic Reconfiguration) Tool during the 'Remove LUNs' operation.
RESOLUTION:
Necessary changes have been made to display LUNs in error state while doing
'Remove LUNs' operation in DR(Dynamic Reconfiguration) Tool.
* 2942166 (Tracking ID: 2942609)
SYMPTOM:
The following message is displayed as an error message when quitting the
Dynamic Reconfiguration Tool:
"FATAL: Exiting the removal operation."
DESCRIPTION:
When the user quits an operation, the Dynamic Reconfiguration Tool reports that
it is quitting as an error message.
RESOLUTION:
Changes have been made to display the message as informational.
INSTALLATION PRE-REQUISITES
---------------------------
A Solaris 10 issue may prevent this patch from installing completely.
Before installing this VM patch, install the Solaris patch
119254-70 (or a later revision). This Solaris patch fixes packaging,
installation and patch utilities. [Sun Bug ID 6337009]
Download Solaris 10 patch 119254-70 (or later) from Sun at
http://sunsolve.sun.com
INSTALLING THE PATCH
--------------------
If the currently installed VRTSvxvm is below 6.0.100.000, you must
upgrade VRTSvxvm to 6.0.100.000 level before installing this patch.
A system reboot is required after installing this patch.
Patch Installation Instructions
-------------------------------
Patching may be performed either manually or with the use of the included
hotfix installer. To continue, select one of the methods below:
Patch using hotfix installer -- Complete step 1, then go to step 4.
Patch manually -- Complete step 1, then continue with steps 2-3.
1. Before applying the patch, ensure that no VxVM volumes are in use
or open by performing the following actions:
(a) Terminate applications which use VxVM volumes
(b) Stop I/Os to all VxVM volumes
(c) Unmount filesystems which occupy VxVM volumes.
{METHOD name="Patch manually"}
2. Check whether root support or DMP native support is enabled. If either
support function is enabled, it will be retained after the patch upgrade.
a) Check root support:
# vxdmpadm native list vgname=rootvg
If the output is a list of hdisks, root support is enabled on this machine.
b) Check DMP native support:
# vxdmpadm gettune dmp_native_support
If the current value is "on", DMP native support is enabled on this machine.
3.
a) Before applying this VxVM 6.0.1.200 patch, stop the VEA Server's vxsvc
process:
# /opt/VRTSob/bin/vxsvcctrl stop
b) On each system to be patched, execute the commands
# cd <patch_location> && patchadd <patch_location>/148491-01
where <patch_location> is the patches/ directory immediately
beneath <hotfix_location> .
c) Reboot the system(s) to complete the patch upgrade:
# /usr/sbin/shutdown -g0 -y -i6
{/METHOD}
{METHOD name="Patch using hotfix installer"}
4. To apply patch using the hotfix installer, execute the commands
# cd <hotfix_location> && ./installFS601P2
where <hotfix_location> is the top of the hotfix directory tree.
After the installer successfully completes its task, reboot the
system(s) to which the patch was applied.
{/METHOD}
REMOVING THE PATCH
------------------
The following example removes a patch from a standalone system:
# patchrm 148491-01
KNOWN ISSUES
------------
* Tracking ID: 2949012
SYMPTOM: As the Dynamic Reconfiguration (DR) script does not include i386 in
its list of supported architectures, the pre-check for the DR Tool fails.
WORKAROUND: NONE
SPECIAL INSTRUCTIONS
--------------------
You need to use the shutdown command to reboot the system after patch
installation or de-installation:
# /usr/sbin/shutdown -g0 -y -i6
A Solaris 10 issue may prevent this patch from installing completely.
Before installing this VM patch, install the Solaris patch
119254-70 (or a later revision). This Solaris patch fixes packaging,
installation and patch utilities. [Sun Bug ID 6337009]
Download Solaris 10 patch 119254-70 (or later) from Sun at
http://sunsolve.sun.com
OTHERS
------
NONE
