Target VM Configuration

Execute the df command to examine the current disks that are mounted and accessible.
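The output below is only an illustration; device names and sizes depend on your deployment:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1              40G   12G   26G  32% /
tmpfs                  16G     0   16G   0% /dev/shm

The new disk (/dev/sdb) does not appear yet, because it has no file system and is not mounted.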

Step 4

Create an ext4 file system on the new disk:

mkfs -t ext4 /dev/sdb

Note

The b in /dev/sdb denotes the second SCSI disk. mkfs warns that you are performing this operation on an entire device rather than a partition. That is correct here, since you created a single virtual disk of the intended size. Make sure you have selected the right device; there is no undo.
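With e2fsprogs, the warning prompt looks similar to the following; the exact wording and version line vary:

mkfs -t ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y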

Step 5

Execute the following command to verify the existence of the disk you created:

# fdisk -l
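In the output, verify that an entry for the new disk is present. The size shown here is illustrative:

Disk /dev/sdb: 107.4 GB, 107374182400 bytes

Because the file system was created on the whole device, fdisk may also report that /dev/sdb does not contain a valid partition table; that is expected here.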

Step 6

Execute the following command to create a mount point for the new disk:

# mkdir /<NewDirectoryName>

Step 7

Execute the following command to display the current /etc/fstab:

# cat /etc/fstab
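The contents are deployment specific; a typical file looks something like this (illustrative):

/dev/mapper/vg00-root  /         ext4   defaults  1 1
/dev/sda1              /boot     ext4   defaults  1 2
tmpfs                  /dev/shm  tmpfs  defaults  0 0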

Step 8

Add the following entry to /etc/fstab so that the disk is mounted automatically across reboots:

/dev/sdb /<NewDirectoryName> ext4 defaults 1 3
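The fields are, in order: the device to mount, the mount point, the file system type, the mount options, the dump flag, and the fsck pass order. For example, with a hypothetical mount point named /data, the entry would read:

/dev/sdb /data ext4 defaults 1 3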

Step 9

Reboot the VM:

shutdown -r now

Step 10

Execute the df command to check that the file system is mounted and the new directory is available.
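For example (sizes are illustrative), the new file system should now be listed in the df output:

/dev/sdb               99G   60M   94G   1% /<NewDirectoryName>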

Update the collectd Process to Use the New File System to Store KPIs

After the disk is added successfully, collectd can use the new disk to store the KPIs.

Step 1

SSH into pcrfclient01/pcrfclient02.

Step 2

Execute the following command to open the logback.xml file for editing:

vi /etc/collectd.d/logback.xml

Step 3

Update the <file> element with the new directory that was added in /etc/fstab.
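For example, if the new mount point is /data (a hypothetical name; use the directory you created earlier), the element would change from something like:

<file>/var/log/collectd.log</file>

to:

<file>/data/collectd.log</file>

The original path shown here is illustrative; change only the directory portion and keep the file name.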

Step 4

Execute the following command to restart collectd:

monit restart collectd
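To confirm that the process restarted, you can list the status of the monit-managed services:

monit summary

collectd should be shown as running.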

Note

The content of logback.xml is overwritten with the default path after an upgrade. Make sure to update it again after each upgrade.

Mounting the Replication Set from Disk to tmpfs After Deployment

You can mount all of the members of the replication set to tmpfs, or you can mount only specific members. These scenarios are described in the following sections.

Scenario 1 – Mounting All Members of the Replication Set to tmpfs

Step 1

Modify mongoConfig.cfg using the vi editor on Cluster Manager. Change the DBPATH directory for the SPR replication set that needs to be put on tmpfs.

Note

Make sure you change the path to /var/data/sessions.1, which is the tmpfs file system. Also, make sure to run diagnostics.sh before and after the activity.

The following example shows the contents of mongoConfig.cfg before modification:
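Because the listing is deployment specific, the block below is only a hypothetical sketch: the set name, ports, host names, and paths are all assumed for illustration, following the usual key=value block format of mongoConfig.cfg. The data path line (the DBPATH referred to above) is the one you change to /var/data/sessions.1:

[SPR-SET1]
SETNAME=set04
ARBITER=pcrfclient01:27720
ARBITER_DATA_PATH=/var/data/sessions.4
MEMBER1=sessionmgr01:27720
MEMBER2=sessionmgr02:27720
DATA_PATH=/var/data/sessions.4
[SPR-SET1-END]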

Clone Sessionmgr01 VM

Downtime: No downtime

Before You Begin

Before repartitioning the disk, clone sessionmgr01. This step is optional, but taking a backup of the sessionmgr01 VM reduces the risk of losing data during disk repartitioning. If there is not enough space to take the backup, this step can be skipped.

A blade with enough space to hold the cloned image of sessionmgr01.

Step 1

Log in to the vSphere Client on the sessionmgr01 blade with administrator credentials.

If cloning is not possible because of space limitations on the blade, a backup of the sessionmgr01 VM can be taken by saving its OVF to local storage, such as a laptop or desktop. (Both cloning and OVF backup are optional steps, but one of them is highly recommended.)

Step 5

Log in to the ESXi host that contains your Linux virtual machine using the VMware vSphere Client as an administrator (for example, root).