Oracle on vSphere – Summary of Storage options

Storage – the final frontier. These are the voyages of any Business Critical Oracle database, its endless mission: to meet the business SLA, to sustain increasing workload demands and seek out new challenges, to boldly go where no database has gone before.

Storage is one of the most important aspects of any IO-intensive workload. Oracle workloads typically fit this bill, and we all know how misconfigured storage or incorrect tuning often leads to database performance issues, irrespective of the architecture on which the database is hosted.

As part of my pre-sales Oracle Specialist role, where I talk to customers, partners and the VMware field, I always bring up the fact that we can go and procure the biggest and baddest piece of infrastructure on the face of the earth, and all it takes is one incorrect setting or misconfiguration for everything to go to “Hell in a Handbasket”.

The focus of this blog is on identifying the various storage options available to house Oracle database workloads on the VMware Software Defined Data Center (SDDC).

Storage on the VMware vSphere platform can be one of the following:

vmdk (Block or NFS Datastore)

Physical / Virtual Raw Device Mapping (RDM)

vSAN Datastore (Object Store File System (OSFS))

Virtual Volumes (vVOL)

In Guest (dNFS / iSCSI / NFS)

Key points to take away from this blog

vmdk(s) provisioned from VMFS, vsanDatastore or vVOL-compatible datastores can be used either as Oracle ASM disks (with ASMLib / ASMFD / Linux udev for device persistence) or as file system storage for the database

RDM (Physical / Virtual) can also be used as VM storage for Oracle databases.

VMware recommends using VMDKs for provisioning storage for ALL Oracle environments

In-guest storage options are also available for database storage

Example of Storage provisioning using EMC Unity array

Let’s look at the basic storage building blocks in the case of a vSphere cluster connected to an EMC Unity array using block storage:

-Disks (magnetic spindles / SSDs) comprise the raw storage of an array
-Disks are assigned to dynamic / traditional storage pools, which can have RAID group configurations (RAID 1+0, RAID 5, RAID 6)
-LUNs are created on the RAID groups
-LUNs are then mapped / masked to the ESXi hosts in the vSphere cluster
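Once the LUNs are masked to the hosts, the ESXi side can be verified from the ESXi shell. A quick sketch (the commands below are standard esxcli calls; which devices they list depends on your array):

```shell
# List all block devices (LUNs) visible to this ESXi host,
# including NAA identifiers, size and status
esxcli storage core device list

# Show which VMFS datastores are backed by which devices
esxcli storage vmfs extent list

# Rescan all adapters after new LUNs have been masked to the host
esxcli storage core adapter rescan --all
```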

Layout of the Linux disks:
-/dev/sda is the ASM disk for database ‘ORA12CM’
-/dev/sdc is the ASM disk for database ‘RMANDB’
-/dev/sdb is for RMAN backups (/backup file system)
-/dev/sdd is for the root volume ( / )
-/dev/sde is for the Oracle binaries mount point (/u01)

Persistent Naming issue in Linux

On the Linux OS, device names like /dev/sda and /dev/sdb can switch around on each reboot, culminating in an unbootable system, a kernel panic, or a block device disappearing. Persistent naming solves these issues.
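For illustration, udev already maintains persistent symlinks under /dev/disk, and mounting by file system UUID sidesteps the unstable /dev/sdX names entirely (the device name and UUID below are illustrative):

```shell
# Persistent names maintained by udev, independent of device probe order
ls -l /dev/disk/by-id/      # WWID / vendor-serial based names
ls -l /dev/disk/by-uuid/    # file system UUID based names

# Show the UUID of a file system on a given partition
blkid /dev/sdb1

# In /etc/fstab, mount by UUID instead of /dev/sdX so the entry
# survives device reordering across reboots:
# UUID=3e6be9de-8139-4c91-9106-a43f08d82345  /backup  xfs  defaults  0 0
```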

The Linux LVM label, however, provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster, so file systems using LVM2 do not have this issue.
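As a minimal sketch (the volume group and logical volume names are illustrative), the /backup file system could be placed under LVM so it is addressed by its stable LVM path rather than the kernel device name:

```shell
# Initialize the partition as an LVM physical volume (writes the LVM label)
pvcreate /dev/sdb1

# Create a volume group and a logical volume spanning it
vgcreate backupvg /dev/sdb1
lvcreate -n backuplv -l 100%FREE backupvg

# The file system lives at /dev/backupvg/backuplv, a path that does not
# change even if /dev/sdb comes up as /dev/sdc after a reboot
mkfs.xfs /dev/backupvg/backuplv
```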

Oracle ASMLib requires that the disk be partitioned for use with Oracle ASM. If you are using Linux udev for Oracle ASM purposes, you can use the raw device as is, without partitioning. For Oracle OCFS2 or other clustered file systems, partitioning of the disk is required.

Partitioning is a good practice anyway, as it prevents anyone from attempting to create a partition table and file system on a raw device they get their hands on, which would lead to issues if the device is being used by ASM.

After partitioning using Linux utilities, e.g. fdisk or parted, we can then create the ASM disk using Oracle ASMLib commands.

parted or fdisk
1) Starting with RHEL 6, Red Hat recommends the use of parted
2) fdisk does not understand the GUID Partition Table (GPT) and is not designed for large partitions
3) fdisk cannot be used for drives greater than 2 TB in size
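Putting the two steps together, a hedged sketch (the device and ASM disk names are illustrative; run as root with ASMLib installed) of partitioning with parted and stamping the partition as an ASMLib disk:

```shell
# Create a GPT label and a single partition spanning the disk
# (GPT is required for devices larger than 2 TB)
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 100%

# Stamp the partition as an ASM disk
oracleasm createdisk DATA01 /dev/sdb1

# Verify the disk is visible to ASMLib
oracleasm listdisks
```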

Oracle ASMFD simplifies the configuration and management of disk devices by eliminating the need to rebind disk devices used with Oracle ASM each time the system is restarted.

Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group.

After the disk is partitioned, use the ASMCMD utility to provision disk devices for use with Oracle ASM Filter Driver.
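For illustration (the disk label and device name are assumptions), labeling a partition for ASMFD with ASMCMD looks like:

```shell
# Label the partition for use by the ASM Filter Driver
# (--init is used when labeling before Grid Infrastructure is up)
asmcmd afd_label DATA01 /dev/sdc1 --init

# Confirm the label was written
asmcmd afd_lslbl /dev/sdc1
```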

Udev uses rules files that determine how it identifies devices and creates device names. The udev service (systemd-udevd) reads the rules files at system startup and stores the rules in memory. If the kernel discovers a new device or an existing device goes offline, the kernel sends an event action (uevent) notification to udev, which matches the in-memory rules against the device attributes in /sys to identify the device. As part of device event handling, rules can specify additional programs that should run to configure a device.
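As a hedged example, a typical Oracle ASM udev rule keys on the device WWID reported by scsi_id and creates a stable, correctly-owned symlink (the WWID, symlink name and ownership below are illustrative) in a file such as /etc/udev/rules.d/99-oracle-asmdevices.rules:

```
# Match the first partition of any SCSI disk, query its WWID via scsi_id,
# and on a match create a stable symlink owned by oracle:dba
KERNEL=="sd?1", SUBSYSTEM=="block", \
  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d /dev/$parent", \
  RESULT=="36000c29f1234567890abcdef12345678", \
  SYMLINK+="oracleasm/asm-data01", OWNER="oracle", GROUP="dba", MODE="0660"
```

After editing the rules file, `udevadm control --reload-rules` followed by `udevadm trigger` applies the rules without a reboot.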

The steps for creating VM vmdk(s) on a vsanDatastore or a vVOL-compatible datastore are exactly the same as in the case of provisioning vmdk(s) on a VMFS datastore.

After creating VM vmdk(s) on a vsanDatastore or a vVOL-compatible datastore, all steps for provisioning Oracle database storage, either using a file system or Oracle ASM with ASMLib/ASMFD/Linux udev for device persistence, are exactly the same.

2) Oracle workloads on RDMs (Physical / Virtual)

When you give your virtual machine direct access to a raw SAN LUN, you create an RDM disk that resides on a VMFS datastore and points to the LUN. You can create the RDM as an initial disk for a new virtual machine or add it to an existing virtual machine. When creating the RDM, you specify the LUN to be mapped and the datastore on which to put the RDM.
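Besides the vSphere Client workflow, the RDM mapping file can also be created from the ESXi shell with vmkfstools; a sketch (the NAA identifier, datastore and VM paths are illustrative):

```shell
# Physical (pass-through) compatibility RDM: -z
vmkfstools -z /vmfs/devices/disks/naa.6000abcd1234 \
  /vmfs/volumes/datastore1/oravm/oravm_rdm.vmdk

# Virtual compatibility RDM: -r
vmkfstools -r /vmfs/devices/disks/naa.6000abcd1234 \
  /vmfs/volumes/datastore1/oravm/oravm_vrdm.vmdk
```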

After creating VM RDMs, all steps for provisioning Oracle database storage, either using a file system or Oracle ASM with ASMLib/ASMFD/Linux udev for device persistence, are exactly the same as detailed above.

3) Oracle workloads on In-Guest Storage (dNFS / NFS / iSCSI)

Direct NFS Client integrates the NFS client functionality directly in the Oracle software to optimize the I/O path between Oracle and the NFS server. This integration can provide significant performance improvements.
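Direct NFS is enabled per Oracle home; a hedged sketch (the NFS server name, IP, export and mount point are illustrative):

```shell
# Link the Direct NFS client into the Oracle binary
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk dnfs_on
```

The NFS mounts the dNFS client should use are then described in oranfstab, for example:

```
# /etc/oranfstab - illustrative entry
server: nfsfiler01
path: 192.168.10.21
export: /export/oradata  mount: /u02/oradata
```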

Oracle workloads can also be provisioned on in-guest iSCSI targets. After creating the in-guest iSCSI targets, all steps for provisioning Oracle database storage, either using a file system or Oracle ASM with ASMLib/ASMFD/Linux udev for device persistence, are exactly the same as detailed above.
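For in-guest iSCSI, the open-iscsi initiator inside the guest discovers and logs into the array's targets; a sketch (the portal IP and IQN are illustrative):

```shell
# Discover targets offered by the array's iSCSI portal
iscsiadm -m discovery -t sendtargets -p 192.168.20.50:3260

# Log in to a discovered target
iscsiadm -m node -T iqn.2000-05.com.example:ora-data -p 192.168.20.50 --login

# Verify the session; the new LUN then appears as a /dev/sdX device
iscsiadm -m session
```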

To recap the key points:

vmdk(s) provisioned from VMFS, vsanDatastore or vVOL-compatible datastores can be used either as Oracle ASM disks (with ASMLib / ASMFD / Linux udev for device persistence) or as file system storage for the database

RDMs (Physical / Virtual) can also be used as VM storage for Oracle databases.

VMware recommends using VMDKs for provisioning storage for ALL Oracle environments

In-guest storage options are also available for database storage

Conclusion

Storage is one of the most important aspects of an IO-intensive Oracle workload, and VMware vSphere provides many storage options by which we can effectively harness the power of the vSphere platform and the underlying storage to meet the business SLAs.

All Oracle on vSphere white papers, including Oracle licensing on vSphere/vSAN, Oracle best practices, RAC deployment guides and the workload characterization guide, can be found at the URL below.