In your
design, you have the Celerra unit acting like a NetApp gFiler/V-Series would:
formatting a LUN on the CX-3 and then servicing NFS requests to the Oracle hosts,
in addition to the Oracle hosts connecting directly to the CX-3 for FCP
operation. This design required two products (Celerra and CLARiiON), versus a NetApp filer that could serve both NFS and FCP
LUNs from the same storage engine.

More to the
core of the issue are the performance issues that you mention. The
tradeoff is that in the NetApp environment you have a filesystem designed for
NFS with LUNs tacked on as files, instead of the EMC LUN
environment with a unit tacked on for NFS. On the other side of the coin,
you have the disk space reservations needed for LUNs on the NetApp filers that
you would not see with LUNs on the CLARiiON, but you have increased
filesystem usage on the CLARiiON from the Celerra that you would not have with
WAFL.

Let me know if I am on the right track.

On the first issue, the writer of this comment is both correct and incorrect. Yes, Celerra uses CLARiiON as the back end. No, you do not need to purchase an additional product. This is because a Celerra, in either the integrated or multi-protocol versions, includes the CLARiiON back end.

Take the Celerra NS40. This is a Celerra head, consisting of two data movers and a control station (think of these as being similar to the NetApp head), connected via a FCP network to a CLARiiON CX3-40 back end. The following graphic makes this clear.

What EMC NAS Engineering did was actually quite brilliant. They simply exposed the FCP ports of the back-end CLARiiON for connection to hosts. Again, this provides both NFS (and CIFS) access via the Celerra front-end and FCP access via the CLARiiON back-end. This is what we now call the Celerra NS Multi-Protocol series array.

The way that this is bundled is also nice. I do not do pricing, as that is not really my area within EMC. However, I have been present when pricing was presented to many customers. It turns out that the costs of a Celerra NS40 and a CLARiiON CX3-40 are basically the same. The effect of that is that the customer gets the additional functionality that the Celerra provides for free.

It's kind of like buying a Lexus for the price of a Toyota. The FCP access to the CLARiiON CX3-40 is identical, but the Celerra NS40 provides additional functionality at minimal additional cost. That's a good deal for the customer.

Hence the blended solution. You can take a Celerra NS40, which inherently includes the CLARiiON CX3-40. You use the Celerra for NFS access to Oracle objects that do not need high-performance, low-latency I/O. The rest you place directly onto the CLARiiON CX3-40.

In the process, by the way, you actually install and configure less software on the database host. This is because, as I pointed out in my last post, you must configure a shared storage layer for the CRS files, which cannot be managed by ASM. Typically, this would require OCFS2, or even raw devices. You get this for free with NFS, which is already installed for you.
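As a sketch of how simple this is on the database host, the shared NFS area for the CRS files can be a single /etc/fstab entry. The server name and paths below are hypothetical examples; the mount options shown (hard mounts, TCP, actimeo=0) follow the guidance commonly given for Oracle shared files over NFS.

```shell
# Hypothetical /etc/fstab entry mounting an NFS export from a Celerra
# data mover for the Oracle CRS files (voting disk and OCR).
# Server name and paths are examples only.
# hard,nointr,tcp,actimeo=0 are the options typically recommended
# for Oracle clusterware files over NFS.
celerra-dm1:/oracrs  /u02/oradata/crs  nfs  rw,bg,hard,nointr,tcp,vers=3,rsize=32768,wsize=32768,actimeo=0  0 0
```

Compare that to installing and configuring OCFS2 or managing raw devices on every node of the cluster.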

The commenter is actually on the right track on the second point. I have covered this fairly thoroughly in other posts on this blog, but the issues with NetApp FCP access include:

A LUN is actually a file in the WAFL file system. This can be easily proven by mounting the volume where a LUN resides via NFS or CIFS. You will find there a file of the same size and name as the LUN.
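The demonstration can be sketched in two commands; the filer name, volume, and LUN name below are hypothetical:

```shell
# Hypothetical demonstration: mount the NetApp volume that contains a
# LUN over NFS, then list it. Each LUN appears as one large file in
# the volume, with the same name and size as the LUN.
mount -t nfs filer1:/vol/oralun /mnt/oralun
ls -lh /mnt/oralun
# The listing would show one file per LUN, along the lines of:
#   -rw-------  1 root  root  100G  ...  oradata_lun0
```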

WAFL has some very troubling aspects with respect to sequential read performance. See my previous post on this point.

With EMC CLARiiON, again, a LUN is a LUN, i.e. a storage object you configure directly onto a RAID group. No file system in between you and the disk, in other words.

February 18, 2008

The program that I manage just published a solution that I am pretty jazzed about. This is in conjunction with the EMC Celerra NS Multi-Protocol Series Array. This array allows for both traditional NAS (i.e. NFS and CIFS) access as well as FCP access to the CLARiiON CX-3 Series back-end array.

Yes, I know. NetApp provides multi-protocol access already. However, there is a very big difference between the FCP access provided on a NetApp filer and on a CLARiiON CX-3 Series array. That is, with NetApp, the FCP access is a band-aid solution which is really a special file running on top of the WAFL file system. With the CLARiiON CX-3, the LUN that you see over FCP is a real, live, good, old-fashioned LUN sitting on a RAID group. No extra WAFL file system to muck up your read access, or otherwise complicate things.

In a word, it's simpler.

Having said that, NFS has its place. There are lots and lots of files which must be managed by an Oracle RAC database server which do not require the low-latency, high-performance access of FCP. Further, if you are using ASM (which we are), then many of these files cannot be stored in ASM. This means that you need a clustered file system in addition to ASM.

Guess what? You already have one. Which is completely ubiquitous and automatically installed on every UNIX and UNIX-like operating system in the industry. It's called NFS.

And it works just fine for files like the CRS files, i.e. the voting disk and the OCR file. It also works beautifully for backups, flashback recovery files, and archived logs. These files absolutely do not need the high-performance, low-latency access of FCP. Why not free up your expensive SAN and use NFS over IP to manage these files?
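For instance, once such an NFS mount is in place, pointing the flash recovery area and archived logs at it is a one-time init-parameter change. The mount point and size below are hypothetical examples; the parameters themselves (db_recovery_file_dest, db_recovery_file_dest_size, log_archive_dest_1) are standard Oracle initialization parameters.

```shell
# Hypothetical sketch: direct the flash recovery area and archived
# redo logs to an NFS mount instead of the SAN. The mount point
# /u03/orafra and the 200G quota are examples only.
sqlplus / as sysdba <<'EOF'
ALTER SYSTEM SET db_recovery_file_dest_size=200G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest='/u03/orafra' SCOPE=BOTH;
ALTER SYSTEM SET log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST' SCOPE=BOTH;
EOF
```

From that point on, backups, flashback recovery files, and archived logs land on the NFS mount, leaving the FCP LUNs for the datafiles and redo that actually need them.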

That's the idea behind the blended FCP / NFS solution. I have not done an exhaustive search, but as near as I can tell, no other storage vendor has done this yet. The blended solution looks like this:

The blended solution can be found here. This will be the vehicle whereby our program showcases FCP solutions from EMC from now on. I hope you find it as innovative and interesting as I do.

disclaimer: The opinions expressed here are my personal opinions. I am a blogger who works at EMC, not an EMC blogger. This is my blog, and not EMC's. Content published here is not read or approved in advance by EMC and does not necessarily reflect the views and opinions of EMC.