Alvaro - thanks for your helpful advice - but the consensus here was to avoid Oracle upgrades unless absolutely necessary - so I apologize, but I won't be able to provide ACFS2 scaling feedback anytime soon.

We decided to try Dude's first suggestion and got e2fsprogs version 1.42.

Sure enough - it allows you to create a >16TB filesystem - once you specify the magic parameter "-O 64bit".
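For anyone following along, here is a small sketch of the step. The image path is made up, and I'm demonstrating on a 1GB throwaway file just to show that the flag takes effect - on the real LUN you'd run mke2fs against the device (e.g. your LV path) instead:

```shell
# Create a scratch image and force the 64bit feature at mkfs time.
# Path and size are examples; use your block device for the real run.
truncate -s 1G /tmp/ext4-64bit.img
mke2fs -t ext4 -O 64bit -F -q /tmp/ext4-64bit.img
# Verify the feature was enabled:
dumpe2fs -h /tmp/ext4-64bit.img 2>/dev/null | grep -i 'features'
```

The "Filesystem features" line should include "64bit", which is what allows the block count to exceed the old 2^32 limit (16TB at a 4K block size).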

So all is well on that front - thank you, Dude, and thank you everyone for all the ideas. We are now stress-testing the new FS to see if we can truly fill 80TB without losing anything, and whether it scales.

We are already finding that these ext4 filesystems - even those under 16TB - scale well on a single Data Pump parallel export from a single DB, but not so well when hit by multiple DBs, each with their 16, 32 or 64 PX slaves. Our network is 10Gbit - we get 4-5 Gbit on a single DB-to-FS parallel handshake - but not much higher when we add more DBs. Not sure why we are leaving the other 5Gbit on the table on the filer side.

Any advice on UEK2 / ext4 NFS mount tuning? Does that warrant a separate forum post?

Starting a new thread for a new topic is usually good advice. There is lots of information on the Internet about tuning NFS performance. Perhaps you can increase performance by adding more NFS server threads. Looking at your data, however, I'd rather suspect you are already maxing out your hardware, e.g. disk, system bus, or storage controller.
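As a sketch of the thread-count knob (assuming a Linux NFS server on an OL/RHEL 5/6 style system - the config path varies by distro):

```shell
# Check the current nfsd thread count (default is often 8):
grep RPCNFSDCOUNT /etc/sysconfig/nfs

# Set e.g. RPCNFSDCOUNT=64 in that file, then restart:
# service nfs restart

# Or adjust the running server on the fly:
# rpc.nfsd 64
```

With many clients each pushing dozens of parallel streams, 8 threads can become a bottleneck on the server side - worth ruling out before blaming disks.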

Dude - first - thank you - we combined all of your suggestions - e2fsprogs 1.42 with -O 64bit, plus splitting the large mount in two and defining two Data Pump directories - and it works very well.

We now have 2 vgs/lvs/filesystems of 40TB mounted on 2 folders. Datapump export files 1,3,5... go to #1 and 2,4,6... go to #2. They scale to 10Gbit (1GByte/sec) write speed very nicely. We've been hitting it hard with concurrent parallel reads/writes/copies/removes and so far it's very resilient and error free.
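In case it helps others, the alternating-file layout can be driven by Data Pump itself - expdp distributes dump files round-robin across the directories listed in DUMPFILE. The directory object names and mount points below are made up for illustration:

```shell
# Create one directory object per mount (names/paths are examples):
sqlplus / as sysdba <<'SQL'
CREATE DIRECTORY dp1 AS '/dp1';
CREATE DIRECTORY dp2 AS '/dp2';
SQL

# expdp round-robins exp_01, exp_02, ... across dp1 and dp2:
expdp system FULL=Y PARALLEL=32 \
      DUMPFILE=dp1:exp_%U.dmp,dp2:exp_%U.dmp \
      LOGFILE=dp1:exp.log
```

That spreads the write load over both filesystems without any manual file placement.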

Here is the next dilemma. We put these 2 mounts in /etc/fstab with a type of "ext4" (which they are). By convention, when Linux reboots it checks the filesystems in /etc/fstab using "fsck.fstype" - in our case "fsck.ext4". However, this utility doesn't know how to deal with the large new filesystems, so the check fails and the boot fails.

Having installed e2fsprogs 1.42, the utility to use is e2fsck 1.42, which works fine and gives the filesystems the all-clear - but that's not what the boot uses by default.

As a stopgap - we went into /etc/fstab and changed the 6th column for these 2 filesystems to "0" - so that they don't get checked upon boot. This solved the problem and allowed UEK2 to come up.
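For anyone who wants to script that stopgap, here is one way (the mount points /dp1 and /dp2 are examples - substitute your own):

```shell
# Set the fsck pass number (6th fstab field) to 0 for the two big
# mounts so boot-time fsck skips them. Back up fstab first.
cp /etc/fstab /etc/fstab.bak
awk '($2 == "/dp1" || $2 == "/dp2") { $6 = 0 } { print }' \
    /etc/fstab.bak > /etc/fstab
```

A pass number of 0 tells fsck at boot to leave the filesystem alone; all other entries are passed through unchanged.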

As an aside - even though we were root - we weren't initially able to change fstab as the filesystem was in read-only mode - but a remount of the root partition as read-write - (mount -o remount,rw "root partition" /) - solved that problem.

So we have a workaround by not checking these 2 filesystems at boot time - but is there a better way? How do we tell the OS that these 2 filesystems are >16TB, so that it uses the e2fsck from the 64-bit-capable e2fsprogs to check them upon reboot?
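One thing worth checking first: a from-source e2fsprogs build typically installs under /usr/local, while boot-time fsck resolves /sbin/fsck.ext4. A quick diagnostic, plus one untested option (paths are assumptions):

```shell
# Which fsck.ext4 copies exist, and which package version is stock?
type -a fsck.ext4
rpm -q e2fsprogs

# One hedged option: back up the stock binary and point fsck.ext4 at
# the 1.42 e2fsck, assuming it landed in /usr/local/sbin:
# cp /sbin/fsck.ext4 /sbin/fsck.ext4.orig
# ln -sf /usr/local/sbin/e2fsck /sbin/fsck.ext4
```

That way the small system filesystems and the big ones would all be checked by the same 1.42 code - though replacing a stock binary has its own maintenance cost when the e2fsprogs RPM is next updated.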

Well - we took the plunge - we created a new initramfs image - but OEL wouldn't boot and we got the following message -

"panic occurred switching back to text console"

So we rebooted, interrupted GRUB at boot, booted the Red Hat compatible kernel, restored the old initramfs image from the backup, and OEL came up fine.
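For the record, the backup-then-rebuild workflow looks roughly like this (assuming an OL6/dracut-style system - on OL5 mkinitrd is the equivalent; the image naming follows the usual /boot convention):

```shell
# Keep an "undo" copy before regenerating the initramfs for the
# running kernel, so a bad image can be swapped back at the GRUB prompt.
KVER=$(uname -r)
cp /boot/initramfs-$KVER.img /boot/initramfs-$KVER.img.bak
dracut -f /boot/initramfs-$KVER.img $KVER
```

If the new image panics at boot, pointing the initrd line back at the .bak file from the GRUB edit screen gets the system up again - which is exactly the escape hatch we used.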

For now, we will proceed by zeroing out the 6th column for the 64-bit filesystems in /etc/fstab, so that they don't get checked at boot time. The small system filesystems will continue to be checked by fsck.ext4, which is fine.

Hopefully in the not-too-distant future OEL will be truly all 64-bit, and that will be the end of that.

In the meantime - if anyone has additional information to share - please do.

Thanks, Dude, for the built-in "undo" mechanism. It made it very easy to test this.