The system splits drive ownership between the two shelves with the following assignments:

Node 1 owns all disks and partitions (0 - 23) in shelf 1

Node 2 owns all disks and partitions (0 - 23) in shelf 2
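
You can confirm the split with something like the following (a sketch - fields and output vary slightly between ONTAP versions):

::> storage disk show -fields owner,container-type
::> storage disk show -partition-ownership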

Creating a new aggregate on Node 1 with a RAID group size of 23 gives me the following layout (example command after the list):

RG 0 - 21 x Data and 2 x Parity

RG 1 - 21 x Data and 2 x Parity

2 x data spares
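
The arithmetic: with root-data-data partitioning, each of the 24 disks carries 2 data partitions, so the node owns 48 data partitions. A RAID group size of 23 fits two full RAID-DP groups of 21 data + 2 parity (46 partitions), leaving 2 as spares. The create command looks something like this (a sketch - the aggregate name is just an example, and with partitioned disks the disk count refers to data partitions):

::> storage aggregate create -aggregate aggr1_node1 -node node1 -diskcount 46 -maxraidsize 23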

One root partition is roughly 22GB on a 3.8TB SSD

The maximum number of partitioned disks you can have in a system is 48, so with these 2 shelves we are already at maximum capacity for partitioned disks. Any additional shelf will need to use the full disk size in new aggregates.

Benefits I see with this setup:

In the case of a single disk failure or a shelf failure, only one node/aggregate would be affected.

Cons of this setup:

Each node's root and data aggregate workload is pinned to a single shelf.

It's possible to reassign partitions so that one data partition on each disk is owned by the partner node, which lets you split the aggregate workload between shelves; however, a disk or shelf failure would then affect both aggregates (see the example below).
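
For example, flipping the second data partition of a disk over to the partner would look something like this (a sketch - the disk name is an example, and owned partitions may need ownership removed first, so check the docs before trying this on a live system):

::> storage disk assign -disk 1.0.0 -owner node2 -data2 true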

I then completed the cluster setup wizard and connected the 2nd disk shelf.

The system split the disk ownership for shelf 2 in the following way:

Disks 0 - 11 owned by node 1

Disks 12 - 23 owned by node 2

Next, I added disks 0 - 11 to the node 1 root aggregate and disks 12 - 23 to the node 2 root aggregate. This partitioned the disks and assigned partition ownership in the same way as shelf 1.
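
The adds were along these lines (a sketch - aggr0_node1/aggr0_node2 are the typical auto-generated root aggregate names, and the <...> placeholders stand for the actual disk names on the new shelf):

::> storage aggregate add-disks -aggregate aggr0_node1 -disklist <shelf 2, bays 0-11>
::> storage aggregate add-disks -aggregate aggr0_node2 -disklist <shelf 2, bays 12-23>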

Because the system was initialized with only 1 shelf connected, it created 55GB root partitions as opposed to the 22GB in my second test scenario above. This means a 55GB root partition is used on every disk across both shelves instead of 22GB. How much space do you actually save when using 3.8TB SSDs?
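
A rough calculation, before any RAID or WAFL overhead: the difference is 55GB - 22GB = 33GB per disk, and with 48 partitioned disks (one root partition each) that is 33GB x 48 = 1,584GB - roughly 1.5TB of raw capacity handed back to the data partitions, out of about 182TB raw across the 48 x 3.8TB SSDs (under 1%).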

Re: ONTAP 9.x root-data-data partitioning discussion

Great blog entry from @davidrnexon to start things off. @robinpeter, talk to us about "If SSD, don't add more than 2 shelves in a loop" - so if there is a fantasy-sized budget and an AFF A700 with tons of SSD shelves, can we configure 20 loops or more and still have plenty of ports left for FC and 10/40GbE NICs? Thanks

- If I get my hands on a big AFF, I want to use all the tools for the drives: RAID-TEC, ADP, and root-data-data.

Re: ONTAP 9.x root-data-data partitioning discussion

@robinpeter we actually had a customer with a chassis failure in their 2000 series last week. It took the whole storage system down, affecting 500+ staff, with a 5-6 hour turnaround for parts and an engineer to replace the chassis. I had never heard of this before, but unfortunately these really bad situations do happen.

@xiawiz I doubt you will fit 20 loops in any system. Also, you wouldn't run RAID-TEC with SSDs; it's more for the larger SATA drives, in which case you would be looking at the 8200 series.

Re: ONTAP 9.x root-data-data partitioning discussion

It is an A200 w/ 2 shelves (internal + 1 224) and ADP, but judging by the root partition size (55GB), it looks like it was configured first with a single shelf. Playing around in Synergy, a 2-shelf config should have the 22GB root partitions (like you got in scenario 2), which would be my preference. I kicked off a 4a and it did partition the 2nd shelf's drives, but I'm thinking that because I did not remove all the existing partitions first, they ended up being 55GB. I'm going to follow this procedure and re-initialize.

Re: ONTAP 9.x root-data-data partitioning discussion

Yes, that's correct - it will partition the new shelf but keep the root partition size the same, which in your case is 55GB.

If you want to get the partition size down, you'll need to remove everything: volumes, aggregates, partitions, and disk ownership.

With both shelves plugged in, you can then re-initialize node 1 (keep node 2 at the boot loader until node 1 is finished); it will take half the disks, assign ownership, and partition them with the smaller root size. Then you can do the same with node 2 (rough outline below).

Are you familiar with the process to remove everything and re-partition?
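
In rough outline it goes something like this (from memory, so treat it as a sketch and verify each step against the docs for your ONTAP version - the volume, aggregate, and disk names here are examples):

::> volume offline -vserver svm1 -volume vol1 (then volume delete; repeat for all data volumes)
::> storage aggregate delete -aggregate aggr1_node1 (repeat for each data aggregate)
::> storage disk removeowner -disk 1.0.0 -root true -data true (repeat per disk to release the partitions)

Then halt both nodes, boot node 1 to the boot menu with node 2 waiting at the LOADER prompt, run the initialize option (4/4a, as you used before) on node 1, and repeat for node 2 once node 1 is back up.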