Can I run DPL2 on HCP with S-nodes?

DPL2 remains perfectly fine to use, and is highly recommended if you run G10 nodes with local storage and no replication. Adding HCP S-nodes does not change this.

However, the HCP S-nodes will not use DPL2, even if it is selected on the HCP G10 nodes. Instead, the HCP S-nodes use Erasure Code Protection (20+6) and can sustain 6 concurrent drive failures before data is at risk. Hence, HCP S-nodes do not need DPL2 protection.
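
To make the arithmetic concrete, here is a minimal Python sketch of the 20+6 layout (the function is illustrative only, not an HCP API; the 20/6 split restates the erasure-code figures above):

    DATA_FRAGMENTS = 20    # each object is split into 20 data fragments
    PARITY_FRAGMENTS = 6   # plus 6 parity fragments, spread across drives

    def is_recoverable(failed_fragments: int) -> bool:
        # Any 20 of the 26 fragments are enough to rebuild the object,
        # so up to 6 lost fragments are survivable.
        return failed_fragments <= PARITY_FRAGMENTS

    for failures in range(8):
        status = "data safe" if is_recoverable(failures) else "data at risk"
        print(failures, "concurrent drive failures ->", status)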

Can I write multiple copies to S-nodes, simulating DPL2?

The S10/S30 nodes are placed in a storage pool on HCP. The storage pool has a setting that specifies the number of copies, which is typically set to one. You can change this to, for example, two copies; HCP will then write two copies of the object to the S-nodes. If there is only one HCP S10 or S30 node in the pool, both copies are written to the same node. Because each HCP S-node performs single instancing, the effective protection of two identical objects written to the same S-node is the same as that of one copy. So this does not give you any extra protection.
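
As a rough illustration of why the second copy collapses, here is a toy model of single instancing (the class and its methods are hypothetical, not how an S-node is actually implemented):

    import hashlib

    class SingleInstanceStore:
        """Toy model: objects are stored once per unique content."""

        def __init__(self):
            self._objects = {}  # content hash -> object data

        def put(self, data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            # A duplicate write collapses onto the existing instance.
            self._objects.setdefault(digest, data)
            return digest

        def physical_copies(self) -> int:
            return len(self._objects)

    store = SingleInstanceStore()
    store.put(b"same object")
    store.put(b"same object")       # the "second copy"
    print(store.physical_copies())  # -> 1: still one physical instance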

A valid use of the multiple-copies setting in the storage pool is when you have multiple storage components in the pool, each represented as a single bucket. HCP will then write each copy to a distinct bucket (storage component) in the pool, protecting the object on multiple S-nodes.
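
A small sketch of that placement rule (the function and component names are hypothetical; only the one-copy-per-distinct-component behavior comes from the description above):

    def place_copies(num_copies: int, components: list) -> list:
        # One copy per distinct storage component (bucket) in the pool,
        # so the pool needs at least as many components as copies.
        return components[:num_copies]

    pool = ["s-node-A/bucket1", "s-node-B/bucket1"]
    print(place_copies(2, pool))  # each copy lands on a different S-node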

This is mainly a discussion of HCP features and behavior, so I am copying Joshua to keep me honest. At the end I discuss whether it makes sense for HCPS.

The HCP DPL namespace setting is only enforced on local and SAN-attached storage.

The number of data copies on storage that is managed by Adaptive Cloud Tiering (ACT) is set on the storage pool. HCPS nodes are part of such a pool for S3 storage.

Hence, the DPL namespace setting and the number of data copies on the storage pool can be different.
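
A hypothetical configuration sketch of the two independent settings (the field names are illustrative, not HCP's actual schema):

    # DPL is a namespace property, enforced on local/SAN-attached storage.
    namespace = {"name": "ns1", "dpl": 2}

    # The copy count is a property of the ACT-managed storage pool that
    # holds the S-nodes; it is set independently of the namespace DPL.
    s3_storage_pool = {"name": "s-node-pool", "copies": 1}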

Setting the number of data copies to more than one requires an equal or greater number of buckets to be present in the S3 storage pool to satisfy the copy-count rule.
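
Expressed as a simple check (a hypothetical helper, not an HCP API):

    def validate_copy_setting(num_copies: int, num_buckets: int) -> None:
        # The copies setting must not exceed the number of buckets
        # (storage components) in the S3 storage pool.
        if num_copies > num_buckets:
            raise ValueError(
                f"{num_copies} copies require at least {num_copies} "
                f"buckets; the pool has only {num_buckets}")

    validate_copy_setting(2, 2)    # OK
    # validate_copy_setting(2, 1)  # raises: not enough buckets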

That's as far as it goes for HCP. But does it make sense to set up multiple buckets in a single pool for a single HCPS?

HCPS automatically performs single instancing. This means that identical objects written to HCPS effectively become one copy, so writing more than one copy of the same object does not give you more protection than the 20+6 erasure coding that HCPS already provides.

However, if each bucket is on a distinct, separate HCPS node, you do increase the protection and durability of the data. But with 20+6 you have already achieved significant durability; adding multiple copies on top of that has far less effect than making the first copy of your data.
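
To see why the marginal benefit is small, here is a back-of-the-envelope durability calculation (the per-fragment failure probability is an assumed illustrative number, not a published HCP figure, and fragment failures are assumed independent):

    from math import comb

    def stripe_loss_probability(p, data=20, parity=6):
        # With 20+6 erasure coding, a stripe is lost only if more than
        # 6 of its 26 fragments fail within a repair window.
        n = data + parity
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(parity + 1, n + 1))

    p = 0.01                   # assumed per-fragment failure probability
    one_copy = stripe_loss_probability(p)
    two_copies = one_copy**2   # independent copies on two distinct S-nodes
    print(f"one EC-protected copy : {one_copy:.3e}")
    print(f"two independent copies: {two_copies:.3e}")
    # The first copy already makes loss extremely unlikely; a second copy
    # squares an already tiny number, hence the diminishing returns.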