If your disks were not empty, you may see errors telling you that the kernel still uses the old partitions. In that case, reboot, redo the steps to install lvm2, bcache-tools, and mdadm if needed, and then continue from here. Otherwise pvcreate and other commands may fail; in particular, if you do not reboot before setting up bcache, you will have to purge bcache from all disks and re-create everything, with reboots in between.
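If you are unsure whether the kernel is still holding stale state, a few checks along these lines can help (here /dev/sdX is a placeholder, not one of the devices used in this guide):

# dmsetup ls          # list device-mapper entries still known to the kernel
# cat /proc/mdstat    # check for leftover software RAID arrays
# wipefs -a /dev/sdX  # destructive: clears old filesystem/RAID signatures from a disk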

Bcache Setup

Here we will have one cache serving two backing devices. We could instead have created two caches in a 1:1 cache:backing setup, in order to reserve a fixed amount of cache per backing device (e.g. two LVM cache partitions of 12G and 52G). In this setup the cache will be shared between both backing devices.
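For illustration only, a sketch of that 1:1 alternative could look like this (it assumes the ssd volume group created earlier, and the LV names cacheroot/cachesrv are hypothetical; this is not part of the setup we follow here):

# lvcreate -L 12G -n cacheroot ssd
# lvcreate -L 52G -n cachesrv ssd
# make-bcache --writeback -B /dev/mapper/RAID1-root -C /dev/mapper/ssd-cacheroot
# make-bcache --writeback -B /dev/mapper/RAID10-srv -C /dev/mapper/ssd-cachesrv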

Bcache Creation

Then create the bcache devices for both RAIDs:

To avoid left-overs:
# wipefs -af /dev/mapper/ssd-cache
# wipefs -af /dev/mapper/RAID1-root
# wipefs -af /dev/mapper/RAID10-srv
For the same reason, we will add --wipe-bcache:
# make-bcache --writeback --wipe-bcache -B /dev/mapper/RAID1-root -B /dev/mapper/RAID10-srv \
  -C /dev/mapper/ssd-cache
Notes: make-bcache is the command that creates the bcache devices; it takes several options.
--writeback: enables writeback caching for performance; in production you may prefer the default writethrough mode for safety.
-B refers to the backing devices; here one cache will serve two backing devices.
-C refers to the caching device; multiple cache devices are not yet supported, so use mdadm as a workaround if needed (see the sketch after these notes).
--wipe-bcache overwrites any previous bcache superblock, thus destroying previous bcache data.
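As a sketch of that mdadm workaround, in case you want the cache to span several SSDs (/dev/sdX1 and /dev/sdY1 are placeholder SSD partitions, not devices from this guide):

# mdadm --create /dev/md/ssdcache --level=1 --raid-devices=2 /dev/sdX1 /dev/sdY1
# make-bcache --writeback --wipe-bcache -B /dev/mapper/RAID1-root -C /dev/md/ssdcache

RAID1 is chosen here because with writeback caching the cache holds dirty data, so losing a single cache SSD should not be allowed to lose writes.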

This is what lsblk should show at this step:

# lsblk
NAME             MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                8:0    0 465,3G  0 disk
├─sda1             8:1    0  1023M  0 part
└─sda2             8:2    0 464,3G  0 part
  └─RAID1-root   252:3    0 464,3G  0 lvm
    └─bcache0    251:0    0 464,3G  0 disk
sdb                8:16   0   3,7T  0 disk
└─sdb1             8:17   0   3,7T  0 part
  └─RAID10-srv   252:4    0   3,7T  0 lvm
    └─bcache1    251:1    0   3,7T  0 disk
sdc                8:32   0 111,8G  0 disk
└─sdc1             8:33   0 111,8G  0 part
  ├─ssd-swap     252:0    0    12G  0 lvm
  ├─ssd-cache    252:1    0    64G  0 lvm
  │ ├─bcache0    251:0    0 464,3G  0 disk
  │ └─bcache1    251:1    0   3,7T  0 disk
  └─ssd-free     252:2    0  35,8G  0 lvm
sdd                8:48   1  58,9G  0 disk
└─sdd1             8:49   1     4K  0 part
loop0              7:0    0     1G  1 loop /rofs
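Optionally, you can verify at this point that each backing device carries a bcache superblock and that writeback is active; assuming the device names above:

# bcache-super-show /dev/mapper/RAID1-root
# cat /sys/block/bcache0/bcache/cache_mode

The second command prints the available modes with the active one in brackets, e.g. writethrough [writeback] writearound none.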

Format bcache devices

I format both as ext4 (I had a recent issue with bcache + btrfs):

# mkfs.ext4 /dev/bcache0
# mkfs.ext4 /dev/bcache1
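As a quick sanity check before moving on, you can mount one of the new filesystems temporarily (assuming /mnt is free to use):

# mount /dev/bcache0 /mnt && df -h /mnt && umount /mnt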

Partitioning and disk setup are now done; we move on to the server install:

Install mandatory packages

We need to install lvm2 and bcache-tools (which adds udev hooks), plus mdadm if we used a Linux software RAID option. These packages are really important: without them the system won't boot.

# apt-get install -y lvm2 bcache-tools mdadm
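These packages normally rebuild the initramfs on install; if in doubt, you can force a rebuild so the lvm/bcache/mdadm hooks are definitely included in the boot image (Debian/Ubuntu command):

# update-initramfs -u -k all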

Setup fstab

Now we need to get the UUIDs of the disks to be sure we mount the right device on the right mount point: bcache numbering is not guaranteed to be stable (bcache0 can become bcache1 at the next boot). For this, we will use the blkid tool: