
New in Ceph Luminous: Erasure Coding for Block Device and Ceph Filesystem
In Ceph versions before Luminous (12.2.1), RBD and the Ceph Filesystem could not directly use an erasure-coded pool as their backend storage; you had to add a replicated pool as a cache tier in front of the erasure-coded pool. This limitation has been lifted with BlueStore, which makes partial overwrites on erasure-coded pools possible.
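As a minimal sketch (the pool name and placement-group counts are illustrative, not from the original post), creating an erasure-coded pool and enabling overwrites on it looks like this:

```shell
# Create an erasure-coded pool; name and PG counts are example values
ceph osd pool create ec_pool 32 32 erasure

# Allow partial object overwrites, which RBD and CephFS require;
# all OSDs backing the pool must run BlueStore for this to be safe
ceph osd pool set ec_pool allow_ec_overwrites true
```

Without `allow_ec_overwrites`, the pool can only be used for full-object writes such as those done by RGW.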

1) Partial overwrites on an erasure-coded pool work by creating a logical reference to the old, about-to-be-overwritten blocks on disk, writing the new data to a new location, and later discarding the old reference. The end result is that after a failure, any partially applied updates can be rolled back, so the data stored in RADOS is always in a consistent state.
2) Using an EC pool with RBD: an image's data can be stored in an EC pool. However, it still needs a replicated pool for storing the image header and metadata.
3) If you are writing lots of data into big objects, EC pools are usually faster than replicated pools: less data is being written (only 1.5x what you provided, vs 3x for replication). The OSD processes consume a lot more CPU than they did before, however, so if your servers are slow you may not see any speedup. Large or streaming reads perform about the same as before.
4) Small writes, however, are slower than with replication.
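Point 2 above can be sketched with the `rbd` CLI (pool and image names here are illustrative): the image header and metadata live in a replicated pool, while the data objects are directed to the EC pool with `--data-pool`.

```shell
# Metadata goes to the replicated pool named by --pool;
# data objects are placed in the EC pool named by --data-pool
rbd create --size 1G --pool replicated_pool --data-pool ec_pool my_image

# For CephFS, the EC pool is added as an extra data pool,
# then a directory is pointed at it via a file layout attribute
ceph fs add_data_pool cephfs ec_pool
setfattr -n ceph.dir.layout.pool -v ec_pool /mnt/cephfs/ecdir
```

The same pattern applies to both cases: a replicated pool keeps the metadata, and only the bulk data lands on erasure coding.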

Details are in the Ceph blog:

http://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/

The Ambedded Ceph Software Defined Storage (SDS) appliance, with an easy-to-use GUI, is powered by an ARM microserver cluster in a box and supports object storage, block storage, and file system, as well as OpenStack Swift, Cinder, Nova, and Glance.