Use Cases / Reference Architectures

Sometimes the best way to get started with a new technology is by seeing how others have used it successfully. In this section, we highlight best-practice implementations of Ceph software and supporting hardware by commercial providers and experts in the field.

Share With Us!

Do you have a use case or a reference architecture that you would like to share with the Ceph community? We’re always happy to help show others how the world is using Ceph. Feel free to send your information to ceph-community@ceph.com

This performance and sizing guide describes Red Hat Ceph Storage coupled with QCT storage servers and networking as an object storage infrastructure. It covers testing, tuning, and performance for both large-object and small-object workloads, and presents the results of tests conducted to evaluate how well the configurations scale when hosting hundreds of millions of objects. Based on hundreds of hours of benchmarking, the guide provides empirical answers to a range of performance questions surrounding Ceph object storage.

Understanding a Multi-Site Ceph Gateway Installation

With the major rework of the Ceph gateway software in the Jewel release, it became necessary to revisit the installation and configuration process for S3 and Swift deployments. Although some documentation is already available on the Internet, most of it does not convey a deeper understanding of the various configuration parameters. This applies in particular to the failover and fallback process in case of a disaster. This whitepaper aims to improve that situation.
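The multisite model introduced in Jewel, which the whitepaper's failover and fallback discussion builds on, organizes gateways into a realm containing zonegroups, each with a master zone. As a hedged sketch of the initial master-zone setup (the realm, zonegroup, zone, and endpoint names below are placeholders, not values from the whitepaper):

```shell
# Minimal sketch of a Jewel-era multisite master-zone setup.
# All names and endpoints are hypothetical examples.
radosgw-admin realm create --rgw-realm=movies --default
radosgw-admin zonegroup create --rgw-zonegroup=us \
    --endpoints=http://rgw1.example.com:8080 --master --default
radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
    --endpoints=http://rgw1.example.com:8080 --master --default
# Commit a new period so the gateways pick up the topology change.
radosgw-admin period update --commit
```

A secondary zone in another cluster would then pull this realm and sync from the master; promoting the secondary during a disaster, and later rejoining the original zone, is the failover and fallback process the paper walks through.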

This reference architecture describes how to deploy Red Hat OpenStack Platform and Red Hat Ceph Storage so that both the OpenStack Nova compute services and the Ceph Object Storage Daemon (OSD) services reside on the same node. A server that runs both compute and storage processes is known as a hyper-converged node. There is increasing interest in the field in hyper-convergence for cloud (NFVi and enterprise) deployments. The reasons include smaller initial deployment footprints, a lower cost of entry, and maximized capacity utilization.

Ceph on NetApp E-Series

This technical report describes how to build a Ceph cluster using a tested E-Series reference architecture. The report also describes the performance benchmarking methodologies used along with test results.

Red Hat Ceph Storage on Dell PowerEdge R730xd

This technical white paper provides performance and sizing guidelines for Red Hat Ceph Storage running on Dell servers, specifically the Dell PowerEdge R730xd server, based on extensive testing performed by Red Hat and Dell engineering teams. The PowerEdge R730xd is an award-winning server and storage platform that provides high capacity and scalability and offers an optimal balance of storage utilization, performance, and cost, along with optional in-server hybrid hard disk drive and solid state drive (HDD/SSD) storage configurations.

Red Hat Ceph Storage on Supermicro Storage Servers

Ceph users frequently request simple, recommended cluster configurations for different workload types. Common requests are for throughput-optimized and capacity-optimized workloads, but IOPS-intensive workloads on Ceph are also emerging. To address the need for real-world performance, capacity, and sizing guidance, Red Hat and Supermicro have performed extensive testing to characterize Red Hat Ceph Storage deployments on a range of Supermicro storage servers in optimized configurations.

Red Hat Ceph Storage on Intel Processors and SSDs

Ceph users frequently request simple, optimized cluster configurations for different workload types. Common requests are for throughput-optimized and capacity-optimized workloads, but IOPS-intensive workloads on Ceph are also emerging. Based on extensive testing by Red Hat and Intel with a variety of hardware providers, this document provides general performance, capacity, and sizing guidance for servers based on Intel® Xeon® processors, optionally equipped with Intel® Solid State Drive Data Center (Intel® SSD DC) Series drives.

Cisco UCS C3160 High-Density Rack Server with Red Hat Ceph Storage

Object storage resolves the challenges of storing massive amounts of data: in particular, unstructured data. Object storage provides the infrastructure to store files along with their metadata, together called objects. Object storage can be accessed through applications that use Representational State Transfer (REST) APIs. The widespread use of big data has led to massive growth in storage requirements, scalability challenges for traditional file- and block-based systems, and the need for simpler and easier maintenance. These factors together have led to a rapid increase in object storage hardware and software solutions.
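To make the REST access model above concrete, the sketch below builds (but does not send) an S3-style object PUT: the object's data is the HTTP body, and its user metadata travels as `x-amz-meta-*` headers. The endpoint, bucket, and key are hypothetical, and a real request to a Ceph RADOS Gateway would also need S3 authentication headers, which are omitted here.

```python
# Hedged sketch: an object PUT over an S3-style REST API is just an HTTP
# request carrying the data as the body and user metadata as headers.
# Endpoint, bucket, and key names are placeholder assumptions.
import urllib.request

def build_put_request(endpoint, bucket, key, data, metadata):
    """Build (but do not send) an S3-style object PUT request."""
    url = f"{endpoint}/{bucket}/{key}"
    req = urllib.request.Request(url, data=data, method="PUT")
    for name, value in metadata.items():
        # User metadata is stored alongside the object itself.
        req.add_header(f"x-amz-meta-{name}", value)
    return req

req = build_put_request(
    "http://rgw.example.com:8080",   # hypothetical RGW endpoint
    "demo-bucket",
    "hello.txt",
    b"hello object storage",
    {"owner": "alice"},
)
print(req.get_method(), req.full_url)
# PUT http://rgw.example.com:8080/demo-bucket/hello.txt
```

Because the file and its metadata form a single addressable object, clients retrieve both with one GET to the same URL, which is what lets object stores scale past the limits of file- and block-based systems.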

Accelerating Ceph for Database Workloads with an all PCIe SSD Cluster

PCIe SSDs are becoming increasingly popular for deploying latency-sensitive workloads such as databases and big data in enterprise and service provider environments. Customers are exploring low-latency workloads on Ceph using PCIe SSDs to meet their performance needs. In this presentation, Intel looks at a high-IOPS, low-latency workload deployment on Ceph, performance analysis on all-PCIe configurations, best practices, and recommendations.

Red Hat Ceph Storage on QCT Servers

Running Red Hat® Ceph Storage on QCT servers provides open interaction with a community-based software development model, backed by the 24×7 support of the world’s most experienced open-source software company. Use of standard hardware components helps ensure low costs, while QCT’s innovative development model enables organizations to iterate more rapidly on a family of server designs optimized for different types of Ceph workloads. Unlike scale-up storage solutions, Red Hat Ceph Storage on QCT servers lets organizations scale out to thousands of nodes, with the ability to scale storage performance and capacity independently, depending on the needs of the application and the chosen storage server platform.

Ceph@HOME: the domestication of a wild cephalopod

I’ve long looked for a distributed and replicated filesystem to store my data. I’ve also been the sysadmin at the university, in the distributed systems lab, and for some time for the entire computing institute. In both positions, I took care of backups, worried about losing data to disk failures, and worked to keep the network going in the presence of hardware failures.