One of the major benefits of cloud-based storage solutions is their scalability and elasticity, allowing companies to store as much, or as little, data as they need from month to month. That scalability is made possible by object storage.

In this podcast, industry expert Marc Staimer, president of Dragon Slayer Consulting, speaks with SearchCloudStorage.com assistant site editor Rachel Kossman about the technical details and benefits of cloud object storage. The two also discuss the limitations of object storage, such as poor IOPS and high latency, and Staimer gives listeners a technical drilldown into cache coherency.

Kossman: You recently gave a Storage Decisions presentation where you explained to attendees that scalability for the cloud relies on object storage. Can you explain cloud object storage to our listeners?

Staimer: You have to consider that today, scalability is becoming a major issue in storage. If a single organization needs to scale its storage into the petabytes, what happens when dozens, hundreds or thousands of organizations have to do the same on cloud storage? That means cloud storage has to scale multiple orders of magnitude beyond what an individual organization with its own storage system may require. Since standard storage systems like [storage-area network] SAN, [network-attached storage] NAS, scale-out SAN or scale-out NAS aren't going to scale to that magnitude, you've got to have a different type of storage, or you can't offer cloud storage that would be cost-effective to the masses. The only technology that can do that cost-effectively today is object storage. Object storage has no practical limits; it can scale nearly indefinitely at this point, which makes it an ideal form of storage for the cloud.
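Part of why object storage sidesteps the scaling limits Staimer describes is its flat namespace: every object is addressed by a unique key rather than a path in a directory hierarchy. A minimal sketch, using a hypothetical in-memory store (the class and method names here are illustrative, not from any real product):

```python
import hashlib


class ObjectStore:
    """Toy flat-namespace object store: each object is addressed by a
    globally unique key, so there is no directory tree to traverse,
    lock or rebalance as the system grows."""

    def __init__(self):
        self._objects = {}  # key -> (data, metadata)

    def put(self, data: bytes, metadata: dict) -> str:
        # Derive a content-based key; real systems typically use UUIDs
        # or content hashes for the same purpose.
        key = hashlib.sha256(data).hexdigest()
        self._objects[key] = (data, metadata)
        return key

    def get(self, key: str) -> bytes:
        return self._objects[key][0]


store = ObjectStore()
key = store.put(b"quarterly-backup", {"owner": "finance"})
print(store.get(key))  # b'quarterly-backup'
```

Because a key maps directly to an object, adding capacity is mostly a matter of spreading the key space over more nodes, which is what lets object stores grow to the scales a SAN or NAS filesystem cannot.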

Kossman: So this eliminates the need for cache coherency, correct? Can you elaborate on that?

Staimer: The object approach to storage puts more emphasis on individual chunks of data that are loosely federated, rather than on a consistent storage system across all the resident data. Traditional storage requires that system-wide consistency, which means with object storage, you don't have to have a single, or aggregated, namespace governing all the data. Instead, you have a loose federation of individual data elements that control their own destiny. This eliminates the need for cache coherency across the entire system, because each individual chunk of data carries metadata about the whole data set. The need for every node to be aware of the objects owned by other nodes, and even the concept of ownership of a piece of data by a physical node, goes away. This allows data to scale based on rules about the data itself rather than about the storage system. As long as the data meets specified policies about how many copies it needs to have, where it can live, in what geographic location and so on, the system can grow and scale basically indefinitely.
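The idea that each object carries its own placement policy can be sketched as follows. This is a hypothetical illustration (the field names `min_copies` and `allowed_regions` are invented for the example): any node can check an object against its own metadata, with no global cache or coordinator involved.

```python
# Each object record carries its own policy in metadata, so a node can
# evaluate compliance locally instead of consulting a system-wide state.
object_record = {
    "id": "obj-42",
    "metadata": {
        "min_copies": 3,                          # policy: at least 3 replicas
        "allowed_regions": ["us-east", "eu-west"],  # policy: where it may live
    },
    "replicas": [
        {"node": "n1", "region": "us-east"},
        {"node": "n7", "region": "eu-west"},
    ],
}


def needs_repair(obj: dict) -> bool:
    """Return True if the object violates its own policy metadata."""
    valid = [r for r in obj["replicas"]
             if r["region"] in obj["metadata"]["allowed_regions"]]
    return len(valid) < obj["metadata"]["min_copies"]


print(needs_repair(object_record))  # True: only 2 valid copies, policy wants 3
```

Because the policy travels with the object, any node that holds a replica can trigger a repair on its own, which is the property that lets the federation grow without system-wide coherency traffic.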

Kossman: Is there any object storage that's good for primary storage?

Staimer: There is, but it's limited. Right now the only object storage system that works at the performance level that people expect in primary storage is Scality's Organic Ring. They have a variation that gives SAN-like performance for applications like email, databases or things of that nature.
