Cloud storage

The cloud is already known for high scalability and availability. So why settle for a generic cloud solution when you can get something much more flexible and economical at the same time? Discover how you can get all the benefits of cloud storage for backup without actually subscribing to a public cloud: implement a cloud storage solution on premises that uses the same approach cloud vendors use to manage their own storage resources.

My division is Orange Labs: Products & Services. Our expertise comes from 3,300 people in 11 different countries. We define the overall technology strategy for all the products provided to the group, such as consumer cloud storage, e-mail, and user directories.

Vote before Wednesday, August 6th, 2014 – The next OpenStack Summit in Paris is just a few months away (Nov. 3rd to 7th), and Scality will demonstrate and present innovative storage services for OpenStack. It will be a fantastic opportunity to meet Scality engineers, attend Scality sessions, and see demos of the Scality RING coupled with OpenStack Swift and Cinder.

April 28th – Scality was very visible and active during NAB 2014 a few weeks ago, with plenty of visits to our booth. We had the opportunity to be interviewed to introduce the mission of Scality and the design we selected to build a massively scalable storage platform: the RING.

April 7th – Scality selects Aspera, the market leader in large file transfer and the reference in the Media and Entertainment market. Certified on the RING platform at the file level on connector nodes, Aspera fasp engines leverage the parallelism of the architecture to deliver very high transfer rates. Multiple deployment scenarios can coexist: Scality at both the source and the destination, or Scality to/from any site running Aspera (another file server or NAS, an Amazon Aspera EC2 instance).

March 17th – Beyond the Scality and SGI OEM agreement for the RING, sold as SGI OmniStor, SGI wished to validate and integrate the RING with SGI DMF. Data Migration Facility, the famous SGI solution, widely deployed and adopted in technical computing environments, is a well-known and recognized product. The RING, one of the leading massively scalable storage platforms in the industry and based on object storage technologies, is an obvious choice for SGI. How else could data managers deal with the real data deluge they live with every day?

December 18th – Data centers represent the Holy Grail of IT, since large and visible internet companies have demonstrated their capability to redesign and revolutionize the heart of corporate IT. They also promote, validate and build truly high-end infrastructure on commodity hardware. This change represents the first dimension of the ecosystem for large-scale IT environments: the capability to be hardware agnostic, meaning that the difference comes from software. Scality is a software-only solution deployed on “classic” x86 servers, transforming a rack of servers into a large storage pool.

December 16th – API or not API? That is the question… Most of the time, users can’t really choose: the decision is dictated by the application. In two words, if the application is a commercial one, users don’t have access to the code and can’t embed any specific logic. In that case, if the application requires file access and users pick an object storage solution, the only possible glue between the two is a gateway. When users design their own application, however, a decision must be made among the several available object APIs. Among others, people have a choice between an…
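The gateway pattern described above can be sketched in a few lines. This is a purely illustrative, in-memory model (all class and method names are hypothetical, not any vendor’s API): a flat key/value object store stands in for the native object API, and a thin gateway maps the file paths an unmodifiable application expects onto object keys.

```python
# Illustrative sketch, assuming a hypothetical object store and gateway.
# A native object API exposes a flat key namespace; the gateway is the
# "glue" that presents file-style paths to applications that need them.

class ObjectStore:
    """Flat key/value store, as exposed by a native object API."""
    def __init__(self):
        self._objects = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


class FileGateway:
    """Presents file-style paths on top of the object store, for
    applications that require file access and cannot be modified."""
    def __init__(self, store: ObjectStore):
        self._store = store

    @staticmethod
    def _to_key(path: str) -> str:
        # Translate the hierarchical path into a flat object key.
        return path.lstrip("/").replace("/", "%2F")

    def write_file(self, path: str, data: bytes) -> None:
        self._store.put(self._to_key(path), data)

    def read_file(self, path: str) -> bytes:
        return self._store.get(self._to_key(path))


store = ObjectStore()
gateway = FileGateway(store)
gateway.write_file("/media/clip01.mxf", b"frame data")
print(gateway.read_file("/media/clip01.mxf"))  # b'frame data'
```

An application designed from scratch would simply call `put`/`get` on the object API directly and skip the gateway layer, which is the trade-off the entry describes.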

December 9th – Scality was pretty active during SC2013 in Denver a few weeks ago, with lots of visits to our booth and the opportunity to speak with InsideHPC to summarize our criteria for an ExaScale data center and the role of the Scality RING as Tier 2 in a multi-tier environment, especially in a Lustre or GPFS compute environment. Philippe Nicolas, Product Strategy (pn@scality.com)

Marc Staimer, President & CDS, Dragon Slayer Consulting – On 17 September 2013, GigaOM released a white paper by this analyst, “How object storage tackles thorny ‘exascale’ problems”. Many readers labor under the false assumption that the thorny exascale problems are unique to those with exabytes of storage. They are not. These problems exist and are just as pernicious at the petascale (petabytes of storage capacity) and even large terascale (hundreds of terabytes) levels. So why is this false perception so pervasive?

San Francisco – October 26th, 2012 As Internet use continues to skyrocket, the volume of files being produced, shared and stored climbs at a tremendous pace. The traditional methods of storing this kind of ‘unstructured data’ show signs of failing under the strain of this completely unanticipated load. In reaction, system administrators work furiously to plug the fissures appearing in their infrastructure, and CIOs cast about urgently trying to assess and source potential solutions to these tectonic shifts.

George Crump of Storage Switzerland wrote an article back in January about the True Cost of Storage that clearly laid out how to really calculate one’s Total Cost of Ownership (TCO) when it comes to storage. Through all our discussions about storage with prospects and customers, we realized that the real, hard cost of storage was not always well calculated, and remained pretty elusive to IT managers. This became even more obvious when comparing against the new types of storage available to customers, especially cloud storage, since the models, both economic and technical, were quite different.
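The TCO calculation alluded to above boils down to simple arithmetic: acquisition cost plus recurring operating costs over the service life, divided by usable capacity. The sketch below is a minimal illustration with entirely hypothetical numbers (not figures from the article or from any vendor); real models would add items such as migration, maintenance contracts, and refresh cycles.

```python
# Hypothetical TCO sketch: one-time acquisition plus recurring costs
# (power, floor space, administration) over the service life, then
# normalized to cost per usable terabyte. All numbers are illustrative.

def storage_tco(acquisition, yearly_power, yearly_space,
                yearly_admin, years, usable_tb):
    """Return (total cost, cost per usable TB) over the service life."""
    recurring = (yearly_power + yearly_space + yearly_admin) * years
    total = acquisition + recurring
    return total, total / usable_tb

total, per_tb = storage_tco(
    acquisition=200_000,   # servers + disks + network, one-time
    yearly_power=12_000,
    yearly_space=8_000,
    yearly_admin=30_000,
    years=5,
    usable_tb=500,
)
print(f"5-year TCO: ${total:,.0f} (${per_tb:,.0f}/usable TB)")
# 5-year TCO: $450,000 ($900/usable TB)
```

Expressed per usable TB, such a figure is what makes on-premises storage directly comparable to a cloud provider’s per-GB-per-month pricing.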