Panasas

We’re pleased to release the first in a three-part series of video Q&A sessions with Panasas Chief Scientist Dr. Garth Gibson. The series will offer a candid look at the underpinnings of the PanFS file system and what sets it apart from other approaches to data management and data protection.

Geoffrey Noer

There’s no question that this is a big announcement: we not only announced our new flagship platform, ActiveStor 16, but also PanFS 6.0, the biggest step forward for our storage operating system in many years.

Geoffrey Noer

Happy birthday, RAID! Twenty-five years ago, in March 1988, Panasas founder and chief scientist, Dr. Garth Gibson, published the paper “A Case for Redundant Arrays of Inexpensive Disks (RAID)” with co-authors David Patterson and Randy Katz, inventing a concept that would prove central to the storage industry for decades to come. Congratulations to all three of these storage visionaries!

Brent Welch

In my last post I talked about how the Panasas parallel file system (PanFS) achieves extreme performance for big data sets. It also provides redundancy without the need for hardware RAID controllers. In the attached video, Garth Gibson, Panasas founder and CTO, digs deep into the specifics of file system RAID and how PanFS delivers redundancy as part of the file system itself.

Brent Welch

Traditional RAID is designed to protect whole disks with block-level redundancy. An array of disks is treated as a RAID group, or protection domain, that can tolerate one or more failures and still reconstruct a failed disk using the redundancy encoded on the other drives. RAID recovery requires reading all the surviving blocks on the other disks in the RAID group to recompute the blocks lost on the failed disk. While disk capacity has grown by 40% to 100% per year, disk bandwidth has not increased substantially, so rebuilding a failed disk takes longer and longer.
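To make the recovery cost concrete, here is a minimal sketch of single-parity (RAID-5 style) reconstruction using XOR over a toy array of byte-string "disks." This is an illustration of the general technique, not Panasas code; the disk contents and group size are made up for the example.

```python
# Toy single-parity RAID recovery: parity is the XOR of all data blocks,
# so any one lost block can be recomputed from the survivors.

def parity(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

# Four data disks plus one parity disk form the RAID group.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
p = parity(data)

# Simulate losing disk 2. Note that rebuilding it requires reading
# EVERY surviving disk in the group plus the parity disk -- this is
# why rebuild time is bounded by the bandwidth of the whole group.
survivors = [d for i, d in enumerate(data) if i != 2] + [p]
rebuilt = parity(survivors)

assert rebuilt == b"CCCC"  # the lost disk's contents are recovered
```

The key point the example makes visible: recovering one disk's worth of data forces a full read of all the other disks, which is why growing capacity without growing bandwidth stretches rebuild times.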