As a Storage Solutions provider, AiNET recognized shortcomings in the platforms underlying conventional storage offerings and, in response, developed the AiNET® Peta10 Storage Solution. Building on the most advanced proprietary and open storage arrays available, the highly agile Peta10 can be configured to address virtually any storage requirement.

The Peta10 is designed to address the storage challenges facing enterprises of all sizes:

55 magnetic, spinning disks (HDD) in each 4U shelf

5 SSD (up to 10TB of SSD) per 4U shelf

Hybrid Storage with Automatic Tiering

Flash optimized for high IOPS (>100,000 IOPS per 4U shelf)

Increased SAN storage efficiency through thin provisioning and data deduplication: 50% or greater

Higher storage utilization can lead to lower total cost of ownership

Benefits apply across all supported protocols: block/SAN protocols (FC, iSCSI, FCoE, InfiniBand) as well as NAS protocols (NFS v3/v4, CIFS)
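To illustrate how block-level deduplication achieves the efficiency figures above, here is a minimal fixed-block sketch in Python. The function names, block size, and SHA-256 digest choice are illustrative assumptions, not the Peta10's actual deduplication engine:

```python
import hashlib

def dedup_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once.

    Returns (store, refs): store maps digest -> block (one physical copy
    per unique block), refs is the ordered list of digests needed to
    reconstruct the original data.
    """
    store, refs = {}, []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)   # keep only one physical copy
        refs.append(digest)
    return store, refs

def savings(data: bytes, block_size: int = 4096) -> float:
    """Fraction of physical capacity saved by deduplication."""
    store, refs = dedup_blocks(data, block_size)
    return 1 - len(store) / len(refs)

# Two identical 4 KiB blocks plus one unique block: one copy is elided.
data = b"A" * 4096 + b"B" * 4096 + b"A" * 4096
store, refs = dedup_blocks(data)
assert len(refs) == 3 and len(store) == 2
```

Reconstructing the logical data is a matter of concatenating the blocks named in `refs`; a workload of two identical blocks yields exactly the 50% savings cited above.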

In-service capacity upgrades (ISSU) – Designed to grow with existing technology, the Peta10 allows in-service capacity upgrades of hundreds of terabytes per shelf, making it well suited to Information Lifecycle Management (ILM). Beyond ISSU, AiNET's new-every-two program provides free capacity upgrades every two years for all Peta10s under active support contracts.

AiNET Managed Services back the Peta10 with 100% proactive monitoring and fault detection, software updates, and 24x7x365 system, parts and labor support.

The AiNET Peta10 Storage Solution is supported globally with Advanced Equipment Replacement. Peta10 SAN solutions are available with in-service system upgrades (ISSU), allowing capacity upgrades without service interruption (densities over 550TB per 4U shelf with 20TB of Flash Cache/ARC cache).

Moving beyond years of legacy design and bottlenecks, the Peta10 eliminates the traditional hardware RAID controllers and their inherent problems:

Slow, non-parallel I/O processing

Risk of data loss

Loss of data integrity

High cost, poor integrity design

Poor random access performance

Copy-on-write storage model

The Peta10 uses a copy-on-write schema for file system transactions. Rather than overwriting data in place, modified data is written to a newly allocated block, and the pointer to the live copy is updated only once the write is committed to disk. Multiple requests can therefore continue to read recently written data while updates are in flight, improving the overall concurrency and latency of the filesystem. Copy-on-write eliminates many file system locking requirements and the inherent risks of data contention in highly concurrent file systems.
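The copy-on-write scheme described above can be sketched in a few lines of Python. This is an illustrative model, not the Peta10's on-disk format; the class and field names are invented for the example:

```python
class CowStore:
    """Minimal copy-on-write sketch: blocks are never overwritten in place.

    An update allocates a fresh block, writes the new data there, and
    flips the logical pointer only after the data is committed, so a
    reader always sees either the old or the new version, never a torn mix.
    """
    def __init__(self):
        self.blocks = {}      # physical block id -> data
        self.pointers = {}    # logical name -> physical block id
        self.next_id = 0

    def write(self, name, data):
        new_id = self.next_id          # allocate a fresh block
        self.next_id += 1
        self.blocks[new_id] = data     # commit the data first...
        self.pointers[name] = new_id   # ...then atomically repoint

    def read(self, name):
        return self.blocks[self.pointers[name]]

s = CowStore()
s.write("file", b"v1")
old = s.pointers["file"]
s.write("file", b"v2")
assert s.read("file") == b"v2"
assert s.blocks[old] == b"v1"   # prior version still intact on "disk"
```

Because the old block survives until the pointer flip, a crash at any point leaves a consistent image, which is also the basis of the always-consistent on-disk guarantee discussed next.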

Petabyte architecture: Always consistent on-disk data

Most production systems today still allow the on-disk data to be inconsistent in some way for varying periods of time. If an unexpected crash or power cycle happens while the on-disk state is inconsistent, the entire disk system requires some form of repair. While somewhat acceptable for smaller file systems (under 1TB), this penalty grows rapidly more expensive as one approaches the petabyte level: slow, time-consuming rewrites and, even worse, expensive fscks, chkdsks, or metadata replays do not scale. Eliminating on-disk inconsistencies is part of a true petabyte solution.

High data ingest rates

Whether used as part of a GPFS, Lustre or other high-performance cluster, or simply as a standalone all-in-one storage solution, Peta10s (P10s) sustain high ingest rates. Full wire-speed writes at over 20Gb/s per stream are possible in various configurations for supercomputing clusters.

Checksum all data on read and write

All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (Fletcher-2, Fletcher-4, or SHA-256) of the target block which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and an intent log (or journal) is used when synchronous write semantics are required.
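The Fletcher-style checksum mentioned above runs four cumulative sums over the data words, making it sensitive both to changed values and to reordered blocks. Here is a hedged Python sketch of a Fletcher-4-style checksum; a production implementation works over raw 32-bit little-endian words with 64-bit accumulators, as modeled here, but details such as padding are assumptions of this example:

```python
import struct

def fletcher4(data: bytes):
    """Fletcher-4-style checksum sketch: four running sums over 32-bit words.

    Returns the four 64-bit accumulators (a, b, c, d), together forming
    a 256-bit checksum. Input is zero-padded to a word boundary here for
    simplicity (an assumption of this sketch).
    """
    if len(data) % 4:
        data += b"\0" * (4 - len(data) % 4)   # pad to a 32-bit boundary
    a = b = c = d = 0
    for (word,) in struct.iter_unpack("<I", data):
        a = (a + word) & 0xFFFFFFFFFFFFFFFF   # sums kept modulo 2**64
        b = (b + a) & 0xFFFFFFFFFFFFFFFF
        c = (c + b) & 0xFFFFFFFFFFFFFFFF
        d = (d + c) & 0xFFFFFFFFFFFFFFFF
    return (a, b, c, d)

block = b"example block"
stored = fletcher4(block)                    # computed at write time
assert fletcher4(block) == stored            # verified on every read
assert fletcher4(block + b"!") != stored     # corruption is detected
```

On read, the filesystem recomputes the checksum and compares it with the value stored in the block pointer; any mismatch signals silent corruption before bad data reaches the application.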

SSD eliminates legacy caching and constraints

The combination of copy-on-write and the built-in SSD allows the file system to organize and reorder writes to the spinning magnetic hard drives without losing the transactional atomicity (write order) presented to the client. This enhances performance for transactions involving “hot” data as well as small block data updates: hot data is served from the SSD as an L2 cache transaction, providing silicon speed well beyond the capabilities of traditional SANs.
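The "hot data from SSD" behavior can be sketched as a simple read-through cache in front of slow disks. The class, capacity, and eviction policy below are illustrative assumptions, not Peta10 internals:

```python
from collections import OrderedDict

class L2Cache:
    """Sketch of an SSD-style L2 read cache in front of spinning disks.

    'ssd' holds recently read blocks with LRU eviction; misses fall
    through to the backing dict standing in for the magnetic drives.
    """
    def __init__(self, disk, capacity=4):
        self.disk = disk
        self.ssd = OrderedDict()
        self.capacity = capacity
        self.hits = self.misses = 0

    def read(self, block_id):
        if block_id in self.ssd:              # silicon-speed hit
            self.ssd.move_to_end(block_id)    # mark as recently used
            self.hits += 1
            return self.ssd[block_id]
        self.misses += 1
        data = self.disk[block_id]            # slow spinning-disk read
        self.ssd[block_id] = data             # promote hot data to SSD
        if len(self.ssd) > self.capacity:
            self.ssd.popitem(last=False)      # evict the coldest block
        return data

disk = {i: f"block-{i}".encode() for i in range(10)}
cache = L2Cache(disk)
cache.read(1); cache.read(1); cache.read(2)
assert cache.hits == 1 and cache.misses == 2
```

Repeated reads of the same hot block are served from the SSD tier; only the first touch pays the spinning-disk penalty.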

Small block random transactions, reinvented

Small block updates, especially arriving heterogeneously from disparate data clients, challenge traditional SAN architectures with frequent seeks and little useful time on disk. Copy-on-write eliminates these challenges by opportunistically reorganizing random writes into long sequential writes based on drive head positioning; in fact, several reads and writes can occur with very little drive head movement. This allows the Peta10 to frequently exceed the roughly 100 transactions-per-second-per-spindle limit of traditional systems.
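The reorganization of random writes into sequential ones is the core idea of log-structured writing, sketched below. The class and its buffering strategy are an illustrative assumption, not the Peta10's actual allocator:

```python
class LogStructuredWriter:
    """Sketch: coalesce random small writes into one sequential append.

    Incoming (logical_offset, data) updates are buffered, then flushed
    as a single contiguous write to the log; a map records where each
    logical block now lives, so reads can find the latest copy.
    """
    def __init__(self):
        self.log = bytearray()   # stands in for a sequential disk region
        self.map = {}            # logical offset -> (position, length)
        self.pending = []

    def write(self, logical_offset, data):
        self.pending.append((logical_offset, data))  # no seek yet

    def flush(self):
        # One long sequential write instead of many random seeks.
        for logical_offset, data in self.pending:
            self.map[logical_offset] = (len(self.log), len(data))
            self.log.extend(data)
        self.pending.clear()

    def read(self, logical_offset):
        pos, length = self.map[logical_offset]
        return bytes(self.log[pos:pos + length])

w = LogStructuredWriter()
w.write(8192, b"small"); w.write(0, b"updates")   # random logical offsets
w.flush()                                         # single sequential append
assert w.read(8192) == b"small" and w.read(0) == b"updates"
```

The two logically distant updates land adjacently in the log, so the drive head writes them in one pass; this is how per-spindle transaction rates can exceed the seek-bound limit.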

Enabling advanced solutions

The SSD is a revolutionary improvement on the traditional RAID memory cache. Because it is persistent, it eliminates many of the data risks associated with power integrity, battery life and incomplete disk writes. Because of its size (terabytes), it allows the Peta10 to perform seek-heavy operations (deduplication, encryption, integrity verification, L2 caching) at near-instantaneous (silicon) speeds. These multiple levels of performance, optimization and integrity verification enhance the performance and reliability of the entire Peta10 platform.

Peta10 and the HBA

By directly utilizing the Host Bus Adapter (HBA), the Peta10 manages on-disk integrity with a checksum for every block throughout the file system tree, providing end-to-end integrity from the application down to the platter.

Peta10: Higher concurrency, higher density

By organizing each Peta10 shelf into as many as 8 independent, internally protected arrays, each unit is able to provide a high level of concurrency and performance for even disparate, random access workloads. Better concurrency allows higher average utilization of bus and network bandwidth while reducing request times.
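One way to picture how independent arrays raise concurrency is a hash-based placement function that spreads blocks across the shelf's arrays, so unrelated requests land on different spindles. The scheme below is purely illustrative; the Peta10's actual layout is not published in this document:

```python
import hashlib

NUM_ARRAYS = 8   # one shelf organized as 8 independent arrays (per the text)

def array_for_block(block_id: int) -> int:
    """Map a block to one of the shelf's independent arrays.

    Hashing gives a stable, roughly uniform spread, so concurrent
    requests for different blocks can be serviced by different arrays
    in parallel. Illustrative placement only.
    """
    h = hashlib.sha256(block_id.to_bytes(8, "little")).digest()
    return h[0] % NUM_ARRAYS

# Requests spread across arrays: more spindles working at once.
placement = {b: array_for_block(b) for b in range(64)}
assert set(placement.values()) <= set(range(NUM_ARRAYS))
```

Because placement is deterministic, reads always find their block on the same array, while the spread keeps all eight arrays busy under random workloads.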

Teaming with other Peta10s for high availability also improves performance, as Peta10s can collaboratively serve large data set (Big Data) get/read and set/write requests (RESTful requests). Each Peta10 shelf is capable of over 100,000 IOPS (I/O operations per second), and each standard cabinet of over 1,000,000 IOPS across well over 5 petabytes of formatted data storage.

Fully supported, monitored and maintained — as a service

Coupled with AiNET’s Global Support and Services, the Peta10 is never alone. Whether provided as part of a Trusted Storage as a Service offering, colocated in a datacenter, or operated in your own enterprise data center, the Peta10 is under continuous observation and profiling. With constant health and performance monitoring over the Internet, VPN or private line, over 1500 performance points per shelf are measured and tracked each minute. Software and hardware are fully supported and tightly integrated. Rich profiling and predictive failure modeling are performed by an AiNET 24/7/365 Operations Center, and advanced preventative maintenance is scheduled before drive failures occur.