HPCwire » Storage
Since 1986 - Covering the Fastest Computers in the World and the People Who Run Them

Panasas Rolls Out ActiveStor 18 (July 9, 2015)

Hybrid scale-out NAS specialist Panasas today introduced ActiveStor 18, the latest in its ActiveStor appliance line. According to the company, the new appliance delivers a 33 percent increase in density, is available in 4TB and 8TB hard drive configurations, and scales to 20PB of capacity and 200GB/s of throughput. The product will ship in September.

“We’re now using Western Digital HGST He8 technology – a seven-platter, helium-filled design. This is also our first generation of ActiveStor with the file system support necessary to leverage 4KB native sectors,” said Geoffrey Noer, vice president of product management at Panasas. The amount of cache per storage blade has also been doubled.

Panasas has mainly served HPC workflows, with strength in traditional technical computing markets such as oil and gas, government, and manufacturing. The new release continues that focus but also supports the company’s new thrust into media and entertainment, a market Panasas says is in transition and increasingly turning to HPC technologies. Company literature indicates ActiveStor 18 is well suited to mixed workloads: large-file throughput, IOPS performance, and low cost per TB.

Of note, Panasas is one of the few storage vendors sticking with its own proprietary storage operating system, PanFS, which the company considers an important differentiator. By closely integrating PanFS with the hardware, “you substantially improve reliability, manageability, and performance. PanFS provides a single global namespace for simplified storage management,” said Noer. That said, three protocols are supported: Panasas DirectFlow, NFS (Linux), and SMB (Windows).

The most important differentiator may be the Panasas two-blade hybrid architecture and DirectFlow protocol. “We have a storage blade and a director blade. The storage blade scales the amount of storage capacity and throughput performance; the director blades are responsible for metadata, small files, and transactional type of performance. Both of those resources scale as you scale the system,” said Noer. Separating the two functions (main storage and metadata handling) reduces the need to frequently interrupt the storage blade and boosts throughput performance.
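
The division of labor Noer describes can be pictured in a few lines of code. The sketch below is purely illustrative – invented names and in-memory stand-ins for blades, not the DirectFlow protocol itself: a client asks a director service once for a file’s layout, then pulls the stripes straight from the storage nodes in parallel.

    # Illustrative sketch only, not Panasas code: a metadata-only "director"
    # answers layout queries, after which the client reads stripes directly
    # from the storage nodes in parallel.
    from concurrent.futures import ThreadPoolExecutor

    class Director:
        """Metadata service: maps a file to (storage_node, stripe_id) pairs."""
        def __init__(self, layouts):
            self.layouts = layouts
        def get_layout(self, path):
            return self.layouts[path]          # one small, fast metadata lookup

    class StorageNode:
        """Capacity/throughput service: serves raw stripes, no metadata work."""
        def __init__(self, stripes):
            self.stripes = stripes             # stripe_id -> bytes
        def read(self, stripe_id):
            return self.stripes[stripe_id]

    def read_file(path, director, nodes):
        layout = director.get_layout(path)     # metadata round trip to the director
        with ThreadPoolExecutor() as pool:     # then direct, parallel data reads
            parts = pool.map(lambda loc: nodes[loc[0]].read(loc[1]), layout)
        return b"".join(parts)

    nodes = [StorageNode({0: b"AAA"}), StorageNode({1: b"BBB"}), StorageNode({2: b"CCC"})]
    director = Director({"results.dat": [(0, 0), (1, 1), (2, 2)]})
    print(read_file("results.dat", director, nodes))   # b'AAABBBCCC'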

In typical HPC workflows, said Noer, 60 to 80 percent of all files by count are smaller than 64 kilobytes – very small files – yet all of those small files together take up less than one percent of the file system’s capacity.

Also included in the appliance is PanFS RAID 6+, first introduced by Panasas last summer. PanFS RAID 6+ is an intelligent per-file distributed RAID architecture implemented with erasure codes in software instead of in traditional hardware RAID controllers. “Rebuilding drives was starting to be a problem even with 1TB or 2TB hard drives,” said Noer. “Now with 6TB and 8TB drives you can be looking at upwards of a week, or certainly days, to do a rebuild in that sort of an environment with hardware RAID.”

“If you approach things very differently [as in RAID 6+] and protect the files instead of the blocks of hard drives you can limit the rebuilds to only the work that absolutely has to be done. So if we have a clump of bad sectors for example we only have to rebuild the files that touched the bad sectors, we don’t have to rebuild the entire storage,” said Noer. “If the whole hard drive actually dies, again we don’t have to rebuild the unused capacity, we only rebuild the files that were affected by that drive and avoiding rebuilds means not having to bring the performance of the systems down unnecessarily.”
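
A toy model makes the distinction concrete. The sketch below is not PanFS internals – it uses simple XOR parity as a stand-in for real erasure codes – but it shows why file-level protection limits rebuild work: after a drive failure, only files that actually had a chunk on that drive are touched.

    # Hedged illustration, not PanFS: per-file XOR parity instead of real
    # erasure codes. Each file stripes data over several drives plus one
    # parity chunk; when a drive dies, only affected files are rebuilt.
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    # file -> list of (drive_id, chunk); the last chunk is XOR parity of the others
    files = {
        "a.dat": [(0, b"\x01\x01"), (1, b"\x02\x02"), (2, xor(b"\x01\x01", b"\x02\x02"))],
        "b.dat": [(3, b"\x07\x07"), (4, b"\x0e\x0e"), (5, xor(b"\x07\x07", b"\x0e\x0e"))],
    }

    def rebuild_after_failure(dead_drive):
        for name, chunks in files.items():
            if all(drive != dead_drive for drive, _ in chunks):
                print(f"{name}: untouched, no rebuild work")
                continue
            survivors = [chunk for drive, chunk in chunks if drive != dead_drive]
            recovered = survivors[0]
            for chunk in survivors[1:]:
                recovered = xor(recovered, chunk)   # XOR of survivors restores the lost chunk
            print(f"{name}: rebuilt one lost chunk ({recovered!r})")

    rebuild_after_failure(dead_drive=1)   # only a.dat is rebuilt; b.dat is skipped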

Interest in RAID 6+ is growing: in a recent survey of 90 Panasas ActiveStor users, 62 percent reported that RAID 6+ will be important and 16 percent characterized it as critical. “About the only place RAID 6+ is not so valuable is in a very pure HPC scratch environment where the data is temporary and can easily be regenerated,” Noer said.

Though at home in HPC, Panasas has begun looking toward broader enterprise opportunities, and media and entertainment is the first. “That segment has already gone from standard resolution to HD, and now a lot of filming is being done in 4K resolution and media is being distributed in more formats. There’s mounting pressure on data storage, so they are starting to consider scale-out solutions over older SAN technology,” said Noer.

DDN Unveils WOS 2.0 and Details Technology Roadmap (June 30, 2015)

DDN (Santa Clara, Calif.) led off its technology roadmap with the release Tuesday (June 30) of version 2.0 of its Web Object Scaler (WOS) object storage platform, which targets private and public cloud storage build-outs. The upgraded object storage platform will be available in August, the company said.

The WOS update will be followed during the second half of 2015 by the release of a hyper-converged “Wolfcreek” platform that will serve as a “building block” for future DDN storage offerings. Wolfcreek is billed as capable of more than 60GB/sec of throughput and more than 5 million IOPS in 4U of rack space. Like other storage vendors, DDN is stressing hyper-converged infrastructure as datacenters run out of space.

Also in the second half of this year, DDN said it would roll out its “Infinite Memory Engine” (IME), which leverages I/O acceleration software to deliver a claimed 1,000-fold increase in I/O performance. IME is also being positioned as among the first flash-based, “application aware” I/O accelerators, delivering 4 million IOPS per system, the company said.

Like many successful HPC vendors, DDN has increasingly turned its attention to the burgeoning enterprise market. “There are only so many government agencies and universities, and it becomes a matter of simply stealing market share rather than opportunities for growth,” Alex Bouzari, DDN’s chairman and co-founder, said in an interview.

Bouzari stressed that the enterprise market for DDN would eventually dwarf the HPC market, in which it is already a dominant player (see HPCwire’s “DDN, IBM Lead Large HPC Storage Supplier Pack”). “It’s basically all of the vertical enterprise markets which are exhibiting HPC kinds of requirements; for the most part the Global 1000.”

Both HPC and large enterprise segments crave performance, but with distinct differences: “HPC customers say, ‘Give me the highest bandwidth possible and I will sacrifice reliability.’ Enterprise customers would never compromise on resiliency and redundancy – that’s their business,” said Bouzari.

DDN has relied heavily on distribution channels for product fulfillment, even though its direct sales team is the major force creating demand. The nature of its channel is changing, Bouzari stressed, particularly in the enterprise market: “It used to be that 80 percent of our revenue was through the channel and 80 percent of that went through very big resellers such as IBM, HP and Dell. Now the channel is still 80 percent of our revenue, but only a very small portion goes through the big guys. Most of it is going through mid-size and small resellers.”

With those market realities in mind, DDN said this week it would expand its software-defined storage offerings. For example, the WOS object store represents the first DDN storage software for use on non-DDN hardware. SFX flash caching software followed. Similarly, IME will be offered as a software-only option.

These and other recent announcements illustrate how software-defined infrastructure is making headway in the enterprise datacenter.

DDN’s enterprise push targets a range of industries and HPC customers, from financial services and energy to cloud services providers and telecommunications carriers. Along with IME and WOS, DDN’s storage portfolio includes “Scaler” file system appliances and its Storage Fusion Architecture, which combines storage, processing, and embedded applications.

DDN is betting that the software-defined datacenter will require across-the-board data management, including persistent disk and tape storage along with distributed data storage in the cloud. Lance Broell, product marketing manager for WOS products at DDN, said a key use case will be active archiving in private enterprise clouds.

Broell cited other WOS use cases besides public and private cloud storage, including content distribution, file sync and share, and video streaming.

The WOS upgrade includes up to 96 8TB SATA drives in a 5U rack. That, DDN claims, is twice the density of storage competitors like Hewlett-Packard and Dell, which currently offer about 50 drives in roughly the same configuration. WOS capacity tops out at 8 billion objects, the company said.

The object storage upgrade also adds support for the OpenStack Swift API, enabling OpenStack environments to leverage WOS natively. The move was prompted in part by enterprise requirements for compatibility with Amazon Web Services’ Simple Storage Service (S3) object storage. Broell said an embedded interface option provides both S3 and Swift interfaces embedded in WOS appliances, so any S3 or Swift client can access a WOS node in a storage cluster.
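
In practice, S3 compatibility means standard client tooling should work unchanged against the object store. The snippet below is a generic illustration using boto3; the endpoint URL, bucket name, and credentials are placeholders, not real WOS values.

    # Hedged sketch of S3-compatible access to an object store like WOS.
    # The gateway URL, bucket and keys below are invented placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://wos-gateway.example.com",   # S3-compatible gateway
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )
    s3.put_object(Bucket="simulations", Key="run42/output.h5", Body=b"...")
    obj = s3.get_object(Bucket="simulations", Key="run42/output.h5")
    print(obj["Body"].read())

The same pattern applies on the Swift side with an OpenStack client pointed at the appliance’s Swift interface.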

In an attempt to squeeze more efficiency out of storage infrastructure, DDN also implemented a two-stage hierarchical erasure coding approach that reduces overhead by, for example, eliminating unnecessary drive rebuilds across the network. It also boosted network security by adopting Secure Sockets Layer (SSL) certificates. That’s important, Broell noted, because data traffic is most vulnerable as it moves across wide-area networks.

Networks are also increasingly becoming bottlenecks for object storage. Hence, DDN claims WOS reduces latency by as much as 90 percent as video streaming, content distribution and large file transfers become standard requirements in both the HPC and enterprise markets.

HP Removes Memristors from Its ‘Machine’ Roadmap Until Further Notice (June 11, 2015)

One year after Hewlett-Packard launched its ambitious “this will change everything” project called “The Machine,” the company is making some concessions to its initial vision, something it says is necessary in order to deliver a working prototype by next year.

Announced with great fanfare at last year’s HP Discover event, the Machine was to be a reinvention of computing for the data era. It was to be special in every way — specialized cores, a purpose-built open source operating system optimized for non-volatile systems, and the centerpiece: memristor non-volatile memory, a special kind of resistor circuit that functions as both storage and memory.

Now some of that specialness is being put on hold in favor of a more conventional approach. The memristor is the main sticking point; the technology has come a long way under HP’s research arm, but still isn’t economically viable for volume production.

“We way over-associated this with the memristor,” HP Chief Technology Officer Martin Fink said in an interview with New York Times writer Quentin Hardy. “We’re doing what we can to keep it working within existing technology.”

In that vein, HP will use DRAM for its prototype and will convert the shared memory pool to non-volatile memory, such as phase-change memory, in future versions.

Memristors are still on the table and HP is aiming to have them inside the system when it makes its market debut five years from now.

A mechanical mockup of the prototype was on display at last week’s Discover conference in Las Vegas. Next year, HP expects to reveal a working rack with 320 TB of “main memory” (240 TB shared memory plus 80 TB local to the compute node), 2,500 CPU cores, and an optical backplane. It will run a version of Linux rather than a customized operating system.

Specialized processing was one of the hallmarks of the original announcement. The right compute for the right workload would make it possible to achieve a sixfold performance increase using 80 times less energy, HP said a year ago. Since repositioning the Machine as a “memory-driven computer architecture” last week, the messaging has focused more on the democratization of fast memory and less on processing power. While power-efficient memory is crucial for reaching computing milestones, such as exascale, it was the combining of component technologies into a single project that made the Machine such a radical departure from the status quo.

“A revolutionary new computer architecture…this changes everything,” was how company CEO Meg Whitman characterized this confluence.

Despite the scaled-down plans, HP Labs Deputy Director Andrew Wheeler insists “it’s been a great first year” full of “significant progress on all fronts.”

“The primary objective for next year is to deliver that initial working prototype of the Machine. This is important to us so we can use that platform to continue our research as well as to enable internal development teams and partners so they can advance our memory-driven computing architecture,” said Wheeler.

Speaking at Discover 2015, Sarah Anthony, systems research project manager at HP, addressed the Machine’s flattened memory architecture as she pointed to the mechanical mockup. “Here in this one node volume, we have terabytes of memory and we have hundreds of gigabits per second of bandwidth off the node, and that’s really important because we’ve changed what I/O is. It’s not I/O, it’s a memory pipe,” she said.

“It’s going to provide a great foundation for ultra-scale analytics, but it has a significant impact on the system software. If you think about it, the essential characteristics of the Machine are that you have this massive capacity in terms of memory, tremendous bandwidth and very low latency. This is going to cause us to make modifications in the operating system and the software system on top of that,” continued Rich Friedrich, director of Systems Software for the Machine at HP.
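
Today’s closest analogy to “it’s a memory pipe” is memory-mapped persistence, where a program reaches durable data with loads and stores rather than explicit I/O calls. The sketch below uses ordinary mmap on a file as a rough stand-in – not HP’s software stack – for the model The Machine would extend to an entire shared pool of non-volatile memory.

    # Rough analogy in today's terms, not HP's stack: mmap gives a program
    # load/store access to persistent data instead of read()/write() I/O.
    import mmap

    with open("dataset.bin", "wb") as f:
        f.write(b"\x00" * 4096)              # a persistent region, one page

    with open("dataset.bin", "r+b") as f:
        mem = mmap.mmap(f.fileno(), 0)       # map the file into the address space
        mem[0:5] = b"hello"                  # ordinary memory stores...
        print(bytes(mem[0:5]))               # ...and loads; no read()/write() calls
        mem.flush()                          # make the stores durable
        mem.close()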

For lots more on HP’s design plans, check out the Discover 2015 panel presentation “HP Labs presents a peek under the hood of the Machine, the future of computing.”

In another HP Labs presentation, titled “Reimagining systems and application software for The Machine,” HP principal researcher Kimberly Keeton covers the defining features of the Machine and explores the implications for systems software, programming models, and applications. Also included is an overview of the Machine’s “shared something” approach, a middle ground between shared-everything and shared-nothing models.

How Big Data Analytics Supports Tyrone’s ISP Customers in India (June 8, 2015)

The big data revolution is sweeping across India, transforming engineering, science, healthcare, finance and every other aspect of business and society.

In particular, one vital segment of the nation’s economy, the Internet service providers (ISPs), is seeking better ways to use the massive amounts of information they and their customers generate to provide enhanced services and increase profits.

Just a few years ago, big data analytic tools were used only by large corporations that could pay the price – an outlay that included hiring highly educated (and expensive) data scientists and building a supporting IT infrastructure.

Now, with the advent of Hadoop and other open source tools, as well as the efficiencies of cloud computing, the drive to exploit the many benefits of big data is being democratized.

Organizations – like India’s ISPs – can now explore how big data analytics can help them adapt to new opportunities and challenges and bring advanced services and support to their customers. Tyrone, headquartered in Bangalore, is helping make this happen.

Tyrone is India’s leading provider of servers, storage, and backup, and also offers big data and HPC solutions on its own clusters and in the cloud.

“For our larger customers, we are providing such services as traffic analysis, dealing with security attacks, and monitoring and analyzing the huge amounts of data they are generating,” says Sandeep Lodha, Tyrone CEO.

This, he says, includes the ISPs, who are facing a number of challenges resulting from the exponential growth of big data.

“About 50% of the work that comes in to Tyrone from our ISP customers has to do with log analysis and action items based on that analysis,” Lodha continues. He notes that just about everything generates logs – from Apple Watches to devices connected to the Internet of Things. And every log has its own specific challenges associated with ingesting, cleaning, and running analytics so that the ISP can take timely action. This can range from warding off a denial-of-service attack to providing a new service that enhances the ISP’s competitive position.

One of the problems, Lodha points out, is that ISP routers generate huge amounts of data – including logs. But these routers lack the capacity to store large amounts of data in a compact, compressed format that can be searched easily and rapidly. For example, the ISP may want to determine the activity around a specific IP address during a certain time period, or conduct a traffic analysis in order to optimize the network. In other instances, the ISP may need the system data to help ward off dangerous attacks on websites it is hosting or to meet compliance requirements mandated by law enforcement agencies (LEAs).
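
The core of such a query is simple to express. The sketch below is illustrative only – an invented log format, run locally – showing the per-IP, time-windowed count that a Hadoop-based platform like Tyrone’s would execute at scale across many nodes.

    # Illustrative per-IP activity query over ISP logs; not Tyrone's engine.
    # Assumed line format: "2015-06-01T12:00:00 203.0.113.7 GET /index.html".
    from collections import Counter

    WINDOW = ("2015-06-01T00:00:00", "2015-06-01T23:59:59")

    def activity_by_ip(lines):
        counts = Counter()
        for line in lines:
            try:
                ts, ip, _request = line.split(" ", 2)
            except ValueError:
                continue                          # skip malformed lines
            if WINDOW[0] <= ts <= WINDOW[1]:      # ISO timestamps sort lexicographically
                counts[ip] += 1
        return counts

    logs = [
        "2015-06-01T09:15:02 203.0.113.7 GET /index.html",
        "2015-06-01T09:15:03 203.0.113.7 GET /style.css",
        "2015-06-02T00:00:01 198.51.100.9 GET /index.html",   # outside the window
    ]
    print(activity_by_ip(logs))   # Counter({'203.0.113.7': 2})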

Tyrone is helping the ISPs deal with these and other big data challenges by creating a modular architecture specifically designed to handle the storage problem and provide all the capabilities necessary to ingest, clean, and analyze customer data. The platform uses the Hadoop stack and an analytics engine running on Tyrone servers powered by Intel Xeon processors. Tyrone now supports Lustre as the underlying file system for Hadoop in response to the growing amounts of customer data generated on high performance computers; storing data on Lustre-based systems running Hadoop means users do not have to shuttle data back and forth between Lustre and HDFS.

Says Lodha, “Our modular approach allows us to maintain a highly compressed data footprint so our ISP customers do not have to spend a lot on data storage. Even with the compression, they can conduct analytics at a reasonable speed. If they want to speed up the analytics even more, we simply uncompress the data, without making any changes to the analytics engine, because of the modular architecture.”

Tyrone currently has six locations throughout India providing big data services to its customers, including the ISPs.

This in turn allows the ISPs to provide their Internet customers with new, improved and safer services based on the results of the big data analytics running on the Tyrone platforms.

Tyrone Brings HPC and Big Data Solutions to India and the World (March 2, 2015)

India’s appetite for high performance computing – ranging from powerful HPC clusters to advanced workstations – continues to grow as an increasing number of companies address opportunities not only within the country but worldwide as well.

Indian enterprises are making their mark in such fields as seismic engineering for oil and gas, aeronautics and defense, automotive design, advanced mathematical modeling, weather forecasting, entertainment, chemical and physical engineering, and manufacturing.

All require advanced computational capabilities, and Tyrone, headquartered in Bangalore, is at the forefront of meeting these needs for both the country’s large enterprises and its fast-growing small- to medium-sized business sector.

But, as Tyrone’s CEO Sandeep Lodha points out, India’s HPC community is facing a number of challenges.

For example, he notes, “Many HPC users generate tons of data but never organize this data or delete unnecessary data. Needed are massive amounts of low-cost storage with a low power footprint. In fact, power is one of our biggest challenges. Large HPC clusters and supercomputers require tremendous amounts of power and cooling. Power is a scarce commodity in India, and therefore PUE (power usage effectiveness) is extremely important.”

Another problem is inadequate Internet access in some regions of the country. This makes it difficult to establish dispersed research facilities that are tied in to their larger counterparts in India’s cities.

Cloud computing is also having trouble making inroads in India because the concept of buying time on an HPC facility is still not widely accepted – although this is changing. In addition, the need for code modernization – rewriting legacy applications to take advantage of today’s parallel computing systems – is slowing the growth of HPC adoption throughout India.

“For over a decade, we have been helping thousands of companies in India and around the world meet these and other computational challenges by supplying servers and workstations, storage, back-up and HPC and Big Data solutions,” says Lodha.

Servers and workstations

One of the mainstays of Tyrone’s offerings is a lineup of nearly 280 workstations and servers. The systems are powered by Intel Xeon processors and Xeon Phi accelerators, GPUs from NVIDIA, and AMD Opteron series processors. Tyrone works with its server and workstation customers to help maximize their ROI and deliver an out-of-the-box experience: a solution featuring advanced components that can scale to meet future needs.

HPC

Tyrone has a long and successful history of HPC solutions in India. For example, in 2013 Netweb Technologies, the Tyrone affiliate that handles all project implementation for Tyrone products in the country, installed India’s largest supercomputer. Built with hybrid technology, the system consists of 224 Intel-based Tyrone servers featuring Xeon processors and Xeon Phi coprocessors.

Tyrone HPC offerings include:

HPC clusters

Linux clusters

Rendering clusters

GPU optimized computing

SMP solutions

HPC management tools

“Although we have our share of large customers using our HPC clusters, our fastest-growing sector for these services is small- to medium-sized enterprises,” Lodha says. “They are well aware of the competitive advantages that HPC-based modeling, simulation, and analytics can bring.”

HPC in the Cloud

Another fast-growing segment of Tyrone’s worldwide offerings is HPC in the cloud – remote computational services provided by Tyrone’s own dedicated clusters or in cooperation with partners such as Amazon. InfiniBand connectivity speeds up the transfer of large datasets. Tyrone also offers proximity services, providing workstations and support to customers who want to run their jobs close to the physical HPC cluster.

Comments Lodha, “Our HPC in the cloud solution provides all the benefits of HPC without any of the hassles. Since you don’t own the data center, you don’t have to worry about power and cooling, real estate, and manpower issues. We provide all that as ‘HPC as a Service.’”

Big Data

Recently Tyrone has been involved in a number of Big Data initiatives that make the most of the company’s HPC experience. For example, one of India’s leading telecom service providers worked with Netweb to deploy an Intel-based, fully integrated Hadoop distribution. The implementation helped the telco establish market leadership and enhanced customer satisfaction by providing personalized and contextually relevant offers to their customers.

Storage

Tyrone provides two classes of storage solutions: the FS2 unified flexible storage system, and storage arrays that provide a wide variety of connectivity options – from iSCSI to InfiniBand.

FS2 is a next-generation storage solution that supports HPC and provides excellent results at affordable prices. One of its distinguishing features is the unification of NAS, SAN, and VTL in a single box. The same box can be used as the system scales, lowering customers’ total cost of ownership.

The company also provides platforms for easy, cost-effective, scalable backup, archive, and restore functions.

Global Vision

“Although today our focus is primarily on helping large and small- to medium-sized enterprises in India realize the benefits of advanced HPC solutions, we are also moving into the global marketplace,” Lodha says. “HPC and big data are worldwide phenomena, and Tyrone and Netweb are positioned to make a major contribution to their growth.”

Virginia Bioinformatics Institute Taps DDN Storage to Combat Ebola Outbreak (February 25, 2015)

When it comes to mitigating infectious disease outbreaks, like Ebola, time is of the essence. Researchers at Virginia Bioinformatics Institute (VBI) rely on rapid-response agent modeling to help public health organizations determine what steps to take in the face of deadly outbreaks. Based on factors such as demographics, family structures, travel patterns and other activities, the models shed light on how the disease is progressing regionally and globally.

VBI is one of the foremost research institutions using agent-based simulations to model biological systems – from cells to people, cities, and countries. To handle both data- and compute-intensive models, VBI depends on its 2,500-core compute cluster and 1 petabyte of high-performance storage from DataDirect Networks (DDN).

DDN’s GRIDScaler GPFS parallel file system is another component that enables researchers and scientists at VBI to perform rapid, accurate Ebola outbreak modeling for the U.S. Department of Defense’s Defense Threat Reduction Agency (DTRA). As VBI computational epidemiologist Caitlin Rivers asserts, “Decision makers can’t wait for the outbreak to be over in order to make their decisions.”

Rivers describes how her team was able to provide in-depth analysis for the DoD rapid-response agency in just 48 hours, helping them decide the best locations for emergency treatment units in Sierra Leone, Guinea and Liberia.

“We received a call on Friday that the Department of Defense wanted some insight into where they should place hospital units,” says Rivers. “We were able to do simulations to optimize the amount of time that any individual would have to travel in order to reach a hospital unit. The data storage and the technology component is really critical to being able to provide that level of detail in the simulations. With Ebola, each sick person on average infects two other people and over time that’s exponential growth so a single infected person infects two people, then four people, and so on. That information had to be transmitted back to the DoD by Monday morning because a plane was about to leave with the supplies to build these units.”
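
The doubling dynamic Rivers describes is easy to see in a toy calculation. The sketch below is nothing like VBI’s agent-based models – which layer in demographics, family structure, and travel patterns – but it shows the exponential curve that makes a 48-hour turnaround matter.

    # Toy sketch of the growth Rivers describes: each case infecting two
    # others gives exponential growth. VBI's actual models are far richer.
    def outbreak_curve(r0=2, initial_cases=1, generations=6):
        curve = [initial_cases]
        for _ in range(generations):
            curve.append(curve[-1] * r0)   # every active case infects r0 others
        return curve

    print(outbreak_curve())   # [1, 2, 4, 8, 16, 32, 64]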

In addition to speed and urgency, scalability is another key factor for VBI researchers engaged in infectious disease analysis. Data is increasing in multiple dimensions to the point where VBI is experiencing nearly 100 percent data growth year-over-year. Already VBI has expanded its DDN storage system from 300 terabytes to just over 1 petabyte and plans to add more capacity.

“The variety of data we gather as part of our modeling process drives the incredible amount of detail within our models as well as the output of each model,” states Kevin Shinpaugh, PhD, director of IT and High Performance Computing at VBI. “With DDN storage, we’re confident we can scale data storage to address both current and future modeling demands while expediting accurate responses during an emerging crisis.”

Seagate Stakes its Claim as an HPC Storage Leader Helping to Shape IT’s Future (November 17, 2014)

In recent years, high performance computing has had a major impact on the world and the development of IT systems in general. The nature of weather forecasting has changed to be much more precise. Our lives have been lengthened by the types of research that medical scientists are able to perform, and we now can see through clouds and space dust to determine what black holes are made of.

This week at the Supercomputing Conference (SC’14) in New Orleans, HPC will take center stage. As technology evolves, the role HPC plays in solving the world’s toughest problems grows every day.

Seagate is proud to be one of the companies that has been supporting the evolution of technology for more than 30 years. As Ken Claffey, vice president and general manager of the company’s Cloud Systems & Solutions division, recently commented, “At Seagate we believe in working closely with the pioneers that will help shape the future of IT. Our belief is that the HPC community is really at the forefront of defining the future.”

Of all the elements that come into play for successful HPC implementations – no matter what the scale – storage is one of the most critical. Storage technology can either be a bottleneck or an enabler. Seagate has delivered a string of industry firsts that have helped define state-of-the-art scale-out architecture for HPC storage systems.

Seagate’s innovation extends from the device through the information infrastructure. This brings a capability unmatched by any of its competitors: continuity of ownership across the component chain. The company designs and manufactures its components and drives, firmware, storage enclosures, enclosure management software, and the software layer above them. This is demonstrated best in ClusterStor, a fully integrated scale-out solution designed for HPC, cloud, and big data customers. ClusterStor delivers what Seagate likes to call the “three Ps” – performance, productivity and profitability.

Performance – ClusterStor’s architecture provides unmatched speed and efficiency. The high-end ClusterStor 9000 and its family members, the ClusterStor 6000 and ClusterStor 1500, are also the most efficient HPC storage systems on the market. Their architecture allows the systems to achieve greater performance per disk drive, making the entire rack faster and more efficient.

An example of performance is what our strategic partner, Cray, is doing with Seagate storage technology in the contract awarded by the National Nuclear Security Administration. The $174 million contract includes Cray’s recently announced XC40 supercomputer and the new Cray Sonexion 2000 storage system line. The Cray Sonexion 2000 is a scale-out Lustre storage system, purpose-built for big data and supercomputing and powered by Seagate. The parallel file systems used for primary storage will be composed of numerous racks of Cray Sonexion 2000 storage. The integrated solution will include approximately 82 petabytes of total usable capacity and about 1.7 terabytes per second of total aggregate performance.

Productivity – Seagate ClusterStor products are designed for ease of use and management. Before shipping, each system is fully integrated and tested at the factory. This means users can be up and running in hours versus the days, weeks, or even months required by competitive offerings or do-it-yourself “science project” options. Rapid installation, combined with ease of use, means faster ROI as users speed up their time to results for their HPC science, big data, or other projects.

Profitability – Rapid ROI is part of a larger picture – users are more profitable due to the efficient nature of ClusterStor’s architecture. Seagate is able to optimize ClusterStor at every layer of the storage stack – from the application to the file system, all the way down to the storage media itself. This integrated approach means fewer disks, less supporting infrastructure, power and cooling savings that can run as high as 33%, and reduced administrative and overhead costs. Not only is ROI positively impacted – total cost of ownership is reduced as well. When dealing with HPC systems, the savings can be dramatic – sometimes hundreds of thousands of dollars in operating costs alone.

Seagate believes in working closely with the pioneers in HPC who will shape the future of high performance computing and ultimately every aspect of information technology. Given its leadership position and commitment, it’s no wonder the company’s motto is “Seagate is HPC storage!”

Want to Boost HPC Performance? Adding Disks is Not the Answer (October 20, 2014)

These days storage is in the HPC spotlight. How well an HPC application performs relies not only on total system memory bandwidth and sustained floating point operations per second, but also on a storage architecture that supports sufficient throughput to handle constantly increasing amounts of data.

The key to storage performance is not capacity – it’s efficiency. Top performance relies on a fully integrated storage architecture supplied by a single vendor who is able to optimize from the drive to the file system.

Low-efficiency storage systems by their very nature require IT to add hardware and storage capacity just to attain target performance levels. This increased complexity adds costs, enlarges the physical storage footprint, and degrades the overall reliability of the system, limiting the availability and operational performance of the entire HPC infrastructure.
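
A back-of-the-envelope calculation shows why. The numbers below are invented for illustration, not vendor benchmarks: to hit a fixed bandwidth target, a less efficient system needs far more drives, and therefore buys far more raw capacity than the workload requires.

    # Illustrative spindle math with assumed numbers, not vendor benchmarks:
    # the less of each drive's bandwidth a system delivers, the more drives
    # (and stranded capacity) it must buy for the same aggregate throughput.
    def drives_needed(target_gbs, per_drive_mbs, efficiency):
        delivered = per_drive_mbs * efficiency             # MB/s each drive really contributes
        return int(-(-(target_gbs * 1000) // delivered))   # ceiling division

    TARGET = 100                                           # GB/s of file system throughput
    for eff in (0.9, 0.5):
        n = drives_needed(TARGET, per_drive_mbs=150, efficiency=eff)
        print(f"{eff:.0%} efficient: {n} drives ({n * 6} TB raw with 6TB disks)")
    # 90% efficient:  741 drives; 50% efficient: 1334 drives -- same 100 GB/s,
    # nearly twice the hardware to house, power and manage.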

Searching for a Solution: Start with the Disk Drive

When investigating HPC storage solutions, ask the vendor if the proposed hardware is necessary to achieve your desired performance. Be on guard to ensure your vendor isn’t adding capacity just to hit rack-scale performance numbers, thus leaving a lot of wasted capacity on the table.

Seagate, with its 30-year history of designing and manufacturing drives, knows how to combine performance with efficiency. Only Seagate designs and manufactures both the disk drives and the enclosures. It conducts proprietary testing on the system- and rack-level solution, and uses its software expertise and management-layer capabilities to deliver an optimized end-to-end solution no other company can match.

The result: an HPC storage solution that can provide twice the performance and twice the efficiency of competitive offerings.

Disk efficiency translates into performance increases and reduced costs – fewer devices, fewer failures, reduced management, and improved TCO. The savings at a small scale are impressive; at a large scale the savings are dramatic: sometimes hundreds of thousands of dollars in operational costs alone.

In Conclusion

Seagate achieves unprecedented HPC storage efficiency – with an associated reduction in costs and boost in performance to handle the most difficult technical computing and big data problems – by starting at the fundamental level: the disk drive combined with a unique enclosure design. Anyone can throw hardware at a storage problem to scale up and meet performance objectives. But not everyone can do it efficiently.

Traveling to New Orleans for SC’14? Join us at the Seagate HPC User Group.

Leveraging Cloud on Your Own Terms (September 15, 2014)

Many IT organizations are seeking a new approach to the data management challenges presented when using multiple clouds. In particular, they want an approach that allows them to leverage public cloud economics while maintaining control of their data.

NetApp offers a new approach to hybrid IT that uses dedicated, secure, private storage with low-latency access to public cloud compute services. Called NetApp Private Storage (NPS) for Cloud, the solution is made possible by locating the storage “next to” rather than “in” the cloud. In this way, companies have the freedom to connect to hyperscale clouds such as Amazon Web Services (AWS) and Microsoft Azure. Although this capability is still relatively new, cloud innovators are already deploying this type of solution today to optimize costs, seize opportunities, and mitigate risks.

NetApp Private Storage for Amazon Web Services and NetApp Private Storage for Microsoft Azure use secure, dedicated, high-speed network connections between a customer’s NetApp storage in select Equinix data centers and compute resources from industry leading clouds adjacent to those co-location data centers. In this way, companies can take advantage of the cost and elasticity benefits of public cloud compute while leveraging the performance, availability, and control of their data on privately owned NetApp storage.

The hybrid IT model enabled by an NPS approach solves many of the issues that arise when using a pure public cloud. Even more value can be realized if this hybrid model can be extended to embrace multiple public clouds.

Enter hybrid IT AND multi-cloud

In many cases, companies can benefit from using cloud services from multiple providers. Each cloud has a unique set of features with different costs. For example, a company might choose different hyperscale providers such as AWS for dev/test, Microsoft Azure for collaborative applications, and yet another for running analytics or modeling applications. In such a scenario with the NetApp solution, the data could reside on a single private storage array and be directly connected to the compute resources of each provider.

Building discrete links to multiple services in order to retain control over data would be arduous and complex. But there is a solution that supports this hybrid multi-cloud model in a hub-and-spoke architecture, where the data used by any application running on any public cloud service is retained in one place. An example of what can be done is a scenario that couples NPS with the Equinix Cloud Exchange.

Equinix offers broad interconnection options to multiple major networks and public clouds. Its Cloud Exchange switching technology can connect a single, strategically placed storage device to multiple clouds almost instantly. These connections are more secure and reliable, and have lower latency, than public Internet connections. A company hosting its data on NetApp storage in an Equinix center can place it in close proximity to multiple cloud services, allowing it to leverage multiple clouds or switch clouds without lock-in or time-consuming data migrations.

NetApp Private Storage for Cloud solutions used with public cloud compute services and the Equinix Cloud Exchange provide control over data, while affording maximum cloud flexibility. A company using this approach can take advantage of the best cloud solution for every application, prevent cloud provider lock-in, and be certain their data is safe and managed to meet regulations and legal requirements.

Discover more from NetApp on how new hybrid cloud architectures enable you to take advantage of public cloud resources in the company’s paper on the subject.

DDN’s IME Software Scales I/O Performance on the Rocky Road to Exascale (August 28, 2014)

Exascale, once just a gleam in the eyes of a few prescient computer scientists, is beginning to take shape. That arbitrary date of 2018 for a thousand-fold increase in computing power no longer seems far-fetched. But as exascale comes into focus, some very specific roadblocks are being resolved – and storage is one of them.

Scaling performance on traditional spinning-disk storage is expensive and inefficient. In the conventional approach, the number of drive spindles is directly correlated with I/O delivery, so users are forced to buy lower-capacity, more expensive drives to increase spindle count without increasing capacity. The desired I/O may be achieved, but at a cost: lost storage density, an inability to realize the efficiencies of higher-capacity HDDs, and more systems to house, power, and manage.

Back in 1999, when VMware® launched the virtualization revolution by decoupling the physical server from the logical server, they created a new compute provisioning paradigm that forever changed the data center. VMware finally allowed users to run multiple jobs on a single virtualized system. This helped solve many of the problems on the compute side associated with idle capacity, inefficient use of servers and the negative economics of overprovisioning.

Enter IME

Much like the business and architectural transformation that resulted from VMware’s innovations, DataDirect Networks (DDN) has finally resolved the long-standing challenges associated with the overprovisioning of storage by decoupling I/O performance from capacity. The solution, known as the Infinite Memory Engine™ (IME), is a highly transactional, resilient, and reliable “burst buffer cache” and I/O accelerator for HPC and big data applications.

IME is composed of client software resident on compute nodes and server software for the I/O servers, which aggregate and virtualize disparate compute- or I/O-server-resident SSDs. The result is a single pool of extremely low-latency, high-performance, non-volatile storage that becomes a new fast data tier.

Not only does IME intelligently decouple storage performance from spinning-disk capacity, it also significantly accelerates applications by moving I/O right next to compute resources to reduce latency, delivering what DDN claims is 50% faster performance than all-flash arrays.

With IME, DDN has addressed a storage problem that has gone unresolved ever since the introduction of disk-based storage. IME allows data centers to run more complex simulations faster with less hardware. Large datasets can be moved out of HDD storage and into memory quickly and efficiently. Then, once processing is complete, data can be moved back to HDD storage far more efficiently using unique algorithms that align small and large writes into streams, enabling users to deploy the largest, most economical HDDs for capacity. Workload performance is optimized to reduce time to insight and discovery. DDN says cost savings of up to 80% can be realized while achieving highly scalable, highly efficient I/O performance.
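
The write-coalescing idea can be sketched in a few lines. The code below is an assumed, minimal model rather than DDN’s implementation: small, out-of-order writes land in a fast buffer tier, then flush to the disk tier sorted into a single sequential stream – the access pattern spinning disks handle best.

    # Minimal burst-buffer sketch (assumed behavior, not DDN's code): absorb
    # small writes at flash latency, then flush them to disk in offset order.
    class BurstBuffer:
        def __init__(self, flush_threshold=1 << 20):
            self.pending = {}                 # offset -> bytes, the flash tier
            self.flush_threshold = flush_threshold

        def write(self, offset, data):
            self.pending[offset] = data       # returns at flash latency
            if sum(map(len, self.pending.values())) >= self.flush_threshold:
                self.flush()

        def flush(self):
            # sort by offset so the disk tier sees one sequential stream
            for offset in sorted(self.pending):
                backing_store_write(offset, self.pending[offset])
            self.pending.clear()

    def backing_store_write(offset, data):    # stand-in for the HDD tier
        print(f"HDD write: {len(data)} bytes @ {offset}")

    bb = BurstBuffer(flush_threshold=12)
    for off in (4096, 0, 8192):               # small, out-of-order writes...
        bb.write(off, b"abcd")                # ...coalesced, then flushed in order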

DDN’s IME solution transforms storage from a bottleneck into a major contributor to a smoothly functioning IT infrastructure – one that supports the organization’s most ambitious HPC, big data, and performance-intensive applications.

And looking to the future, IME stands as one more step on the road to exascale.