Primeur magazine

Edition: live - Issue:
2011-06-22

Blog

The fastest computer in the world, just called K? One letter? Yes, and it is not that they ran out of inspiration at RIKEN. K (or Kei, as the Japanese character is known outside Asia) has several meanings. One is a gateway, and the K computer is expected to be a gateway that opens up new ways for scientific discoveries. Another meaning is 10^16, which happens to be the target speed of the machine: 10 Petaflop/s. And it is related to Kobe, home of the K computer.
 Dr. Watanabe showing the certificate for fastest supercomputer in the world.
Read further...
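The arithmetic behind the name is easy to check; a minimal sketch (the unit conversions are standard, not from the article):

```python
# "Kei" denotes 10^16 in the Japanese number system.
kei = 10**16

# One petaflop/s is 10^15 floating-point operations per second,
# so the machine's 10 Petaflop/s design target is exactly one Kei.
petaflops = 10**15
target_flops = 10 * petaflops

print(target_flops == kei)  # True
```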

The special TOP500 session at ISC11 started with an opportunity to ask questions to Yui Oniaga, designer of the K computer, and Tadashi Watanabe, designer of the Earth Simulator. According to Oniaga, the K system does not have GPUs, because they really would not add to the performance of the system. The most important part is the direct network connection, which is currently based on InfiniBand. The individual speed of the nodes is not that important, as long as their connection is.
Read further...

Chair Maria Ramalho of the Prospect Association has handed over the first report, "High Performance Computing in Europe: a vision for 2020", to the European Commission at ISC11 in Hamburg. The report from Prospect, an organisation that fosters the advancement of science and research in the field of supercomputing and associated technologies in Europe, contains a number of recommendations, aimed not only at the European Commission but at the European HPC community as a whole.
Read further...

Hardware

In the Tuesday morning session on "Sustained Performance on Petascale Systems", Michael Resch from the High Performance Computing Center in Stuttgart elaborated on the practical use of Petaflop/s systems. Why do we need Petaflop/s, he asked the audience. The mission of supercomputing is to support research. This is important for society, for the economy, and for politics. We need simulation to solve questions in traffic, turbulence, energy, and so on. More computing power is needed to develop solutions for difficult issues.
Read further...

At the ISC'11 Panasas booth we had a talk with Geoffrey Noer, Senior Director of Product Marketing. Panasas is presenting its fourth generation of blade systems. The company has a new executive team and is planning to nearly double its staff this year despite the economic recession. It has a strong interest in expanding in Europe and Asia. With the launch of ActiveStor 11, the company is focusing on customers who seek to purchase a scalable storage solution. Panasas likes to keep its existing customers happy by offering them flexibility at reasonable prices as their need for storage grows over the years.
Read further...

Steve Hammond from the National Renewable Energy Laboratory (NREL) chaired the panel on "Energy Efficiency or Net Zero Carbon by 2020" in the Thursday morning session at ISC'11. The panelists were Jean-Pierre Panziera from Bull, Michael K. Patterson from Intel, Volker Lindenstruth from the University of Frankfurt, and Taisuke Boku from the University of Tsukuba. Steve Hammond ignited the debate by stating that servers and data centres represent one of the fastest growing sectors in energy consumption. In the USA alone, servers and data centres are estimated to consume a significant share of the available energy. Taking a holistic view of computing and data centres reveals that there is significant potential for improving the energy efficiency and overall sustainability of data centres.
Read further...

The last keynote speaker at ISC'11 was Dean Klein from Micron Technology. He talked about future trends in memory systems. Are memory systems a showstopper, or do they hold a performance potential in store for HPC? Dean Klein started by saying that memory is an issue. Future trends show both issues and opportunities for DRAM, and the memory hierarchy is growing deeper. We face economic realities, but where there is opportunity ahead, there is a call to action.
Read further...

Applications

The first keynote speaker in the opening session at ISC'11 was Dr. Henry Markram, Director of the Brain Mind Institute at EPFL in Lausanne, Switzerland. Dr. Markram has more than 20 years of experience in brain research. He started out in medicine, where he catalogued diseases and specialized in neurology in Cape Town, South Africa. There are 500 clinically classified diseases of the brain. Unfortunately, the pharmaceutical industry is pulling out of brain research because it is too costly, and the industry focuses only on where the money is. Academia receives 5% of the available funding to address the rest of the brain diseases. At present, there are some 5 million papers studying the brain, which represents an enormous growth in data. Dr. Markram emphasized that we need to start getting organized. The solution to the data tsunami is integration: the creation of a unifying model.
Read further...

As Russian scientists increasingly deploy GPU-enabled supercomputers to tackle scientific challenges, Moscow State University is upgrading its Lomonosov system with NVIDIA Tesla GPUs to make it one of the world's fastest supercomputers. The upgraded system couples 1,554 NVIDIA Tesla X2070 GPUs with an identical number of quad-core CPUs, delivering an expected 1.3 petaflops of peak performance and placing it at number one in Russia and among the fastest systems in the world.
Read further...

hpc-ch is the Swiss HPC Service Provider Community. Its mission is to promote knowledge exchange between providers of HPC services in Swiss universities and research centres and to promote HPC in Switzerland. In on-line discussions and regular meetings, the system specialists of hpc-ch discuss best practices and developments in HPC. Social gatherings of hpc-ch's HPC specialists build up a team spirit across Switzerland. Despite its size, Switzerland is an important global player in research, notably in science and engineering. This implies that Switzerland is at the leading edge of HPC and HPC-supported science. At ISC'11 in Hamburg, hpc-ch presents at its booth an overview of the HPC ecosystem in Switzerland. The focus is on the hpc-ch member organisations, which are currently entities held by the states (cantons) or the confederation. The relationship between the organisations located in different parts of the country is characterized by friendly co-operation as well as fruitful competition.
Read further...

In the Wednesday morning session on "State of the Art in Visualization" at ISC'11, Stephan Olbrich from the University of Hamburg held a talk on scalable in-situ data extraction and visualization in massively parallel simulation applications. Simulation applications are compute- and data-intensive. Think of high-resolution, time-dependent simulations, such as atmospheric convection and turbulent flows. With the Parallel Large-Eddy Simulation Model (PALM), simulating unsteady flow phenomena on a 3D grid with about 10^11 data points for about 10^4 time steps utilizes up to 4,096 cores. Storing all resulting scalar and vector values per grid cell and time step amounts to a raw data volume of about 10,000 terabytes. Currently, this can neither be stored nor post-processed.
Read further...
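An order-of-magnitude check of that data volume; the per-cell variable count and precision below are illustrative assumptions (e.g. three velocity components plus one scalar in single precision), not figures from the talk:

```python
# Rough estimate of the PALM raw data volume quoted above.
grid_points = 10**11      # ~10^11 data points on the 3D grid
time_steps = 10**4        # ~10^4 time steps
values_per_point = 4      # assumed: u, v, w velocity + 1 scalar field
bytes_per_value = 4       # assumed: single precision (float32)

raw_bytes = grid_points * time_steps * values_per_point * bytes_per_value
raw_terabytes = raw_bytes / 10**12
print(f"{raw_terabytes:,.0f} TB")  # on the order of 10,000 TB
```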

Uwe Woessner from the High Performance Computing Center in Stuttgart (HLRS) was the second speaker in the "State of the Art in Visualization" session. He shared his enthusiasm for automotive industry visualization and virtual prototyping with the audience and explained how you can build up a virtual test drive from hybrid prototypes. It is hard and serious business indeed but also a hell of a lot of fun.
Read further...

Valerio Pascucci from the University of Utah was the third speaker in the "State of the Art in Visualization" session at ISC'11. His subject was large data analysis and visualization for science discovery. Valerio Pascucci told the audience that we are exposed to many different data sources, such as sensing devices and supercomputers like BlueGene/L and Jaguar, so we need efficient tools to analyze this data. Traditional data analysis tools are often ineffective for massive models, which are challenging due to the sheer volume of information, the complexity of the information represented, and the complexity of presentation. Traditional tools do not scale with data sizes, so it is difficult to capture multiple scales. The numerical methods are unstable and sensitive to noise, and we need proper abstractions.
Read further...

TOP500

A Japanese supercomputer capable of performing more than 8 quadrillion calculations per second (8 petaflop/s) is the new number one system in the world, putting Japan back in the top spot for the first time since the Earth Simulator was dethroned in November 2004, according to the latest edition of the TOP500 List of the world's top supercomputers. The system, called the K Computer, is at the RIKEN Advanced Institute for Computational Science (AICS) in Kobe.
Read further...

RIKEN and Fujitsu have taken first place on the 37th TOP500 list, announced at the 26th International Supercomputing Conference (ISC'11) held in Hamburg, Germany. The ranking is based on a performance measurement of the "K computer", currently under their joint development. The TOP500-ranked K computer system, currently in the configuration stage, has 672 computer racks equipped with a current total of 68,544 CPUs. This half-built system achieved the world's best LINPACK benchmark performance of 8.162 petaflops (quadrillion floating-point operations per second), placing it at the head of the TOP500 list. In addition, the system set a high standard with a computing efficiency ratio of 93.0%. This is the first time since June 2004, with the "Earth Simulator", that a Japanese supercomputer has been ranked first on the TOP500 list.
Read further...
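The 93.0% efficiency ratio is the measured LINPACK performance (Rmax) divided by the theoretical peak (Rpeak). A minimal sketch of that calculation; the per-CPU peak of 128 Gflop/s used here is our assumption (8 cores at 16 Gflop/s each for the SPARC64 VIIIfx), not a figure stated in the article:

```python
# Reconstruct the quoted efficiency ratio from the system figures.
cpus = 68_544           # CPUs in the half-built configuration
peak_per_cpu = 128e9    # flop/s per CPU, assumed (8 cores x 16 Gflop/s)
rmax = 8.162e15         # measured LINPACK performance, flop/s

rpeak = cpus * peak_per_cpu   # theoretical peak, ~8.77 petaflop/s
efficiency = rmax / rpeak
print(f"{efficiency:.1%}")    # ~93.0%
```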

Following long-standing ISC tradition, Erich Strohmaier highlighted the facts and details of the newly issued TOP500 list at ISC'11. There are now 10 Petaflop/s systems in the TOP500. Roadrunner, which was the first Petaflop/s system on this list some years ago, is now ranked no. 10, with the lowest power consumption in that TOP10. The TOP10 is not that exciting, except for the new no. 1, of course. Performance has again increased since last year, and the performance development continues to show the exponential growth predicted by Moore's Law.
Read further...

AMD is prominently featured in the newest twice-yearly TOP500 Supercomputers list, announced at the International Supercomputing Conference 2011. AMD's leadership in High Performance Computing (HPC) is demonstrated by double-digit growth in the total number of systems on the TOP500 list that are based on AMD platforms. More than half of the 68 supercomputers based on AMD technology now feature the 8- and 12-core AMD Opteron 6100 Series processors. These systems demonstrate massive performance capability as measured by the LINPACK benchmark, while driving commerce and helping researchers investigate complex science problems.
Read further...

The TOP500 rankings have once again awarded the title of 'Europe's most powerful supercomputer' to Tera 100. Having already been named number one in Europe in the previous listing, published in the USA in November 2010, this confirms the leadership position enjoyed by Tera 100's technology - developed by Bull along with teams from CEA-DAM, the Military Applications Directorate of the French Alternative Energies and Atomic Energy Commission - in an ultra-competitive marketplace.
Read further...

T-Platforms, an international developer of supercomputers and supplier of a full range of solutions and services for high-performance computing, has completed its project to modernize Russia's most powerful supercomputer, "Lomonosov". As a result, the performance of the computer complex, installed at Lomonosov Moscow State University, has reached the level of 1.3 PFLOPS, unsurpassed in Russia, which positioned it to achieve 13th place in the latest edition of the TOP500 most powerful supercomputers in the world.
Read further...

NASA's Pleiades supercomputer system, built with SGI Altix ICE technology, has achieved more than one petaflop/s in sustained compute performance based on the LINPACK performance benchmark, and has now moved into the no. 7 spot on the June 2011 TOP500 list of supercomputer sites in the world. The Pleiades system is run by the NASA Advanced Supercomputing (NAS) Division at Ames Research Center, and represents NASA's state-of-the-art technology for meeting the agency's supercomputing requirements, enabling scientists and engineers across the nation to conduct modelling and simulation for NASA missions.
Read further...

Never change a winning expert analyst. The evening keynote on Wednesday, 22 June at ISC'11 was provided by Thomas Sterling. If 2010 was all about "Igniting Exaflops", then 2011 states that Petaflop/s systems are now the norm worldwide. What's more, Asia is surging forward. Power consumption is both a driver and a limiting factor. In programming, multi-core is a major area of pursuit, and programming through GPUs is beneficial for heterogeneous systems. Commodity Linux clusters are still ubiquitous. At the top end, tightly coupled MPPs are in resurgence, and we are setting the international cross-hairs on Exaflops, according to the summary analysis by Thomas Sterling.
Read further...

For the closing session at ISC'11 Chair Prof. Dr. Hans Meuer presented a panel debate that was moderated by Addison Snell from Intersect360 Research. Four panelists were submitted to a high-speed analyst crossfire on a hot set of topics related to HPC, Cloud, exascale, the TOP500, GPU, and ISC'11. The four panelists were Jean-Marc Denis from Bull and Andrew Jones from NAG Group on the vendors' side and Satoshi Matsuoka from the Tokyo Institute of Technology and Michael M. Resch from the High Performance Computing Center Stuttgart on the users' side.
Read further...

The Grid

In the Tuesday morning session on "Cloud Computing and HPC", Tom-Michael Thamm from mental images described the company's RealityServer Service in the Cloud. mental images is a rendering company which has been on the market for 25 years. Their focus, based on an OEM model, is aimed at rendering for industrial companies interested in simulation. Since 2007, mental images is wholly owned by NVIDIA. mental ray is the company's major product, together with 3D visualization component software and mental mill, a shading application. mental images is working for companies such as Autodesk, Dassault and PTC.
Read further...

The last speaker in the "Cloud Computing and HPC" session, chaired by Wolfgang Gentzsch, was Josh Simons from VMware. He talked about the converging concerns of HPC, Cloud, and the enterprise. VMware is an expert in virtualization and has 250,000 customers. Today, 85% of workloads are virtualized, and virtualization will be a 241 billion dollar market in the coming years.
Read further...

Company news

Altair Engineering Inc.'s PBS Professional, the company's EAL3+ security certified commercial-grade high-performance computing (HPC) workload management solution, is celebrating its 20th anniversary on June 17. During the month of June, the company also will exhibit at the International Supercomputing Conference (ISC) in Hamburg, Germany June 19-23, and launch its PBS Professional version 11.1 update.
Read further...

Adapteva has selected ETI's SWARM - SWift Adaptive Runtime Machine - software solution for its Epiphany multicore processor, the first ever architecture capable of scaling to thousands of parallel processors on a single chip. ETI's SWARM programming environment enables application development on the Epiphany processing architecture, and is part of the company's unique many-core expertise offerings that include design, development, and other custom programming services for advanced HPC and embedded computing systems.
Read further...

T-Platforms, a global HPC company providing comprehensive supercomputing systems, software and services, and AEON Computing, a U.S.-based supplier of HPC clusters, servers, workstations and custom solutions, have signed a strategic reseller agreement. As part of this new agreement, AEON Computing will supply T-Platforms systems, components and integrated solutions to the U.S. high performance computing market.
Read further...

When 2,300 experts from around the world convene to discuss the latest developments in high-performance computing, the conversations are bound to be interesting. But the 2011 International Supercomputing Conference (ISC11) will take this a step further with a series of sessions designed to ensure lively exchanges. Now in its 26th year, ISC11 will be held June 19 - 23 at the Congress Center Hamburg, Germany. While international in scope, all proceedings are conducted in English.
Read further...

Panasas Inc., an expert in high performance parallel storage for technical computing applications and big data workloads, has launched the Panasas ActiveStor 11 parallel storage system appliance. Powered by the PanFS operating system, ActiveStor 11 seamlessly scales to 6PB of capacity and 115GB/s of throughput from a single global namespace. Its advanced blade architecture blends performance, capacity, and cost-efficiency in a system optimised for data-intensive applications where time-to-results is a critical concern.
Read further...

The Appro Xtreme-X supercomputers were selected for a major joint procurement by the US Department of Energy's National Nuclear Security Administration (NNSA). Multiple systems will be delivered to the three national labs in NNSA's Advanced Simulation and Computing (ASC) programme: Lawrence Livermore (LLNL), Los Alamos (LANL) and Sandia (SNL). This contract represents the second time that NNSA has chosen Appro as its multi-year exclusive supplier of comprehensive capacity cluster systems across all three Labs.
Read further...

Leveraging the superior message rate performance and scalable latency of its TrueScale InfiniBand architecture, QLogic has been selected to provide its 12000 Series switches and 7300 Series adapters in a massive supercomputer deployment for the Department of Energy National Nuclear Security Administration's (NNSA's) Tri-Laboratory Linux Capacity Cluster 2 (TLCC2). The clusters will be deployed over the next two years at Lawrence Livermore National Laboratory (LLNL), Sandia National Laboratories, and Los Alamos National Laboratory, and will encompass as many as 20,000 nodes. Appro, the system integrator in the agreement, is providing its Xtreme-X supercomputer hardware along with QLogic Quad Data Rate (QDR) InfiniBand solutions.
Read further...

T-Platforms, an international developer of supercomputers and a supplier of a full range of solutions and services for high-performance computing, is participating in the International Supercomputing Conference - ISC'11, one of the world's most prestigious conferences focused on high performance computing. The ISC'11 conference and exhibition will be held from June 20th to the 23rd at the Congress Center Hamburg (CCH) in Hamburg, Germany. ISC11 will attract more than 2,000 experts in the sphere of supercomputer technologies and research for its 26th annual gathering. The conference exhibition will include computer hardware and software companies along with research organisations presenting the latest developments and research accomplishments in scientific and technical computing.
Read further...

Cloud computing has swiftly moved from industry hype to a top IT initiative, with more CIOs building a private Cloud, evaluating service providers for public Cloud services, or employing a hybrid model that combines the best of both. Through an innovative global ecosystem of Cloud partners and a product portfolio that offers new integrated Cloud management capabilities and pre-validated Cloud infrastructure solutions, NetApp helps enterprise customers accelerate and simplify their transition to private, public, and hybrid Cloud models.
Read further...

Bull and seven Founding Members have created the Bull User group for eXtreme Computing (BUX) on 20 June 2011, at ISC'11 in Hamburg. BUX is an independent worldwide group of users that will co-operate to increase the capabilities of the large-scale, parallel scientific and technical computing solutions supplied by Bull, to promote the exchange of information and understanding of these systems, and to provide guidance to Bull on the essential development and support issues for large-scale technical systems. Bull, as an Affiliate, is committed to supporting this new user group.
Read further...

Bright Computing, an expert in cluster management software, is now shipping Bright Cluster Manager release 5.2. This latest version of the product includes multiple new features and a broad range of expanded capabilities for managing HPC clusters. These capabilities include enhanced multi-cluster support, full CUDA 4.0 support for NVIDIA GPUs, tightly integrated support for on-demand SMP, additional options for integrated workload management, support for the latest releases of Linux distributions, cluster reliability improvement and an enhanced web-based user portal.
Read further...

T-Platforms, a global HPC company providing comprehensive supercomputing systems, software and services, will participate in a new programme jointly funded between the European Union and the Russian Ministry of Education and Science, and aimed at improving supercomputer performance. The programme is called HOPSA, which stands for Holistic Performance System Analysis, and has been funded for two years.
Read further...

Just before the start of the Big Exhibition Party at ISC'11, the organizers took media representatives on an exciting tour crowded with HPC innovation in industry. At the Intel booth, 12 demonstrations are running and the MIC technology is on display. HP has launched a new series of second-generation servers for HPC with GPGPU use. The new X9000 storage line has a single interface for administration and many, many cores. HP also has a new data centre in a container, which will ship at the end of this year. The cost of energy is tremendously reduced with this solution.
Read further...

Xyratex Ltd., a provider of enterprise-class data storage subsystems and hard disk drive capital equipment, is exhibiting at the International Supercomputing Conference (ISC'11) in Hamburg, Germany. This year marks Xyratex's first exhibitor-level participation at ISC, with a large booth on the exhibition floor, informative presentations on 'The Future of HPC Data Storage', Lustre roadmap futures from Xyratex at the HPC Advisory Council workshop, multiple sessions introducing ClusterStor 3000, and inclusion in the world's first FDR 56Gb/s InfiniBand ISCnet demonstration along with other key HPC organisations including Microsoft, Mellanox, HP, Dell, and Fujitsu. The 56Gb/s InfiniBand demonstration by the HPC Advisory Council, a leading organisation for HPC research and education, will interconnect participating exhibitors' booths via the ISCnet network to demonstrate various HPC applications and products, including Xyratex's new ClusterStor 3000 storage solution.
Read further...

Super Micro Computer Inc., a global expert in server technology innovation and green computing, will demo its latest HPC solutions at the 2011 International Supercomputing Conference (ISC) in Hamburg, Germany. Supermicro is spotlighting its new 1U (1026GT-TRF) and 2U (2026GT-TRF) high-density GPU SuperServers supporting up to 4 and 6 GPUs respectively, the GPU SuperBlade (SBI-7126TG) providing 20 GPUs and 20 CPUs in a 7U enclosure, the TwinBlade (SBA-7222G-T2) supporting up to 3,840 Cores per rack and the 4-Way SuperBlade (SBA-7142G-T4), a high-performance (up to 60x 4-Way servers per 42U SuperRack) compute platform with QDR 40Gb/s InfiniBand or 10GbE connectivity per blade. Also on display will be their enterprise-class 8-Way, 5U SuperServer (5086B-TRF), a high-performance platform, supporting up to 80 Cores/2TB of memory and designed for mission-critical, high-availability computing environments.
Read further...

High-performance computing (HPC) applications put incredible stress on an organisation's IT infrastructure as a result of the extreme amount of data and information that must be stored, managed, and processed. Addressing this challenge is no easy task and requires a storage solution that can deliver the performance, bulletproof reliability, and efficiency necessary to enable customers to succeed in today's hypercompetitive business environment. In May, NetApp unveiled the NetApp E5400 storage system, which is available to OEMs as part of the E-Series Platform, to help customers manage the massive amount of data resulting from big-bandwidth and high-performance applications. To help meet the growing needs of customers in high-performance environments, NetApp's E5400 has been benchmarked for use with the Lustre file system, an open-sourced high-performance file system used by the majority of organisations on the Supercomputer 500 list.
Read further...

VMware Inc. has launched VMware vFabric 5, an integrated application platform for virtual and Cloud environments. Combining the market-leading Spring development framework for Java and the latest generation of vFabric application services, vFabric 5 will provide the core application platform for building, deploying and running modern applications. vFabric 5 introduces for the first time a flexible packaging and licensing model that will allow enterprises to purchase application infrastructure software based on virtual machines, rather than physical hardware CPUs, and to pay only for the licenses in use.
Read further...

The University of Birmingham in Birmingham, England has selected Moab Adaptive HPC Suite as its intelligent automation software solution. Adaptive Computing's innovative technology reduces air conditioning costs and input power for the Birmingham Environment for Academic Research (BlueBear). With Moab Adaptive HPC Suite, the annual savings on input power and air conditioning for BlueBear are estimated at 10% of the total power bill for running the cluster - approximately GBP 10,000 (16,000 USD).
Read further...
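Those figures also imply the size of the cluster's total power bill; a minimal sketch working backwards from the numbers quoted above:

```python
# Back out the implied annual power bill from the quoted savings.
savings_gbp = 10_000       # estimated annual savings for BlueBear
savings_fraction = 0.10    # savings are ~10% of the total power bill

total_bill_gbp = savings_gbp / savings_fraction
usd_per_gbp = 16_000 / 10_000   # exchange rate implied by the article

print(total_bill_gbp)                 # ~100,000 GBP per year
print(total_bill_gbp * usd_per_gbp)   # ~160,000 USD per year
```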

The Institute of Cancer Research (ICR) has chosen Moab Adaptive HPC Suite to manage the organisation's new high performance computing facility. With Adaptive Computing's flexible technology, the ICR can now process and workflow-manage large data sets of its researchers' genetic and molecular discoveries, and employ key reporting features to inform its diverse group of stakeholders and advise on the future expansion of its high performance computing capability.
Read further...

Penguin Computing chose Adaptive Computing's Moab ClusterSuite to manage the workload on its HPC Cloud offering Penguin on Demand (POD). POD is currently the only on-demand HPC offering that supports 'bare metal' execution of compute jobs, effectively providing public access to a supercomputer to users that require compute capacity that is unavailable in-house.
Read further...

Numascale, a provider of innovative technology for cost-effective shared memory and cluster computer systems, has provided support for their NumaConnect technology on new servers from IBM and Supermicro. Both servers use Socket G34, supporting the 8 or 12 core AMD Opteron Magny-Cours processors. This lays the foundation for an increased scalability in high-performance SMP systems using standard servers and NumaConnect SMP adapters.
Read further...

Mellanox's line of industry and performance-leading InfiniBand and Ethernet connectivity solutions with NEC LX Series Supercomputers are available through NEC HPC Europe. The move enables NEC to address the growing demand for Mellanox's end-to-end connectivity products and advanced technology from leading European HPC centres, Cloud computing providers and enterprise customers.
Read further...

Mellanox and Lawrence Livermore National Laboratory (LLNL) have announced world-leading scalability achieved with LLNL supercomputers and Mellanox InfiniBand interconnect solutions. The results are based on collaboration between the two organisations to enhance high-performance computing (HPC) software drivers and MPI libraries on top of Mellanox's scalable interconnect solutions. The joint effort has delivered new levels of workload performance and maximized the return on investment for LLNL users.
Read further...

SGI plans to deliver next generation supercomputers with Intel Many Integrated Core (MIC) architecture, based on Intel x86 architecture, and announced a strategic development partnership with Intel Corporation to deliver such systems.
Read further...

Platform Computing's collaboration and technology partnership with the European Organization for Nuclear Research (CERN) has been recognized as a 2011 Honors Laureate candidate by Computerworld. The annual award programme, which honours visionary applications of information technology promoting positive social, economic and educational change, recognized CERN's groundbreaking research. The research was made possible by its implementation of the world's largest Cloud computing environment for scientific collaboration, which is managed by Platform Computing.
Read further...

Bright Computing, an expert in cluster management software, is teaming with partners NVIDIA and NextIO in a panel discussion at the International Supercomputing Conference (ISC) in Hamburg, Germany. A European HPC journalist moderates as representatives from the three companies explore lessons in deriving maximum value from GPUs in high performance computing, including when not to use the accelerators.
Read further...

Cray Inc. has sold a Cray XE6m supercomputer to GE Global Research, the technology development arm for the General Electric Company. The new Cray XE6m supercomputer will be used to support simulation-based engineering and science across the various disciplines at GE Global Research. The Cray system will give GE Global Research the ability to run more complex simulations in order to explore multi-physics challenges, gain higher fidelity insights and pursue areas of science and product development that could not be simulated using standard commodity clusters.
Read further...

CINECA, Italy's largest computing centre and scientific research consortium, has selected DataDirect Networks' (DDN) Storage Fusion Architecture 10000 (SFA10000) and IBM's General Parallel File System (GPFS) to power academic and commercial research across the European Union in such fields as computational science, life science and chemistry, and material science.
Read further...

SGI has introduced the SGI InfiniteStorage (IS) 5500, a next generation storage platform. Built with an innovative modular design for extreme density and scalability, the IS5500 offers exceptional performance for high-bandwidth and high-IOPS applications with enterprise-class reliability.
Read further...

Force10 Networks Inc.'s ExaScale E-Series 40 Gigabit Ethernet (40 GbE) line card has been certified to interoperate with IBM's iDataPlex servers and clusters and its new Intelligent Cluster BOM components. The certification of the 40 GbE line card will enable IT managers to leverage IBM clustering technology with up to 56 ports of 40 GbE connectivity within one ExaScale chassis-based virtualized core switch/router. Today's announcement also represents an extension of interoperability, which also includes IBM's support of the Force10 S60 top-of-rack (ToR) access switch.
Read further...

Fusion-io's technology has been utilized to realize significant performance improvements in MySQL database queries for bioinformatics research. The Protein Data Bank (PDB) research is being conducted at the University of California, San Diego, in collaboration with the San Diego Supercomputer Center (SDSC). Researchers at SDSC noted that replacing hard drive disks (HDDs) with Fusion-io technology in their database infrastructure reduced query times from 30 minutes to three minutes.
Read further...