HPC in the Cloud Research Roundup

March 15, 2013

Our HPC cloud research stories are hand-selected from leading science centers, prominent journals and relevant conference proceedings. The top item this week addresses the question: what if it were possible to cheaply and easily test the suitability of moving to a cloud platform – a virtual “try it before you buy it”? We also explore the reliability of HPC cloud, take another pass at GPU virtualization, and evaluate I/O performance in Amazon’s EC2 cloud.

Novel Cloud Evaluation Project Receives Google Award

What if it were possible to predict the suitability of cloud resources for a given application? This smart idea is the basis of a research project led by University of Texas at Dallas professor Dr. Lawrence Chung. The researcher and his SilverLining team from the Erik Jonsson School of Engineering and Computer Science have already caught the attention of Web giant Google. Earlier this month, Dr. Chung’s team (and six other worthy recipients) received the first-ever Google App Engine Research Award.

The projects, which each received $60,000 in Google App Engine credits, were selected for their intellectual excellence, innovation, and expected benefit to society.

The SilverLining team starts with the premise that the "initial purchase, re-purchase, and operation of computing equipment has become unsustainable and, hence, is becoming an increasingly great burden on the US economy."

Countless organizations are interested in the benefits of cloud computing, but testing a new cloud system can be costly and time-consuming. Chung and his colleagues propose that this complex process can be simulated on one system.

Chung explains: “We play with numbers and do not need the real software and machines. Using this approach, we can see the behavior of the cloud very quickly and inexpensively.”
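For intuition, here is a minimal sketch of the kind of "play with the numbers" capacity model such a simulator might embody. It assumes a simple M/M/c queueing abstraction and invented arrival and service rates; it is our illustration, not the SilverLining model itself.

```python
# Toy "try it before you buy it" cloud sizing model (illustrative only; the
# M/M/c assumption and all parameters are ours, not SilverLining's).
import math

def erlang_c(c, rho):
    """Probability an arriving request must wait, for an M/M/c queue."""
    a = c * rho
    num = (a ** c / math.factorial(c)) / (1 - rho)
    den = sum(a ** k / math.factorial(k) for k in range(c)) + num
    return num / den

def mean_response_time(arrival_rate, service_rate, servers):
    """Predicted mean response time (s), computed before renting any instance."""
    rho = arrival_rate / (servers * service_rate)
    if rho >= 1:
        return float("inf")          # the configuration is saturated
    wait = erlang_c(servers, rho) / (servers * service_rate - arrival_rate)
    return wait + 1 / service_rate   # queueing delay plus service time

# Sweep instance counts to see which configuration meets a hypothetical 200 ms target.
for servers in range(1, 9):
    t = mean_response_time(arrival_rate=40.0, service_rate=10.0, servers=servers)
    print(f"{servers} instances -> {t * 1000:7.1f} ms mean response time")
```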

The project seeks to determine the feasibility of predicting: 1) whether an operational system can migrate to a cloud while keeping all of its stakeholders satisfied, and 2) the performance and scalability of the system after, or even before, it is actually built.

The researchers have run initial simulations and benchmarks, but to verify their work, they require access to a large-scale cloud-based infrastructure. This is very similar to comparing a virtual model with a physical model, but as is typically the case, the “physical model” requires some capital outlay.

“Before we use the simulator further, we want to make sure that the results we obtain from simulators are going to be meaningful,” Chung said.

With the Google award, the SilverLining team now has access to a full-scale cloud infrastructure, enabling them to run their experiments and compare the results to their simulations to see if they hold up.

The other Google App Engine Award recipients are from the California Institute of Technology, University of Bristol, Massachusetts Institute of Technology, Carnegie Mellon University, University of Washington and Arizona State University.


Making HPC Cloud Computing More Reliable

A team of computer scientists from Louisiana Tech University has contributed to the growing body of HPC cloud research, specifically as it relates to the reliability of cloud computing resources. Their paper, A Reliability Model for Cloud Computing for High Performance Computing Applications, was published in the book, Euro-Par 2012: Parallel Processing Workshops.

Cloud computing and virtualization allow resources to be used more efficiently. Public cloud resources are available on demand and don’t require a large capital expenditure. But with an increase in both software and hardware components comes a corresponding rise in server failures. The researchers assert that it’s important for service providers to understand the failure behavior of a cloud system so they can better manage its resources. Much of their research applies specifically to running HPC applications in the cloud.

In the paper, the researchers “propose a reliability model for a cloud computing system that considers software, application, virtual machine, hypervisor, and hardware failures as well as correlation of failures within the software and hardware.”

They conclude failures caused by dependencies create a less reliable system, and as the failure rate of the system increases, the mean time to failure decreases. Not surprisingly, they also find that an increase in the number of nodes decreases the reliability of the system.
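As a rough illustration of why adding nodes hurts reliability, the sketch below treats each node as a series system of independent, exponentially distributed failure modes. The failure rates are invented for illustration, and unlike the paper's model it ignores the correlated failures the authors emphasize.

```python
# Series-system MTTF with independent exponential failure modes per node
# (illustrative assumption; the paper's model also captures correlated failures,
# which this toy version does not).
LAMBDAS_PER_NODE = {           # hypothetical failures per hour
    "hardware":    1e-5,
    "hypervisor":  2e-5,
    "vm":          5e-5,
    "application": 1e-4,
}

def cluster_mttf_hours(nodes: int) -> float:
    """MTTF of a job that fails if any component on any node fails."""
    node_rate = sum(LAMBDAS_PER_NODE.values())   # rates add for a series system
    return 1.0 / (nodes * node_rate)

for n in (1, 16, 128, 1024):
    print(f"{n:5d} nodes -> MTTF ~ {cluster_mttf_hours(n):10.1f} hours")
```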


GPU Virtualization using PCI Direct Pass-Through

The technical computing space has seen several trends develop over the past decade, among them server virtualization, cloud computing and GPU computing. It’s clear that GPGPU computing has a role to play in HPC systems. Can these trends be combined?

A research team from Chonbuk National University in South Korea has written a paper in the periodical Applied Mechanics and Materials, proposing exactly this. They investigate a method of GPU virtualization that exploits the GPU in a virtualized cloud computing environment.

The researchers claim their approach is different from previous work, which mostly reimplemented GPU programming APIs and virtual device drivers. Past research focused on sharing the GPU among virtual machines, which increased virtualization overhead. The paper describes an alternate method: the use of PCI direct pass-through.

“In our approach, bypassing virtual machine monitor layer with negligible overhead, the mechanism can achieve similar computation performance to bare-metal system and is transparent to the GPU programming APIs,” the authors write.
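As a hedged illustration of that transparency, the snippet below shows how one might confirm from inside a guest VM that a passed-through GPU enumerates as an ordinary CUDA device. It assumes the NVIDIA driver and the PyCUDA package are installed in the guest and is not code from the paper.

```python
# Check, from inside the guest, that a passed-through GPU looks like a native
# CUDA device (assumes the NVIDIA driver and PyCUDA are installed in the VM;
# illustrative only, not taken from the paper).
import pycuda.driver as cuda

cuda.init()
print(f"CUDA devices visible in this VM: {cuda.Device.count()}")
for i in range(cuda.Device.count()):
    dev = cuda.Device(i)
    major, minor = dev.compute_capability()
    print(f"  [{i}] {dev.name()}: {dev.total_memory() // (1024 ** 2)} MiB, "
          f"compute capability {major}.{minor}")
```

Because pass-through hands the entire PCI device to a single guest, CUDA programs run unmodified; the trade-off relative to the API-remoting approaches the authors cite is that the GPU cannot be time-shared among virtual machines.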

Analysis of I/O Performance on AWS High I/O Platform

The HPC community is still exploring the potential of the cloud paradigm to discern the most suitable use cases. The pay-per-use basis of compute and storage resources is an attractive draw for researchers, but so is the illusion of limitless resources to tackle large-scale scientific workloads.

In the most recent edition of the Journal of Grid Computing, computer scientists from the Department of Electronics and Systems at the University of A Coruña in Spain evaluate the I/O storage subsystem on the Amazon EC2 platform, specifically the High I/O instance type, to determine its suitability for I/O-intensive applications. The High I/O instance type, released in July 2012, is backed by SSD and also provides high levels of CPU, memory and network performance.

The study looked at the low-level cloud storage devices available in Amazon EC2, namely ephemeral disks and Elastic Block Store (EBS) volumes, on both local and distributed file systems. It also assessed several I/O interfaces commonly employed by scientific workloads, notably POSIX, MPI-IO and HDF5. The scalability of a representative parallel I/O code was also analyzed in terms of performance and cost.

As the results show, cloud storage devices have different performance characteristics and usage constraints. “Our comprehensive evaluation can help scientists to increase significantly (up to several times) the performance of I/O-intensive applications in Amazon EC2 cloud,” the researchers state. “An example of optimal configuration that can maximize I/O performance in this cloud is the use of a RAID 0 of 2 ephemeral disks, TCP with 9,000 bytes MTU, NFS async and MPI-IO on the High I/O instance type, which provides ephemeral disks backed by Solid State Drive (SSD) technology.”
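For readers unfamiliar with the MPI-IO interface mentioned above, here is a minimal collective-write sketch using the mpi4py binding. The file name, block size, and the choice of mpi4py are our assumptions for illustration, not details from the paper.

```python
# Minimal MPI-IO collective write of the kind such I/O benchmarks exercise
# (our sketch using mpi4py; file name and block size are arbitrary).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
block = np.full(4 * 1024 * 1024, rank, dtype=np.uint8)   # 4 MiB per rank

fh = MPI.File.Open(comm, "testfile.dat",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
t0 = MPI.Wtime()
fh.Write_at_all(rank * block.nbytes, block)   # collective write, non-overlapping offsets
fh.Close()
t1 = MPI.Wtime()

if rank == 0:
    size = comm.Get_size()
    print(f"{size} ranks wrote {size * block.nbytes / 1e6:.1f} MB in {t1 - t0:.3f} s")
```

Run with something like "mpiexec -n 4 python write_test.py"; pointing the output file at an NFS-exported RAID 0 of ephemeral disks would reproduce the flavor of configuration the authors single out.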

The Week in HPC Research

The top research stories of the week have been hand-selected from prominent journals and leading conference proceedings. Here’s another diverse set of items, including novel methods of data race detection, a comparison of predictive laws, and a review of FPGAs’ promise, along with the GPU virtualization and Amazon EC2 I/O studies covered above.

Scalable Data Race Detection

A team of researchers from Berkeley Lab and the University of California, Berkeley is investigating cutting-edge programming languages for HPC. These are languages that promote hybrid parallelism and shared memory abstractions using a global address space. It’s a programming style that is especially prone to data races that are difficult to detect, and prior work in the field has demonstrated 10X-100X slowdowns for non-scientific programs.

In a recent paper, the computer scientists present what they say is “the first complete implementation of data race detection at scale for UPC programs.” UPC stands for Unified Parallel C, an extension of the C programming language developed by the HPC community for large-scale parallel machines. The implementation used by the Berkeley-based team tracks local and global memory references in the program. It employs two methods for reducing overhead: 1) hierarchical function- and instruction-level sampling; and 2) exploiting the runtime persistence of aliasing and locality specific to Partitioned Global Address Space applications.

Experiments show that the best results are attained when both techniques are used in tandem. “When applying the optimizations in conjunction our tool finds all previously known data races in our benchmark programs with at most 50% overhead,” the researchers state. “Furthermore, while previous results illustrate the benefits of function level sampling, our experiences show that this technique does not work for scientific programs: instruction sampling or a hybrid approach is required.”
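To make the target concrete, the toy program below exhibits the class of bug such tools hunt for: an unsynchronized read-modify-write on shared state. It uses plain Python threads rather than UPC, so it illustrates the bug class only, not the authors' detector or benchmarks.

```python
# A classic unsynchronized read-modify-write race, the class of bug race
# detectors look for (illustration only; the paper's tool targets UPC, not Python).
import threading

counter = 0

def bump(n):
    global counter
    for _ in range(n):
        counter += 1      # load, add, store: another thread can interleave here

threads = [threading.Thread(target=bump, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"expected 400000, got {counter}")   # often less: interleaved updates get lost
```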

A fascinating new study applies the scientific method to some of our most popular predictive models. A research team from MIT and the Santa Fe Institute compared several different approaches for predicting technological improvement – including Moore’s Law and Wright’s Law – to known cases of technological progress using past performance data from different industries.

Moore’s Law, theorized by Intel co-founder Gordon Moore in 1965, predicts that a chip’s transistor count will double roughly every two years. In more general terms, it suggests that technologies advance exponentially with time. Wright’s Law was first formulated by Theodore Wright in 1936. Also called the Rule of Experience, it holds that cost falls by a constant percentage with every doubling of cumulative production. Other alternative models were proposed by Goddard, Sinclair et al., and Nordhaus.

The study, which employed hindcasting, used a statistical model to rank the performance of the postulated laws. The comparison data came from a database on the cost and production of 62 different technologies. The expansive knowledge-base enabled researchers to test six different prediction principles against real-world data.

The results revealed that the law with the greatest accuracy was Wright’s Law, but Moore’s Law was a very close second. In fact, the laws themselves are more similar than previously realized.

“We discover a previously unobserved regularity that production tends to increase exponentially,” write the authors. “A combination of an exponential decrease in cost and an exponential increase in production would make Moore’s law and Wright’s law indistinguishable…. We show for the first time that these regularities are observed in data to such a degree that the performance of these two laws is nearly the same.”
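That equivalence is easy to check numerically: if cumulative production grows exponentially in time, Wright's power-law cost curve collapses to a Moore-style exponential in time. The constants in the sketch below are arbitrary, chosen only to illustrate the algebra.

```python
# Numerical check that Wright's law collapses to Moore's law when cumulative
# production grows exponentially (constants are arbitrary, for illustration).
import math

C0, X0 = 100.0, 1.0      # initial cost and initial cumulative production
w, g   = 0.3, 0.2        # Wright exponent; production growth rate per year

for t in range(0, 21, 5):
    x = X0 * math.exp(g * t)                 # exponentially growing production
    wright = C0 * (x / X0) ** (-w)           # cost predicted by Wright's law
    moore  = C0 * math.exp(-w * g * t)       # Moore-style exponential in time
    print(f"year {t:2d}: Wright {wright:7.3f}  Moore {moore:7.3f}")
```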

“Our results show that technological progress is forecastable, with the square root of the logarithmic error growing linearly with the forecasting horizon at a typical rate of 2.5% per year,” they conclude.

The team includes Bela Nagy of the Santa Fe Institute, J. Doyne Farmer of the University of Oxford and the Santa Fe Institute, Quan Bui of St. John’s College in Santa Fe, NM, and Jessika E. Trancik of the Santa Fe Institute and MIT. Their findings are published in the online open-access journal PLOS ONE.

FPGAs (field programmable gate arrays) have been around for many years and show real potential for advancing HPC, but their popularity has been restricted because they are difficult to work with. This is the assertion of a group of researchers from the T.J. Watson Research Center. They argue that FPGAs won’t become mainstream until their various programmability challenges are addressed.

In a paper published last month in ACM Queue, the research team observes that there exists a spectrum of architectures, with general-purpose processors at one end and ASICs (application-specific integrated circuits) at the other. Architectures like PLDs (programmable logic devices), they argue, offer best-of-both-worlds potential in that they are closer to the hardware yet can be reprogrammed. The most prominent PLD is in fact the FPGA.

The authors write:

FPGAs were long considered low-volume, low-density ASIC replacements. Following Moore’s law, however, FPGAs are getting denser and faster. Modern-day FPGAs can have up to 2 million logic cells, 68 Mbits of BRAM, more than 3,000 DSP slices, and up to 96 transceivers for implementing multigigabit communication channels. The latest FPGA families from Xilinx and Altera are more like an SoC (system-on-chip), mixing dual-core ARM processors with programmable logic on the same fabric. Coupled with higher device density and performance, FPGAs are quickly replacing ASICs and ASSPs (application-specific standard products) for implementing fixed function logic. Analysts expect the programmable IC (integrated circuit) market to reach the $10 billion mark by 2016.

The researchers note that “despite the advantages offered by FPGAs and their rapid growth, use of FPGA technology is restricted to a narrow segment of hardware programmers. The larger community of software programmers has stayed away from this technology, largely because of the challenges experienced by beginners trying to learn and use FPGAs.”

The rest of this excellent paper addresses the various challenges in detail and brings attention to the lack of support for device drivers, programming languages, and tools. The authors drive home the point that the community will only be able to leverage the benefits of FPGAs if the programming aspects are improved.


NVIDIA Raises Its Game to the Cloud

May 17, 2012

NVIDIA GeForce GRID, a cloud gaming platform announced at the 2012 GPU Technology Conference (GTC), seeks to reduce the latency associated with cloud gaming.

This week in San Jose, California, NVIDIA kicked off its GPU Technology Conference with a slew of announcements around its new virtualized GPU portfolio. As part of this media push, the company launched its NVIDIA GeForce GRID cloud gaming platform, “which allows gaming-as-a-service providers to stream next-generation games to virtually any device, without the lag that hampers current offerings.”

Looking to move market share away from the console gaming industry as well as bring new users into the fold, the company has already partnered up with a few cloud gaming providers.

Today, most game developers rely on users to purchase a disc-based console or suitably-capable PC to supply processing power for their software. The landscape is changing, though, as companies like OnLive, Otoy and Gaikai have started offering cloud-based gaming services. The model is akin to streaming content through Netflix, and users can access the service through their existing computer, TV, tablet or smartphone.

Moving the heavy lifting to a datacenter adds device flexibility for end users, but it also introduces a major challenge for network-dependent gamers. Latency, or “lag” in gamerspeak, has the ability to ruin the multiplayer experience for online players.

The subject has drawn the ire of many gamers, especially those who play the very popular first-person shooter genre. Say one player fires an automatic weapon at an online opponent and half the bullet spray appears to make contact; due to lag, the server may register only one or two hits.

NVIDIA aims to allay these concerns by increasing processing speed and capacity available to cloud gaming providers. The GeForce GRID processors consist of two Kepler GPUs, each with its own encoder, outfitted with 8GB of VRAM and 320GB/s memory throughput. Each processor comes equipped with 3,072 CUDA cores capable of 4.7 teraflops of shader performance within a 250W TDP.
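As a back-of-the-envelope consistency check on those figures, the quoted shader throughput lines up with the core count if one assumes a clock around 765 MHz; that clock is our inference, not a number from the announcement.

```python
# Rough sanity check: 3,072 CUDA cores, each retiring one fused multiply-add
# (2 flops) per cycle, at an assumed ~765 MHz clock (the clock is our assumption).
cores, flops_per_core_per_cycle, clock_hz = 3072, 2, 765e6
print(f"{cores * flops_per_core_per_cycle * clock_hz / 1e12:.2f} TFLOPS")  # ~4.70
```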

While first-generation cloud gaming servers relied on a single GPU per server, GeForce GRID servers enable up to four Kepler GPUs to be connected to each server.

The new processors employ advanced power management techniques to optimize performance per watt, an important metric for cloud-based providers. According to the official release, the GPUs minimize power consumption by simultaneously encoding up to eight game streams, allowing providers to support millions of concurrent gamers. On the server side, per-game energy requirements are cut in half.

NVIDIA claims the GeForce GRID platform can reduce game server latency to as little as 10 ms – the same delay as a local console – by capturing and encoding game frames in a single pass. During GTC, the technology was successfully demoed using only a TV, game controller and a network connection to Gaikai’s service.
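For context, a rough and entirely hypothetical latency budget shows why collapsing capture and encode into a single roughly 10 ms pass matters; every figure below is an illustrative assumption, not an NVIDIA number.

```python
# Hypothetical end-to-end budget for one cloud-rendered frame (all values are
# illustrative assumptions, not NVIDIA figures).
pipeline_ms = {
    "render frame":      8,
    "capture + encode": 10,   # the stage GeForce GRID performs in a single pass
    "network, one way": 20,
    "decode + display": 10,
}
for stage, ms in pipeline_ms.items():
    print(f"{stage:18s} {ms:3d} ms")
print(f"{'total':18s} {sum(pipeline_ms.values()):3d} ms")
```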

The gaming industry has seen a recent drop in revenue, but still pulled in $17 billion in retail sales during 2011. That figure does not include the additional $7.24 billion gained from rentals, subscriptions, mobile and social game purchases. If NVIDIA can deliver on its latency claims, that second number could increase substantially. Taking direct aim at Sony, Microsoft and Nintendo, cloud gaming providers have made their platforms available on smart TV platforms, hoping to attract curious or casual gamers unwilling to invest in consoles.

Although GeForce GRID may not signal the end of console gaming, it certainly threatens to steal some of its thunder and possibly attract a new customer base for the gaming industry.