Articles and news on parallel programming and code modernization

In this article, Beenish Zia from Intel presents BigDL, an open source machine learning framework for Apache Spark. “BigDL is a distributed deep learning library for Apache Spark*. Using BigDL, you can write deep learning applications as Scala or Python* programs and take advantage of the power of scalable Spark clusters. This article introduces BigDL, shows you how to build the library on a variety of platforms, and provides examples of BigDL in action.”

In this video, Larry Meadows from Intel describes why modern processors require modern coding techniques. With vectorization and threading, code modernization lets you exploit the full potential of Intel Scalable Processors. “In many ways, code modernization is inevitable. Even edge devices nowadays have multiple physical cores. And even a single-core machine will have hyperthreads. And keeping those cores busy and fed with data with Intel programming tools is the best way to speed up your applications.”
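The chunk-and-combine structure behind that advice can be sketched in Python. This is an illustrative sketch of the pattern only (Python's GIL limits true thread-level speedup for pure-Python code); the point is the shape of the work decomposition, which is what carries over to vectorized, multi-core native code:

```python
from concurrent.futures import ThreadPoolExecutor

def sum_of_squares_serial(values):
    # One core, one element at a time.
    total = 0
    for v in values:
        total += v * v
    return total

def sum_of_squares_chunked(values, workers=4):
    # Split the data into independent chunks so each worker
    # (core/hyperthread) stays busy, then combine partial results.
    chunk = (len(values) + workers - 1) // workers
    parts = [values[i:i + chunk] for i in range(0, len(values), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum_of_squares_serial, parts))

data = list(range(10_000))
assert sum_of_squares_serial(data) == sum_of_squares_chunked(data)
```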

As the trend toward exascale HPC systems continues, the complexity of optimizing the parallel applications running on them increases too. Potential performance limitations can occur at the application level, which relies on MPI. While small-scale HPC systems are forgiving of small MPI latencies, large systems running at scale are much more sensitive: small inefficiencies can snowball into significant lag.
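A back-of-envelope model makes the snowball effect concrete. The numbers below are made up but plausible; the model deliberately ignores message size, topology, and overlap with computation, and charges each message a fixed latency:

```python
def communication_overhead(ranks, messages_per_rank, latency_s):
    # Total time the whole job spends in per-message latency,
    # under a crude fixed-cost-per-message model.
    return ranks * messages_per_rank * latency_s

# 2 microseconds per message is negligible on a small cluster...
small = communication_overhead(ranks=64, messages_per_rank=10_000, latency_s=2e-6)
# ...but the same per-message cost adds up on an exascale-class job.
large = communication_overhead(ranks=1_000_000, messages_per_rank=10_000, latency_s=2e-6)

print(f"64 ranks:        {small:,.2f} core-seconds of latency")
print(f"1,000,000 ranks: {large:,.0f} core-seconds of latency")
```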

Over at the University of Delaware, Julie Stewart writes that assistant professor Sunita Chandrasekaran has received an NSF grant to develop frameworks that adapt code for GPU supercomputers. She is working with complex patterns known as wavefronts, which are commonly found in scientific codes used to analyze the flow of neutrons in a nuclear reactor, extract patterns from biomedical data, or predict atmospheric patterns.
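To make the wavefront pattern concrete: consider a 2D grid where each cell depends on its upper and left neighbors. Cells on the same anti-diagonal are mutually independent, which is exactly the parallelism a GPU can exploit once the sweep order is made explicit. A minimal serial sketch (the boundary condition is chosen so the result is easy to verify, not taken from any particular scientific code):

```python
def wavefront_sweep(n):
    # grid[i][j] depends on grid[i-1][j] and grid[i][j-1], so the grid
    # must be swept in dependency order. Visiting it by anti-diagonals
    # (i + j = const) makes the independence explicit: every cell on
    # one diagonal can be computed in parallel once the previous
    # diagonal is done -- the classic "wavefront" schedule.
    grid = [[0] * n for _ in range(n)]
    for diag in range(2 * n - 1):
        for i in range(max(0, diag - n + 1), min(diag, n - 1) + 1):
            j = diag - i
            if i == 0 or j == 0:
                grid[i][j] = 1  # boundary condition
            else:
                grid[i][j] = grid[i - 1][j] + grid[i][j - 1]
    return grid

# With these boundaries, grid[i][j] counts lattice paths, so the
# corner value equals the binomial coefficient C(2(n-1), n-1).
g = wavefront_sweep(4)
```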

Tuning code has long been an art. Knowing what to look for, and how to correct inefficiencies in serious numerical computations, has not been easy for most programmers; it is often hard to even know which tool to start with. That is why the Intel® VTune™ Amplifier Application Performance Snapshot could prove to be a great way to get an instant summary of an application’s performance characteristics and issues.
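VTune's Application Performance Snapshot is Intel-specific, but the idea of a quick, low-effort first look at where time goes is general. As a loose standard-library analogy (not a substitute for a hardware-aware profiler), Python ships cProfile, which can produce a similar instant summary:

```python
import cProfile
import io
import pstats

def hot_loop(n):
    # Deliberately inefficient: repeated string concatenation.
    s = ""
    for i in range(n):
        s += str(i)
    return len(s)

profiler = cProfile.Profile()
profiler.enable()
result = hot_loop(10_000)
profiler.disable()

# Summarize: which functions consumed the most cumulative time?
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```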

Python is now the most popular programming language, according to IEEE Spectrum’s fifth annual interactive ranking of programming languages, ahead of C++ and C. Recent Intel Distributions for Python show that real HPC performance can be achieved with compilers and library packages optimized for the latest Intel architectures. Moreover, the library packages targeted for big data analysis and numerical computation included in this distribution now support scaling for multi-core and many-core processors as well as distributed cluster and cloud infrastructures.
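The performance gap those optimized packages close is easy to feel even with the standard library: pushing a reduction from a Python-level loop into C-implemented code removes interpreter overhead, which is the same principle behind the Intel-optimized NumPy and SciPy builds. A stdlib-only sketch (timings will vary by machine, so none are claimed here):

```python
import timeit

data = list(range(100_000))

def python_loop_sum(xs):
    # Interpreted: one bytecode dispatch per element.
    total = 0
    for x in xs:
        total += x
    return total

def builtin_sum(xs):
    # sum() iterates in C -- the inner loop leaves the interpreter,
    # just as it does in an optimized native math library.
    return sum(xs)

assert python_loop_sum(data) == builtin_sum(data)

loop_t = timeit.timeit(lambda: python_loop_sum(data), number=20)
c_t = timeit.timeit(lambda: builtin_sum(data), number=20)
print(f"Python loop: {loop_t:.3f}s  builtin sum: {c_t:.3f}s")
```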

Parallelism helps applications make the best use of processors on single or multiple devices. However, implementing parallelism can itself prove challenging. In this video, Mike Voss, principal engineer with the Core and Visual Computing Group at Intel, discusses the benefits of Intel® Threading Building Blocks (Intel® TBB), a C++ library, and how it can simplify the work of adding parallelism without the need to probe into threading details.
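Intel TBB's parallel_for hides thread creation, partitioning, and load balancing behind a loop-like call. As a loose Python analogy of that "describe the work, not the threads" style (this is an analogy using concurrent.futures, not the TBB API):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(x):
    # Per-element work; no locks or thread management in user code.
    return x * x + 1

def parallel_map(func, items, workers=4):
    # The executor owns the threads, partitioning, and scheduling,
    # much as TBB's parallel_for owns them in C++.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(func, items))

results = parallel_map(transform, range(8))
print(results)
```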

“Understanding a cluster can be complex if tools such as Intel Cluster Checker are not available. Think of how many times users complain that their applications are not running with the expected performance, and how long it takes system administrators to diagnose the issue. With Intel Cluster Checker, diagnosing and debugging these issues is easier and less complex. By using this tool, customers will be more satisfied and a higher return on investment will be realized.”

Latest Video

Industry Perspectives

In this special guest post, Axel Huebl looks at the TOP500 and HPCG with an eye on power efficiency trends to watch on the road to Exascale. "This post will focus on efficiency, in terms of performance per Watt, simply because the system power envelope is a major constraint for upcoming Exascale systems. With the great numbers from TOP500, we try to extend theoretical estimates from theoretical Flop/Ws of individual compute hardware to system scale." [Read More...]
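The performance-per-watt arithmetic behind such extrapolations is straightforward to reproduce. A sketch (the 20 MW figure is the commonly cited exascale power target; the per-system inputs here are illustrative, not taken from the post):

```python
def gflops_per_watt(rmax_tflops, power_kw):
    # TOP500-style efficiency: sustained performance per Watt.
    return (rmax_tflops * 1_000) / (power_kw * 1_000)

def exascale_power_mw(efficiency_gflops_per_watt):
    # Power envelope for 1 EFlop/s (= 1e9 GFlop/s) sustained.
    return 1e9 / efficiency_gflops_per_watt / 1e6

# A hypothetical 20 PFlop/s machine drawing 10 MW runs at 2 GFlops/W;
# a machine at 50 GFlops/W would hit the oft-cited 20 MW exascale target.
print(f"{gflops_per_watt(20_000, 10_000):.1f} GFlops/W")
print(f"{exascale_power_mw(50):.0f} MW for 1 EFlop/s at 50 GFlops/W")
```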

White Papers

A parallel file system offers several advantages over a single direct-attached file system. By using fast, scalable, external disk systems with massively parallel access to data, researchers can perform analysis against much larger datasets than they can by batching large datasets through memory. To learn more about parallel file systems, download this guide.