ISC – HPCwire
Since 1987 - Covering the Fastest Computers in the World and the People Who Run Them

Tsinghua Crowned Eight-Time Student Cluster Champions at ISC
Thu, 22 Jun 2017
https://www.hpcwire.com/2017/06/22/tsinghua-team-wins-eighth-student-cluster-championship-isc/

Always a hard-fought competition, the Student Cluster Competition awards were announced Wednesday, June 21, at the ISC High Performance Conference 2017. Amid whoops and hollers from the crowd, Thomas Sterling presented the award to the team with the best overall score, Tsinghua University. Sponsored by Inspur and using Nvidia graphics processors, Tsinghua was also one of the three teams who had a perfect score for the deep learning part of the competition.

The team is a force to be reckoned with. They are the only team to achieve a “triple crown” victory by winning competitions at ISC, SC, and ASC. They’ve won eight championships in total and this is their third win at the ISC competition.

The award for Fan Favorite went, for the second year in a row, to Universitat Politècnica de Catalunya Barcelona Tech (UPC), which captivated ISC attendees and garnered the most votes. Over 2,100 people voted for their Fan Favorite — a record for this attendee-participation portion of the competition.

2nd place CHPC team with CHPC Director Dr. Happy Sithole and friends

The award for the Highest High Performance Linpack went to FAU boyzz from Friedrich-Alexander University Erlangen–Nürnberg. They are one of the few teams that have competed worldwide at all three competitions — the Supercomputing Conference (SC), the Asia Student Supercomputer Challenge (ASC) in China, and ISC. They used a traditional cluster with 12 GPUs.

Turning to the award for running deep learning applications, Vice General Manager XuJun Fu of Baidu Cloud, who brought the deep learning applications to the competition, announced the winners. Tsinghua University, Nanyang Technological University and Beihang University all took home the top prize for solving the Captcha Challenge, achieving the highest degree of model accuracy.

This year, ten teams from around the world came to Frankfurt to build a small cluster of their own design and test their HPC skills by optimizing and running a series of benchmarks and applications. The teams must keep their power consumption below 3000 watts on one power circuit while running the benchmarks and applications.

The teams used a variety of designs. Two teams utilized liquid cooling technology, eight teams used GPUs, and one team used Xeon Phi. UPC built a liquid-cooled, ARM-based cluster with 48-core chips. EPCC from the University of Edinburgh were described as the Linpack junkies, driving their results with a liquid-cooled system.

Gilad Shainer and Thomas Sterling reveal Tsinghua University as the team with the highest overall score

Scot Schultz, director of Educational Outreach, HPC Advisory Council, provided some additional perspective for this article. “We decided to use a slightly more interesting use case of solving for Captcha,” he said, “because it not only highlights the power of deep learning to be a useful tool to create models to recognize and classify unwieldy data, such as distorted characters, grainy images and overlapping characters, but it also demonstrates that it is possible for this powerful technology to be used in less positive ways, such as defeating security or privacy measures. Realizing that everyone has access to the tools we use to move society forward, we need to be aware of the possible misuse, especially as it becomes more pervasive across industry, healthcare, financial services, and the like.”

“We’re always amazed and impressed by the quality of work and solutions we see from students during the ISC Student Cluster Competition,” said Doug Miles, director of PGI Compilers & Tools at Nvidia, a key sponsor for the student teams, many of whom rely on Nvidia GPUs to drive their workloads. “We support this event to encourage students from around the world to dive in and learn how to program parallel systems using the same compilers and programming models used by HPC professionals every day on the world’s fastest supercomputers.”

Gilad Shainer, chairman of the HPC Advisory Council, said, “The HPC Advisory Council is proud to host, together with ISC, the 6th student cluster competition at the ISC High Performance conference. It enables the next generation of HPC professionals and drives HPC technologies to be used in more areas and applications.

“We want to thank all of the university teams that participated in the competition. Through this competition they have gained knowledge and expertise in HPC, deep learning technologies, and solutions. We hope to see these teams as well as new teams at the ISC 2018 competition.”

At ISC – Goh on Go: Humans Can’t Scale, the Data-Centric Learning Machine Can
Thu, 22 Jun 2017
https://www.hpcwire.com/2017/06/22/isc-goh-go-humans-cant-scale-data-centric-learning-machine-can/

I’ve seen the future this week at ISC. It’s on display in prototype or PowerPoint form, and it’s going to dumbfound you. The future is an AI neural network designed to emulate and compete with the human brain. In this game, the brain doesn’t stand a chance.

Scoff at such talk as farfetched or far off in a hazy utopic/dystopic future. Roll your eyes and say we’ve heard the hype before (some of us remember a supercomputer company 25 years ago with the inflated name of “Thinking Machines,” long defunct). But it’s neither futuristic nor hype, it’s happening now, the technology pieces are taking shape, and the implications for business, for the work world and for our everyday lives – for good or ill – are as staggering as they are real.

Aside: It’s somewhat unsettling that conference attendees here in Frankfurt don’t seem particularly interested in those implications. For the moment, ISC is the gathering point of computer scientists bringing about massive technological change, but nearly all the talk here is about the “how” of AI systems, not the “what then?” But there’s one anecdote making the rounds that has raised eyebrows: when Google engineers were asked how their AlphaGo machine made the winning move against the world champion of Go (the world’s most complex board game), the answer was: “We don’t know” (more on this below).

Quite consciously, engineers are architecting HPC systems along the lines of our brain. The new architecture is an emerging style of computing called “data intensive” or “data centric.” It puts memory (i.e., data), rather than the processor, at the center of the computing universe. Combined with advanced algorithms, new memory and processor technologies are coming online to make the new architecture a practical reality. Once the pieces are in place, the next step will be to scale these systems beyond all measure of human brain capacity.

What does data centric computing mean? How does it work? Why does it represent a major shift in advanced scale computing?

Let’s start answering those questions by first looking at how data centric systems are measured. The benchmark for new AI systems isn’t how fast they solve linear algebra problems (i.e., Linpack). That’s how processor-centric systems have been measured for decades, and considering the capabilities of data-centric systems under development, that benchmark seems wholly inadequate.

Rather than throughput, AI-based systems are measured in relation to people: their ability to compete with humans at our most intellectually challenging games of reason – checkers, chess, Go, poker. The standard of success isn’t training the system to become perfect at it, or to “solve” the game (i.e., work out every possible combination of moves). The benchmark is playing the game better than any human.

That’s the objective. Once the system is better than any of us, it’s ready to move into an advisory role, providing guidance and suggestions, augmenting our capabilities. For now. In a decade or so, these systems will take over tasks for us altogether.

Driving is a prime example. If driving were a game, humans would still beat machines – even though statistics show we’re getting worse at it (according to Dr. Pradeep Dubey, Intel Fellow, Intel Labs & Director, Parallel Computing Lab, who presented at ISC on autonomous vehicle technology). Around the world, two people are killed in car accidents each minute. In the U.S., 40,000 people are killed annually and 2 million suffer permanent injuries.

Meanwhile, AI is enabling machines to get better at driving. A convergence point is coming. For now, the car’s intelligence is limited to navigating, warning us about traffic conditions and setting off beepers when we get close to curbs and other cars.

The next step: our roads will have special lanes where we’ll temporarily hand over operation of the car to itself. A few years after that, we won’t drive at all. Driving is a game in which machines will soon be much better than we are.

Dr. Eng Lim Goh, Vice President of HPE and an industry visionary for decades, is a prime driver of new AI system development. At ISC this week, he discussed why AI in all its forms – machine learning, deep learning, strategic reasoning, etc. – is the driving force bringing about “data intensive” computing architectures.

Here’s his schema for the data intensive computer:

The left side of the diagram is old-style, LINPACK-benchmarked, processor-centric computing. That’s where HPC happens. The processor is at the center. Data is sent to the CPU, simulations are run, and new and far more voluminous data comes out. These systems have hit a wall of their own making. The problem occurs when HPC systems run their simulations, generating exponentially more machine-generated data than they started with. They’re producing data beyond the capability of data scientists to analyze. Big data isn’t big enough.

“For 30 years we’ve lived in this world where small amounts of data go in, and we apply supercomputing power onto our partial differential equations, or our models, to generate lots of data,” he said.

Already, Goh pointed out, there aren’t enough data scientists to meet demand for today’s data analytics requirements. For the torrents of machine-generated data to come, there’s an overwhelming need to automate how data is analyzed.

Take for example seismic exploration.

For exploration of energy reserves at sea, ships drag cables with hydrophones, fire shots into the ocean floor and collect the echo on sensors. Goh said for every 10TB of data collected by the sensors, 1PB of simulation data is produced – 100X the original data.
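Goh’s “100X” figure is easy to verify (a quick sketch; the decimal-unit convention, 1 TB = 10^12 bytes, is an assumption):

```python
# Back-of-envelope check of the seismic data amplification Goh cites.
TB = 10**12  # decimal terabyte, in bytes
PB = 10**15  # decimal petabyte, in bytes

collected = 10 * TB   # raw sensor data gathered by the hydrophones
simulated = 1 * PB    # simulation output produced from it
amplification = simulated // collected
print(amplification)  # 100 -- the "100X the original data" in the article
```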

That’s where the right side of the diagram comes in: high performance analytics (HPA), self-learning AI systems that can take voluminous amounts of data produced by HPC, put it in memory, and work up answers to questions.

Dr. Eng Lim Goh

The key to the data-centric system of the future is the border area in the middle of the diagram. That’s where memory (i.e., data) resides, like a queen bee. It will be surrounded by a variety of processors (CPUs, GPUs, FPGAs, ASICs, assigned jobs appropriate for their capabilities) operating around the data, like drones.

Looked at this way, in a world where most companies have analyzed only about 3 percent of their data on average, traditional HPC systems seem glaringly incomplete. Combining the left side of the diagram and the right, integrating HPC with HPA – that takes supercomputing somewhere new. That’s a machine with a new soul.

But Goh conceded there are barriers to HPC and HPA joining forces.

“The two worlds are very different,” Goh said. “The HPC world where I lived, I’m guilty of this. All these years we assumed data movement was free. Guess what? When Linpack started 20 years ago we didn’t consider data movement. Yet we’re still ranking our Top500 systems that way. We’re still guilty that way.

“But the data scientists of the world also have something to say about us,” he added. “They assume compute is free. Take Hadoop. Hadoop is a technique where you map your data out onto compute nodes, then do your computation, then you reduce the data you bring back. The data world called this MapReduce. So we have to bring the two worlds together. More and more now, people should be investing in one system of left and right, not just the left.”
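Goh’s description of MapReduce can be sketched in a few lines of Python. This is a hypothetical single-process word count, not Hadoop itself; a real cluster scatters the map phase across compute nodes and gathers the results back in the reduce phase:

```python
from collections import defaultdict

def map_phase(documents):
    # "Map your data out onto compute nodes": emit (key, value) pairs per record.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # "Reduce the data you bring back": aggregate values by key.
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

docs = ["hpc moves data", "data moves compute", "compute is free"]
counts = reduce_phase(map_phase(docs))
print(counts)  # {'hpc': 1, 'moves': 2, 'data': 2, 'compute': 2, 'is': 1, 'free': 1}
```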

Goh pointed to the middle of his diagram and said that’s where the big architectural challenge lies. “If you have to move an exabyte of data between system A and B, if they are two different systems, it will be impractical. The world will come to this (integration of HPC and HPA).”

That’s why the U.S. effort to develop a “capable” exascale computer by the early 2020s puts as much emphasis on memory capacity as on compute power. A mission document issued by the Exascale Computing Project stated its intent to build a system not just with an exaflop of processing power but one that can also handle an exabyte of data in memory.

“Essentially, it’s a bandwidth machine,” Goh said. “It’s a supercomputer, but really it’s a data mover. Not only are NVlinks all connected, they’re also GPU-connected, so clumps of four GPUs can talk to other clumps of four GPUs directly. Then we have four OPAs coming out of each node, giving one OPA per GPU. So this is really a data machine.”

The Bridges supercomputer pulled off one of the most impressive game wins of the emerging AI era when it defeated four of the world’s top poker players earlier this year. Actually, the competition stretched across two years, Goh said, with the AI system losing $700,000 to the players the first year they played. The second year, with 10X more compute from the Bridges computer, the AI system (“Libratus”) took the four humans for $1.7 million, a classic hustle.

While IBM Deep Blue (chess) and Google’s AlphaGo have grabbed most of the machine-defeated-human headlines of late, it’s less well known that machines have beaten humans at checkers, which has 10^20 “naïve” (or possible) combinations, since the early 1990s, several years before IBM beat the world’s top chess player. Chess has 10^47 naïve combinations. How big is 10^47? An exascale machine running for 100 years would complete only about 10^28 combinations. The point being that without integrated AI techniques, processing only gets you so far.
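The arithmetic behind that comparison can be checked directly, assuming (generously) that an exascale machine evaluates one combination per operation at 10^18 operations per second:

```python
import math

ops_per_second = 1e18                           # exascale: 10^18 ops/s
seconds_per_century = 365.25 * 24 * 3600 * 100  # ~3.16e9 seconds
combos = ops_per_second * seconds_per_century   # ~3.2e27 combinations

print(f"10^{math.log10(combos):.1f}")  # about 10^27.5, under the 10^28 in the text
print(combos < 1e47)                   # True: chess's naive space stays out of reach
```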

Go, meanwhile, has 10^171 combinations. Poker, with “only” 10^160 combinations, offers up the added complexity of “incomplete information.” In contrast with the three board games, in which you can see the pieces held by your opponent, in poker you don’t know what your opponents have in their hands.

“So we didn’t solve chess, machines didn’t solve chess,” Goh said, “all they did was be good enough to be superhuman – to beat any human. That’s a term we’re going to hear more and more now.”

After Goh’s presentation, he was asked to respond to Google not understanding how AlphaGo won the Go tournament. The issue, he said, is overcoming opacity.

“We’re working very hard to increase transparency,” he said. “Some people have discussed the idea that, since there are many stages in a neural network, you could intercept it in between those stages and take its output and see if you can make sense of it.”

Leaving a strong role for human supervision also is important. He pointed out that since the Industrial Revolution, workers get promoted from first operating a machine to supervising machines.

He also discussed the distinction between the “correct” and the “right” answer. An AI-based system may deliver a correct answer, but whether it’s “right” – acceptable within human social mores, the bounds of business ethics, or even an aesthetic judgment – is something only humans can decide.

“Societal values need to be applied, human values need to be applied,” he said.

OpenSuCo: Advancing Open Source Supercomputing at ISC
Thu, 15 Jun 2017
https://www.hpcwire.com/2017/06/15/opensuco-open-source-supercomputing-isc/

As open source hardware gains traction, the potential for a completely open source supercomputing system becomes a compelling proposition, one that is being investigated by the International Workshop on Open Source Supercomputing (OpenSuCo). Ahead of OpenSuCo’s inaugural workshop taking place at ISC 2017 in Frankfurt, Germany, next week, HPCwire reached out to program committee members Anastasiia Butko and David Donofrio of Lawrence Berkeley National Laboratory to learn more about the effort’s activities and vision.

OpenSuCo: As we approach the end of MOSFET scaling, the HPC community needs a way to continue performance scaling. One way of providing that scaling is by providing more specialized architectures tailored for specific applications. In order to make possible the specification and verification of these new architectures, more rapid prototyping methods need to be explored. At the same time, these new architectures need software stacks and programming models to be able to actually use these new designs.

There has been a consistent march toward open source for each of these components. At the node hardware level, Facebook has launched the Open Compute Project, and Intel has launched OpenHPC, which provides software tools to manage HPC systems. However, each of these efforts uses closed-source components in its final version. We present OpenSuCo: a workshop for exploring and collaborating on building an HPC system using open-source hardware and system software IP (intellectual property).

The goal of this workshop is to engage the HPC community and explore open-source solutions for constructing an HPC system – from silicon to applications.

[Figure: the progress in open source software and hardware]

HPCwire: We’ve seen significant momentum for open source silicon in the last few years, with RISC-V and Open Compute Project for example, what is the supercomputing perspective on this?

OpenSuCo: Hardware specialization, specifically the creation of Systems-on-Chip (SoCs), offers a method to create cost-effective HPC architectures from off-the-shelf components. However, effectively tapping the advantages provided by SoC specialization requires the use of expensive and often closed source tools. Furthermore, the building blocks used to create the SoC may be closed source, limiting customization. This often leaves SoC design methodologies outside the reach of many academics and DOE researchers. The case for specialized accelerators can also be made in economic terms: in contrast to historical trends, the energy consumed per transistor has been holding steady, while the cost (in dollars) per transistor has been steadily decreasing, implying that we will soon be able to pack more transistors into a given area than can be simultaneously operated.

From an economic standpoint, we are witnessing an explosion of highly cost-sensitive and application-specific IoT (internet of things) devices. The developers of these devices face a stark choice: spend millions on a commercial license for processors and other IP or face the significant risk and cost (in both development time and dollars) of developing custom hardware. Similar parallels can be drawn to the low-volume and rapid design needs found in many scientific and government applications. By developing a low cost and robust path to the generation of specialized hardware, we can support the development and deployment of application-tailored processors across many DOE mission areas.

The design methodologies traditionally used in these cost-sensitive design flows can be applied to high-end computing thanks to the emergence of embedded IP offering HPC-centric capabilities, such as double-precision floating point, 64-bit address capability, and options for high performance I/O and memory interfaces. The SoC approach, coupled with highly accessible open source flows, will allow chip designers to include only the features they want, excluding those not utilized by mainstream HPC systems. By pushing customization into the chip, we can achieve customization that is not feasible with today’s commodity board-level computing system design.

HPCwire: Despite pervasive support in tech circles, not everyone is convinced of the merits of open source. What is the case for open source in high performance computing?

OpenSuCo: While many commercial tools provide technology to customize a processor or system given a static baseline, they generally provide only proprietary solutions that both restrict the level of customization that can be applied and increase the cost of production. This cost matters most to low-volume or highly specialized markets, such as those found in scientific, research, and defense applications, as large-volume customers can absorb this NRE (non-recurring engineering cost) as part of their overall production. As an alternative to closed source hardware flows, open source hardware has been growing in popularity in recent years and mirrors the rise of Linux and open source software in the 1990s and early 2000s. We put forth that open source hardware will drive the next wave of innovation for hardware IP.

In contrast to closed-source hardware IP and flows, a completely open framework and flow enable extreme customization and drive cost for initial development to virtually zero. Going further, by leveraging community-supported and maintained technology, it is possible to also incorporate all of the supporting software infrastructure, compilers, debuggers, etc. that work with open source processor designs. A community-led effort also creates a support community that replaces what is typically found with commercial products and leads to more robust implementations as a greater number of users are testing and working with designs. Finally, for security purposes, any closed-source design carries an inherent risk in the inability to truly inspect all aspects of its operation. Open source hardware allows the user to inspect all aspects of its design for a thorough review of its security.

HPCwire: Even with the advances in open source hardware, a completely open source supercomputing system seems ambitious at this point. Can you speak to the reality of this goal in the context of the challenges and community support?

OpenSuCo: We agree that building a complete open-source HPC system is a daunting task, however, a system composed of an increased number of open source components is an excellent way to increase technological diversity and spur greater innovation.

The rapid growth and adoption of the RISC-V ISA is an excellent example of how a community can produce a complete and robust software toolchain in a relatively short time. While largely used in IoT devices at the moment, there are multiple efforts to extend the reach of RISC-V – in both implementations and functionality, into the HPC space.

HPCwire: What is needed on the software side to make this vision come together?

OpenSuCo: The needs and challenges of an open source-based supercomputer are not any greater than that of a traditional “closed” system. Most future systems will need to face the continuing demands of increased parallelism, shifting Flop-to-Byte ratios and an increase in the quantity and variety of accelerators. An open system may possess greater transparency and a larger user community allowing more effective and distributed development. Regardless, continued collaboration between software and hardware developers will be necessary to create the required community to support this effort. As part of the OpenSuCo workshop we hope to engage and bring together a diverse community of software and hardware architects willing to engage on the possibility of realizing this vision.

HPCwire: You’re holding a half-day workshop at ISC 2017 in Frankfurt on June 22. What is on the agenda and who should attend?

OpenSuCo: While many of the emerging technologies and opportunities surround the rise of open-source hardware, we would like to invite all members of the HPC community to participate in a true co-design effort in building a complete HPC system.

HPCwire: You’ll also be holding a workshop at SC17. You’ve put out a call for papers. How else can people get involved in OpenSuCo activities?

OpenSuCo: While we have long advocated for innovative and open source systems for the HPC community, we are just beginning to tackle this comprehensive solution and cannot do it alone. We welcome collaborators to help build the next generation of HPC software and hardware design flows.

ISC Industrial Day: Bridging Academia and Industrial HPC Users
Wed, 14 Jun 2017

As deputy chair of Industrial Day at ISC next week, my goal is to help bring clarity to the key opportunities and challenges afforded by HPC-scale technologies, including the specific barriers commercial companies are likely to encounter as they deploy new solutions or upgrade existing ones.

Our Industrial Day agenda will focus on choosing infrastructure products and services that provide higher ROI and greater flexibility, and on deploying practical solutions that help maximize innovation potential, increase market share and support new business models.

Industrial HPC users can be grouped into two categories: those who operate their own data centres and those who buy or access on-demand HPC resources. At this first iteration of Industrial Day, we’ll be focusing mostly on the first category, the on-prem data centre, a segment that has been growing steadily over the last 30 years.

Dr. Marie-Christine Sawley of Intel

Fifty percent of systems on the TOP500 list are now deployed in corporations, including four systems in the TOP50 and 10 in the TOP100. Many of them are based in Europe, owned and operated by leaders in energy and power, aeronautics, automotive, telecommunications, finance and other industries. Notable high-end users include Airbus, BMW, and Total, which now operates the world’s largest HPC system in the private sector.

Industrial Day will focus on recent developments in the European HPC community that are of interest to commercial organizations and that we believe will have a cascading effect on future solutions and usage models. Topics will include: how to qualify exascale performance, infrastructure selection, and the development of high performance data analytics (HPDA) use cases.

The Benefits of Exascale Performance

Complex, fundamental research in areas such as fusion, materials science and quantum chromodynamics continues to push high performance computing to ever-higher scale. However, the growing number of industrial users have different decision criteria and often operate at smaller scales. While many of the top-end solutions and lessons learned offer value for commercial users, other advances are also laying the foundation for future innovation that should be considered when evaluating options.

At Industrial Day, we’ll have experts speak in detail about the benefits exascale computing will provide for aircraft design and for complex multi-disciplinary simulations. We’ll also be talking about the software challenges of exascale computing, and the great value offered by projects such as EXA2CT, which is bootstrapping exascale code enhancement by creating libraries and proto-applications of direct interest to industrial users: examples include fast Fourier transforms, linear algebra functions, and other core computations.

ISC will provide many additional opportunities to interact with experts breaking new ground in exascale computing. A great deal of collaborative research is underway in the European HPC community.

Selecting and Scaling Infrastructure, Services and Software

Universities and research organizations have extensive experience in procuring, connecting, and sharing HPC resources, and they tend to be among the earliest evaluators and adopters of new technology. Many are contributing to advances in HPC system software, virtualization and cloud computing that are redefining how HPC resources are deployed. For Intel, as for other technology vendors introducing new options at every layer of the solution stack — compute, memory, storage, fabric, and software — the experience and insights of these organizations can be invaluable.

Examples include the DEEP and DEEP-ER projects at the Jülich Supercomputing Centre, focused on creating an innovative HPC system architecture that distributes workloads across a standard HPC cluster and a highly parallel booster system using an MPI-like software layer. Other projects we run in collaboration with our partners aim to bring high-density compute options, such as Intel Xeon Phi processors and FPGAs, into mainstream usage.

At Industrial Day, we’ll take a practical look at how these processes are being handled and how to balance requirements and suppliers to achieve higher value and reliability, conquer new market segments, and contain costs. We’ll also talk about software innovations that extend the value of simulation-based design to other areas of the enterprise, such as new materials management, product design or service offerings. Using design models to generate high-definition 3D images, for example, can be a useful tool for attracting customers earlier in the product design lifecycle.

Evaluating High Performance Data Analytics (HPDA)

HPC and big data analytics have evolved in relative isolation, but they are coming together quickly and have enormous potential for extracting actionable insights from rich and complex data sets. A great deal of research is focused on these areas, and high-value use cases are beginning to appear. HPC brings speed and scale to deep neural network training and other machine learning strategies that become progressively smarter on their own. HPDA opens the door to real-time and near-real-time solutions that can radically improve critical decision making in data-rich industrial environments.

Industrial Day will focus on examples in railway traffic control, IoT data analysis, and product performance lifecycle management. Industrial HPC users will gain a better understanding of machine learning and other HPDA technologies and better insight into the kinds of resources required for practical solutions that combine HPC best practices for operating very large systems with the latest advances in data analytics.

Jumpstarting a Two-Way Conversation

Investment in HPC has never been higher and Europe is an important locus for R&D, with a high density of universities and research organizations collaborating on large projects. The EU is fueling innovation with investments of €700M by the end of the decade.[1] Programs such as Horizon 2020 and platforms such as the European Technology Platform for HPC (ETP4HPC) bring EU decision makers together with HPC leaders to refine the agenda and keep R&D efforts on track.

The breadth and depth of this activity makes ISC High Performance 2017 an important event for HPC users. As deputy chair of Industrial Day, I’ll be leading an HPC user round table to discuss the opportunities and challenges that are most relevant to industrial users. Next year, I’ll be Industrial Day chair, and I’ll be using that information — and the feedback we receive — to extend and focus the agenda for Industrial Day at ISC High Performance 2018, so we can provide a richer exchange platform for industrial users.

Dr. Marie-Christine Sawley is the Intel manager of the ECR lab in Paris, where she oversees HTC collaboration with CERN and code modernization work with BSC, and she manages Intel’s participation in the EXA2CT and READEX projects funded by the European Union.

Scaling an HPC Career in Nepal Can Be a Steep Climb (Thu, 20 Apr 2017)

Umesh Upadhyaya works as an IT Associate at the International Centre for Integrated Mountain Development (ICIMOD) in Nepal, which supports the country’s one and only HPC facility. He is directly involved in an initiative that focuses on climate change and atmosphere modeling, an area that has particular relevance to the country’s dependence on its agricultural production and hydroelectric power.

Part of what Umesh wants to accomplish at ICIMOD is acquiring the necessary technical skills so that he can assist research scientists in setting up and supporting HPC resources at the Nepal facility. Unfortunately, at this point the government doesn’t have the funds to allocate for training or workshops to help him acquire such skills.

Umesh Upadhyaya

The conference organizers for ISC High Performance became aware of his plight and are offering Umesh free registration for the tutorials, the conference and workshops at this year’s conference in June. STEM-Trek, a non-profit group that supports professional development for individuals from underserved regions who are trying to establish themselves in the HPC workforce, is trying to help him secure travel funding. The hope is that an ISC exhibitor will come forward to help sponsor his trip to Frankfurt, Germany.

ISC’s Nages Sieslack recently got an opportunity to speak with Umesh about his work, ICIMOD’s mission, and what he would like to achieve if he could attend the ISC conference this year.

Why are you interested in attending ISC 2017?

Umesh Upadhyaya: Scientific computing is such an exciting realm of technology, and there is a severe lack of skills in Nepal in this particular area. By attending ISC 2017, I would have the opportunity to network with academics, researchers, and representatives from industry, and to bring a lot of experience back to my organization through interactions in the various workshops, tutorials, and conference sessions.

The ISC 2017 platform will also help me learn about advancements in infrastructure design, state-of-the-art hardware, breakthroughs in computational sciences, and the latest use cases of HPC.

Can you tell us a bit about ICIMOD and its purpose?

Umesh: The International Centre for Integrated Mountain Development is a regional intergovernmental learning and knowledge-sharing center, based in Kathmandu, Nepal. It serves the eight regional member countries of the Hindu Kush Himalayas – Afghanistan, Bangladesh, Bhutan, China, India, Myanmar, Nepal, and Pakistan. ICIMOD aims to assist mountain people to understand changes to their environment, adapt to them, and make the most of new opportunities, while also addressing upstream-downstream issues.

ICIMOD supports transboundary programs through partnerships with regional institutions, facilitates the exchange of experience, and serves as a regional knowledge hub. It also strengthens networking among regional and global centers of excellence. Overall, we are working to develop an economically and environmentally sound mountain ecosystem to improve the living standards of local populations and to sustain vital services for the billions of people living downstream, now and in the future.

Can you describe your center’s current HPC capabilities?

Umesh: The High-Performance Center provides a unique ability to access the latest systems, CPUs, and networking technologies. ICIMOD has installed and operates a high performance computing cluster based on Dell blade servers equipped with Intel Xeon processors. The center hosts a Linux environment and is used specifically for atmosphere modeling. Air quality scientists currently run the latest versions of WRF, WRF-Chem, STEM, and other models in this HPC environment. The in-house Dell blade servers and storage system comprise 160 cores, 512 GB of memory, and 100 TB of disk storage.

What kind of research are you and your project involved in, and how do you use HPC at the center?

Umesh: The cluster is currently available to ICIMOD scientists and PhD fellows for academic and research purposes, especially those related to weather and pollution models. The modeling software is currently supported by the GCC, Intel and PGI compilers.

I am working with research scientists to determine and compare the run times of WRF v3.8 compiled with gfortran versus the commercial Intel and PGI compilers. Currently, ICIMOD uses MPICH2 and gfortran for multi-CPU WRF v3.8 runs in a clustered environment, and we recently subscribed to the PGI and Intel compilers. All our compute nodes are bare metal and have low-latency interconnects for better parallel processing performance. My research emphasis is on the performance of WRF under different compilers using our 160-core Dell cluster.

What would you like to achieve by attending ISC 2017?

Umesh: Attending ISC 2017 would be a professionally rewarding experience for someone like me, who is beginning his career in HPC. Sharing the same space with attendees at ISC will help me engage with the larger HPC community and hopefully return with new ideas that make me more effective at my work. Overall, I believe the conference will inspire me to grow and challenge myself in many areas of HPC.

In this contributed Q&A, ISC’s Nages Sieslack interviews Martin Meuer and Thomas Meuer, managing directors of the ISC Group, about the diversity initiatives and goals introduced this year. The event group is putting serious effort into increasing the participation of women and other underrepresented groups at its annual conference, the next iteration of which takes place June 18-22 in Frankfurt, Germany.

What Does Diversity Mean to the ISC High Performance Conference?

Thomas Meuer: Diversity is multifaceted. In the context of a conference, the term can refer to speakers, participants, or exhibitors, and can include aspects such as age, gender, culture or geographical origin, sexual orientation, and more. A balance in attendance between industry and academia also reflects diversity, as does the composition of the program.

Martin Meuer

Martin Meuer: We do strive to address many of the above-mentioned facets and players in the community, but of course we also realize that we can only directly influence aspects under our purview. We can, for example, ensure a greater gender balance in the appointment of ISC chairs, session chairs, and speakers, and we are doing so. Likewise, we are striving to compile a diverse program that we believe appeals to the HPC and AI communities. We have also started ensuring that the ISC exhibition covers all the important components of HPC. This year we shall have businesses like Amazon, Google, and Baidu exhibiting at the show.

Why is diversity important to a technical conference?

Martin Meuer: A technical conference is basically a large user group meeting that brings together different players, be it the people driving businesses or the researchers advancing technologies. If a certain segment were not at the conference, its contribution would be missed, with consequences both for that segment and for the development of the community as a whole.

Imagine the Chinese or Japanese HPC communities not attending ISC – that would adversely affect knowledge sharing and collaboration on international exascale initiatives. Or imagine women in this field not being actively present at HPC conferences – female researchers would lose representation at community gatherings.

As we see it, there is no doubt whatsoever that technical communities like the HPC community benefit greatly from having topics viewed and reviewed from different perspectives. Ample research and published studies show that diversity breeds innovation, attracts talent, helps businesses perform better, and provides a stronger sense of community.

Did diversity grow organically at ISC or is it a recent effort?

Thomas Meuer

Thomas Meuer: We have always aimed for a balance in the conference program, mostly with regard to the geographical origin of the speakers. Over the last 30 years, we have welcomed attendees from over 80 countries.

There is still great potential to improve the representation of female researchers, scientists, and business leaders at ISC. Women generally remain underrepresented in the STEM workforce, including HPC. This is apparent from our published data, where only 10 to 15 percent of past attendees are women. We wanted to change that in 2017, and not in baby steps. We engaged in many meetings with Toni Collis, director of Women in HPC (WHPC), and thanks to her support and guidance, we were able to introduce specific goals, which are now published online.

How is diversity reflected in the 2017 conference?

Martin Meuer: First of all, we have introduced a compliance program with the goal of filling 25 percent of the committee chair, deputy chair, and session chair positions with women. The next step was to urge individual session chairs to invite at least one-third female experts as speakers. This was a tough call for most session chairs. In some areas, for example industrial HPC topics, it is almost impossible to find female speakers to address particular topics that are important to the B2B manufacturing industry.

However, we are very pleased to introduce three distinguished female researchers to address the topics of data networking and data analytics this year. For those who missed our announcement, this year’s conference keynote will be delivered by data scientist Dr. Jennifer Tour Chayes from Microsoft Research.

To encourage greater diversity in the research program, we established a double-blind review process for research paper, research poster, and PhD forum submissions.

Finally, we are bringing the deep learning community to ISC by integrating a new program element – Deep Learning Day on Wednesday, June 21. This program will enable HPC practitioners, the deep learning community, and the user community to engage with each other.

In light of the travel restrictions the US government has attempted to impose on certain countries, do you anticipate any impact on the ISC conference?

Thomas Meuer: We don’t wish to speculate on external factors, but we can tell you that we have received a record-high number of submissions for our BoF and tutorial sessions. Maybe this is an indication that we will reach a new ISC attendance record.

However, we have been hearing from a number of trusted partners in the US that they might increase their attendance at HPC conferences outside the country if travel restrictions impede their ability to meet and collaborate with foreign experts at US-based events.

Let me take this interview as an opportunity to urge all groups within the HPC community to make use of the ISC platform. Get in touch with our chairs, or with us directly, to recommend methods or even programs to promote diversity.

SC Says Farewell to Salt Lake City, See You in Denver (Sat, 19 Nov 2016)

After an intense four-day flurry of activity (and a cold snap that brought some actual snow flurries), the SC16 show floor closed yesterday (Thursday) and the always-extensive technical program wound down today. This year, 11,000-plus visitors, participants and volunteers from 65 countries came to Salt Lake City for the largest HPC conference in the world to take part in the four-day expo and a six-day technical program.

In an on-site media briefing held Monday, SC16 Chair John West said that this year’s SC provided record space (450,000 square feet) to a record 349 research and industry booths. Attendance, however, was down from last year, when nearly 13,000 attendees descended on Austin, Texas, a show high.

One of the main events of the show each year is SciNet, which for one week is the largest scientific network in the world. This year SciNet delivered multiple terabits per second of bandwidth (and pushed a record 1.2 terabytes of traffic over the show floor), enough to send about 450 million snaps on Snapchat in 60 seconds, said West – about 850x more snaps than the rest of the world sends in that time frame. That’s a lot of selfies, West joked.

John West

West also highlighted the student program. This year there were over 270 students from around the world; two-thirds of them were attending the conference for the first time, and 25 percent were female. “This is where you come to meet the people who shape our industry, and we are looking for future volunteers,” said West.

The conference is non-profit and all-volunteer, with about 650 volunteers coming together to make it happen every year. Planning for each show begins about three years in advance.

The focus of this year’s conference is on workforce and diversity. Says West, “there aren’t good workforce figures for HPC because it’s just a slice of computing, but the Bureau of Labor Statistics for the United States says that every year we’re about 200,000 workers short in computing and related fields, and if we keep on this trajectory we will be a million workers short by 2022. That’s a big gap, and while HPC is only a fraction of that gap, we are part of it. If you run centers or you work in centers, you know that recruiting for HPC is already challenging and it’s going to get worse, so where are we going to find our new talent?

“Two-thirds of the US workforce are women and minorities, and less than 20 percent of those folks work in computing, so there’s a big untapped pool that we need to figure out how to engage meaningfully, and that’s what SC is working on this year.”

Toni Collis

For a deeper look at what SC is doing to support greater inclusivity, read Kim McMahon’s interview with John West and Trish Damkroger, chair of the Diverse HPC Workforce committee. Next year it will continue as the Inclusivity Committee, and University of Edinburgh researcher Toni Collis, founder and director of Women in HPC, has been tapped to head it up. HPCwire interviewed Toni on camera at SC and will make the video available in the coming weeks – this one is not to be missed.

The conference also recognized the 14 recipients of the ACM SIGHPC and Intel Computational and Data Science Fellowship awards. You can learn more about the candidates and the impetus for the program here.

West also shared the results of studies citing, as a reason women don’t choose computing fields, the perception that computing doesn’t have a direct impact on the things they care about. In response, SC launched a pilot project that gives students hands-on experience using supercomputing to help understand social change. West says the project is still an experiment at this point, but the organizers are exploring a path to making it part of the program in the future.

The SC committee also works with the National Science Foundation and the Department of Energy to support Women in IT Networking at SC (the WINS program). The goal of this program, now in its second year, is to jump-start the careers of women engineers by providing them with an intensive hands-on experience building SciNet.

“HPC matters” is a recurring thread at SC and one of the most-trending Twitter hashtags coming out of the show. The Monday opening panel has become a way to showcase HPC efforts that are having the biggest impact for societal good. This year’s panel, “HPC Impacts on Precision Medicine: Life’s Future – The Next Frontier in Healthcare,” described how precision medicine is being used to fight disease, enhance health and lifestyle, extend life, and contribute to basic science along the way.

As always, the show had an impressive and packed technical program that made many attendees wish they had parallel bandwidth. Post-Moore’s law technologies featured prominently, with panels such as “Post Moore’s Era Supercomputing in 20 Years” and “The End of Von Neumann? What the Future Looks Like for HPC Application Developers” drawing standing-room-only crowds. Popular invited talks included “Memory Bandwidth and System Balance in HPC Systems” and “Beyond Exascale: Emerging Devices and Architectures for Computing” (which we will be covering in a future piece).

Major awards at the conference included the TOP500, which is covered here, with the number one spot going for the second time to the Sunway TaihuLight system, installed at the National Supercomputing Center in Wuxi. A 12-member Chinese team also won the 2016 ACM Gordon Bell Prize. The winning research project, “10M-Core Scalable Fully-Implicit Solver for Nonhydrostatic Atmospheric Dynamics,” presents a method for calculating atmospheric dynamics. It’s the first time a Chinese team has won the award and also the first win for a Python-based application.

A Chinese team also took home top honors in the Student Cluster Competition. The University of Science and Technology of China pulled off an SC first, winning both the first place prize for highest overall score and the highest Linpack run, a record 31.5 teraflops. Just ten years ago their system would have been 36th in the TOP500.

Speaking of how much has changed over the last decade, Intersect 360 CEO Addison Snell gives an insightful look back on where HPC has been and where it’s headed in this retrospective piece.

We look forward to seeing many of you in June at the International Supercomputing Conference in Frankfurt, Germany, and then in Denver for SC17, November 12-17. The SC17 website is already live and the chairing baton has been handed to Bernd Mohr, member of the Division “Application Support” at the Jülich Supercomputing Centre (JSC) in Germany.

A Celebration of Women in HPC (Thu, 07 Jul 2016)

Why are there not more women in HPC? This was the simple question that led to the formation of the Women in HPC (WHPC) network nearly three years ago. Under the direction of founder Dr. Toni Collis of the Edinburgh Parallel Computing Centre (EPCC), the organization has been gaining momentum and making a name for itself since its inaugural Women in HPC workshop at SC14. At ISC 2016 in Frankfurt, Germany, WHPC expanded its program to three events: its fourth international Women in High Performance Computing (WHPC) workshop; a BOF on women in HPC; and a networking luncheon.

The BOF (June 21) and the workshop (June 23) shared the theme of addressing the gender gap in HPC. Women in HPC has found that women make up between 5 and 17 percent of HPC users, researchers, and conference attendees. More broadly, reports show women holding only about a quarter of technology jobs, while in executive positions the figure is about half that.

The BOF and workshop programs focused on the importance of increasing diversity in the workplace, the effect of implicit and explicit biases, and the development of best practices that can be implemented by employers to improve diversity in their organizations. The networking lunch (June 22) offered the opportunity to celebrate the successes of women in high performance computing careers and provided a forum for women in HPC to make new contacts and friendships.

“We are bringing together women leaders and young career women in corporate organizations, research institutions, academia, and business for networking, mentoring, and sharing of knowledge,” read the invitation from WHPC organizers, “Come and meet the Women in HPC team, leading employers, network with other women in the community, and find out about the Women in HPC initiative.”

The luncheon brought together women from the vendor community as well as research and academia; however, representation from women starting out in their careers was not as high as hoped, said Collis. “We made an effort to target early career women, but universities send far fewer women to ISC than SC. I think they send their senior people to ISC and those are still primarily men,” shared Collis. “It’s a problem that we need to change.”

Toni Collis

WHPC aims to broaden participation of women and other underrepresented groups across all areas of HPC and attendance at SC and ISC plays a crucial role in career development. When the general audience of mostly men literally does not see or interact with their women peers, it leads to biases about what women are interested in or capable of. WHPC is shining a light on such challenges, exposing their existence and validating the experience of women who witness these biases and other forms of discrimination first-hand. (For insight into the self-propagating nature of biases, I refer you to this elucidating blog post by Lorena Barba, associate professor in the School of Engineering and Applied Science at the George Washington University.)

Dr. Collis kicked off the program by recalling sentiments shared by Carolyn Devany, president of Data Vortex Technologies, who was a panelist at the BOF the day before.

“‘This community is charged more than any other with the responsibility of charting our future course and the preservation of our planet. Women are in it for the long haul; we play an extremely important role,'” quoted Collis.

“I could not have said it better than that,” she continued in her own words. “HPC really is fundamental to changing the planet and making it a better place. If 51 percent of the human race is not involved and is not represented, that is an issue and so we are here today to make sure those women are represented. We are here to celebrate what we’ve achieved so far, to celebrate the role that women are already playing and to take this forward. I hope that you meet like-minded people, share ideas on how we can improve the situation for each other, and network. For those of you who are starting out in HPC and ready for the next step, meet your peers. For employers, meet the people who are going to shape the future of our community.”

Delivering the keynote for the luncheon was Marie-Christine Sawley, who is esteemed for her work on numerous high-profile HPC projects. Since 2010, Sawley has been director of the Intel Exascale Lab in Paris. Prior to this, she was a senior scientist for ETH Zurich on the CMS computing team at CERN for three years, and from 2003 to 2008 she was director of CSCS, the Swiss national supercomputing center.

Sawley traced the path that brought her into the HPC fold from her start as a plasma physicist to her current post at the Intel Exascale lab.

“In the arc of my career, because I was working so closely with universities – and still do today – I’ve been observing and questioning why we have so many difficulties attracting young women, not only in HPC but in computer science generally,” she said.

Sawley recalled being the only girl in an academic program with seven or eight boys, noting that while strides have been made toward greater gender equality, there is still a way to go. An ongoing paradigm shift in HPC could be key to this change.

“I think we need to drive this change,” said Sawley. “There are a number of things that have to do with interest, with bias, and with the way we address HPC and what it is good for. As a community we need to push the message further. Indeed, HPC has come from CFD, meteorology, modeling, and advanced materials as well as fundamental sciences – but in the last ten years it has moved a lot toward impacting our day-to-day life. And this is nothing compared to what will happen in the future. It’s impacting many things: the way we organize our transportation systems and the way we approach research and medicine. The world is made up of 51 percent females, and the numbers show that in HPC and in technology we are not close to even half of what the statistics would suggest.

“At the IDC breakfast, the presenters showed the innovation in HPC awards and the three main topics are things that are impacting our day to day life. One was parcel delivery; the second had to do with cancer research for children; and the third was looking at improved diagnostics for prenatal screening of genetic disease. And if you were at Raj Hazra‘s keynote, he gave a fascinating account of the connected car, and everybody is driving and buying cars.”

“Do we really want to have the capacity to think, produce, come to market, and communicate about these things, yet leave talent on the table? Probably not. This is the kind of thing we want to share here. In essence, for Intel this is a very important issue,” she said.

“You have heard that at Intel we are going through an evolution. We are shifting the accent from the PC market to things that have to do with datacenter technology and IoT. This is a major reorientation, and it’s really for the long haul. In doing that, we definitely know that we need all the talent, we need more diversity, and we need more capacity to engage in HPC.

“I have plenty of colleagues inside Intel who are working in HPC, and you don’t need to be specifically interested in technology development – there are plenty of different areas in which you can contribute to HPC, from communications, marketing, and finance to running the company. The message is that we need to be more vocal about how HPC is impacting our day-to-day life. It’s not only for looking at astrophysics – which, by the way, is very interesting, and I want to push that technology forward – but beyond that there are many important things impacting our day-to-day lives.”

At SC15, Intel announced a fellowship program in partnership with ACM SIGHPC aimed at increasing the participation of underrepresented groups – women students and those with diverse racial and ethnic backgrounds – in computational and data science graduate programs worldwide. The call for proposals for the Intel Computational & Data Science Fellowships went out in March, and the winners will be announced on July 31. Recipients will be formally recognized during SC16.

The luncheon event was held at no charge to participants thanks to the support of sponsors Intel, DDN, Data Vortex, and IBM.

“Bringing together women at different points in their careers is an incredibly empowering function of Women in HPC,” said DDN’s Molly Rector, who was at the event. “The lessons learned from each other, as well as the incredible networking opportunities, make the luncheon and the ongoing relationships an incredible value-add to my personal and professional life.”

Top 10 Things I Liked About ISC (Wed, 06 Jul 2016)

Kim McMahon reprises her Top 10 list of favorite things from the 2016 ISC High Performance show in Frankfurt.

I hope this look back reminds you of your ISC experience and puts a smile on your face!

The Energy

From the moment I walked onto the show floor, I could feel the energy, the excitement – ISC16 was here! The booths were filled with enthusiastic people ready to share all of their new technologies with the inquisitive attendees making their way through the aisles.

Networking

It’s such an important part of any event, and the opportunities to network at ISC did not disappoint! The melting pot of scientists, engineers, sales and marketing professionals, and media made it easy to enjoy engaging conversations across a wide range of topics. Great conversation + business card swap + firm handshake + a smile = networking success!

New Technologies

Product announcements as well as researchers and vendors showing off their new technologies – that’s the most interesting part! The Top500 announcement, Intel’s KNL, CoolIT’s new liquid cooling product, DDN’s new storage, Qarnot Computing’s cloud product, Penguin’s Frostbyte, the NAG and Cavium partnership, PRACE activities, and new announcements from SGI and EPCC, to name a few. There are many more – go to your favorite industry publication and check out the recent press releases for the full list of ISC announcements.

Student Cluster Competition

This is the future of our industry! Forming the team, building the cluster, running the tests – always exciting. Their motto sums it all up: “Serious hardware. Real science. Student supercomputing lives here.” Congratulations to Team South Africa on the win (their third)!

Vendor Showdown and the IDC Breakfast

These two events are always on my list of things to do while at ISC and SC. IDC provided its snapshot view of the market, this year with two presentations: an HPC overview and one focused on industry. The Vendor Showdown on Monday gives vendors time to present, but it’s the questions from the moderators that are the most fun to watch!

Women in HPC Events

The first WHPC luncheon was a combination of networking and a thank-you to the industry. Held at the Marriott, it drew over 50 people who came to visit, network, and get to know what Women in HPC is all about. Watch for their next networking event at SC16. Other WHPC activities at ISC16 included the BoF, a workshop, presentations, and interviews. There was a lot of talk about diversity, what it really means, and how it betters the industry. Stay tuned – the conversation continues at SC16.

The Marriott Frankfurt Hotel

Marriott hotels are kinda my thing, and with the Frankfurt Marriott so close to the convention center, it became a thing for many ISC goers as well. More networking opportunities and in a nice hotel to boot. If you didn’t know, Champions at the Marriott was THE bar to be at late night.

Social Media

There was quite a bit of buzz in and around ISC16. A lot of real-time posts, some attention-grabbing, some simply informative – either way, they all got people talking.

The Xand McMahon Booth

This event was especially important to me because it was the official launch of my partnership with Lara Kisielewska to combine her HPC marketing firm and the HPC division of my marketing firm into Xand McMahon. And what better way to announce our partnership than with a booth at ISC!

The Opportunity

At the end of the day, I am just so grateful that I have the opportunity to travel to events such as ISC, network with such a diverse group, and help some of them with their marketing strategies along the way. I truly feel as if I am living the dream.

About the Author

Kim McMahon has performed sales and marketing for more years than she cares to count. She writes frequently on marketing, technology, life, the world and how they sometimes all come together.

McMahon Consulting is a full-service marketing firm with over 15 years of experience in Enterprise Technical Computing and the high-end IT space. Xand McMahon is solely dedicated to HPC, co-founded by Kim McMahon and Lara Kisielewska. Together Lara and Kim love helping their clients see the differentiated value in their technology and watching them get that “a-ha!” look in their eyes. The two have more than 30 years of experience in this space and work with clients around the globe.

South Africa Team Claims Third ISC Student Cluster Championship
https://www.hpcwire.com/2016/06/23/south-africa-team-scores-third-isc-scc-win/
Thu, 23 Jun 2016 17:45:34 +0000

At an awards ceremony near the close of ISC 2016 in Frankfurt, Germany, this week, attendees cheered as Team South Africa took to the stage to collect their third HPCAC-ISC Student Cluster Competition championship prize from HPC luminary Thomas Sterling.

“I have to say that this is extraordinary,” said Sterling, who was helping officiate along with Gilad Shainer (of Mellanox). “Last year, I said those South African teams are pretty good. They came in second last year and first the two times before that. This is a remarkable performance.”

The win marked quite the accomplishment for the team, which had support from their home institution, the Centre for High Performance Computing (CHPC); their sponsors Dell and Mellanox; and the community at large. The student team lineup includes Andries Bingani, Ashley Naudé, Avraham Bank, Craig Bester, Sabeehah Ismail, and Leanne Johnson.

The road to a third ISC championship kicked into gear with a first-place finish in the annual SA-SCC event, under the guidance of team supervisors David Macleod and Matthew Cawood, both of CHPC.

A feature piece on the South African team by Elizabeth Leake of STEM-Trek, an HPCwire contributor, highlights their coaching style:

The CHPC won the ISC challenge in 2013 and 2014, but came in second to a Chinese team [Tsinghua University] in 2015. The pressure is on to reclaim their international title in 2016. It’s clear CHPC’s coaching methodology doesn’t emphasize winning, however.

“That’s not to say that we don’t think winning is important, it’s just that we have higher priorities. Our goal is to expose as many students as possible to the HPC field at a time in their education when they can take related courses. This means that year after year we self-impose disadvantages by fielding the youngest and least experienced team in the competition. The fact that we are the only team that does this, yet we’re consistently strong, is the greatest win we could have hoped for,” said David Macleod.

The team also reflects CHPC’s dedication to gender and racial equity with a diverse, mixed-gender roster. From the other side of the world, Team NERSC fielded the first all-woman team in the history of the Student Cluster Competition.

Looking at the system specs across the 12 cluster designs, what stands out is Team South Africa’s choice of accelerator. The team opted for NVIDIA Tesla K40 GPUs over K80s and was the only group to do so; they considered the K80 but were attracted by the K40’s lower power profile. Thanks to the team’s sponsors, Dell and Mellanox, their travel expenses were covered and they received some impressive hardware: 10 dual-socket Xeon E5-2695 v4 nodes (360 cores in total) connected by EDR InfiniBand. They used eight of these nodes for the application runs.

Team NERSC had the most Xeon cores, with eight nodes of two-socket 22-core E5-2699 CPUs totaling 352 cores, compliments of Intel and Cray, who delivered the hardware to the team in May.
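The core counts above follow directly from nodes × sockets × cores per socket. A minimal sketch (the helper function is hypothetical, assuming 18 cores per socket for the E5-2695 v4 and 22 for the E5-2699, which match the totals reported here):

```python
# Hypothetical helper: total core count for a homogeneous cluster.
def total_cores(nodes: int, sockets_per_node: int, cores_per_socket: int) -> int:
    return nodes * sockets_per_node * cores_per_socket

# Team South Africa: 10 dual-socket Xeon E5-2695 v4 nodes, 18 cores/socket.
print(total_cores(10, 2, 18))  # 360

# Team NERSC: 8 dual-socket Xeon E5-2699 nodes, 22 cores/socket.
print(total_cores(8, 2, 22))   # 352
```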

The Fastest Computer on the Continent

As if winning a third SCC gold title weren’t enough reason to celebrate, South Africa has also stood up a new supercomputer, named Lengau (“cheetah” in the African language Setswana). Benchmarked at 782 teraflops (LINPACK), the machine takes the number 121 spot, putting it in the top quartile of the TOP500 list that was announced on Monday. Lengau comprises 1,008 Xeon-based Dell PowerEdge server nodes filling 19 racks (including storage). It has a Dell storage capacity of five petabytes, and uses Dell Networking Ethernet switches and Mellanox FDR InfiniBand with a maximum interconnect speed of 56 Gb/s.

A Competition with Surprises

A community favorite along with its counterparts in Asia and in the US, the HPCAC-ISC Student Cluster Competition is jointly organized by the HPC Advisory Council (HPCAC) and ISC. This year, 12 teams from around the world came to build a small cluster of their own design and test their HPC mettle by optimizing and running a series of benchmarks and applications. The catch is to do so without going over the 3,000W power limit.

The first portion of the competition is benchmarking the system with High Performance Conjugate Gradients (HPCG) and High Performance Linpack (HPL). Next comes the bulk of the contest: four applications that comprise 80 percent of the overall score; only three are known ahead of time. This year’s lineup included the data-intensive Graph500 benchmark; Splotch (a ray-tracing algorithm used for visualizations); and WRF, a popular weather forecasting model. The surprise application, CloverLeaf, was a first for the competition; the hydrodynamics mini-app solves the compressible Euler equations on a Cartesian grid.
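For readers curious what CloverLeaf actually computes, the compressible Euler equations in their standard conservation form (this is the textbook formulation, not taken from the competition materials) are:

```latex
\begin{aligned}
\frac{\partial \rho}{\partial t} + \nabla\cdot(\rho\,\mathbf{u}) &= 0
  && \text{(mass)} \\
\frac{\partial (\rho\,\mathbf{u})}{\partial t}
  + \nabla\cdot(\rho\,\mathbf{u}\otimes\mathbf{u}) + \nabla p &= 0
  && \text{(momentum)} \\
\frac{\partial E}{\partial t} + \nabla\cdot\bigl((E + p)\,\mathbf{u}\bigr) &= 0
  && \text{(energy)}
\end{aligned}
```

where $\rho$ is density, $\mathbf{u}$ velocity, $p$ pressure, and $E = \rho e + \tfrac{1}{2}\rho|\mathbf{u}|^2$ the total energy, closed by the ideal-gas law $p = (\gamma - 1)\rho e$. CloverLeaf discretizes these on a structured Cartesian grid, which makes it a compact, highly parallelizable test of memory bandwidth and node performance.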

Teams are scored based on their performance on the benchmark runs (10 percent), a suite of test applications (80 percent), and their ability to articulate their strategy and results in front of a panel of expert judges (10 percent). At the awards ceremony, the top achievers are presented with prizes in five categories: first, second and third place, highest LINPACK performance, and fan favorite.