Dr. Eng Lim Goh on Why HPC Matters

In this video from SC14, SGI CTO Dr. Eng Lim Goh discusses why HPC Matters with Rich Brueckner from insideHPC. They then move on to the topic of Exascale and how SGI plans to get there.

The Nov. 17 HPC Matters Plenary was led by Dr. Eng Lim Goh, senior vice president and CTO at SGI, who demonstrated the vital nature of supercomputers across much of the world's economic, cultural, scientific, and social accomplishments. He focused on numerous real-world examples and explained how much of what is possible today would be impossible without HPC and the role it plays across all industries. Joining Dr. Goh on stage was Dr. Piyush Mehrotra, chief of the NASA Advanced Supercomputing Division, who spoke on HPC-based research that is leading us to revolutionary insights about the world.

Full Transcript:

insideHPC: Hi, I'm Rich. We're insideHPC. We're here at SC14 in New Orleans at the SGI booth. I'm here with Eng Lim Goh of SGI, the CTO of the company. How are you, sir?

Eng Lim Goh: It’s great to see you again, Rich. It’s been a while.

insideHPC: I want to start at the beginning, which was, you did a plenary this year with SGI. This was an event that had never been done before. It actually started before the conference. How did it go?

Eng Lim Goh: Yes, it's quite a number of firsts, right? It's the first plenary and the first time the organizers are doing it. They did a great job. The organizers told me that there were 1,700 people who showed up in the arena. The goal was to send the message, in a one-hour talk to a wide audience, that HPC matters. Co-presenting with me was Piyush from NASA, and both of us gave a set of customer examples of why HPC matters in five categories:

Human basic needs – why HPC matters to humanity in meeting basic needs – to reducing hardships, to commerce, entertainment, and then, once we have all that, to answering profound questions, like whether we are alone in this universe – that's the Kepler program. And then, going further back in time, using the Square Kilometre Array to look at the Dark Ages; then further back, the Planck satellite to look at the cosmic microwave background. And then the LHC comes in, to go even further back and try to recreate the different particles that existed at the time – the quark-gluon state. So, you can see how all the different instruments are linked up and how they generate massive amounts of data that need to be processed by high performance computing. It was good to put all of this together, and the effort paid off – there were 1,700 people watching it.

insideHPC: It's got to be very gratifying when you've worked so hard for something like that. Great. I want to change the subject just a little bit, because the next big leap for us in supercomputing – HPC does matter, but we want to do exascale. We've seen announcements of machines in the 150 [?] range that are coming in 2017, but where does SGI see exascale, and what are your next steps?

Eng Lim Goh: We see it in two groups of machines that ultimately – as you will hear more and more often at this conference – are coming together: the compute side of exascale and the high performance data analytics side of exascale. Slowly but surely, the two will come together. Currently, we have two separate systems for that. We have the ICE machine that has a path to exascale: the SGI ICE X, then the ICE XA that we just announced for release in 2015, and then ICE Exa for release in 2017.

insideHPC: And that's Exa for exascale, right?

Eng Lim Goh: Right – ICE X, ICE XA, ICE Exa. For that, we are focused on power, cooling, and interconnect with our partners – the topology for it. A large part of the investment is also in software: the resiliency, management, and performance side of software. That would be the quick summary of ICE Exa on the compute side. But there is also, just as importantly we believe, a data analytics side, and this is where we are preparing the UV to be the pre- and post-processor, the in situ visualizer and in situ analyzer, as well as the system that will be streaming data for computational steering as the exascale application is running.

Computational steering, we believe, will become more and more important. In situ computational steering will become more and more important, because for an exascale application, you really don't want to wait until the completion of the run to find out whether you've got it right. You want to watch it while it's running. So, this is where we're investing our R&D in UV. The key advantage of UV is coherent shared memory – not just shared memory, but coherent shared memory. And because of that, you truly can leave your data intact without having to distribute it. So, those are the two areas of investment as we move forward to exascale. We have already found early users of the new generation of UV machines. One particular partner of ours is looking at how we can take, for example, a year's worth of Twitter data. Each day, there are about half a billion tweets; a year would be about 200 billion tweets, right? The idea is to dump it all in a single UV and, within a five-second turnaround time, be able to answer questions about it.
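The steering idea described above – inspecting and adjusting a long run while it is in flight rather than after completion – can be illustrated with a minimal sketch. The function names, the observer hook, and the toy "simulation" below are all hypothetical illustrations of the pattern, not SGI's actual software.

```python
# Minimal sketch of computational steering: the simulation exposes its
# intermediate state after every step, so an observer can inspect it
# in situ and optionally return an adjusted parameter (the "steering").

def simulate(steps, param, observer):
    state = 0.0
    for step in range(steps):
        state += param          # stand-in for one timestep of real work
        # In-situ hook: hand intermediate state to the observer; a
        # non-None return value becomes the new parameter.
        new_param = observer(step, state, param)
        if new_param is not None:
            param = new_param
    return state

# Example observer: halve the parameter once the state grows past 5.0,
# instead of waiting for the whole run to finish before reacting.
def watchdog(step, state, param):
    if state > 5.0 and param > 0.5:
        return param / 2
    return None

final = simulate(10, 1.0, watchdog)
# The run self-corrects mid-flight: final == 8.0, not 10.0
```

The point of the pattern is the hook inside the loop: at exascale, waiting for the run to end before checking it wastes the whole allocation if a parameter was wrong.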

insideHPC: Do some graph processing and stuff and find patterns?

Eng Lim Goh: Yes. Even just a simple filter: tell me all the tweets, out of 200 billion, that talk about supercomputing. And then, straightaway, they're mapped onto a global atlas for you, showing where all the tweets are appearing. You'll find a geographical location interesting – like New Orleans – point to it, and it scrolls through the tweets talking about supercomputing there. So, I've tried that, and the goal is to have a five-second turnaround time. Of course, we've not reached 200 billion yet; we are using a smaller sample of it. The goal is to put, if possible – if the drivers support it – for example, 32 cards from NVIDIA in a UV with 32 sockets and handle all 200 billion tweets. That would be a long-range goal.
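The query Dr. Goh describes – keyword-filter a corpus held in memory, then group the hits by location for a map view – can be sketched in a few lines. The tuple layout, field names, and tiny sample below are stand-ins for illustration; the real system holds the full corpus in UV's shared memory.

```python
# Sketch of the in-memory query: filter tweets by keyword, group the
# matches by geographic location so they can be drawn on an atlas.

from collections import defaultdict

def filter_tweets(tweets, keyword):
    """Return tweets containing the keyword, grouped by location."""
    by_location = defaultdict(list)
    needle = keyword.lower()
    for text, location in tweets:
        if needle in text.lower():
            by_location[location].append(text)
    return by_location

# Toy stand-in for the 200-billion-tweet corpus.
sample = [
    ("Excited about supercomputing at SC14!", "New Orleans"),
    ("Lunch was great today.", "Austin"),
    ("Supercomputing powers weather forecasts.", "Reading"),
]
hits = filter_tweets(sample, "supercomputing")
# hits["New Orleans"] -> ["Excited about supercomputing at SC14!"]
```

At full scale the same logical operation runs as a parallel scan over memory-resident data, which is why the coherent shared memory mentioned above matters: the corpus never has to be partitioned and redistributed per query.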

insideHPC: That's exciting because there's no telling what you might discover if you had this capability.

Eng Lim Goh: That's right. In fact, someone just dumped the entire Wikipedia right into memory, and then wrote a script that said: for every year in the history of Wikipedia until today, take all countries mentioned together, connect them by a line, and plot it on a world atlas. If the mention was sensed to be positive, a green line; if the mention was sensed to be negative, a red line. And he just ran the animation.
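The procedure he describes amounts to building a co-mention graph per year, with edges colored by sentiment. A minimal sketch follows; the data, the country extraction, and the one-line sentiment scorer are toy stand-ins for what would be a full NLP pipeline over the Wikipedia corpus.

```python
# Sketch of the Wikipedia experiment: for each year, connect every pair
# of countries mentioned together, coloring the edge green for positive
# sentiment and red for negative, ready to plot on a world atlas.

from itertools import combinations

def co_mention_edges(snippets_by_year, sentiment):
    """Yield (year, country_a, country_b, color) edges for plotting."""
    edges = []
    for year, snippets in snippets_by_year.items():
        for text, countries in snippets:
            color = "green" if sentiment(text) >= 0 else "red"
            for a, b in combinations(sorted(countries), 2):
                edges.append((year, a, b, color))
    return edges

# Toy sentiment scorer: negative if the text contains a conflict word.
def toy_sentiment(text):
    return -1 if any(w in text.lower() for w in ("war", "conflict")) else 1

data = {
    1914: [("War broke out between A-land and B-land.", {"A-land", "B-land"})],
    2000: [("A-land and B-land signed a trade pact.", {"A-land", "B-land"})],
}
edges = co_mention_edges(data, toy_sentiment)
# 1914 edge is red; 2000 edge is green
```

Animating the edge list year by year is what surfaced the red flashes he mentions next.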

And then, suddenly, he saw flashes of red in different years. He started to ask the question: could it be that the flash comes before the actual negative event? And if that is the case, can Wikipedia be a predictive tool?

That's pretty exciting. Could this sum total of human knowledge – the communication, the knowledge, the updates people put in – could this sum total be a predictive or forecasting tool for social events? This is a profound way of thinking about things, right? So, these are new things that you wouldn't think of.

insideHPC: Well, I guess when we say HPC matters, we’re not kidding, right [laughter]?

Eng Lim Goh: Yes. Yes. In fact, go look up the video when it's published on HPC Matters, the first plenary. We put all of these examples in there.

insideHPC: Yeah. We're really looking forward to seeing that.

insideHPC: Well, thanks for sharing that with us today.

Eng Lim Goh: I’m very glad. It’s been a great show here and I think you guys are doing great work, as always. And good job, Rich. We need you in the community. Thank you very much. See you later.
