Posted
by
samzenpus
on Wednesday August 18, 2010 @04:35PM
from the in-the-palm-of-your-hand dept.

aarondubrow writes "Researchers at MIT have created an experimental system for smart phones that allows engineers to leverage the power of supercomputers for instant computation and analysis. The team performed a series of expensive high-fidelity simulations on the Ranger supercomputer to generate a small "reduced model" which was transferred to a Google Android smart phone. They were then able to solve engineering and fluid flow problems on the phone and visualize the results interactively. The project proved the potential for reduced order methods to perform real-time and reliable simulations for complicated problems on handheld devices."

It sounds like the supercomputer generated an algorithm for the smartphone to run. I guess they can call that "leveraging the power of a supercomputer" but implying the phone app is doing supercomputing stretches things a bit far. I call misleading headline.

This is a university PR piece, and these are notorious for being vague about what they are actually claiming.

It sounds like the supercomputer is basically doing all the grunt work and the phone is doing something analogous to interpolating the results. For example, if the supercomputer supplies pre-computed results for some question for parameter alpha=1.0 and alpha=2.0, and the user selects alpha=1.5, then the phone will interpolate the two supplied results and get an answer that will (if the interpolation is well-behaved) be close to the true result.
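
A minimal sketch of what that interpolation scheme would look like (the alpha values and results below are invented for illustration):

    # Hypothetical phone-side interpolation between two precomputed
    # supercomputer results, keyed by the parameter alpha.
    precomputed = {1.0: 42.0, 2.0: 58.0}  # made-up offline results

    def interpolate(alpha, a0=1.0, a1=2.0):
        t = (alpha - a0) / (a1 - a0)
        return (1 - t) * precomputed[a0] + t * precomputed[a1]

    print(interpolate(1.5))  # 50.0, halfway between the two stored answers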

What the phone is doing is "reduced order modeling", which (if the article is using the term accurately) means finding a simple set of equations whose solutions provably approximate a far more complex system of equations. It's not just interpolation (lookup tables, or machine learning from a training ensemble): reduced order models actually have dynamics in them.
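
To make the distinction concrete, here is a toy projection-based reduction in the POD/Galerkin spirit (a generic illustration, not the MIT group's certified reduced basis method): the expensive stage builds a small basis from full-model snapshots, and the cheap stage evolves the dynamics entirely in that small space.

    import numpy as np

    # Toy model reduction: full model x' = A x (n large); reduced model
    # y' = (V^T A V) y (k small). Generic sketch, not the authors' code.
    n, k, dt = 1000, 10, 0.01
    rng = np.random.default_rng(0)
    A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))  # stand-in operator

    # "Supercomputer" stage: run the full model, keep snapshots, SVD them.
    x = rng.standard_normal(n)
    snapshots = []
    for _ in range(200):
        x = x + dt * (A @ x)
        snapshots.append(x)
    V, _, _ = np.linalg.svd(np.column_stack(snapshots), full_matrices=False)
    V = V[:, :k]          # k-dimensional basis
    A_r = V.T @ A @ V     # the k-by-k "reduced model" shipped to the phone

    # "Phone" stage: time stepping costs O(k^2) per step instead of O(n^2).
    y = V.T @ rng.standard_normal(n)
    for _ in range(200):
        y = y + dt * (A_r @ y)   # the dynamics live here, in the small model
    x_approx = V @ y             # lift back to full space for display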

The potential innovation here, as I see it, is that it takes some supercomputing effort to build the reduced order model for a specific problem. So you pay for the supercomputer once, up front, and after that the phone can run the resulting model on its own.

I just re-read the article. What it sounds like they're doing is having the supercomputer craft a reduced order model which is optimized to a particular range of parameters. That suggests to me that they're constructing a perturbative model about some fixed solution that the supercomputer produces. Perturbative approximations are more accurate the closer they are to the "reference" solution. So the innovation appears to be: the user can specify what set of parameters they want to perturb about, and therefore construct a custom model which is optimized to perform well in the parameter range the user is interested in.
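
In symbols, my reading of what they're doing (this expansion is my gloss, not a formula from the article):

    u(\alpha) \approx u(\alpha_0) + (\alpha - \alpha_0)\,\left.\frac{\partial u}{\partial \alpha}\right|_{\alpha_0}

with the error growing as |alpha - alpha_0| does, which is why you'd want the reference solution u(alpha_0) computed near the parameter range you actually care about.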

Does it still hurt that the G1 moved more units in its first minute on the market than the Microsoft Kin did in its whole lifespan? That would be nice, if it bothered you. I subsist on your pain. I'm a pain vampire that way.

The Google phone is still available through other vendors, though not from Google, but it did what it was intended to do: convince vendors that there was a market for the Android candybar phone. The point is proven, and there's no more need for the proof because the idea has taken off. Google dropped direct sales once the point was made.

It's funny how a phone which didn't sell well keeps showing up in press releases.

The iPhone?

(Apple have about 3% of the market, yet get a mention several times a day in any random Slashdot story; to put things in perspective, Nokia ship twice as many phones per quarter as Apple have ever sold, and even just one of their many products, the 5800, has sold as many units as or more than the original iPhone. Android has now already overtaken Apple btw, and is the fastest growing platform, whilst Apple are actually losing ground.)

Good point, and valid too. But to be fair, the press doesn't write many stories about nuts, bolts, and screws (sold in the billions of units), but they'll write about nice shiny new cars made up of nuts, bolts, and screws. The point is, utility devices are boring to the press, but put something shiny in front of them and they'll gobble it up. Not to mention that most in the media segment are of the artsy-fartsy type and therefore are more likely to side with Apple. Remember, Apple users were constantly fighting an image war long before the iPhone came along.

The money quote: "This is not the first time that model reduction algorithms have been used to ameliorate the complexities of large-scale physical simulations. The advantage of the system designed by Knezevic and his colleagues is its rigorous error bounds, which tell a user the range of possible solutions, and provide a metric of whether an answer is accurate or not. The error bounds are based on mathematical theory developed in Prof. Patera's research group at MIT over a number of years."

The research is about error bounds on coarse-grained models. The smart phone is just hype.

Everyone is spamming Slashdot, and the people voting on the firehose are generally too lame to understand it. Throw "reduced order methods to perform real-time and reliable simulations" at them and they click the + just to look smart.

I had images in my brain of an article about getting Torque and MPI (something that, to my knowledge, nothing outside practicality stops from working) plus some sort of auto-meshing running on Android phones, or some such (though using WiFi as an interconnect makes me shudder). *That* would be phone supercomputing; this is *not*.

The smart phone is not totally hype. It serves as a kind of proof of concept or a demo. If the reduced model can be run on a smart phone quickly and accurately enough, it can be run on similar embedded devices. This could possibly be commercialized pretty quickly.

Yep. The research is good, but the smartphone is just an angle. Sorta like if I write an adventure game and say "it could be the first text adventure played on a space station" just because it isn't actually incompatible with the laptops on the ISS.

So... if you analyze a problem and discover you can get mostly accurate results from a simple algorithm, you don't need a supercomputer anymore? What a concept! I'm going to go write the first physics simulator for personal computers!

Seriously, the cool bit is that they're generating these reduced models programmatically. But the way it reads, it sounds like the reduced model itself, and the fact that it runs on smart phones, are the important parts.

And that sounds accurate to me. It's a demonstration, among other things, that you can control a complex system, that originally required the efforts of a supercomputer to model, with far simpler tools.

I remember when 100 MFLOPS was munitions-grade computing. Now you have 1 GHz in your pocket and four 3 GHz cores hooked up to 450 cores at 1.5 GHz, and you still think Crysis is stuttering even in low res...

...but I'm going to go ahead and argue that they are not "performing supercomputing on a phone", because that kind of marketing doesn't belong in research.

Yes, it could be very useful; I have no doubt it's just as useful as they claim. And yes, it allows someone in practice to solve a problem "in the field" with a phone, when otherwise a supercomputer might have to be used.

But the supercomputing was done on a supercomputer in advance, when the reduced model was calculated. It's just that instead of giving one specific answer for one specific input, the supercomputer returns an algorithm that will approximate the answer within known error bounds for a specified domain of inputs. Executing the algorithm isn't supercomputing (if it were, you couldn't do it in a few seconds on a phone); it's using the fruits of the earlier supercomputing that produced the algorithm.

I tend to agree, since I could also use my smartphone's terminal to log onto that supercomputer, run a thin client application as the GUI for setting up most of the processing to be done, let the supercomputer do its thing, and then have that thin client display the results visually. So no real supercomputing going on here...

This is just marketing at work.
from TFA:
The real impact of the system may come in the application of these methods to aircraft or automobiles, which use control systems to react to inputs from the environment in order to achieve optimal safety and performance. Examples include traction control in cars and stabilization systems in jet fighters. “If you have sensors feeding in data to the reduced order model system, then it could solve the equation corresponding to the input data, and indicate the appropriate response.”

It's a table lookup where the things you look up in the table are easily-solved algebraic equations valid across the small domain of a single table entry, instead of a continuous model in differential equations across the entire state space.

The thing that can solve the continuous model in differential equations across the entire state space is a supercomputer. The thing that can chop the continuous model up into a table of simple algebraic equations is also a supercomputer. The thing that can look up a table entry and solve its simple algebraic equations is a phone.
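
A bare-bones sketch of that table-of-local-models idea (all coefficients invented for illustration):

    # Hypothetical table: each parameter cell stores coefficients of a simple
    # local fit produced offline; the phone just looks up and evaluates.
    table = {
        (0.0, 1.0): (2.0, -0.5),   # alpha in [0,1): answer ~ 2.0 - 0.5*alpha
        (1.0, 2.0): (1.5,  0.3),   # alpha in [1,2): answer ~ 1.5 + 0.3*alpha
    }

    def solve(alpha):
        for (lo, hi), (c0, c1) in table.items():
            if lo <= alpha < hi:
                return c0 + c1 * alpha   # cheap algebra instead of a PDE solve
        raise ValueError("alpha outside the precomputed state space")

    print(solve(1.5))  # 1.95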

Yes, we know that reduced order models already exist. The new thing here is that you can input the parameter range you want, wait a few hours, and the supercomputer sends you back a custom reduced order model optimized for the parameter range you care about. You can then apply this model "in the field" to the situation you're dealing with. It's supposed to be useful for situations where the details of the problem aren't known ahead of time, and you can't pre-compute the reduced order model.

The lattice QCD people, at least, are porting their code to CUDA just as fast as they can. The bottleneck right now is that there's no good way to get multiple GPUs to communicate (quickly). So, for the largest problems (simulating a 64x64x64x192 lattice), you still need a conventional supercomputer (like Ranger, the one in the article here), because it's just too huge to put on a single GPU and multi-GPU doesn't scale well.

But for smaller problems (like a 24x24x24x64 lattice), GPUs will be great, and people are already working on it.

This isn't just a UI; it's a reduction of the algorithm provided by a supercomputer. However, I believe that these opening lines are misleading, inaccurate, and likely an example of the writer not knowing what they're talking about:

What if you could perform supercomputing calculations in real-time, on your smartphone... Researchers... have created an application that does just that.

It doesn't do supercomputing because it isn't a supercomputer, it just makes an educated guess based on sitting at the supercomputer's knee and playing "monkey see, monkey do". Not a bad trick but the claim's overwrought.

I'm pretty sure you have a different definition of 'supercomputing' than the rest of the world and I do.

I'm fairly sure you also don't understand what they are talking about. They are attempting to imply they are doing the work of a supercomputer on the phone, when in fact the supercomputer is doing the grunt work of reducing the problem so that only a tiny bit of work is left for the phone to process, which makes it look impressive. That tiny bit just happens to be the most useful bit for a person to play with.

If you replace "educated guess" with "first-order approximation", it sounds a lot better -- and, in fact, this happens in the sciences all the time. But that's just what a first-order approximation is: it's a guess (based on the first term of a series) that is educated (based on some belief that the subsequent terms are smaller).
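
For the record, a first-order approximation is just the leading terms of a Taylor expansion, with the neglected terms supplying the error estimate:

    f(x) = f(a) + f'(a)\,(x - a) + O\big((x - a)^2\big)

The "educated" part is knowing the O((x-a)^2) remainder is small in the regime you care about.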

I assure you, I understand supercomputing well enough, and your comments on supercomputing don't counter what I was saying. Your description of the action of their software:

Most of the work is done on a super computer, then a tiny UI layer is thrown on top

doesn't accurately describe model reduction algorithms. Another commenter pointed out that 'first-order approximation' is a better term. The phone doesn't 'finish the last bit' of processing; it makes a low-order approximation of the entire process.

I've been using FEMM lately for some magnetics stuff I've been working on. I would LOVE an android port, or some way to run simulations from my phone.

I don't *really* need it, but it's just funny how something like that is actually possible these days. We probably will have supercomputers in our hands someday. I mean, current phones already are supercomputers by the standards of what...? 30 years ago? 20 years ago?

Smartphones will become the tricorders of the future; it's inevitable. -Taylor

My stock Nexus One running Froyo (Android 2.2) gets an effective 34 MFlops on Linpack (and I didn't even kill the background tasks). This is better than 1969's top supercomputer (theoretical peak of 36 MFlops, effective ~10) and equivalent to 1974's top supercomputer, the CDC STAR-100 [wikipedia.org], which had a theoretical peak of 100 MFlops but much lower real-world performance. The Nexus One cost me $600 with tax and shipping. The CDC 7600, which is easily beaten, cost $5 million in 1970s dollars.

Supercomputers are big. Even when idle they still require lots of power and cooling, so ideally you want your supercomputer to be 100% utilized all of the time. That's why most supercomputers are "over-subscribed" and have batch schedulers (Moab/Torque, PBS, LSF, etc.). Users submit jobs, and the scheduler goes about placing those jobs on the supercomputer in a way that keeps utilization as close to 100% as possible. This means that typically when you submit a job it will not run immediately.
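
For anyone who hasn't used one of these machines, here's roughly what that submission flow looks like (a generic sketch assuming a Torque/PBS-style cluster; the job script contents and the executable name are invented):

    # Generic sketch of batch submission on a Torque/PBS-style cluster.
    # 'build_reduced_model' and its flags are hypothetical placeholders.
    import subprocess, tempfile, textwrap

    job_script = textwrap.dedent("""\
        #!/bin/bash
        #PBS -N reduced_model_build
        #PBS -l nodes=4:ppn=16
        #PBS -l walltime=02:00:00
        cd $PBS_O_WORKDIR
        mpirun ./build_reduced_model --alpha-min 1.0 --alpha-max 2.0
    """)

    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(job_script)

    # qsub queues the job and returns immediately; the scheduler decides
    # when it actually runs, which is why results are never instant.
    print(subprocess.run(["qsub", f.name], capture_output=True, text=True).stdout)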

You don't understand how this works. You do the computation ahead of time on the supercomputer to build your reduced order model, which you download onto your phone and take out into the field. Once you've downloaded the model, you don't need the supercomputer any more. You can use the phone to do computations with the reduced model as much as you like. If you get into a regime where the predicted error from the reduced order model is too high, you can go back to the supercomputer and update the model. If the error stays within bounds, you never need to go back at all.
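
A runnable toy of that loop (the model and its error estimate here are stand-ins, not the real certified bounds):

    # Toy field workflow: build once offline, run cheaply on the phone,
    # rebuild only when the predicted error gets too high.
    TOL = 0.05

    def build_reduced_model(center):
        """Stand-in for the supercomputer stage: valid near 'center'."""
        def solve(alpha):
            value = 2.0 * alpha + 1.0              # fake reduced solution
            error_bound = (alpha - center) ** 2    # grows away from center
            return value, error_bound
        return solve

    model = build_reduced_model(center=1.5)        # done once, up front

    for alpha in (1.4, 1.6, 3.0):                  # "field" inputs
        value, err = model(alpha)
        if err > TOL:                              # outside the trusted regime
            model = build_reduced_model(center=alpha)
            value, err = model(alpha)
        print(alpha, value, err)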

I've been waiting over a decade for Java's mobile-object features to have an infrastructure that made deploying them worthwhile. Why send the logic around the network, instead of just sending the data to where the processors are? Well, with the vast majority of computing power now distributed among so many users, and mostly idling across the year, it's worth doing distributed supercomputing now. Folding@Home was a good start, but the distributed app should be generic enough that any crunching can run on it.