He points out Java's traditional benefits over the classic HPC languages (C++, Fortran): faster development, higher code reliability, portability, and adaptive runtime optimization. He also mentions that an experienced C coder got better results with Java than with C, despite having more experience with C than with Java.

He says there are some caveats, though:

Pure Java libraries aren't well suited for HPC. This one's interesting, because I thought the whole point of the post was to say "Java is okay for HPC," only for him to say "Java is okay for HPC as long as you don't rely on Java libraries."

He then puts in a heavy ad for Ateji PX, which lets you spawn parallel branches with a new "||" operator. It also supports parallel iteration and interprocess communication.
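For comparison, here's roughly what spawning two parallel branches looks like in plain Java with an ExecutorService: the kind of boilerplate a `||` operator is meant to replace. This is my own sketch (class and method names are mine, not from Ateji PX):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Plain-Java equivalent of running two branches in parallel and joining them.
public class ParallelBranches {
    static long sumRange(long from, long to) {
        long s = 0;
        for (long i = from; i < to; i++) s += i;
        return s;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        // Each submit() is roughly one "||" branch.
        Future<Long> lo = pool.submit(() -> sumRange(0, 500_000));
        Future<Long> hi = pool.submit(() -> sumRange(500_000, 1_000_000));
        long total = lo.get() + hi.get(); // join both branches
        System.out.println(total); // prints 499999500000
        pool.shutdown();
    }
}
```

The point isn't that this is hard, but that the thread-pool plumbing and Future joins are exactly what a language-level parallel operator hides.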

Things like this are becoming really common. Even if you ignore distributed products like Terracotta or Coherence (process distribution and data distribution, respectively), there's also Akka (an actor-model framework, distributed, available for Java and Scala) and Erjang (a JVM-based Erlang VM).

I agree with his remarks. We also use Java in scientific computing because of its ease and higher reliability. However, in order to benefit more from parallel processing resources, we recently started converting a Java simulator to C++. We plan to use a hybrid of NVidia graphics cards and conventional multi-threading to speed up the simulation.

While I see significant but limited adoption of Java for HPC in the scientific areas (weather analysis, earthquake prediction, particle accelerator number crunching, genome crunching, etc.), Java is used everywhere in HPC in the financial services industry, and more and more in the insurance industry (which still has the other foot stuck on the mainframe ;-).

Over the next year or two, you'll see more and more use cases for Java in HPC. In just the past year, we converted a plant (seed) genetic analysis system from C to Java (speeding it up immensely!), we've seen some big Java projects at CERN, we've moved several pharma HPC systems to Java (drug and genetic analysis work), and helped some insurance companies both build out Java-based HPC infrastructure and encapsulate their legacy mainframe systems within an SOA. So I think the trend is moving toward open systems and higher-level languages (Java compared to C, for example), but there's no way that the hundreds of millions of lines of FORTRAN and C will get replaced overnight. Besides, for single-threaded crunching, FORTRAN and C still have an edge (while Java has a significant advantage in scale-out environments).

One of the reasons I think Java will accelerate its adoption in the scientific area is that Java is getting Infiniband support. If you were at JavaOne this year, you probably heard all about the new Oracle Exalogic machine. Exalogic is based on Infiniband (IB) from the ground up, with the ability to chain up to 8 Exalogic machines (each with 30 computational blades) together at Infiniband wire speed (i.e. no additional hops). Once user-mode, hardware-virtualized IB is surfaced within the JVM, the cost of inter-machine HPC (an area dominated today by C, thanks to the low-level messaging and RDMA libraries available for it) will be negligible, for several reasons: first, the communication will be conducted in user mode (no switch to kernel mode); second, it will not involve thread switches on the sending side; third, it will not require interrupts or thread switches on the receiving side (this is huge!); and fourth, it will support an asynchronous I/O (AIO) model, including for the IB RDMA capability.

Does real time fall under HPC? I have recently seen a lot of articles and examples showing Java performing on par with or better than C/C++, but there hasn't been much talk about how Java (especially its garbage collector) performs in real time (or near real time) scenarios.

It really depends on what you define as real time, i.e. how much of a pause is tolerable. For many soft real-time systems, Java works fine (limit the allocation rate, and use CMS with a sufficiently over-sized heap). Java, like most general-purpose languages and OSs (like Linux and most flavors of C), is not appropriate for hard real-time systems, since any disk I/O (for example) can take an indeterminate amount of time.
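As a rough illustration of the CMS tuning mentioned above (heap size and jar name are placeholders; flag names are from HotSpot JVMs of the CMS era), a soft real-time launch might look something like:

```shell
# Over-size the heap so CMS has headroom, and start concurrent
# collection early so the old gen rarely fills up mid-request.
java -Xms4g -Xmx4g \
     -XX:+UseConcMarkSweepGC \
     -XX:+CMSParallelRemarkEnabled \
     -XX:CMSInitiatingOccupancyFraction=60 \
     -XX:+UseCMSInitiatingOccupancyOnly \
     -jar myapp.jar
```

The exact numbers depend entirely on your allocation rate and latency budget; the principle is simply to trade memory for pause time.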

It'd be interesting to see a list and categorization of distribution of data and processing with the products in this category.

Most products fall into only one category (distribution of either data or processing), with tasks being farmed out through explicit JVM participation (i.e., how DSO might do it: you spin up a number of JVMs and they watch a data structure for tasks, a strategy someone asked me about yesterday), or data being centralized in a datastore separate from the processing distribution (or passed around in remote calls).

Some products provide an API for distributing processing *and* data - Coherence, for example, adds an API to run invocations over their data grid (which they claim to have "invented" in 2002 - and, Oracle, assuming Tangosol actually did that, you're not Tangosol. Plus, JINI existed in 1999. Sorry.)

Of course, in my opinion, GigaSpaces does it right. We colocate data with processing, so data appears in-process to operations, even though it's transactional and replicated, and our results are produced via the Future classes, so using distributed processing in GigaSpaces is not significantly different from using Executor services.

Routing is automatic (and configurable, if that's what you need) so processes go to where the data is. I'm sure Cameron will say "no, we do that, you don't" or "no (tm), Oracle(tm) does(tm) it(tm) better(tm)", but I've always been more impressed personally with GigaSpaces' approach and mindset than Tangosol's - enough that I left TheServerSide to go to GigaSpaces.
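To make the colocation idea concrete, here's a toy model (my own code, not GigaSpaces' actual API): each partition owns a slice of the data and its own executor, tasks are routed to the partition that owns their key and run in-process there, and the caller just sees a Future.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Toy model of data/processing colocation: route each task to the
// partition holding its key, so data access happens in-process.
public class ColocatedGrid {
    static class Partition {
        final Map<String, Integer> data = new ConcurrentHashMap<>();
        final ExecutorService exec = Executors.newSingleThreadExecutor();
    }

    final Partition[] partitions;

    ColocatedGrid(int n) {
        partitions = new Partition[n];
        for (int i = 0; i < n; i++) partitions[i] = new Partition();
    }

    // Key-hash routing: the "automatic routing" from the comment above.
    Partition owner(String key) {
        return partitions[Math.floorMod(key.hashCode(), partitions.length)];
    }

    void put(String key, int value) { owner(key).data.put(key, value); }

    // The task runs on the partition that holds the key; the caller
    // gets a Future, just like with a local ExecutorService.
    Future<Integer> increment(String key) {
        Partition p = owner(key);
        return p.exec.submit(() -> p.data.merge(key, 1, Integer::sum));
    }

    void shutdown() {
        for (Partition p : partitions) p.exec.shutdown();
    }
}
```

Real products add replication, transactions, and failover on top; the sketch only shows why, from the caller's side, this looks like ordinary Executor/Future usage.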

I didn't ask Oracle if I could go to them.

I could be wrong, but the concept of tuple spaces predates JINI. I'll have to dig through my cache of academic papers on tuple spaces and see when the original paper was published. According to Wikipedia, tuple spaces and the Linda language date back to the early '90s.

To me, a distributed cache and a tuple space are different things. Even though one can use a tuple space like a cache, it's not the same thing as a distributed cache. We should avoid equating the two, since each was designed to solve a different problem.
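A minimal sketch of the distinction (my own toy code, not the JavaSpaces or GigaSpaces API): the defining tuple-space operation is `take`, a blocking coordination primitive that removes the tuple and hands it to exactly one consumer, whereas a cache's `get` is non-blocking and leaves the data in place.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Toy tuple space: write() publishes a tuple; take() blocks until one is
// available and REMOVES it. That destructive hand-off is the coordination
// semantic a plain cache get() doesn't have.
public class ToyTupleSpace<T> {
    private final BlockingQueue<T> tuples = new LinkedBlockingQueue<>();

    public void write(T tuple) throws InterruptedException {
        tuples.put(tuple);
    }

    public T take() throws InterruptedException {
        return tuples.take(); // blocks until a tuple exists, then removes it
    }
}
```

Real tuple spaces match on templates rather than FIFO order (and add read, leases, transactions); this sketch only shows why take-based coordination and cache-style lookup solve different problems.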

Joe: My point was that GigaSpaces provides a distributed capability - not just a distributed cache. Cache was mentioned only with reference to Coherence.

If I didn't know better from your posts here and on other threads, I'd start to think that you really have a serious Oracle fetish ;-)

Joe: Some products provide an API for distributing processing *and* data - Coherence, for example, adds an API to run invocations over their data grid (which they claim to have "invented" in 2002 - and, Oracle, assuming Tangosol actually did that, you're not Tangosol. Plus, JINI existed in 1999. Sorry.)

Wow. I don't even know where to start. What we invented in 2002 (among other things) was a peer-to-peer, fully HA, dynamically-partitioning, in-memory data management system. Peer-to-peer meaning no centralized catalog or decision maker; in other words, no single point of failure and no single point of bottleneck.

I'm not sure where your derision comes from. As an engineer, I'd expect that you'd give credit for technical excellence where credit is due. If someone did it before we did it, I'm sure Google can help you find it. (Hint: It was only years later that our competitors were able to start offering similar capabilities.)
