SofCheck preps ParaSail parallel language

LONDON – SofCheck Inc., best known as a vendor of software analysis and verification technology and Ada compilers, is working on a parallel programming language called ParaSail, which has been presented at two technical conferences recently.

ParaSail – short for Parallel Specification and Implementation Language – is being developed "from scratch" and is particularly aimed at safety-critical systems, where C/C++ and parallelizations of C/C++ are notoriously unsafe. It is also intended to make use of the abundant processing resources that will soon become available. It is SofCheck's assertion that chips with more than 64 cores will become relatively easy to make, but that they will prove difficult to program effectively without a well-constructed parallel programming language.

SofCheck (Burlington, Mass.) is led by chairman and chief technology officer Tucker Taft who is well known as an industry leader in compiler construction and programming language design. It was Taft, while employed at Intermetrics Inc., who was the lead designer of the Ada 95 programming language and who helped add formal methods to parts of the Ada language.

Taft presented a paper entitled "An introduction to ParaSail" at the Ada Europe conference held in Valencia, Spain, in June 2010, and a paper by the same title at the Open Source Convention being held this week in Portland, Oregon. Taft was also scheduled to present a tutorial on experimenting with ParaSail, for which there is now a prototype compiler.

According to the abstract of Taft's Oscon paper: "ParaSail is a new language, but it borrows concepts from other programming languages, including the ML/OCaml/F# family, the Lisp/Scheme/Clojure family, the Algol/Pascal/Modula/Ada/Eiffel family, the C/C++/Java/C# family, and the region-based languages, especially Cyclone."

The language is described as being simpler than many others, with only four basic concepts: modules, types, objects, and operations. It does not include pointers or exceptions, and it uses stack- and region-based data storage management rather than garbage collection. It supports implicit parallelism, so that programmers have to work to achieve sequential operation rather than the other way around: by default, program constructs run in parallel. The language also promotes a formal approach to software, with compile-time checks for correctness with respect to formal annotations.
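No ParaSail source appears in the article, but the parallel-by-default idea can be illustrated by contrast: in a conventional language such as Go, the programmer must fork and join independent subcomputations by hand, whereas ParaSail's compiler is meant to do this implicitly for side-effect-free code. A minimal sketch, where the functions `f` and `g` are hypothetical stand-ins for independent, side-effect-free work:

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical side-effect-free computations. In ParaSail, the compiler
// could evaluate calls like these in parallel automatically; in Go the
// parallelism must be spelled out explicitly.
func f(x int) int { return x * x }
func g(x int) int { return x + 10 }

func main() {
	var a, b int
	var wg sync.WaitGroup
	wg.Add(2)

	// Explicitly fork the two independent halves of f(3) + g(4).
	go func() { defer wg.Done(); a = f(3) }()
	go func() { defer wg.Done(); b = g(4) }()

	wg.Wait() // join before combining the results

	fmt.Println(a + b) // prints 23
}
```

The point of the contrast: in ParaSail the fork/join boilerplate disappears, and it is sequential ordering, not parallelism, that the programmer must request explicitly.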

In the Oscon paper Taft was set to describe the status of a prototype compiler and a ParaSail Virtual Machine.

Will this new language do the CPU resource allocation at compile time? That kind of feature could give the system predictable performance on a multicore CPU with distributed allocation of tasks.

Who cares about legacy software? That's what gets left behind. As for this approach of parallel by default, where you work to make something sequential: it is probably a good idea, since that is how most programmers are going to think. Sequences, rather than autonomous code running around. It fits with someone coordinating a multitasked operation all at once: all the plans are laid out with contingencies, and the contingencies that cannot be preplanned must have a route or path to handle them. Who to take those unplanned contingencies to also needs to be set forth. Some things should be done at the lowest level possible, by the person there, and others must be escalated; knowing what to escalate, how high, and to whom is a challenge for parallel processing.

Another one on the block. There have been quite a few parallel languages, which are being used in select areas amenable to parallelization but not in the general computing domain. The issue with adoption is the huge wall of legacy code; until that is addressed there is not much hope, since nobody wants to devote expensive engineering resources to rewriting legacy software.

Lua, Python and Perl are not dead. In fact, a study at the University of Dusseldorf found that people were twice as productive in these languages as in C/C++/Java. The current linguae francae are pretty awful. 50 years of Algol is enough.

As an intermediate development step, can processes which can be proven to be free-standing be carved out for parallel processing while other processes remain sequential? Perhaps algorithmic emphasis needs to be devoted to determining which subroutines can be proven to be independent, and where programming checkpoints need to be introduced to ensure that no thread goes any further until all previous threads have completed and their results have been made available for subsequent calculations.