Hi Alexei,
> I would like to put a trivial though important question to you. For
> some time I have tried to find problems that are best solved*
> using Mathematica, and where parallel computing would be necessary. So
> far I have not succeeded in finding them. This tells
> me that I do not understand something important. I would be grateful if
> you could give a few examples. Very frankly,
> this mail of mine is not aimed at stating the uselessness of parallelization.
> On the contrary, I need a few realistic examples to make up my mind and
> begin to understand whether I may need to use parallelization myself.
>
>
> * Just to explain the point: molecular dynamics simulation, for instance,
> often requires parallelization, but there Fortran
> or C would probably be the methods of choice, rather than Mathematica. In
> analytical calculations, in contrast, Mathematica
> would be the method of choice, but in that case I do not see how
> parallelization could be of any benefit.
I think in the above you are implicitly making at least two
presuppositions that are not well justified, and probably plain wrong:
1) That whether or not an algorithm can successfully be parallelized has
anything to do with it being analytical or not.
Whether or not an algorithm can take advantage of parallelism depends
only on whether there are parts of the algorithm which can be executed
in parallel without suffering too much from communication overhead. It
has nothing to do with what those parts do: numerics, analytics or
anything else. Trivial examples of easily parallelizable algorithms are
the so-called "embarrassingly parallel" ones, where you run completely
independent calculations in parallel.
One example of analytical calculations which can profit from
"embarrassing" parallelism is multiloop calculations in quantum
field theory, where you have to calculate many, sometimes thousands, of
Feynman diagrams _analytically_, and in the end sum all of them up.
Since each of these diagrams can be calculated independently of the
others, an "embarrassingly" parallel algorithm would just distribute
their calculation over the available processors and gain almost linear
speedup, since there is practically no communication overhead.
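In Mathematica 7 terms this pattern is just ParallelMap over independent
jobs. Here is a toy sketch -- the integrals are made up and merely stand
in for the individual diagram calculations:

```mathematica
(* Toy sketch of "embarrassing" parallelism over independent symbolic
   jobs: each integral stands in for one diagram, is evaluated on
   whichever subkernel is free, and the results are summed at the end. *)
LaunchKernels[];  (* start the configured subkernels *)
terms = ParallelMap[
   Integrate[x^# Exp[-x], {x, 0, Infinity}] &,  (* independent analytic jobs *)
   Range[0, 9]];
Total[terms]  (* the only communication: collecting and summing the results *)
```

The only data that travels between kernels are the job descriptions and
the final results, which is exactly why the speedup is nearly linear.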
2) That Mathematica is not useful for anything but analytic calculations
(e.g. not for numerical work), probably because it is too slow.
For every useful language/system/tool there is a trade-off between
execution speed, development time, code maintenance and many other
aspects. Of course, if you neglect the amount of time and work you have
to invest, you will always be able to write code in C or Fortran that is
faster than Mathematica code. The same is true for assembler code, which
will always outperform C or Fortran code if decently written. Both
statements of course assume that the programmer knows his business;
otherwise Wolfram's developers, or the authors of the compiler in
question, will probably do a better job, although their code must be
much more general.
There are many aspects that contribute to a well-reasoned decision about
which tool/system/language is best for a specific task, and pure
execution speed is in many cases one of the less important factors. If I
can get a result by running 100 NDSolve-s in parallel within half an
hour, I would not even think about which ODE library to link to my
Fortran program to build something that probably gets the same result in
one minute -- once I have spent two days writing it (and another week
debugging that stupid off-by-one index). Let alone writing specialized
solver code that makes best use of the Cell processors of the new
supercomputer, so that it will run the same thing in five seconds.
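For comparison, the 100-NDSolve scenario is essentially a one-liner with
ParallelTable; the ODE below is just a placeholder, not a real model:

```mathematica
(* Solve the same (made-up) ODE for 100 parameter values, one solve per
   free subkernel; the runs are independent, so speedup is almost linear. *)
solutions = ParallelTable[
   NDSolve[{y'[t] == -a y[t] + Sin[t], y[0] == 1}, y, {t, 0, 10}],
   {a, 0.1, 10, 0.1}];   (* 100 values of the parameter a *)
```

That is the kind of development-time saving I mean: the parallelization
costs one changed function name, not two days of Fortran.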
> ... and where parallel computing would be necessary
Usually parallel computing cannot do more for you than speed up your
calculations. So when would parallel computing be _necessary_? I can
think of two cases:
1) The calculation time for a problem would be too long to wait for
(something on the order of years). For such cases to be solvable in
reasonable time you need massive parallelism, that is, you are talking
about probably tens of thousands of processors. Here it certainly makes
sense to spend some time coding for maximal execution speed. This is
probably not the class of problems that Mathematica 7 really addresses.
2) The result is only useful if it is calculated fast enough. E.g. a
weather forecast for the next day is useless if it takes four days to
calculate. Reducing that to half a day by parallel computing would make
the same program useful, and I can see many cases where that would make
Mathematica programs useful which without parallelism would be useless.
In all other cases parallelism just helps you to do more in less time,
which is often a good enough reason to make use of it. If you can
increase your throughput by a factor of 4 with the four kernels
Mathematica 7 lets you use, you can probably make your boss quite
happy...
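A quick way to see whether you actually get that factor is to time the
serial and parallel versions of the same job; the symbolic integrals
below are arbitrary stand-ins for real work:

```mathematica
(* Compare a serial Table with the equivalent ParallelTable on
   independent symbolic jobs; with 4 subkernels the second timing
   should approach a quarter of the first, since there is almost no
   communication overhead between the tasks. *)
AbsoluteTiming[Table[Integrate[Sin[x]^n, x], {n, 1, 40}];]
AbsoluteTiming[ParallelTable[Integrate[Sin[x]^n, x], {n, 1, 40}];]
```

If the per-task work is too small, launch and communication costs eat
the speedup, so always measure with realistically sized tasks.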
hth,
albert