Thank you very much for the advice. I think I will first try to get a coarse picture of the system load before going into the MPI profiling of the application itself. Since no other intensive processes will run during the testing, I believe this will give me an overview of the system status and let me see where the bottleneck appears. After that is finished, I will look into the MPI profiling of the app itself.
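To make the plan concrete, here is a minimal sketch of the kind of script I have in mind, assuming gmond serves its XML dump on the default TCP port 8649. The parsing helper is demonstrated against a canned XML fragment rather than a live daemon, and all function names here are my own:

```python
import socket
import xml.etree.ElementTree as ET

def node_loads(xml_text, metric="load_one"):
    """Extract one metric (e.g. 1-minute load) per host from gmond XML."""
    loads = {}
    for host in ET.fromstring(xml_text).iter("HOST"):
        for m in host.iter("METRIC"):
            if m.get("NAME") == metric:
                loads[host.get("NAME")] = float(m.get("VAL"))
    return loads

def poll_gmond(host="localhost", port=8649):
    """Read the full XML dump that gmond serves on its TCP port."""
    chunks = []
    with socket.create_connection((host, port)) as s:
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode()

# Canned fragment standing in for real gmond output:
sample = """<GANGLIA_XML>
  <CLUSTER NAME="beowulf">
    <HOST NAME="node01"><METRIC NAME="load_one" VAL="3.52"/></HOST>
    <HOST NAME="node02"><METRIC NAME="load_one" VAL="0.14"/></HOST>
  </CLUSTER>
</GANGLIA_XML>"""
print(node_loads(sample))
```

In the real script I would call `poll_gmond()` at intervals during a solver run and store the per-node samples next to the timing data.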
Thank you again,
Tomislav
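P.S. For the scaling runs described in the quoted message below, the speedup and efficiency bookkeeping is only a few lines. A minimal sketch (the timings are made up and the helper name is mine):

```python
def speedup_table(timings):
    """timings: {ncores: wall_time_seconds} for one mesh density.

    Returns {ncores: (speedup, parallel_efficiency)} relative to the
    smallest run, so the 'linear' reference is efficiency == 1.0."""
    base_cores = min(timings)
    base_time = timings[base_cores]
    table = {}
    for n in sorted(timings):
        s = base_time / timings[n]          # speedup vs. the smallest run
        table[n] = (s, s * base_cores / n)  # efficiency: 1.0 means linear
    return table

# Made-up example timings for one mesh density:
timings = {1: 800.0, 2: 410.0, 4: 220.0, 8: 140.0}
for n, (s, e) in speedup_table(timings).items():
    print(f"{n} cores: speedup {s:.2f}, efficiency {e:.2f}")
```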
> ----- Original Message -----
> From: beowulf-request at beowulf.org
> Sent: 11/03/10 08:00 PM
> To: beowulf at beowulf.org
> Subject: Beowulf Digest, Vol 81, Issue 5
>
> Today's Topics:
>
>    1. Re: cluster profiling (Prentice Bisbal)
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Wed, 03 Nov 2010 10:49:31 -0400
> From: Prentice Bisbal <prentice at ias.edu>
> Subject: Re: [Beowulf] cluster profiling
> To: Beowulf Mailing List <beowulf at beowulf.org>
> Message-ID: <4CD1767B.8090701 at ias.edu>
> Content-Type: text/plain; charset=UTF-8
> tomislav_maric at gmx.com wrote:
> > Hi everyone,
> >
> > I'm running a COTS Beowulf cluster and I'm using it for CFD simulations with the OpenFOAM code. I'm currently writing a profiling application (a bunch of scripts) in Python that will use the Ganglia-python interface and try to give me an insight into the way the machine is burdened during runs. What I'm actually trying to do is to profile the parallel runs of the OpenFOAM solvers.
> >
> > The app will increment the mesh density (the coarseness) of the simulation and run the simulations with an increasing number of cores. Right now the machine is minuscule: two nodes with quad cores. The app will store the data (timing of the execution, the number of cores) and I will plot the diagrams to see when the case size and the core number start to drive the speedup away from the "linear one".
> >
> > Is this a good approach? I know that this will show just tendencies on such an impossibly small number of nodes, but I will expand the machine soon, and then the increased number of nodes should make these tendencies more accurate. When I cross-reference the temporal data with the system status data given by Ganglia, I can derive conclusions like "O.K., the speedup went down because for the larger cases the decomposition on the max core number was more local, so the system bus must have been burdened, provided Ganglia confirms that the network is not being strangled for this case configuration".
> >
> > Can anyone here tell me if I am at least stepping in the right direction? :) Please, don't say "it depends".
> >
> Have you looked at something like Vampir for MPI profiling? Support for
> VampirTrace is built into Open MPI, if you compile Open MPI with the
> correct options.
> The rub is that I think you need to pay for a Vampir GUI to analyze the
> data. I've never used it myself, but I saw a demo once, and it looked
> pretty powerful.
>
> http://www.vampir.eu/
>
> You might also want to look at Tau, PAPI, and Perfmon2:
>
> http://www.cs.uoregon.edu/research/tau/home.php
> http://icl.cs.utk.edu/papi/
> http://perfmon2.sourceforge.net/
>
> I set this up for one of my users a couple of years ago. I could be
> wrong, since it's been a couple of years, but I think Tau requires PAPI,
> and PAPI in turn requires the perfmon2 kernel patches. Reading the docs
> above should point you in the correct direction.
> That's probably more than you wanted to know.
> --
> Prentice
>
> ------------------------------
> _______________________________________________
> Beowulf mailing list
> Beowulf at beowulf.org
> http://www.beowulf.org/mailman/listinfo/beowulf
>
> End of Beowulf Digest, Vol 81, Issue 5
> **************************************