Super-scaling

On Wed, Feb 19, 2003 at 01:30:22PM +0000, Simon Hogg's all...
> Is there a rough rule of thumb which dictates when a program (if ever)
> shows superscaling with number of nodes. Of course, I would not expect
> this to carry on ad infinitum, but does anyone see superscalar behaviour up
> to, a certain number of nodes.
> What would be the conditions for this to occur?
I've seen this with the Gromacs benchmark d.dppc job (specifically) on dual
Thunderbirds. It may well happen on other CPUs, but I haven't run the job
on them. (I suspect it would hold for similar-speed (1.33GHz) Athlon MPs, for
example, but once clocks get faster the characteristics may change and the
superscaling may disappear.)
For example, due to the nature of the Xeon's setup, a quad 2.2GHz Xeon,
probably of an older mobo architecture with a lack of memory bandwidth, did NOT
display this effect with the same job. That may also be because of the
Xeon's much larger on-die cache(s), which means the job runs completely in
cache even on one CPU alone, so two CPUs would actually slow things down. (It
is true that one Xeon ran the job about 1.6 times faster than a single 1.33GHz
Tbird.) (Furthermore, I am not sure whether it was in hyperthreading mode, now
that I think back -- was I really using two physical CPUs?)
I suspect the critical factor is having the split dataset fit within each
CPU's cache when the whole dataset doesn't fit in one CPU's cache in its
entirety, without thrashing the caches.
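That cache-fit condition can be sketched as a trivial check. The sizes below are made-up illustrations, not measured Tbird or Xeon figures, and `expected_regime` is a hypothetical helper, not anything from Gromacs:

```python
# Hypothetical illustration of the cache-fit condition described above.
# All cache and working-set sizes are assumptions for the example.

def fits_in_cache(working_set_kb: float, cache_kb: float) -> bool:
    """True if a (per-CPU share of the) dataset fits in one CPU's cache."""
    return working_set_kb <= cache_kb

def expected_regime(total_working_set_kb: float, cache_kb: float,
                    n_cpus: int) -> str:
    """Classify a run: superlinear speedup is plausible only when the
    whole dataset overflows one cache but each 1/N split fits."""
    whole_fits = fits_in_cache(total_working_set_kb, cache_kb)
    split_fits = fits_in_cache(total_working_set_kb / n_cpus, cache_kb)
    if whole_fits:
        return "cache-resident already: no superlinear gain from splitting"
    if split_fits:
        return "split fits in cache: superlinear scaling plausible"
    return "still overflows cache after splitting: ordinary scaling at best"

# Example: a 512 KB working set on CPUs with 256 KB of cache each.
print(expected_regime(512, 256, 1))  # whole set overflows one cache
print(expected_regime(512, 256, 2))  # each half becomes cache-resident
```

This also suggests why the larger-cached Xeon saw nothing: its run was in the "cache-resident already" regime from the start.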
Furthermore, I never saw this effect for the smaller or larger jobs I tested
in Gromacs on that particular cluster. (I didn't tweak any of the jobs
to find the threshold where this effect stops either, which might
be an interesting exercise.)
Also of note: the superscaling went away as soon as I moved off shared memory
as the 'interconnect' -- once the job went onto the network (4+ CPUs), things
slowed down drastically.
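A toy runtime model reproduces that shape: compute time shrinks as 1/N, a memory-stall penalty disappears once the split fits in cache, and a fixed network cost appears beyond the CPUs in one box. Every constant here is an assumption chosen for illustration, not a measurement from the cluster above:

```python
# Toy speedup model (assumed numbers, not measurements): superlinear
# gain from cache fit at N=2, then a drop once the network is involved.

def speedup(n: int, compute: float = 100.0, mem_penalty: float = 60.0,
            cache_kb: float = 256.0, working_set_kb: float = 512.0,
            cpus_per_node: int = 2, net_cost: float = 80.0) -> float:
    """Speedup over 1 CPU under this toy model (bigger = faster)."""
    t1 = compute + mem_penalty                     # single-CPU runtime
    # memory stalls vanish once each 1/N share is cache-resident
    mem = 0.0 if working_set_kb / n <= cache_kb else mem_penalty / n
    # crossing node boundaries adds a flat network-communication cost
    comm = net_cost if n > cpus_per_node else 0.0
    tn = compute / n + mem + comm
    return t1 / tn

for n in (1, 2, 4):
    print(n, round(speedup(n), 2))
# N=2 beats 2x (superlinear); N=4 falls back behind N=2.
```

With these numbers N=2 comes out faster than 2x a single CPU, while N=4, despite twice the compute, is slower than N=2 -- the same qualitative picture as the shared-memory vs. network observation.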
/kc
> Simon
> _______________________________________________
> Beowulf mailing list, Beowulf at beowulf.org
> To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf
--
Ken Chase, math at velocet.ca * Velocet Communications Inc. * Toronto, CANADA