[gmx-users] Gromacs on SGI Altix
Mark Abraham
Mark.Abraham at anu.edu.au
Tue Jan 3 22:35:47 CET 2006
Mingfeng Yang wrote:
> Erik Lindahl wrote:
>
>> Hi,
>>
>>> Thank you, Florian! I will try d.dppc. I have another question: is
>>> Gromacs specifically optimized for x86 machines, so that the power of
>>> the Itanium2 is not fully used? With Amber8, my Itanium2 is almost
>>> twice as fast as my Opteron, but with Gromacs the Opteron is slightly
>>> (~1.2 times) faster than the Itanium2, with all other factors roughly
>>> the same. Is that just because there are no assembly-loop
>>> optimizations for IA64?
>>
>>
>> There are certainly assembly loops for IA64 (trust me, I handcoded
>> them :-)
>> Erik
Yup, there was a big difference on my SGI Altix 3700Bx2 between 3.2.1
and a 3.3 beta that had these routines in.
> Thank you, Erik! Anyway, I am impressed by the efficiency of Gromacs,
> especially its performance on our Opteron cluster. The scaling results
> for the SGI Altix have come in: this time, with 2 CPUs, it's 1.8 times
> faster than a single CPU. I don't care too much about the Altix,
> because most of my jobs will run on Opterons.
That doesn't sound good enough to me. Using a gromacs 3.3 beta, I
reported scaling here back in August (see
http://www.gromacs.org/pipermail/gmx-users/2005-July/016104.html) that
didn't deviate from linear until somewhere between 8 and 16 processors
(except for 2 processors, which was super-linear!). Your configure line
looked fine - my installation used an earlier version of icc and
sgi-mpt, as yours does.
If your 2-CPU benchmark is indicative of scaling to higher processor
counts, I'd guess your interconnect is not as good as mine, which is
"SGI's NUMAlink4 interconnect both within and between partitions
providing 3.2 Gbytes/s bidirectional bandwidth per link and < 2us MPI
latency." However, the Amber8 scaling suggests your connectivity should
be good enough...
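(For anyone following along: the "1.8 times faster on 2 CPUs" and
"linear scaling" figures in this thread reduce to speedup and parallel
efficiency. A minimal sketch, with hypothetical wall-clock times rather
than anyone's measured data:

```python
# Speedup and parallel efficiency, as used informally in this thread.
# The timings below are hypothetical placeholders, not benchmark data.

def speedup(t_serial, t_parallel):
    """Speedup relative to the single-CPU run: S(p) = T(1) / T(p)."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cpus):
    """Parallel efficiency: S(p) / p. 1.0 means linear scaling;
    values above 1.0 are super-linear (e.g. cache effects)."""
    return speedup(t_serial, t_parallel) / n_cpus

# Example matching the quoted figure: a 2-CPU run 1.8x faster
# than the single-CPU run.
t1 = 1000.0       # hypothetical single-CPU wall time in seconds
t2 = t1 / 1.8     # the corresponding 2-CPU wall time

print(speedup(t1, t2))          # 1.8
print(efficiency(t1, t2, 2))    # 0.9, i.e. 90% efficiency
```

An efficiency around 0.9 at only 2 CPUs is what prompts the "doesn't
sound good enough" comment: with a fast interconnect one expects it to
stay near 1.0 at low processor counts.)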
Mark