[gmx-users] Gromacs 4 Scaling Benchmarks...
Martin Höfling
martin.hoefling at gmx.de
Tue Nov 11 12:37:30 CET 2008
On Tuesday 11 November 2008 12:06:06, vivek sharma wrote:
> I have also tried scaling GROMACS across a number of nodes, but was not
> able to scale it beyond 20 processors, i.e. 20 nodes with 1 processor per
> node.
As mentioned before, performance depends strongly on the type of interconnect
you're using between your processes: shared memory, Ethernet, InfiniBand,
NUMAlink, whatever...
I assume you're using Ethernet (100/1000 MBit?). You can tune this to some
extent, as described in:
Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.;
de Groot, B. L. & Grubmüller, H. Speeding up parallel GROMACS on
high-latency networks. Journal of Computational Chemistry, 2007.
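Independent of that paper, one mdrun knob in GROMACS 4 that often matters
for scaling is the number of nodes dedicated to PME, set with -npme. As a
rough sketch (topol.tpr is just a placeholder for your run input, and I'm
assuming an MPI-enabled mdrun binary), you could scan a few values:

  mpirun -np 20 mdrun -s topol.tpr -npme 4 -dlb yes
  mpirun -np 20 mdrun -s topol.tpr -npme 6 -dlb yes

Comparing the performance summary (ns/day) at the end of the respective log
files should tell you where the sweet spot for your setup lies.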
Be aware, though, that the principal limitations of Ethernet remain even
with such tuning. To get around these, you might consider investing in the
interconnect. If you can get by with <16 cores, shared-memory nodes will
give you the "biggest bang for the buck".
Best
Martin