[gmx-users] Gromacs 4 Scaling Benchmarks...

Christian Seifert cseifert at bph.ruhr-uni-bochum.de
Tue Nov 11 15:48:03 CET 2008


A page on the wiki with further information and hints would be nice.
Topic: "improving performance with GMX4" or "Pimp my GMX4" ;-)

The beta manual page of mdrun (version 4) is not very comprehensible or
user-friendly in my eyes.
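
For example, a worked command line would already help a lot. My current
guess at the relevant knobs (an untested sketch with made-up file names,
flag names taken from the beta help output; corrections welcome):

    # grompp reports an estimate of the PME load while writing the tpr
    grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

    # run with an explicit number of PME-only nodes and dynamic load
    # balancing (the binary may be called mdrun_mpi, depending on the build)
    mpirun -np 32 mdrun -deffnm md -npme 8 -dlb auto

A paragraph on how -npme and -dlb interact with the domain decomposition
would be exactly the kind of hint such a wiki page could collect.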

- Christian


On Tue, 2008-11-11 at 09:12 -0500, Justin A. Lemkul wrote:
> 
> vivek sharma wrote:
> > Hi Martin,
> > I am using InfiniBand here, with a speed of more than 10 Gbit/s. Can you
> > suggest some options to scale better in this case?
> > 
> 
> What % imbalance is being reported in the log file?  What fraction of the load 
> is being assigned to PME, from grompp?  How many processors are you assigning to 
> the PME calculation?  Are you using dynamic load balancing?
> 
> All of these factors affect performance.
> 
> -Justin
> 
> > With Thanks,
> > Vivek
> > 
> > 2008/11/11 Martin Höfling <martin.hoefling at gmx.de>
> > 
> >     On Tuesday, 11 November 2008, 12:06:06, vivek sharma wrote:
> > 
> > 
> >      > I have also tried scaling GROMACS on a number of nodes, but was
> >      > not able to optimize it beyond 20 processors, i.e. 20 nodes with 1
> >      > processor per
> > 
> >     As mentioned before, performance strongly depends on the type of
> >     interconnect you're using between your processes: shared memory,
> >     Ethernet, InfiniBand, NumaLink, whatever...
> > 
> >     I assume you're using Ethernet (100/1000 Mbit?). You can tune this
> >     to some extent, as described in:
> > 
> >     Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt,
> >     U. W.; de Groot, B. L. & Grubmüller, H. Speeding up parallel GROMACS
> >     on high-latency networks. Journal of Computational Chemistry, 2007.
> > 
> >     ...but be aware that the principal limitations of Ethernet remain.
> >     To get around this, you might consider investing in the interconnect.
> >     If you can get by with <16 cores, shared-memory nodes will give you
> >     the "biggest bang for the buck".
> > 
> >     Best
> >              Martin
> 
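
PS for Vivek: a rough sketch of where I would look for the numbers Justin
asks about (file names are placeholders, and the exact log wording may
differ between versions):

    # grompp prints an estimate of the relative PME load while writing the tpr
    grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr 2>&1 | grep -i pme

    # mdrun writes the load-balancing statistics to its log file
    grep -i "imbalance" md.log
    grep -i "pme mesh" md.log
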
-- 
M. Sc. Christian Seifert
Department of Biophysics
University of Bochum
ND 04/67
44780 Bochum
Germany
Tel: +49 (0)234 32 28363
Fax: +49 (0)234 32 14626
E-Mail: cseifert at bph.rub.de
Web: http://www.bph.rub.de



