[gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4

FLOR MARTINI flormartini at yahoo.com.ar
Thu Sep 24 17:08:17 CEST 2009


Thanks for your question.
We are running a lipid bilayer of 128 DPPC and 3655 water molecules, and the number of steps in the .mdp corresponds to a total of 10 ns. I don't really think our system is a small one...
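
For reference, a minimal sketch of the relevant .mdp lines; the 2 fs
timestep is an assumption, since dt is not stated in the thread:

    dt      = 0.002    ; timestep in ps (2 fs) -- assumed, not given in the thread
    nsteps  = 5000000  ; 5,000,000 steps * 2 fs/step = 10,000,000 fs = 10 ns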

Dra.M.Florencia Martini

Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas

Cátedra de Química General e Inorgánica

Facultad de Farmacia y Bioquímica

Universidad de Buenos Aires

Junín 956 2º (1113)

TE: 54 011 4964-8249 int 24

--- On Thu, 24-Sep-09, Berk Hess <gmx3 at hotmail.com> wrote:

From: Berk Hess <gmx3 at hotmail.com>
Subject: RE: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4
To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
Date: Thursday, 24 September 2009, 11:22 am




Hi,

You don't mention what kind of benchmark system you are using for these tests.
Too small a system could explain these results.
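
For scale, a rough atom count for the system described above, assuming
united-atom DPPC (about 50 atoms per lipid) and a three-site water
model; both per-molecule counts are assumptions about the force field:

    128 lipids * 50 atoms/lipid  =  6,400 atoms
    3655 waters * 3 atoms/water  = 10,965 atoms
    total                        ~ 17,400 atoms
    spread over 32 cores         ~   540 atoms/core

With only a few hundred atoms per core, communication can easily
dominate over computation.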

Berk


Date: Thu, 24 Sep 2009 07:01:04 -0700
From: flormartini at yahoo.com.ar
To: gmx-users at gromacs.org
Subject: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4

Hi,

   We are about to start running GROMACS 4.0.4 with OpenMPI on an
8-node, quad-core Rocks cluster. We ran some tests without PME and
found two notable things:

* We are getting the best speedup (6) with 2 nodes ( == 8 cores ). I read
the "Speeding up parallel GROMACS on high-latency networks" paper and
thought that the culprit was the switch, but ifconfig shows no retransmits
(neither does ethtool -S or netstat -s; see the commands below). Does
version 4 include the alltoall patch? Is the paper irrelevant with
GROMACS 4?
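
A sketch of the checks described above; the interface name eth0 is an
assumption:

    # Per-NIC error/drop counters from the driver
    ethtool -S eth0 | grep -iE 'err|drop'
    # TCP-level retransmission counters from the kernel
    netstat -s | grep -i retrans
    # Interface-level error/drop counters
    ifconfig eth0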

* When running on the whole cluster ( 8 nodes, 32 cores ), top reports
about 50% system CPU usage on every node. Is that normal? Can it be
attributed to network use? The system CPU usage goes up a bit when we
configure the Intel NICs with interrupt coalescence off, so I'm tempted
to think it is just OpenMPI hammering the TCP stack, polling for
packets (see the sketch after this paragraph).
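
A minimal sketch of how one might test that hypothesis using OpenMPI's
mpi_yield_when_idle MCA parameter, which makes idle ranks yield the CPU
instead of busy-polling; the binary name mdrun_mpi, the interface name,
and the coalescence value are assumptions:

    # Let idle MPI ranks yield the CPU instead of busy-polling the TCP stack
    mpirun -np 32 --mca mpi_yield_when_idle 1 mdrun_mpi -deffnm topol
    # Re-enable some interrupt coalescence on the NIC for comparison
    ethtool -C eth0 rx-usecs 125

If system CPU time drops sharply with yielding enabled, the 50% system
usage is most likely OpenMPI's progress engine polling the network.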

Thanks in advance,

Dra.M.Florencia Martini

Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas

Cátedra de Química General e Inorgánica

Facultad de Farmacia y Bioquímica

Universidad de Buenos Aires

Junín 956 2º (1113)

TE: 54 011 4964-8249 int 24




      


