[gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4

Carsten Kutzner ckutzne at gwdg.de
Thu Oct 8 13:04:42 CEST 2009


Hi,

sorry for answering late, I was on vacation. If you have not tried it already, I would say that a direct back-to-back connection will give you a few percent extra in scaling, but probably not more. The sad thing about Ethernet is that its throughput is fixed while the performance of the processors rises from year to year. Nowadays you additionally have many of them on a single node that have to share the interface.
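
In case you do try it, here is a minimal sketch of such a setup (the interface name eth1, the addresses, and the mdrun binary name are assumptions, adjust to your cluster):

  # on node A: put the spare NIC on a private subnet
  ifconfig eth1 192.168.10.1 netmask 255.255.255.0 up
  # on node B: the same, with the peer address
  ifconfig eth1 192.168.10.2 netmask 255.255.255.0 up
  # restrict Open MPI's TCP traffic to that link
  mpirun --mca btl_tcp_if_include eth1 -np 8 mdrun_mpi -s topol.tpr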

Carsten


On Sep 25, 2009, at 8:46 PM, FLOR MARTINI wrote:

> Hi, yes, we clearly get better performance with 2 nodes (8 CPUs)! If we run the same job on 1 node (4 CPUs), there is a day of difference. Yes, we have a Gigabit Ethernet network, so what you say about congestion problems makes sense. We were thinking about bypassing the switch by adding a direct Ethernet link between each pair of nodes. Do you think that could improve our performance?
> Thank you in advance.
> Flor
>
> Dra.M.Florencia Martini
> Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
> Cátedra de Química General e Inorgánica
> Facultad de Farmacia y Bioquímica
> Universidad de Buenos Aires
> Junín 956 2º (1113)
> Tel: 54 011 4964-8249 ext. 24
>
> --- On Fri, 25 Sep 2009, Carsten Kutzner <ckutzne at gwdg.de> wrote:
>
> From: Carsten Kutzner <ckutzne at gwdg.de>
> Subject: Re: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4
> To: flormartini at yahoo.com.ar, "Discussion list for GROMACS users" <gmx-users at gromacs.org>
> Date: Friday, 25 September 2009, 9:19 am
>
> Hi,
>
> if you run without PME, there will be no all-to-all communication anyway, so in this sense the paper is (mostly) irrelevant here. Since you mention this paper, I assume that your network is Gigabit Ethernet. If you run on recent processors, I would say that for a 10000-atom system on 8 cores the Ethernet is clearly the limiting factor, even if it runs optimally (the chance of congestion problems on only two nodes is also very limited - these are likely to appear on 3 or more nodes).
>
> What is your performance on a single node (4 CPUs)? You could compare that to the performance of 4 CPUs on 2 nodes to determine the network impact.
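>
> A minimal sketch of such a comparison (the hostnames, the MPI-enabled binary name mdrun_mpi, and the .tpr file are assumptions):
>
>   # 4 ranks on a single node: no traffic over the network
>   mpirun -np 4 -host node0 mdrun_mpi -s topol.tpr -deffnm intra
>   # 4 ranks spread over two nodes: same compute power, plus network
>   mpirun -np 4 -npernode 2 -host node0,node1 mdrun_mpi -s topol.tpr -deffnm inter
>
> The difference in ns/day between the two runs is roughly the price you pay for going over the wire.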
>
> Carsten
>
>
> On Sep 24, 2009, at 5:08 PM, FLOR MARTINI wrote:
>
>> Thanks for your question.
>> We are running a lipid bilayer of 128 DPPC and 3655 water molecules, and the number of steps in the .mdp corresponds to a total of 10 ns. I really don't think that our system is a small one...
>>
>> Dra.M.Florencia Martini
>> Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
>> Cátedra de Química General e Inorgánica
>> Facultad de Farmacia y Bioquímica
>> Universidad de Buenos Aires
>> Junín 956 2º (1113)
>> Tel: 54 011 4964-8249 ext. 24
>>
>> --- On Thu, 24 Sep 2009, Berk Hess <gmx3 at hotmail.com> wrote:
>>
>> From: Berk Hess <gmx3 at hotmail.com>
>> Subject: RE: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4
>> To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
>> Date: Thursday, 24 September 2009, 11:22 am
>>
>> Hi,
>>
>> You don't mention what kind of benchmark system you are using for these tests.
>> A system that is too small could explain these results.
>>
>> Berk
>>
>>
>> Date: Thu, 24 Sep 2009 07:01:04 -0700
>> From: flormartini at yahoo.com.ar
>> To: gmx-users at gromacs.org
>> Subject: [gmx-users] trying to get better performance in a Rocks cluster running GROMACS 4.0.4
>>
>> hi,
>>
>> We are about to start running GROMACS 4.0.4 with OpenMPI on an 8-node, quad-core Rocks cluster. We ran some tests without PME and found two notable things:
>>
>> * We are getting the best speedup (6) with 2 nodes (== 8 cores). I read the "Speeding Up Parallel GROMACS in High Latency Networks" paper and thought that the culprit was the switch, but ifconfig shows no retransmits (neither does ethtool -S or netstat -s; see the sketch below). Does version 4 include the all-to-all patch? Is the paper irrelevant with GROMACS 4?
>>
>> * When running on the whole cluster (8 nodes, 32 cores), top reports about 50% system CPU usage on every node. Is that normal? Can it be attributed to the use of the network? The system usage goes up a bit when we configure the Intel NICs with interrupt coalescence off (also sketched below), so I'm tempted to think it is just OpenMPI hammering the TCP stack, polling for packets.
>>
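>> Roughly what we ran (eth0 stands in for the actual interface; the coalescence values are examples):
>>
>>   # retransmit and per-NIC error/drop counters
>>   netstat -s | grep -i retrans
>>   ethtool -S eth0 | grep -i -e err -e drop
>>   # interrupt coalescence: off vs. a moderate setting
>>   ethtool -C eth0 rx-usecs 0
>>   ethtool -C eth0 rx-usecs 100
>>
>> If it really is OpenMPI busy-polling, something like
>>
>>   mpirun --mca mpi_yield_when_idle 1 ...
>>
>> might lower the reported system time, though we have not tried that.
>>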
>> Thanks in advance,
>>
>> Dra.M.Florencia Martini
>> Laboratorio de Fisicoquímica de Membranas Lipídicas y Liposomas
>> Cátedra de Química General e Inorgánica
>> Facultad de Farmacia y Bioquímica
>> Universidad de Buenos Aires
>> Junín 956 2º (1113)
>> Tel: 54 011 4964-8249 ext. 24
>>
>
>
>


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/home/grubmueller/ihp/ckutzne



