[gmx-users] Networking

Mehmet Suezen suzen at theochem.tu-muenchen.de
Tue Jan 8 09:19:03 CET 2002


Hi,

Myrinet (or Dolphin SCI) may reduce the communication overhead in certain
cases. In my simulations I gained around 30% with Myrinet. Since GROMACS
uses spatial-decomposition algorithms, scaling should be fine. Also,
kernel 2.4 gives better TCP performance, which should be taken into
account.
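On the kernel side, the TCP buffer and latency knobs live under /proc/sys. A 2.4-era /etc/sysctl.conf fragment might look like the following; the values are illustrative only, the right buffer sizes depend on your NICs and switch, and tcp_low_latency only exists in later 2.4 kernels:

```
# Raise the maximum socket buffer sizes (bytes); the defaults are
# often too small for gigabit links.
net.core.rmem_max = 262144
net.core.wmem_max = 262144

# TCP per-socket buffer limits: min, default, max (bytes).
net.ipv4.tcp_rmem = 4096 87380 262144
net.ipv4.tcp_wmem = 4096 65536 262144

# Prefer low latency over throughput in the TCP stack
# (only available in later 2.4 kernels).
net.ipv4.tcp_low_latency = 1
```

Apply with `sysctl -p`. For latency specifically, NIC interrupt coalescing (e.g. `ethtool -C`, where the driver supports it) is often a bigger lever than the TCP stack settings.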

Mehmet



Justin MacCallum wrote:
> 
> Hi,
> 
> I have a couple of questions/comments regarding scaling over gigabit
> ethernet.
> 
> Our hardware vendor has given us three gigabit ethernet cards (Intel
> PRO/1000T Server) and a switch (Intel NetStructure 470T) for evaluation.
> We have dual PIII 1 GHz nodes in our cluster.  I tried two test systems.
> The first was a 64-lipid DOPC bilayer with ~11000 atoms.  The second was
> a box of water containing 81000 atoms.  Both systems used a 0.9 rlist,
> 1.4 rvdw and 0.9 rcoulomb with PME, order=4, fourierspacing=0.12,
> optimize_fft=yes.
> 
> For both systems I ran two simulations, one on two processors and one on
> four.  For both systems, the two processor version performed slightly
> better than the four processor one.  In both cases, the network load in
> the four processor versions was about 5%.  Based on previous benchmarks on
> fast ethernet nodes, I expect things to scale much better without PME.
> 
> From this I've concluded that for our hardware and systems we are latency
> bound.  In order to improve scaling we need to move to an interconnect
> with lower latency, such as Myrinet, or stop using PME.  Does this sound
> right?  Also, does anyone know of any kernel parameters that may be
> tweaked to improve latency on gigabit or fast ethernet?
> 
> Thanks,
> Justin
> 
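For reference, the cut-off and PME settings quoted above correspond to an .mdp fragment like this (a sketch; run-control parameters such as integrator, dt, and nsteps are omitted):

```
; Neighbour list and cut-offs (nm)
rlist            = 0.9
rvdw             = 1.4
rcoulomb         = 0.9

; Electrostatics: particle-mesh Ewald
coulombtype      = PME
pme_order        = 4
fourierspacing   = 0.12
optimize_fft     = yes
```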
