[gmx-users] parallel GROMACS question: more the nodes, slower Gromacs ... :-(

Choon Peng cpchng at bii.a-star.edu.sg
Tue Jan 25 02:10:55 CET 2005


Dear Luca,

   It is heartening to hear I have helped someone with their GROMACS
installation :)
Actually, the contributed page is a little old, as I added Apple Xserve
instructions a long time ago as well:
http://web.bii.a-star.edu.sg/~cpchng/GROMACS_HowTo.html

Anyway, the first thing to do is to test your MPI installation with some
other simple parallel program and see whether that scales.
I do this with a Game of Life program that I wrote a long time back.
It could be that the MPI version of GROMACS was not compiled properly, in
which case multiple copies of the sequential run are started instead of one
parallel run.
Do you at least get better performance in going from 1 to 2 CPUs?
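
As a concrete starting point, something as small as the program below will do
(this is just my own minimal sketch, not part of the HowTo): every rank reports
itself and times a slice of dummy work, so you can see both that MPI really
spreads the job over the requested CPUs and that the wall time drops as CPUs
are added.

/* mpi_scaling_test.c -- a minimal MPI sanity/scaling check (illustrative only).
 * Build with:  mpicc -O2 -o mpi_scaling_test mpi_scaling_test.c
 * Run with:    mpirun -np 4 ./mpi_scaling_test
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    long i, n = 200000000L;            /* total amount of dummy work */
    double local = 0.0, total = 0.0;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each process reports itself; with -np 4 you should see four different
     * ranks.  A serial binary started four times by mpirun would instead do
     * the whole job four times over -- the failure mode suspected for mdrun. */
    printf("Hello from rank %d of %d\n", rank, size);

    t0 = MPI_Wtime();
    /* Each rank sums its own slice of the series. */
    for (i = rank; i < n; i += size)
        local += 1.0 / (double)(i + 1);
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("np = %d   sum = %.6f   wall time = %.2f s\n",
               size, total, t1 - t0);

    MPI_Finalize();
    return 0;
}

If this toy program scales from 1 to 4 CPUs but mdrun does not, the MPI layer
itself is probably fine and the suspicion shifts to the GROMACS build.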

As David says, we need more details about the system and the problem being
solved. Running many CPUs on a small protein/solvent system will definitely
not scale, for instance.
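As a rough illustration (the numbers here are my own guesses, not taken from
your mail): a 300-residue protein in explicit water might be somewhere around
30,000 atoms, so on 24 CPUs each CPU handles only about 30,000 / 24 ~ 1,250
atoms, and at that point the time spent communicating between nodes can easily
exceed the time saved on the force calculation, so the run gets slower as
CPUs are added.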


Regards,
Choon-Peng Chng

On 1/25/05 2:13 AM, "Luca Mollica" <mollica.luca at hsr.it> wrote:

> Dear all,
> 
> we are installing parallel GROMACS (double precision) on an IBM ppc64
> machine (28 dual-processor 64-bit nodes), but we are experiencing some
> trouble with the test MD runs themselves (5 ps for now).
> 
> We have followed the instructions posted on the GROMACS website by Choon
> Peng (http://www.gromacs.org/documentation/howtos/mpich_howto.html), and
> the compilation and installation both went fine after some additional
> setup optimised for our machine.
> 
> After the complete installation (MPICH, FFTW and GMX), a standard mdp
> file was processed with grompp for a single processor in order to verify
> the installation, and a 5 ps simulation of a 300-residue protein in
> explicit water finished after ~200 s.
> 
> When we then set up the simulation for several nodes (4, 12 and 24 for
> testing), grompp ran successfully with the -shuffle and -sort options.
> Everything seemed fine: all the nodes were recognised and log files were
> written.
> [Please note that the simulation is started from a machine that also
> works as a compute node itself.]
> Unfortunately, there was no way to get an increase in calculation speed
> with mdrun (specifying the complete path of mdrun) while activating MPI
> on the same command line. We even had the impression that the calculation
> time was getting longer as more nodes were added!
> 
> Do you have any suggestions about this problem? Or, better: do you think
> the problem lies in the GMX installation, in the FFTW settings, or in the
> MPI setup?
> 
> Thanks in advance
> 
> Luca
> 
> ..............................................................................
> 
> Luca Mollica
> Dulbecco Telethon Institute (Biomolecular NMR Lab)
> 
> DIBIT-HSR,Via Olgettina 58, 1B4
> 20132 Milano (Italy)
> 
> Tel: 0039-02-26434824(Office)/26433497(Lab)
> Fax: 0039-02-26434153
> E-mail: mollica.luca#hsr.it
> luca_mollica#virgilio.it
> 
> "There is something to be learned from a rainstorm. When meeting
> with a sudden shower, you try not to get wet and run quickly along
> the road. By doing such things as passing under the eaves of houses
> one still gets wet. When you are resolved from the beginning,
> you will not be perplexed, though you will get the same soaking.
> This understanding extends to all things."
> 
> - Hagakure -
> 
> ..............................................................................
> 
> 