[gmx-users] Optimizing a parallel simulation

Mark Abraham Mark.Abraham at anu.edu.au
Thu Oct 8 08:32:26 CEST 2009


vivek sharma wrote:
> Hi Mark,
> Thanks for your quick response.
> 
> 2009/10/8 Mark Abraham <Mark.Abraham at anu.edu.au>
> 
>     vivek sharma wrote:
> 
>         Hi There,
>         While running a parallel MD simulation, I got the following
>         message while playing with parameters:
>         NOTE 3 [file aminoacids.dat, line 1]:
>          The optimal PME mesh load for parallel simulations is below 0.5
>          and for highly parallel simulations between 0.25 and 0.33,
>          for higher performance, increase the cut-off and the PME grid
>         spacing
>         ---------------------------
> 
>         I don't have any idea when a run should be called a highly
>         parallel simulation.
> 
> 
>     Well below 224 :-)
> 
> 
>         Does it depend on the problem size or the number of processors
>         we are using?
> 
> 
>     Processors.
> 
> 
>         I am trying to optimize my simulation to run on 224 processors,
>         and the system consists of:
> 
>         Total atom count = 54640
>         where water molecules = 17074
> 
> 
>     You will want to tweak rcoulomb and fourier_spacing like the above
>     message says. Even so, with fewer than 500 atoms per
>     particle-particle processor (54640 atoms over 224 processors is
>     only ~244 each) you may find yourself spending more time
>     communicating than computing. I suspect your system may be too
>     small to get value from so many processors, even if you have a
>     good network (e.g. InfiniBand). You will want to read section 3.17
>     of the manual to start understanding the issues involved here.
> 
> Thanks for pointing me to the related section in the manual. While 
> tweaking the rcoulomb and fourier_spacing values, what parameter needs 
> to be observed to check the result?
> Is it the PME load or something else?

Good question. Some heuristics have been reported on this list before, 
which you can search for. Varying rcoulomb affects the absolute cost of 
the real-space part and the accuracy of both parts of the PME algorithm. 
Varying fourier_spacing affects the cost and accuracy of the 
reciprocal-space part.
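
For concreteness, here is a sketch of the kind of .mdp change involved 
(the numbers are illustrative only, not tuned for your system):

    ; before: a typical PME setup
    coulombtype      = PME
    rlist            = 1.0      ; nm; with plain PME keep rlist = rcoulomb
    rcoulomb         = 1.0      ; nm, real-space cut-off
    fourier_spacing  = 0.12     ; nm, reciprocal-space grid spacing

    ; after: both scaled by the same factor (here 1.2), which shifts
    ; work from the PME mesh to real space at roughly constant accuracy
    rlist            = 1.2
    rcoulomb         = 1.2
    fourier_spacing  = 0.144

After a short test run, compare the PME mesh load that grompp estimates 
(and that mdrun reports at the end of md.log) and the overall ns/day; 
the aim is to push the mesh load toward the 0.25-0.33 range the note 
mentions without losing throughput.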

Unfortunately I'm not aware of any literature that deals rigorously with 
"how accurate is enough?" for PME.

Mark

>         Also, I am getting the following message while running grompp:
>         ---------------------------------
>         NOTE 1 [file md.mdp, line unknown]:
>          The Berendsen thermostat does not generate the correct kinetic
>         energy
>          distribution. You might want to consider using the V-rescale
>         thermostat.
>         ------------------------------------
>         Any insight into the above message would be highly appreciated.
> 
> 
>     Seems self-explanatory to me - you're using Berendsen and that might
>     not be the right choice. Read the manual sections about the
>     different thermostats.
> 
>     Mark
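
For what it's worth, a minimal sketch of what switching thermostats 
looks like in the .mdp (groups and values are illustrative only; adjust 
them to your system):

    tcoupl   = V-rescale
    tc-grps  = Protein Non-Protein   ; temperature-coupling groups
    tau_t    = 0.1     0.1           ; ps, coupling time constants
    ref_t    = 300     300           ; K, reference temperatures

Unlike Berendsen, V-rescale samples the correct canonical kinetic-energy 
distribution, which is why grompp suggests it.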