[gmx-developers] Scaling velocities when using domain decomp

Berk Hess hess at kth.se
Thu Mar 19 09:26:32 CET 2015


Hi,

You can simply print the kinetic energy and the scaling factor you
calculate, compare a serial run against a parallel run, and you will
see where things go wrong.
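
For example (a rough sketch; ekin and lambda here stand for whatever
your code computes):

if (MASTER(cr))
{
    fprintf(stderr, "step %ld: Ekin = %g, scaling factor = %g\n",
            (long)step, ekin, lambda);
}

Comparing this output between a 1-rank and an N-rank run shows at
which step the two diverge.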

But calculating kinetic energies and scaling is not trivial in MD. With
the leap-frog integrator the reported kinetic energy is the average of
the values at t-0.5*delta_t and t+0.5*delta_t. The result will then also
depend on when during the step you scale the velocities (before or after
which half-step Ekin calculation).
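
Concretely, what gets reported is:

    Ekin(t) = 0.5*( Ekin(t - 0.5*delta_t) + Ekin(t + 0.5*delta_t) )

so a factor applied to the velocities between the two half-step
contributions does not simply scale the reported Ekin by that factor
squared.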

Collecting the state is a very expensive operation that you should
avoid. It is better to calculate each rank's contribution to Ekin and do
an MPI_Allreduce on that. But since mdrun already does this for energy
reporting and temperature scaling, why not simply use the Ekin that
mdrun already calculates? I would suggest looking at one of the
temperature-coupling algorithms, e.g. v-rescale, and reusing most of
that machinery.
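
A rough sketch of that reduction (variable names such as ekin_target
are made up here; mdrun's own code wraps the collective in its
communication helpers such as gmx_sumd()):

/* Inside the MD loop; needs math.h and mpi.h. Each rank sums
 * 1/2*m*v^2 over its home atoms only. */
double ekin_local = 0, ekin_sum = 0;
int    i, d;

for (i = 0; i < mdatoms->homenr; i++)
{
    for (d = 0; d < DIM; d++)
    {
        ekin_local += 0.5*mdatoms->massT[i]*state->v[i][d]*state->v[i][d];
    }
}

/* One collective gives every rank the same global Ekin. */
MPI_Allreduce(&ekin_local, &ekin_sum, 1, MPI_DOUBLE, MPI_SUM,
              cr->mpi_comm_mygroup);

/* Every rank now computes the same factor and scales only its own
 * home atoms; no state collection or broadcast is needed.
 * ekin_target would be your target total energy minus Epot. */
lambda = sqrt(ekin_target/ekin_sum);
for (i = 0; i < mdatoms->homenr; i++)
{
    for (d = 0; d < DIM; d++)
    {
        state->v[i][d] *= lambda;
    }
}

With the half-step averaging mentioned above you would of course have
to be careful about which velocities enter this sum.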

Cheers,

Berk

On 03/19/2015 09:01 AM, Bernhardt, Nathan A. wrote:
> Hello everybody,
>
> This is my first time posting a message, so feel free to comment 
> accordingly. With that said, let me get right to it.
>
> I am performing MD simulations (GROMACS 4.6.5) using the leap-frog 
> integrator. At this point I am not using LINCS, but I hope to be able 
> to later. Let's assume that at some point in the simulation I need to 
> scale the velocities such that the total energy after scaling equals 
> some target value. I can do this easily as long as a single processor 
> is used. Unfortunately, if I use parallel processing (domain 
> decomposition), my equation for calculating the scaling factor no 
> longer seems to work. This should not be the case, which leads me to 
> believe I may have misunderstood how GROMACS works.
>
> I have tried two ways of doing this when domain decomposition is 
> used. The first uses dd_partition_system() and is as follows:
>
> if (DOMAINDECOMP(cr))
> {
>     dd_collect_state(cr->dd, state, state_global);
> }
>
> if (group_rank == 0)
> {
>     /* This function calculates the scaling factor; I believe it
>        is correct. */
>     calc_scaling_factor(state_global, energy_data);
> }
>
> for (i = 0; i < nr_atoms; i++)
> {
>     state_global->v[i][0] = scaling_factor*state_global->v[i][0];
>     state_global->v[i][1] = scaling_factor*state_global->v[i][1];
>     state_global->v[i][2] = scaling_factor*state_global->v[i][2];
> }
>
> if (DOMAINDECOMP(cr))
> {
>     dd_partition_system(fplog, step, cr, TRUE, 1,
>                         state_global, top_global, ir,
>                         state, &f, mdatoms, top, fr,
>                         vsite, shellfc, constr,
>                         nrnb, wcycle, FALSE);
> }
>
> I will admit I do not fully understand everything 
> dd_partition_system() does or the parameters it takes. Thus, I tried 
> to avoid it by simply having each node scale the velocities of its 
> home atoms. Below is sample code from that attempt:
>
> if (DOMAINDECOMP(cr))
> {
>     dd_collect_state(cr->dd, state, state_global);
> }
>
> if (group_rank == 0)
> {
>     /* This function calculates the scaling factor; I believe it
>        is correct. */
>     calc_scaling_factor(state_global, energy_data);
> }
>
> /* Send the scaling factor to all nodes so they can scale the
>    velocities of their home atoms accordingly (see the sketch
>    after this block). */
> broadcast_scaling_factor(scaling_factor);
>
> if (DOMAINDECOMP(cr))
> {
>     for (i = 0; i < mdatoms[0].homenr; i++)
>     {
>         state->v[i][0] = scaling_factor*state->v[i][0];
>         state->v[i][1] = scaling_factor*state->v[i][1];
>         state->v[i][2] = scaling_factor*state->v[i][2];
>     }
> }
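>
> For reference, broadcast_scaling_factor() amounts to something like 
> the following (a sketch assuming plain MPI and a double-precision 
> factor; GROMACS's gmx_bcast() would do the same job):
>
> MPI_Bcast(&scaling_factor, 1, MPI_DOUBLE, 0, cr->mpi_comm_mygroup);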
>
>
> Neither of these methods has proved successful. In both cases the 
> total energy after scaling is the same and slightly off from the 
> target value, by about 10 kJ/mol. For the second method I have 
> verified that the scaling factor is broadcast successfully. Since 
> both methods collect a global state and use it to calculate the 
> scaling factor, I see no reason why the function that calculates this 
> factor would work for the single-processor case and not the 
> domain-decomposition one. This leads me to believe that 
> dd_collect_state() is not doing exactly what I think it is, that 
> dd_partition_system() may be working in ways I do not understand, or 
> that state->v[][] may not be the array I need to modify. Any help 
> is appreciated.
>
> Thanks
> Nathan Bernhardt
>
