[gmx-developers] Scaling velocities when using domain decomp

Bernhardt, Nathan A. nbernhardt at ou.edu
Mon Mar 23 10:20:34 CET 2015


Update:
I figured out what I was doing wrong: I was not collecting the forces used when calculating my scaling factor. Since each node only holds part of the system's forces, I needed to gather them so that the master node has the forces for the entire system and can compute the scaling factor easily. I wrote a function to collect the forces and it now works fine.
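For reference, the kind of collection I mean looks roughly like the sketch below (not my exact function). The fields cr->dd->gatindex (local-to-global atom index map), cr->mpi_comm_mygroup and the helpers clear_rvec()/copy_rvec() are what I believe the 4.6 headers provide, so check the exact names in your tree; MPI_FLOAT matches a single-precision build (use MPI_DOUBLE for double precision).

    /* Sketch: collect the full-system forces on the master rank (rank 0).
     * Each rank scatters its home-atom forces into a zeroed global-size
     * buffer by global atom index; an MPI_Reduce then sums the buffers.
     * Every atom is a home atom on exactly one rank, so the sum just fills
     * in the entries this rank does not own. */
    static void collect_forces(t_commrec *cr, t_mdatoms *mdatoms, int natoms_global,
                               const rvec f_local[], rvec f_send[], rvec f_global[])
    {
        int i, g;

        for (i = 0; i < natoms_global; i++)
        {
            clear_rvec(f_send[i]);
        }
        for (i = 0; i < mdatoms->homenr; i++)
        {
            g = cr->dd->gatindex[i];              /* local -> global atom index */
            copy_rvec(f_local[i], f_send[g]);
        }

        MPI_Reduce(f_send[0], f_global[0], 3*natoms_global, MPI_FLOAT, MPI_SUM,
                   0, cr->mpi_comm_mygroup);      /* f_global is complete on rank 0 */
    }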

Thanks for all the help; it sent me in the right direction.
Nathan Bernhardt

On Mar 22, 2015, at 5:06 PM, Bernhardt, Nathan A. <nbernhardt at ou.edu> wrote:

So I did as you advised, and what I found is that everything diverges when comparing the serial case against the parallel one. I checked both the kinetic and the potential energy. It takes several thousand integration steps (1.5 fs time step), but the differences in these values become quite large. The values I am comparing are enerd->term[F_EPOT] and enerd->term[F_EKIN], compared only on energy-calculation steps. During a certain part of the simulation I set ir->nstcalcenergy to 1 so that the energy is calculated every step; when I'm done I set ir->nstcalcenergy back to its original value. Furthermore, I am scaling v(t-1/2dt). My equation for computing the scaling factor uses the current forces f(t), v(t-1/2dt), pe(t), and what v(t+1/2dt) will be after scaling. It goes as follows.



Target = pe(t) + ke^(t)

Here the caret on ke^ indicates that velocity scaling has already been applied, and Target is just some target total energy we hope to reach after scaling v(t-1/2dt) and computing v(t+1/2dt).

With leap-frog we have ke(t) = 0.5*ke(t-1/2dt) + 0.5*ke(t+1/2dt). Thus

Target - pe(t) = 0.5*ke^(t-1/2dt) + 0.5*ke^(t+1/2dt)

Target - pe(t) = 1/2*SUM[ 1/2*m*v^(t-1/2dt)*v^(t-1/2dt) + 1/2*m*v^(t+1/2dt)*v^(t+1/2dt) ]

Here SUM means a sum over all the atoms in the system, and the products of velocities are dot products.

With leap-frog, v(t+1/2dt) = v(t-1/2dt) + f(t)*dt/m.

Thus we have the following:

Target - pe(t) = 1/4*SUM[ m*v^(t-1/2dt)*v^(t-1/2dt) ] + 1/4*SUM[ m*(v^(t-1/2dt) + f(t)*dt/m)*(v^(t-1/2dt) + f(t)*dt/m) ]

Now we substitute K*v(t-1/2dt) for v^(t-1/2dt). Here v(t-1/2dt) is the original unscaled velocity and K is the scaling factor we are trying to solve for. This gives

Target - pe(t) = 1/4*K*K*SUM[ m*v(t-1/2dt)*v(t-1/2dt) ] + 1/4*SUM[ m*(K*v(t-1/2dt) + f(t)*dt/m)*(K*v(t-1/2dt) + f(t)*dt/m) ]

The first term on the right-hand side is just 0.5*K*K*ke(t-1/2dt). The second term is more troublesome: to get everything in terms of K we form, for each atom, the vector sum of K*v(t-1/2dt) and f(t)*dt/m, take the dot product of that vector with itself, and keep the sum over atoms in front of it all. If you do the math, everything collects into an equation of the form K*K*(a number) + K*(another number) + (yet another number) = 0.
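Expanding the products and collecting powers of K (my notation: v_i = v_i(t-dt/2), f_i = f_i(t), m_i the atomic masses), the coefficients come out as

\[
a K^2 + b K + c = 0, \qquad
a = \sum_i \tfrac{1}{2}\, m_i\, \mathbf{v}_i \cdot \mathbf{v}_i, \qquad
b = \frac{\Delta t}{2} \sum_i \mathbf{v}_i \cdot \mathbf{f}_i, \qquad
c = \frac{\Delta t^2}{4} \sum_i \frac{\mathbf{f}_i \cdot \mathbf{f}_i}{m_i} + \mathrm{pe}(t) - \mathrm{Target},
\]

and the root of the quadratic formula to keep is the one giving a physical (positive, near-unity) scaling factor.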

So to solve for K we merely apply the quadratic formula. This is easy to do in code: a loop over the atoms to accumulate the coefficients, then one evaluation of the formula.

So… I do this. I let GROMACS get past do_force() so that we have pe(t) and f(t). Then I run a loop over all the atoms and compute ke(t-1/2dt). I then solve for K. Next, I multiply every component of the velocities v(t-1/2dt) by this factor. I also multiply the previously computed ke(t-1/2dt) by this factor twice (i.e. by K*K) to get the new ke(t-1/2dt). Then GROMACS does what it does: it computes v(t+1/2dt). With v(t+1/2dt), I run a loop over all the atoms again and compute ke(t+1/2dt). Finally, Etot(t) = pe(t) + 0.5*ke(t-1/2dt) + 0.5*ke(t+1/2dt), and that value should equal Target.
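In serial this whole procedure is only a couple of loops. A stripped-down sketch of the idea (not my exact code; it assumes the GROMACS typedefs rvec/real/DIM, and mass[] would come from something like mdatoms->massT):

    #include <math.h>

    /* Sketch of the serial scaling step.  v[i] holds v(t-dt/2), f[i] holds f(t),
     * mass[i] the atomic masses, pe the current potential energy and target the
     * desired total energy. */
    static void scale_velocities(int natoms, rvec v[], const rvec f[],
                                 const real mass[], real dt, real pe, real target)
    {
        real a = 0, b = 0, c = 0, K;
        int  i, d;

        /* Accumulate the coefficients of a*K^2 + b*K + c = 0. */
        for (i = 0; i < natoms; i++)
        {
            if (mass[i] <= 0)
            {
                continue;                              /* skip massless particles */
            }
            for (d = 0; d < DIM; d++)
            {
                a += 0.5*mass[i]*v[i][d]*v[i][d];      /* = ke(t-dt/2) */
                b += 0.5*dt*v[i][d]*f[i][d];
                c += 0.25*dt*dt*f[i][d]*f[i][d]/mass[i];
            }
        }
        c += pe - target;

        /* Quadratic formula; the '+' root is the physical, near-unity factor. */
        K = (-b + sqrt(b*b - 4*a*c))/(2*a);

        /* Scale v(t-dt/2); the integrator then produces the scaled v(t+dt/2). */
        for (i = 0; i < natoms; i++)
        {
            for (d = 0; d < DIM; d++)
            {
                v[i][d] *= K;
            }
        }
    }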

I realize that more efficient ways of doing this are possible. However, it only needs to be done rarely, say once every 20,000 steps or so, so I am not concerned with that right now. Furthermore, when I do this on a single processor it works perfectly; my problem is that with multiple processors it fails.

Any further advice is greatly appreciated.

Thanks
Nathan


On Mar 19, 2015, at 3:26 AM, Berk Hess <hess at kth.se> wrote:

Hi,

You can simply print the kinetic energy and the scaling factor you calculate and compare a serial run against a parallel one; you will see where things go off.

But calculating kinetic energies and scaling is not trivial in MD. With the leap-frog integrator the reported kinetic energy is the average of that at t-0.5*delta_t and t+0.5*delta_t. And the result will also depend on when during the step you scale the velocities (before or after which half-step Ekin calculation).

Collecting the state is a very expensive operation that you should avoid. Better is to calculate each rank's contribution to Ekin and do an MPI_Allreduce on that. But since mdrun already does this for energy reporting and temperature scaling, why not simply use the Ekin mdrun already calculates? I would suggest looking at one of the temperature-coupling algorithms, e.g. v-rescale, and reusing most of that machinery.
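Schematically, the reduce-instead-of-collect pattern is just the following (a minimal sketch, not mdrun's actual code; the field names mdatoms->homenr, mdatoms->massT and cr->mpi_comm_mygroup are assumed from the 4.6 data structures):

    /* Each rank sums the kinetic energy of its home atoms, then a single
     * MPI_Allreduce gives every rank the global Ekin. */
    static real global_ekin(t_commrec *cr, t_mdatoms *mdatoms, const rvec v[])
    {
        double ekin_local = 0, ekin_global;
        int    i, d;

        for (i = 0; i < mdatoms->homenr; i++)
        {
            for (d = 0; d < DIM; d++)
            {
                ekin_local += 0.5*mdatoms->massT[i]*v[i][d]*v[i][d];
            }
        }

        MPI_Allreduce(&ekin_local, &ekin_global, 1, MPI_DOUBLE, MPI_SUM,
                      cr->mpi_comm_mygroup);

        return (real)ekin_global;
    }

The same pattern gives you any other per-atom sums a scaling factor needs, so no state collection is required.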

Cheers,

Berk

On 03/19/2015 09:01 AM, Bernhardt, Nathan A. wrote:
Hello everybody,

This is my first time posting a message, so feel free to comment accordingly. With that said, let me get right to it.

I am performing MD simulations (GROMACS 4.6.5) using the leap-frog integrator. At this point I am not using LINCS, but I hope to be able to later. Let's just assume that at some point in the simulation I need to scale velocities such that the total energy after scaling equals some target value. I can do this easily as long as a single processor is used. Unfortunately, if I attempt to use parallel processing (domain decomposition), my equation for calculating the scaling factor no longer seems to work. This should not be the case, which leads me to believe I may have misunderstood how GROMACS works.

I have tried two ways of doing this when domain decomposition is used. The first uses dd_partition_system() and is as follows.

    if (DOMAINDECOMP(cr))
    {
        /* Gather the distributed state onto the master rank. */
        dd_collect_state(cr->dd, state, state_global);
    }

    if (group_rank == 0)
    {
        /* This function calculates the scaling factor; I believe it is correct. */
        calc_scaling_factor(state_global, energy_data);
    }

    /* Scale every velocity in the global state. */
    for (i = 0; i < nr_atoms; i++)
    {
        state_global->v[i][0] = scaling_factor*state_global->v[i][0];
        state_global->v[i][1] = scaling_factor*state_global->v[i][1];
        state_global->v[i][2] = scaling_factor*state_global->v[i][2];
    }

    if (DOMAINDECOMP(cr))
    {
        /* Repartition so the modified global state is redistributed to the ranks. */
        dd_partition_system(fplog, step, cr, TRUE, 1,
                            state_global, top_global, ir,
                            state, &f, mdatoms, top, fr,
                            vsite, shellfc, constr,
                            nrnb, wcycle, FALSE);
    }





I will admit I do not fully understand everything dd_partition_system() does or the parameters it takes. I therefore tried to avoid it by simply having each node scale the velocities of its home atoms. Below is sample code from that attempt.

    if (DOMAINDECOMP(cr))
    {
        /* Gather the distributed state onto the master rank. */
        dd_collect_state(cr->dd, state, state_global);
    }

    if (group_rank == 0)
    {
        /* This function calculates the scaling factor; I believe it is correct. */
        calc_scaling_factor(state_global, energy_data);
    }

    /* Send the scaling factor to all nodes so each can scale the
     * velocities of its home atoms. */
    broadcast_scaling_factor(scaling_factor);

    if (DOMAINDECOMP(cr))
    {
        for (i = 0; i < mdatoms[0].homenr; i++)
        {
            state->v[i][0] = scaling_factor*state->v[i][0];
            state->v[i][1] = scaling_factor*state->v[i][1];
            state->v[i][2] = scaling_factor*state->v[i][2];
        }
    }


Neither of these methods has proved successful. In both cases the total energy after scaling is the same, and it is off from the target value by about 10 kJ/mol. In the second method I have verified that the broadcast of the scaling factor succeeds. Since both methods require a global state to be collected and used for calculating the scaling factor, I see no reason why the function that calculates this factor would work in the single-processor case and not the domain-decomposition one. This leads me to believe that either dd_collect_state() is not doing exactly what I think it is, dd_partition_system() works in ways I do not understand, or state->v[][] is not the array I need to modify. Any help is appreciated.

Thanks
Nathan Bernhardt





