[gmx-users] Questions regarding Polarization Energy Calculation

Mark Abraham Mark.Abraham at anu.edu.au
Tue Aug 14 07:26:33 CEST 2012


On 14/08/2012 7:38 AM, jesmin jahan wrote:
> Dear Gromacs Users,
>
> I have some questions regarding GB-Polarization Energy Calculation
> with Gromacs. I will be grateful if someone can help me with the
> answers.
>
> I am trying to calculate GB-Polarization energy for different Protein
> molecules. I am interested both in energy values with the time
> required to calculate the Born Radii and Polarization Energy.
> I am not doing any energy minimization step as the files I am using as
> input are already minimized.
>
> Here is the content of my mdrun.mdp file:
>
> constraints         =  none
> integrator          =  md
> pbc                 =  no
> dt                  =  0.001
> nsteps              =  0
> implicit_solvent    =  GBSA
> gb_algorithm        =  HCT
> sa_algorithm        =  None
>
> And I am using the following three steps for all the .pdb files I have:
>
> let x be the name of the .pdb file.
>
> pdb2gmx -f x.pdb -ter -ignh -ff amber99sb -water none
> grompp -f mdrun.mdp -c conf.gro -p topol.top -o imd.tpr
> mpirun -np 8 mdrun_mpi -deffnm imd -v -g x.log

So you're not using the advice I gave you about how to calculate single 
point energies. OK.
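
For reference, that advice boils down to making one run input file with 
grompp and evaluating configurations with mdrun -rerun (see also 
question 4 below), e.g. with your file names:

grompp -f mdrun.mdp -c conf.gro -p topol.top -o imd.tpr
mdrun -s imd.tpr -rerun conf.gro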

> 1. Now, the running time reported by the log file also includes other
> times. It's also not clear to me whether the time includes the time
> for the Born radii calculations.

The timing breakdown is printed at the end of the .log file. Your time 
is likely heavily dominated by the GB calculation and communication 
cost. The Born radii calculation is part of the former, and is not 
reported separately. Don't bother with timing measurements unless your 
run lasts at least several minutes, else your time will be dominated by 
I/O and setup costs.
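
If you do want wall-clock numbers, a variant of your .mdp with more 
steps will do; the value below is only illustrative, and anything that 
makes the run last a few minutes on your hardware is fine:

nsteps              =  100000   ; long enough that setup and I/O are negligible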

> So, to get the GB-energy time I am doing the following: I am also
> running a simulation with "implicit_solvent" set to "no", and I am
> taking the difference of these two (with GB and without GB). Is that
> the right approach?

No, that measures the weight difference between an apple and an orange, 
not whether the apple's seeds are heavy. Turning implicit solvent off 
changes which nonbonded kernels and neighbour-searching code run, so 
the difference of two total run times does not isolate the GB cost. Use 
the per-routine timing breakdown at the end of the .log file instead.

> I also want to be sure that it also includes Born-Radii calculation time.

It's part of the GB calculation, so it's included in its timing.

>
> Is there any other approach to do this?
>
>
> 2. I was trying to run the simulations on 192 cores (16 nodes, each
> with 12 cores). But I got a "There is no domain decomposition for 12
> nodes that is compatible with the given box and a minimum cell size
> of 2.90226 nm" error for some pdb files. Can anyone explain what is
> happening? Is there any restriction on the number of nodes that can
> be used?

Yes. See discussion linked from http://www.gromacs.org/Documentation/Errors
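
In brief: domain decomposition has to cut the system into cells no 
smaller than that minimum size, so a small system cannot be spread over 
many MPI ranks. Two possible workarounds, sketched with your command 
line (the -pd particle-decomposition option exists in the 4.x mdrun; 
check mdrun -h for your version):

mpirun -np 8 mdrun_mpi -deffnm imd
mpirun -np 96 mdrun_mpi -deffnm imd -pd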

>
> 3. I ran the simulations on 96 cores (8 nodes, each with 12 cores).
> It's not clear to me from the log file whether Gromacs is able to
> utilize all 96 cores. It seems it is using only 8 nodes.
> Does Gromacs use both shared and distributed memory parallelism?

Not at the moment. Look at the top of your .log file for clues about 
what your configuration is making available to GROMACS. It is likely 
that mpirun -np 8 makes only 8 MPI processes available to GROMACS. Using 
more will require you to use your MPI installation correctly (and we 
can't help with that).
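
As a sketch only (the flags below are Open MPI syntax; other MPI 
implementations differ), one rank per core on 8 twelve-core nodes would 
look like:

mpirun -np 96 --hostfile hosts mdrun_mpi -deffnm imd -v

where the hypothetical hosts file names each node on its own line, 
e.g. node01 slots=12.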

> 4. In the single-point energy calculation "mdrun -s input.tpr
> -rerun configuration.pdb", is the configuration.pdb mentioned the
> original pdb file used with pdb2gmx's -f option? Or is it a modified
> pdb file? I am asking because if I use the original file, it does not
> always work :-(

It can be any configuration that matches the .top file you gave to 
grompp. That's the point: you only need one run input file to compute 
the energy of any such configuration you later want. The configuration 
you gave to grompp (or any other tool) doesn't matter.
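
For example (frameA.pdb and frameB.pdb stand for any configurations 
whose atoms match topol.top in number and order):

mdrun -s imd.tpr -rerun frameA.pdb
mdrun -s imd.tpr -rerun frameB.pdb

The same imd.tpr serves for both; only the rerun configuration changes.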

> 5. Is there any known speedup factor of Gromacs on multicores?

That depends on your simulation system, hardware, network and algorithm. 
Don't bother with fewer than hundreds of atoms per core.
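
For example, by that rule of thumb a 20,000-atom protein is already 
down to about 100 atoms per core on your 192 cores, so useful scaling 
for such a system would stop well before that.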

Mark


