[gmx-users] Fwd: Bad performance in free energy calculations
Mark Abraham
mark.j.abraham at gmail.com
Tue May 19 16:37:51 CEST 2015
Hi,
mdrun itself doesn't report any problems, but if something else is showing 25%
utilization then that probably means you have something else running on
your machine, which is a bad idea when running mdrun. You should also expect
some slowdown with respect to the non-free-energy version of the run - the
implementation of the short-ranged loops for the perturbed atoms is not as
well optimized as the rest of the code.
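For illustration, assuming a Linux node and a GROMACS 5.x MPI build called
gmx_mpi (adjust the binary and file names to your installation), you could
check what else is on the node and rerun with explicit thread pinning,
roughly like this:

    # show the busiest processes on the node
    top -b -n 1 | head -n 20

    # rerun one lambda window with explicit pinning
    mpirun -np 8 gmx_mpi mdrun -deffnm md_lambda0 -pin on

If several lambda windows have to share a node, give each mdrun its own
cores with -pinoffset (0, 8, 16, ... for 8-rank jobs) so they don't compete
for the same cores.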
Mark
On Tue, May 19, 2015 at 3:13 PM Julian Zachmann <FrankJulian.Zachmann at uab.cat> wrote:
> Dear Gromacs users,
>
> I want to do free energy calculations following this tutorial:
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/free_energy/index.html
> My system contains a GPCR, a membrane, a small ligand, and solvent - in
> total 60,000 atoms. I want to perturb the ligand (31 atoms: 3 hydrogen
> atoms are converted to dummy atoms, one carbon atom is converted to a
> different carbon atom type, and the other 27 ligand atoms only change
> their charges slightly). Apart from the perturbation and an extra
> simulated annealing step in the equilibration, I am following the
> tutorial as closely as possible.
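> Concretely, each lambda window is prepared and run roughly like this,
> following the tutorial (the file names below are just placeholders for my
> actual inputs):
>
>   gmx grompp -f md_lambda0.mdp -c equil.gro -p topol.top -o md_lambda0.tpr
>   mpirun -np 8 gmx_mpi mdrun -deffnm md_lambda0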
>
> My calculations work, but the performance is really bad. I am using only
> 8 processors for each lambda, so domain decomposition should not really
> be an issue (see below), yet the CPUs run at only 25% load. What could be
> the reason? Any ideas?
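> (The 25% figure is what a quick check on the node reports for each mdrun
> process, e.g. with something like:
>
>   top -b -n 1 | grep mdrun
> )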
>
> I have made all files available at this link:
> https://drive.google.com/folderview?id=0B2M9aqeJrxnYfjRLbzZ0VkFBTlFraEJaWWJ3MzVSaHlUN2cyVzV6X2ZibjRkek81UVM5S0k&usp=sharing
>
> Thank you very much for your help!!
>
> Julian
>
>
> D O M A I N D E C O M P O S I T I O N S T A T I S T I C S
>
> av. #atoms communicated per step for force: 2 x 74451.3
> av. #atoms communicated per step for LINCS: 2 x 4495.3
>
> Average load imbalance: 2.9 %
> Part of the total run time spent waiting due to load imbalance: 1.4 %
> Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 2 %
>
>
> R E A L C Y C L E A N D T I M E A C C O U N T I N G
>
> On 8 MPI ranks
>
> Computing: Num Num Call Wall time Giga-Cycles
> Ranks Threads Count (s) total sum %
>
> -----------------------------------------------------------------------------
> Domain decomp. 8 1 150 3.136 65.226 0.3
> DD comm. load 8 1 150 0.208 4.335 0.0
> DD comm. bounds 8 1 150 0.257 5.355 0.0
> Neighbor search 8 1 151 11.329 235.608 1.2
> Comm. coord. 8 1 4850 11.368 236.425 1.2
> Force 8 1 5001 466.392 9699.934 50.6
> Wait + Comm. F 8 1 5001 10.598 220.418 1.2
> PME mesh 8 1 5001 386.976 8048.244 42.0
> NB X/F buffer ops. 8 1 14701 1.670 34.724 0.2
> Write traj. 8 1 3 0.133 2.768 0.0
> Update 8 1 5001 1.474 30.664 0.2
> Constraints 8 1 10002 20.387 423.998 2.2
> Comm. energies 8 1 501 2.255 46.891 0.2
> Rest 4.983 103.631 0.5
>
> -----------------------------------------------------------------------------
> Total 921.165 19158.221 100.0
>
> -----------------------------------------------------------------------------
> Breakdown of PME mesh computation
>
> -----------------------------------------------------------------------------
> PME redist. X/F 8 1 15003 134.013 2787.183 14.5
> PME spread/gather 8 1 20004 179.277 3728.576 19.5
> PME 3D-FFT 8 1 20004 22.603 470.092 2.5
> PME 3D-FFT Comm. 8 1 20004 47.072 979.004 5.1
> PME solve Elec 8 1 10002 3.941 81.965 0.4
>
> -----------------------------------------------------------------------------
>
> Core t (s) Wall t (s) (%)
> Time: 3693.315 921.165 400.9
> (ns/day) (hour/ns)
> Performance: 0.938 25.583
> Finished mdrun on rank 0 Tue May 19 14:01:45 2015