[gmx-users] Pull code slows down mdrun -- is this expected?

Christopher Neale chris.neale at alum.utoronto.ca
Tue May 17 21:27:44 CEST 2016


Dear Users:

I am writing to ask whether it is expected that the pull code slows down GROMACS in such a way that a single pull group has a fairly minor effect, but many groups collectively really bring down the throughput. Based on Mark's response to my previous post about the free energy code slowing things down, I'm guessing this is about non-optimized kernels, but I wanted to make sure.

Specifically, I have an aqueous lipid bilayer system and I was trying to keep it fairly planar by using the pull code, acting in the z-dimension only, to restrain each headgroup phosphorus to a specified distance from the bilayer center, using a separate pull-coord for each lipid.

Without any such restraints, I get 55 ns/day with GPU/CPU execution. However, if I add 1, 2, 4, 16, 64, or 128 pull-code restraints, the throughput drops to 52, 51, 50, 45, 32, and 22 ns/day, respectively. That is using pull-coord*-geometry = distance. If I use the cylinder geometry, things are even worse: 51, 48, 44, 29, 14, and 9 ns/day for the same respective numbers of pull restraints.

I have also tested that the same slowdown exists on CPU-only runs. Here, without the pull code I get 19 ns/day, and with 1, 2, 4, 16, 64, or 128 pull-code restraints I get 19, 18, 18, 15, 9, and 6 ns/day respectively.
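
For reference, here is a quick back-of-the-envelope on the GPU/CPU distance-geometry numbers above, converting ns/day into relative wall time per ns (a minimal Python sketch; the per-restraint column simply divides the overhead by the number of restraints, so it assumes the cost grows roughly linearly with the number of pull coordinates):

# Overhead estimate from the throughputs quoted above (GPU/CPU, geometry = distance).
baseline = 55.0                                              # ns/day without the pull code
throughput = {1: 52, 2: 51, 4: 50, 16: 45, 64: 32, 128: 22}  # ns/day with pull restraints

for n in sorted(throughput):
    extra = baseline / throughput[n] - 1.0                   # fractional increase in wall time per ns
    print("%3d restraints: +%4.0f%% wall time (~%.1f%% per restraint)"
          % (n, 100 * extra, 100 * extra / n))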

In case it matters, my usage is like this for a single restraint, and analogous for more restraints (a sketch of how the section extends to many lipids follows the example):

pull = yes
pull-ncoords = 1
pull-ngroups = 2
pull-group1-name = DPPC

pull-coord1-geometry = distance
pull-coord1-type = flat-bottom
pull-coord1-vec = 0 0 1
pull-coord1-start = no
pull-coord1-init = 2.5
pull-coord1-k = 1000
pull-coord1-dim = N N Y
pull-group2-name = DPPC_&_P_&_r_1
pull-coord1-groups = 1 2
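
The sections for more restraints follow the same pattern. Below is a minimal Python sketch of how the analogous section for N lipids can be generated; it assumes one index group per lipid named DPPC_&_P_&_r_<i>, as in the single-restraint example, with the whole DPPC group as the reference:

# Write the pull section for N per-lipid restraints, mirroring the
# single-restraint example above. Index groups DPPC_&_P_&_r_1 ... _N
# (one P atom per lipid) are assumed to exist in the index file.
N = 128  # number of lipids to restrain

lines = ["pull = yes",
         "pull-ncoords = %d" % N,
         "pull-ngroups = %d" % (N + 1),
         "pull-group1-name = DPPC"]

for i in range(1, N + 1):
    lines += ["pull-group%d-name = DPPC_&_P_&_r_%d" % (i + 1, i),
              "pull-coord%d-geometry = distance" % i,
              "pull-coord%d-type = flat-bottom" % i,
              "pull-coord%d-vec = 0 0 1" % i,
              "pull-coord%d-start = no" % i,
              "pull-coord%d-init = 2.5" % i,
              "pull-coord%d-k = 1000" % i,
              "pull-coord%d-dim = N N Y" % i,
              "pull-coord%d-groups = 1 %d" % (i, i + 1)]

print("\n".join(lines))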

** Note that I modified the source code to give a flat-bottom restraint that is useful for my purposes, but I also benchmarked with the unmodified code, so the timings above have nothing to do with the modified code that I will eventually use.

Thank you,
Chris.

