[gmx-users] Is it normal for mdrun to run very slowly when a large number of 'energygrps' & user-defined potentials are used?

Musselman, Eli D eli-musselman at uiowa.edu
Thu Nov 13 21:11:25 CET 2008


Thanks, Mark, for your quick response and your suggestion. I decreased the
density of data points in each of my 274 table*.xvg files (3 tables for
each molecule and one table for all the other atoms) by a factor of 10. I
did notice a small increase in speed (1.584 ns/day to 1.590 ns/day), but
obviously nowhere near the speed without tables (~25 ns/day).
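
In case it is useful to anyone else, the thinning was done with a small
script along these lines (a minimal sketch in Python; it assumes the usual
table layout with '#'/'@' comment lines, and it keeps the final row so the
table still reaches rvdw + table-extension):

    import sys

    def thin_table(path_in, path_out, stride=10):
        # Keep every `stride`-th data row of a GROMACS table*.xvg file,
        # preserving header/comment lines and the very last data point.
        with open(path_in) as fin:
            lines = fin.readlines()

        comments = [ln for ln in lines if ln.lstrip().startswith(('#', '@'))]
        data = [ln for ln in lines
                if ln.strip() and not ln.lstrip().startswith(('#', '@'))]

        kept = data[::stride]
        if data and kept[-1] != data[-1]:
            kept.append(data[-1])  # do not shorten the table's extent

        with open(path_out, 'w') as fout:
            fout.writelines(comments + kept)

    if __name__ == '__main__':
        thin_table(sys.argv[1], sys.argv[2])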



If there are other suggestions for speeding up simulations with a large
number of tables (user-defined potentials), I would be more than grateful.
Again, thanks in advance for your help!



Eli

>>Musselman, Eli D wrote:
>> This works great but ***here's the problem***: *the simulations run
>> exceedingly slowly when I use such a high number of defined 'energygrps'*.
>> Without the user-defined potential functions I was getting roughly
>> 25 ns/day (which, as always with GROMACS, is amazingly fast!). With
>> the applied potential functions and 'energygrps', however, I am only
>> getting 1.5 ns/day. I thought one problem might be the number of times
>> 'mdrun' writes to the energy (.edr) file, but I have ruled this out
>> as a possibility: when I increased 'nstenergy' in my *.mdp file
>> 100-fold, 'mdrun' still ran at the same slow rate. The slowdown
>> therefore appears to be directly related to the number of
>> 'energygrps'. To confirm this, I decreased the number of 'energygrps'
>> to 3 (by lumping all 'C1' and 'C2' atoms together into one 'energygrp'
>> and all 'O1' and 'O2' atoms into another 'energygrp'), and this
>> accelerated the simulations dramatically; however, the same
>> user-defined potential functions are then applied to both **intra**-
>> and **inter**molecular interactions, which is not what I want!
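
For reference, my setup looks roughly like the following in the *.mdp (a
sketch with placeholder group names, using the 'energygrp_table' option of
GROMACS 4.0; every pair listed there needs its own table_X_Y.xvg file, which
is where the large number of tables comes from):

    vdwtype         = user
    energygrps      = C1 C2 O1 O2 SOL
    energygrp_table = C1 C1  C2 C2  O1 O1  O2 O2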
>
>Good diagnostics and description, thanks. It seems quite likely to me
>that this slowdown is related to cache misses. 95 tables actually
>use an appreciable amount of memory. More salient for the inner loops,
>however, is that each time a new table is used within an integration
>step, the code has to fetch its contents from main memory, and when
>these entries hit the caches they won't be reused much.
>
>The only simple work-around I can suggest is to decrease the density of
>data points in the tables. That will degrade the accuracy, but possibly
>lead to fewer cache misses.
>
>Mark
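
For anyone reading this in the archives, a rough back-of-the-envelope for
the footprint Mark describes (the per-point storage count below is an
assumption on my part, not something checked against the source):

    # rough size of the table data mdrun streams through each step
    n_tables  = 274    # one table*.xvg per group pair
    n_points  = 1000   # e.g. ~2 nm extent at 0.002 nm spacing
    per_point = 12     # assumed: ~4 interpolation coefficients x 3 functions
    size_mb = n_tables * n_points * per_point * 4 / 1024.0**2  # single precision
    print("%.0f MB of table data" % size_mb)  # ~13 MB vs a few MB of L2 cache

Even thinned by a factor of 10 this only drops to ~1.3 MB, so given that the
rate barely changed, cache misses may not be the whole story.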
