[gmx-users] an example to test mdrun-gpu x mdrun

Rossen Apostolov rossen at kth.se
Sat Sep 18 14:09:52 CEST 2010


  Hi Alan,

On 9/14/10 3:21 PM, Alan wrote:
> Hi there,
>
> I am testing on an MBP 17" with SL 10.6.4, 64-bit, and an nvidia GeForce 9600M GT
>
> So I got mdrun-gpu compiled and apparently running, but when I try to 
> run 'mdrun' to compare, I get a segmentation fault.
>
> Any other comments on md.mdp and em.mdp are very welcome too.
>
> ##### To test mdrun-gpu
>
> cat << EOF >| em.mdp
> define                   = -DFLEXIBLE
> integrator               = cg ; steep
> nsteps                   = 200
> constraints              = none
> emtol                    = 1000.0
> nstcgsteep               = 10 ; do a steep every 10 steps of cg
> emstep                   = 0.01 ; used with steep
> nstcomm                  = 1
> coulombtype              = PME
> ns_type                  = grid
> rlist                    = 1.0
> rcoulomb                 = 1.0
> rvdw                     = 1.4
> Tcoupl                   = no
> Pcoupl                   = no
> gen_vel                  = no
> nstxout                  = 0 ; write coords every # step
> optimize_fft             = yes
> EOF
>
> cat << EOF >| md.mdp
> integrator               = md-vv
> nsteps                   = 1000
> dt                       = 0.002
> constraints              = all-bonds
> constraint-algorithm     = shake
> nstcomm                  = 1
> nstcalcenergy            = 1
> ns_type                  = grid
> rlist                    = 1.3
> rcoulomb                 = 1.3
> rvdw                     = 1.3
> vdwtype                  = cut-off
> coulombtype              = PME
PME on the GPUs is not very fast: only about 3 times faster than a 
single CPU core.
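If raw GPU speed matters more than PME, reaction-field electrostatics 
avoid PME entirely. A minimal sketch, assuming your system tolerates RF 
(the values are illustrative):

coulombtype              = Reaction-Field
rcoulomb                 = 1.3
epsilon_rf               = 78      ; water-like dielectric, adjust as needed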
> Tcoupl                   = Andersen

Andersen works only with OpenMM. Gromacs accepts it as an option, but 
the actual algorithm is not implemented in the CPU version yet.
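For the plain mdrun comparison, pick a thermostat that is implemented 
on the CPU side. A sketch, swapping only this line of your md.mdp:

Tcoupl                   = v-rescale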
> nsttcouple               = 1
> tau_t                    = 0.1
> tc-grps                  = system
> ref_t                    = 300
> Pcoupl                   = mttk
> Pcoupltype               = isotropic
> nstpcouple               = 1
> tau_p                    = 0.5
> compressibility          = 4.5e-5
> ref_p                    = 1.0
> gen_vel                  = yes
> nstxout                  = 2 ; write coords every # step

Fetching data from the GPU every 2 steps is way too often. Use a value 
that you would actually use in production runs.
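Something closer to production output settings might look like this 
(the numbers are only illustrative):

nstxout                  = 5000    ; coordinates every 10 ps at dt = 0.002
nstvout                  = 5000
nstenergy                = 1000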

> lincs-iter               = 2
> DispCorr                 = EnerPres
> optimize_fft             = yes
> EOF
>
> wget -c "http://www.pdbe.org/download/1brv" -O 1brv.pdb
>
> pdb2gmx -ff amber99sb -f 1brv.pdb -o Prot.pdb -p Prot.top -water spce 
> -ignh
>
> editconf -bt triclinic -f Prot.pdb -o Prot.pdb -d 1.0
>
> genbox -cp Prot.pdb -o Prot.pdb -p Prot.top -cs
>
> grompp -f em.mdp -c Prot.pdb -p Prot.top -o Prot.tpr
>
> echo 13 | genion -s Prot.tpr -o Prot.pdb -neutral -conc 0.15 -p 
> Prot.top -norandom
>
> grompp -f em.mdp -c Prot.pdb -p Prot.top -o em.tpr
>
> mdrun -v -deffnm em
>
> grompp -f md.mdp -c em.gro -p Prot.top -o md.tpr
>
> mdrun-gpu -v -deffnm md -device 
> "OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=yes"
>
> [snip]
> Reading file md.tpr, VERSION 4.5.1-dev-20100913-9342b (single precision)
> Loaded with Money
>
>
> Back Off! I just backed up md.trr to ./#md.trr.7#
>
> Back Off! I just backed up md.edr to ./#md.edr.7#
>
> WARNING: OpenMM supports only Andersen thermostat with the 
> md/md-vv/md-vv-avek integrators.
>
>
> WARNING: OpenMM supports only Monte Carlo barostat for pressure coupling.
>
>
> WARNING: Non-supported GPU selected (#0, GeForce 9600M GT), forced 
> continuing. Note that the simulation can be slow or it might even crash.
>
>
> Pre-simulation ~15s memtest in progress...done, no errors detected
> starting mdrun 'PROTEIN G in water'
> 1000 steps,      2.0 ps.
> step 900, remaining runtime:     4 s
> Writing final coordinates.
>
> step 1000, remaining runtime:     0 s
> Post-simulation ~15s memtest in progress...done, no errors detected
>
> OpenMM run - timing based on wallclock.
>
>                NODE (s)   Real (s)      (%)
>        Time:     44.556     44.556    100.0
>                (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
> Performance:      0.000      0.027      3.882      6.182
>
>
> But if I try:
> mdrun -v -deffnm md -nt 1
> [snip]
> starting mdrun 'PROTEIN G in water'
> 1000 steps,      2.0 ps.
> [1]    75786 segmentation fault  mdrun -v -deffnm md -nt 1

It might be due to the Andersen thermostat setting.
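A quick way to test that hypothesis, as a sketch (the sed pattern 
assumes the Tcoupl line is formatted exactly as in your md.mdp above):

sed -i.bak 's/^Tcoupl .*/Tcoupl                   = v-rescale/' md.mdp
grompp -f md.mdp -c em.gro -p Prot.top -o md.tpr
mdrun -v -deffnm md -nt 1

If the segfault disappears with v-rescale, the Andersen setting was the 
culprit.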

>
> Note: using -nt 1 because SHAKE is not supported with domain 
> decomposition.
>
> If I set Tcoupl and Pcoupl to no, then I can compare mdrun vs. 
> mdrun-gpu: my GPU is ~2 times slower than a single core. Well, I 
> definitely don't intend to use mdrun-gpu, but I am surprised that it 
> performed that badly (OK, I am using a low-end GPU, but sander_openmm 
> seems to work fine and very fast on my MBP).
>
Try fetching data less often. Also, currently the GPUs are best used 
for implicit solvent simulations.
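A minimal implicit-solvent sketch for 4.5 (illustrative values; see the 
generalized Born section of the manual for the full option set):

implicit_solvent         = GBSA
gb_algorithm             = OBC
nstgbradii               = 1
rgbradii                 = 1.0     ; typically set equal to rlist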

> BTW, in the gmx 4.5 manual, the Andersen thermostat is mentioned only 
> in section 6.9, GROMACS on GPUs. Is it supposed to be used only with 
> mdrun-gpu?
Yes, at the moment.

Rossen

>
> Any ideas? Thanks,
>
> Alan
>
> -- 
> Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
> Department of Biochemistry, University of Cambridge.
> 80 Tennis Court Road, Cambridge CB2 1GA, UK.
> >>http://www.bio.cam.ac.uk/~awd28<<
