[gmx-users] an example to test mdrun-gpu vs. mdrun

Alan alanwilter at gmail.com
Tue Sep 14 15:21:43 CEST 2010


Hi there,

I am testing on a MacBook Pro 17" (Snow Leopard 10.6.4, 64-bit) with an NVIDIA GeForce 9600M GT.

I got mdrun-gpu compiled and apparently running, but when I try to run plain 'mdrun' to compare, I get a segmentation fault.

Any other comments on md.mdp and em.mdp are very welcome too.

##### To test mdrun-gpu

cat << EOF >| em.mdp
define                   = -DFLEXIBLE
integrator               = cg ; steep
nsteps                   = 200
constraints              = none
emtol                    = 1000.0
nstcgsteep               = 10 ; do a steep every 10 steps of cg
emstep                   = 0.01 ; used with steep
nstcomm                  = 1
coulombtype              = PME
ns_type                  = grid
rlist                    = 1.0
rcoulomb                 = 1.0
rvdw                     = 1.4
Tcoupl                   = no
Pcoupl                   = no
gen_vel                  = no
nstxout                  = 0 ; write coords every # step
optimize_fft             = yes
EOF

cat << EOF >| md.mdp
integrator               = md-vv
nsteps                   = 1000
dt                       = 0.002
constraints              = all-bonds
constraint-algorithm     = shake
nstcomm                  = 1
nstcalcenergy            = 1
ns_type                  = grid
rlist                    = 1.3
rcoulomb                 = 1.3
rvdw                     = 1.3
vdwtype                  = cut-off
coulombtype              = PME
Tcoupl                   = Andersen
nsttcouple               = 1
tau_t                    = 0.1
tc-grps                  = system
ref_t                    = 300
Pcoupl                   = mttk
Pcoupltype               = isotropic
nstpcouple               = 1
tau_p                    = 0.5
compressibility          = 4.5e-5
ref_p                    = 1.0
gen_vel                  = yes
nstxout                  = 2 ; write coords every # step
lincs-iter               = 2
DispCorr                 = EnerPres
optimize_fft             = yes
EOF

wget -c "http://www.pdbe.org/download/1brv" -O 1brv.pdb

pdb2gmx -ff amber99sb -f 1brv.pdb -o Prot.pdb -p Prot.top -water spce -ignh

editconf -bt triclinic -f Prot.pdb -o Prot.pdb -d 1.0

genbox -cp Prot.pdb -o Prot.pdb -p Prot.top -cs

grompp -f em.mdp -c Prot.pdb -p Prot.top -o Prot.tpr

echo 13 | genion -s Prot.tpr -o Prot.pdb -neutral -conc 0.15 -p Prot.top -norandom

grompp -f em.mdp -c Prot.pdb -p Prot.top -o em.tpr

mdrun -v -deffnm em

grompp -f md.mdp -c em.gro -p Prot.top -o md.tpr

mdrun-gpu -v -deffnm md -device "OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=yes"

[snip]
Reading file md.tpr, VERSION 4.5.1-dev-20100913-9342b (single precision)
Loaded with Money


Back Off! I just backed up md.trr to ./#md.trr.7#

Back Off! I just backed up md.edr to ./#md.edr.7#

WARNING: OpenMM supports only Andersen thermostat with the
md/md-vv/md-vv-avek integrators.


WARNING: OpenMM supports only Monte Carlo barostat for pressure coupling.


WARNING: Non-supported GPU selected (#0, GeForce 9600M GT), forced
continuing.Note, that the simulation can be slow or it migth even crash.


Pre-simulation ~15s memtest in progress...done, no errors detected
starting mdrun 'PROTEIN G in water'
1000 steps,      2.0 ps.
step 900, remaining runtime:     4 s
Writing final coordinates.

step 1000, remaining runtime:     0 s
Post-simulation ~15s memtest in progress...done, no errors detected

OpenMM run - timing based on wallclock.

               NODE (s)   Real (s)      (%)
       Time:     44.556     44.556    100.0
               (Mnbf/s)   (MFlops)   (ns/day)  (hour/ns)
Performance:      0.000      0.027      3.882      6.182


But if I try:
mdrun -v -deffnm md -nt 1
[snip]
starting mdrun 'PROTEIN G in water'
1000 steps,      2.0 ps.
[1]    75786 segmentation fault  mdrun -v -deffnm md -nt 1

Note: I am using -nt 1 because SHAKE is not supported with domain decomposition (a LINCS alternative is sketched below).
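
Just for reference, I think the only md.mdp change needed for mdrun to use domain decomposition again would be switching to LINCS (a sketch, not tested here; whether it also avoids the segfault is another matter):

constraint-algorithm     = lincs ; instead of shake; the lincs-iter = 2 above then applies

grompp -f md.mdp -c em.gro -p Prot.top -o md.tpr
mdrun -v -deffnm md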

If I set Tcoupl = no and Pcoupl = no, then I can compare mdrun against mdrun-gpu: my GPU is roughly 2 times slower than a single CPU core. Well, I definitely don't intend to use mdrun-gpu, but I am surprised that it performed that badly (OK, I am using a low-end GPU, but sander_openmm seems to work fine and very fast on my MBP). The exact changes I make for that comparison are below.
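
For completeness, these are the only md.mdp lines I change for that comparison (everything else stays as in md.mdp above), followed by the two runs being compared:

Tcoupl                   = no
Pcoupl                   = no

grompp -f md.mdp -c em.gro -p Prot.top -o md.tpr
mdrun -v -deffnm md -nt 1
mdrun-gpu -v -deffnm md -device "OpenMM:platform=Cuda,memtest=15,deviceid=0,force-device=yes"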

BTW, in the GROMACS 4.5 manual there is a reference to the Andersen thermostat only in section 6.9, "GROMACS on GPUs". Is it supposed to be used only with mdrun-gpu?

Any ideas? Thanks,

Alan

-- 
Alan Wilter S. da Silva, D.Sc. - CCPN Research Associate
Department of Biochemistry, University of Cambridge.
80 Tennis Court Road, Cambridge CB2 1GA, UK.
>>http://www.bio.cam.ac.uk/~awd28<<