[gmx-developers] Problem with mdrun-openmm in release-4-6

Justin Lemkul jalemkul at vt.edu
Tue Nov 20 00:59:42 CET 2012


Hi,

I'm wondering if anyone has encountered any problems using mdrun-openmm compiled 
from the latest release-4-6.  I hadn't had time to play with it until today, and 
I'm running into a segmentation fault.  The log file seems to indicate that the 
GPU is not being detected properly, though the same setup works just fine with 
version 4.5.5.

The workstation has a Tesla C2075 card, CUDA 4.0 and OpenMM 4.0.

My command:

$GMX/mdrun-gpu -device 
"OpenMM:platform=Cuda,memtest=off,deviceid=0,force-device=yes" -s test.tpr

In 4.5.5, I get this (relevant snippet of the log file shown):

OpenMM plugins loaded from directory /usr/local/openmm-4.0/lib/plugins: 
libOpenMMCuda.so, libOpenMMAmoebaCuda.so, libOpenMMAmoebaSerialization.so, 
libOpenMMOpenCL.so, libFreeEnergySerialization.so,
The combination rule of the used force field matches the one used by OpenMM.
Gromacs will use the OpenMM platform: Cuda
Non-supported GPU selected (#0, Tesla C2075), forced continuing.Note, that the 
simulation can be slow or it migth even crash.
Pre-simulation GPU memtest skipped. Note, that faulty memory can cause incorrect 
results!

In release-4-6 I get this (same command, except with "mdrun-openmm" instead of 
"mdrun-gpu"):

OpenMM plugins loaded from directory /usr/local/openmm-4.0/lib/plugins: 
libOpenMMCuda.so, libOpenMMAmoebaCuda.so, libOpenMMAmoebaSerialization.so, 
libOpenMMOpenCL.so, libFreeEnergySerialization.so,
The combination rule of the used force field matches the one used by OpenMM.
Gromacs will use the OpenMM platform: Cuda
Gromacs will run on the GPU #0 (es 148566
procs_running 1
procs_blocked 0
softirq 33540031 0 182f4&).
Pre-simulation GPU memtest skipped. Note, that faulty memory can cause incorrect 
results!

It seems the name of the GPU is not being detected properly and gibberish gets 
printed to the .log file; the stray text looks like the contents of /proc/stat 
(note the procs_running, procs_blocked, and softirq lines).  Immediately after 
this, mdrun-openmm segfaults.
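
Assuming that is what it is, here is a minimal, purely hypothetical sketch of 
the failure class I suspect: a fixed-size device-name buffer that the CUDA 
query never overwrites gets handed to the logger while still holding stale 
file contents.  None of the names below come from the GROMACS sources; this 
only illustrates the mechanism, not the actual code path.

// Purely illustrative -- made-up names, not the actual GROMACS code.
#include <cstdio>

static void report_gpu(std::FILE *log, int devid, const char *name)
{
    // Trusts 'name' to be a valid, NUL-terminated C string.
    std::fprintf(log, "Gromacs will run on the GPU #%d (%s).\n", devid, name);
}

int main()
{
    char devname[128];

    // Pretend the device query silently failed and the buffer still holds
    // leftovers of an earlier /proc/stat read.  Kept NUL-terminated here so
    // the example stays well defined; if the terminator were missing as
    // well, the fprintf above would read past the buffer, which would give
    // a crash rather than just a garbled log line.
    std::snprintf(devname, sizeof(devname), "%s",
                  "es 148566\nprocs_running 1\nprocs_blocked 0\nsoftirq 33540031");

    report_gpu(stdout, 0, devname);
    return 0;
}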

To try to diagnose the crash, I compiled in Debug mode instead of Release mode; 
running mdrun-openmm under gdb then allows the simulation to run, with the 
following in the .log file:

OpenMM plugins loaded from directory /usr/local/openmm-4.0/lib/plugins: 
libOpenMMCuda.so, libOpenMMAmoebaCuda.so, libOpenMMAmoebaSerialization.so, 
libOpenMMOpenCL.so, libFreeEnergySerialization.so,
The combination rule of the used force field matches the one used by OpenMM.
Gromacs will use the OpenMM platform: Cuda
Gromacs will run on the GPU #0 ().
Pre-simulation GPU memtest skipped. Note, that faulty memory can cause incorrect 
results!

Then the run proceeds, albeit very slowly.

This all strikes me as very weird.  Debug mode works (though it no longer 
prints the name of the card), but Release mode fails.  Any thoughts?
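
For what it's worth, the Debug/Release difference itself would be consistent 
with that kind of missing initialization: an unoptimized build (or running 
under gdb) can easily land on more benign leftover memory, while the optimized 
layout exposes it.  Below is an equally hypothetical sketch of the defensive 
initialization I would expect to hide the symptom; query_cuda_device_name() is 
a made-up stand-in, not a real GROMACS or CUDA function.

// Purely illustrative -- made-up names, not a patch against the real code.
#include <cstddef>
#include <cstdio>

// Hypothetical stand-in for whatever asks the CUDA runtime for the device
// name; it "fails" unconditionally here to mimic a silently failing query.
static bool query_cuda_device_name(int /*devid*/, char *buf, std::size_t len)
{
    (void)buf;
    (void)len;
    return false;
}

int main()
{
    const int devid = 0;
    char devname[128];

    devname[0] = '\0';                       // never print stale bytes
    if (!query_cuda_device_name(devid, devname, sizeof(devname)))
    {
        std::snprintf(devname, sizeof(devname), "%s", "unknown device");
    }
    devname[sizeof(devname) - 1] = '\0';     // guarantee termination

    std::fprintf(stdout, "Gromacs will run on the GPU #%d (%s).\n",
                 devid, devname);
    return 0;
}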

-Justin

-- 
========================================

Justin A. Lemkul, Ph.D.
Research Scientist
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin

========================================
