[gmx-users] mdrun failed

Albert mailmd2011 at gmail.com
Thu May 15 15:28:32 CEST 2014


Hello:

I am trying to submit a Gromacs job with the command:

mpirun -np 2 mdrun_mpi -s md.tpr -g -v

but it fails with the following messages:

--------------------------------------------------------------------------
It looks like opal_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during opal_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  opal_shmem_base_select failed
  --> Returned value -1 instead of OPAL_SUCCESS
--------------------------------------------------------------------------
*** An error occurred in MPI_Init
*** on a NULL communicator
*** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
***    and potentially your MPI job)
[cudaA:5978] Local abort before MPI_INIT completed successfully; not able
to aggregate error messages, and not able to guarantee that all other
processes were killed!
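
In case it helps with diagnosis: this kind of opal_init/opal_shmem_base_select abort often happens when the mdrun_mpi binary is linked against a different Open MPI than the mpirun used to launch it, or when LD_LIBRARY_PATH does not include the Open MPI lib directory. A quick sanity check I can run (a sketch; MPI_PREFIX is the --prefix from my configure step below):

```shell
# Sanity check: confirm mpirun and mdrun_mpi come from the same Open MPI.
# MPI_PREFIX is the --prefix given to Open MPI's configure (see below).
MPI_PREFIX=/soft/openmpi-1.7.5_intel
export PATH="$MPI_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_PREFIX/lib:$LD_LIBRARY_PATH"

# Both of these should point into $MPI_PREFIX; a mismatch can reproduce
# exactly this kind of abort before MPI_Init completes.
command -v mpirun || true
ldd "$(command -v mdrun_mpi)" 2>/dev/null | grep libmpi || true
```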

--------------------------------------------------------------------------


I first compiled Open MPI with the command:

./configure --prefix=/soft/openmpi-1.7.5_intel CC=icc F77=ifort FC=ifort CXX=icpc

then compiled gromacs-4.6.5 with the command:

env CC=icc F77=ifort CXX=icpc \
    CMAKE_PREFIX_PATH=/soft/intel/mkl/include/fftw:/soft/openmpi-1.7.5 \
    cmake .. \
    -DBUILD_SHARED_LIBS=OFF -DBUILD_TESTING=OFF \
    -DCMAKE_INSTALL_PREFIX=/soft/gromacs-4.6.5 -DGMX_FFT_LIBRARY=mkl \
    -DGMX_MPI=ON -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
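
One variant I am considering, to remove any ambiguity about which MPI gets linked, is to configure through the Open MPI compiler wrappers instead of icc/icpc directly (a sketch, not tested; mpicc/mpicxx live under the configure prefix above):

```shell
# Hypothetical variant: let the Open MPI compiler wrappers select the MPI,
# so cmake links the same install that mpirun comes from.
MPI_PREFIX=/soft/openmpi-1.7.5_intel
env CC=$MPI_PREFIX/bin/mpicc CXX=$MPI_PREFIX/bin/mpicxx \
    CMAKE_PREFIX_PATH=/soft/intel/mkl/include/fftw \
    cmake .. \
    -DBUILD_SHARED_LIBS=OFF -DBUILD_TESTING=OFF \
    -DCMAKE_INSTALL_PREFIX=/soft/gromacs-4.6.5 -DGMX_FFT_LIBRARY=mkl \
    -DGMX_MPI=ON -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/usr/local/cuda
```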



Thank you very much.

Albert


More information about the gromacs.org_gmx-users mailing list