[gmx-users] job error in cluster

Albert mailmd2011 at gmail.com
Tue Jan 28 11:12:17 CET 2014


Hello:

I am submitting a GROMACS job on a cluster with the following commands:


module load gromacs/4.6.5-intel13.1
export MDRUN=mdrun_mpi
# steepest-descent energy minimization
mpirun -np 1 grompp_mpi -f em.mdp -c ion-em.gro -p topol.top -o em.tpr -n
mpirun -np 64 mdrun_mpi -s em.tpr -c em.gro -v -g em.log &>em.info
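
In case it helps with diagnosing this, here is the kind of sanity check I can run on the toolchain interactively, outside the job script (this is only my own check, not part of the submitted job):

# quick toolchain check, run interactively before submitting
module load gromacs/4.6.5-intel13.1
which mdrun_mpi                   # confirm the module's binary is first in PATH
mpirun -np 1 mdrun_mpi -version   # GROMACS version and build configuration
mpirun --version                  # MPI stack used for the 64-rank run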


but it failed with the following messages:


----------------------------------------------------------------------------------------
Reading file em.tpr, VERSION 4.6.5 (single precision)

Will use 48 particle-particle and 16 PME only nodes
This is a guess, check the performance at the end of the log file
Using 64 MPI processes
Using 12 OpenMP threads per MPI process

-------------------------------------------------------
Program mdrun_mpi, VERSION 4.6.5
Source code file: /icm/home/magd/gromacs-4.6.5/src/mdlib/nbnxn_search.c, line: 2523

Fatal error:
48 OpenMP threads were requested. Since the non-bonded force buffer
reduction is prohibitively slow with more than 32 threads, we do not
allow this. Use 32 or less OpenMP threads.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

"Everybody Lie Down On the Floor and Keep Calm" (KLF)

Error on node 1, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 1 out of 64

gcq#78: "Everybody Lie Down On the Floor and Keep Calm" (KLF)

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
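
From the error text I gather that each MPI process is starting far more OpenMP threads than it should. Would explicitly capping the thread count be the right fix on this machine? This is roughly what I would try next (only my guess, using mdrun's -ntomp option and the OMP_NUM_THREADS variable; I have not verified it on this cluster):

# guess at a fix: pin the OpenMP thread count per MPI rank explicitly
export OMP_NUM_THREADS=1
mpirun -np 64 mdrun_mpi -ntomp 1 -s em.tpr -c em.gro -v -g em.log &>em.info

Or, if the nodes have enough cores, some split with fewer MPI ranks and more OpenMP threads per rank, as long as no rank goes above 32 threads. Any advice on the proper rank/thread layout for this kind of node would be much appreciated.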
