[gmx-users] Error when running pbs files using GROMACS on hpc cluster

Justin Lemkul jalemkul at vt.edu
Wed Aug 21 00:18:24 CEST 2019



On 8/20/19 5:56 PM, Anh Vo wrote:
> Hi everyone,
>
> I have submitted pbs files to run GROMACS on a high performance computing
> cluster, but I received error files showing these messages:
>
>
>
> "...
>
>   librdmacm: Warning: couldn't read ABI version.
>
>   librdmacm: Warning: assuming: 4
>
>   librdmacm: Fatal: unable to open /dev/infiniband/rdma_cm
>
>   librdmacm: Fatal: unable to open /dev/infiniband/rdma_cm ..."
>
>
> and
>
>
> "--------------------------------------------------------------------------
>
> mpirun noticed that process rank 28 with PID 0 on node
> shadow-0119.hpc.msstate.edu exited on signal 11 (Segmentation fault).
>
> --------------------------------------------------------------------------
>
> 40 total processes killed (some possibly by mpirun during cleanup)"
>
>
> This error happened when I used mpirun to run my jobs on several nodes (10
> nodes in this case). When I didn't use mpirun and ran on only 1 node,
> no error was produced. The commands I used are:
>
>
> *"gmx grompp -f step7_production_01.mdp -c step6.6_equilibration.gro -n
> index.ndx -p topol.top -o step7_1_01.tpr -maxwarn -1*
> *mpirun -np 200 gmx_mpi mdrun -v -deffnm step7_1_01"*
>
>
> Is that mpirun command correct? I'm not sure if those errors are related to
> using mpirun. I don't understand what the errors mean or how I should fix
> them.
>
> Please help me with this problem. Thank you very much for sharing your time.

None of those are GROMACS errors; they are coming from your system and 
indicate problems with the MPI library. Consult your sysadmins.
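As a quick sanity check that is independent of GROMACS, a trivial multi-node 
job can confirm whether the MPI stack itself is healthy. The sketch below is 
only illustrative: the #PBS resource request and the module name are 
placeholders for whatever your cluster actually provides. If even this job 
prints the librdmacm warnings or segfaults, the problem is in the 
MPI/InfiniBand setup, not in mdrun:

    #!/bin/bash
    #PBS -l nodes=2:ppn=20       # placeholder request; adjust to your cluster
    #PBS -l walltime=00:05:00

    cd $PBS_O_WORKDIR
    module load openmpi          # placeholder; load the same MPI used to build gmx_mpi

    # Launch one trivial process per rank across both nodes. Any librdmacm
    # warnings or segfaults here implicate the MPI stack, not GROMACS.
    mpirun -np 40 hostname

If that runs cleanly across nodes, the MPI installation is probably fine and 
the investigation can move back to the GROMACS side.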

-Justin

-- 
==================================================

Justin A. Lemkul, Ph.D.
Assistant Professor
Office: 301 Fralin Hall
Lab: 303 Engel Hall

Virginia Tech Department of Biochemistry
340 West Campus Dr.
Blacksburg, VA 24061

jalemkul at vt.edu | (540) 231-3129
http://www.thelemkullab.com

==================================================


