[gmx-users] LAM OpenMPI conflict?

Francesco Pietra francesco.pietra at accademialucchese.it
Thu Oct 15 18:23:25 CEST 2009


Hi: I was trying to minimize, in vacuum, a pure CG protein on a
four-core machine (two dual-core Opterons) and got errors:

$ lamboot

$ mpirun -np 4 mdrun -s mod21.tpr -o minim_mod21_traj.trr -c
minim_mod21.gro -e minim_mod21_ener.edr
...................
...................

mpirun noticed that job rank 2 with PID 4385 on node
tya64 exited on signal 11 (Segmentation fault).
3 processes killed (possibly by Open MPI)
WARNING: Writing out atom name (SCSP1) longer than 4 characters to .pdb file
.......................
.......................
WARNING: Writing out atom name (SCSP1) longer than 4 characters to
.pdb file
[tya64:04380] [0,0,0]-[0,0,1] mca_oob_tcp_msg_send_handler: writev
failed: Broken pipe (32)
[tya64:04380] [0,0,0] ORTE_ERROR_LOG: Timeout in file
base/pls_base_orted_cmds.c at line 188
[tya64:04380] [0,0,0] ORTE_ERROR_LOG: Timeout in file pls_rsh_module.c
at line 1198
--------------------------------------------------------------------------
mpirun was unable to cleanly terminate the daemons for this job.
Returned value Timeout instead of ORTE_SUCCESS.
==========

GROMACS 3.3 was installed from the Debian-provided package, with the
LAM option. I have also installed an Intel-compiled Open MPI in a
quite different directory, and "which" finds either MPI installation
independently. Do the above error messages really imply a conflict
between LAM and Open MPI, or should I look elsewhere?
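In case it helps to narrow this down, here is a minimal sketch of how
one might check which MPI stack mdrun is actually picking up (the
commands are illustrative, assuming a Linux box with ldd; the binary
names are just whatever PATH resolves):

```shell
#!/bin/sh
# Which launcher wins on PATH? With both LAM and Open MPI installed,
# this decides which mpirun starts mdrun.
command -v mpirun || echo "mpirun not on PATH"

# Which MPI shared libraries is mdrun linked against? Launching a
# LAM-built mdrun with Open MPI's mpirun (or vice versa) is a common
# way to get an immediate signal 11.
mdrun_bin=$(command -v mdrun) || mdrun_bin=""
if [ -n "$mdrun_bin" ]; then
  ldd "$mdrun_bin" | grep -i -e lam -e mpi
fi
```

If the mpirun on PATH and the libraries mdrun links against come from
different installations, that mismatch alone could explain the crash.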

thanks

francesco pietra
