[gmx-users] errors when running gromacs in parallel
gianluca santarossa
gianluca.santarossa at unimib.it
Thu Jan 15 18:05:01 CET 2004
Dear gmx-users,
I'm learning how to run GROMACS simulations using MPI.
I'm using only simple configuration files, just to test parallel runs;
something like the minimal parameter file sketched below.
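For reference, this is the kind of minimal .mdp I mean (the values
here are only illustrative, not my exact files):

    cat > test.mdp <<'EOF'
    ; minimal MD test run -- illustrative values only
    integrator  = md        ; leap-frog MD
    nsteps      = 5000      ; short test run
    dt          = 0.002     ; 2 fs timestep
    nstlist     = 10        ; neighbour list update frequency
    rlist       = 0.9       ; neighbour list cutoff (nm)
    coulombtype = cut-off   ; plain cutoff electrostatics for the test
    rcoulomb    = 0.9
    rvdw        = 0.9
    EOF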
My computers have 2 Intel(R) Xeon(TM) processors each. I'm running
Red Hat 7.3 with kernel 2.4.20, and I have tried both the RPM and the
source packages of LAM-MPI, FFTW and GROMACS. I start the runs as
sketched below.
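Roughly, I boot LAM and launch the runs like this (the binary name
mdrun_mpi, the -np count and the file names are placeholders for my
actual setup):

    lamboot -v hostfile      # hostfile lists the dual-Xeon nodes
    grompp -np 4 -f test.mdp -c conf.gro -p topol.top -o topol.tpr
    mpirun -np 4 mdrun_mpi -s topol.tpr -v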
Simulations run fine with 1 or 2 processors, on either one node or
two. But as soon as I use 3 or more processors, I get errors like
these:
MPI_Send: process in local group is dead (rank 1, MPI_COMM_WORLD)
MPI_Wait: process in local group is dead (rank 3, MPI_COMM_WORLD)
Rank (1, MPI_COMM_WORLD): Call stack within LAM:
Rank (1, MPI_COMM_WORLD): - MPI_Send()
Rank (1, MPI_COMM_WORLD): - MPI_Sendrecv()
Rank (1, MPI_COMM_WORLD): - main()
Rank (3, MPI_COMM_WORLD): Call stack within LAM:
Rank (3, MPI_COMM_WORLD): - MPI_Wait()
Rank (3, MPI_COMM_WORLD): - MPI_Sendrecv()
Rank (3, MPI_COMM_WORLD): - main()
-----------------------------------------------------------------------------
One of the processes started by mpirun has exited with a nonzero exit
code. This typically indicates that the process finished in error.
If your process did not finish in error, be sure to include a "return
0" or "exit(0)" in your C code before exiting the application.
PID 16072 failed on node n0 (192.168.1.139) due to signal 11.
-----------------------------------------------------------------------------
In your opinion, what could the cause be?
Any ideas?
Thanks
gianluca santarossa