[gmx-users] internal MPI error: GER overflow

Erik Lindahl lindahl at stanford.edu
Tue Sep 24 06:30:42 CEST 2002


Hi,

>
> NNODES=16, MYRANK=11, HOSTNAME=node11
> NNODES=16, MYRANK=13, HOSTNAME=node13
> MPI_Isend: internal MPI error: GER overflow (rank 5, MPI_COMM_WORLD)
> NNODES=16, MYRANK=4, HOSTNAME=node04
> NNODES=16, MYRANK=3, HOSTNAME=node03
> ----------------------------------------------------------------------------- 
>
>
> One of the processes started by mpirun has exited with a nonzero exit
> code.  This typically indicates that the process finished in error.
> If your process did not finish in error, be sure to include a "return
> 0" or "exit(0)" in your C code before exiting the application.
>
> PID 4340 failed on node n2 with exit status 1.
> ----------------------------------------------------------------------------- 
>
> Rank (5, MPI_COMM_WORLD): Call stack within LAM:
> Rank (5, MPI_COMM_WORLD):  - MPI_Isend()
> Rank (5, MPI_COMM_WORLD):  - main()
> sasidhar@cluster:~/ykgqp$ lamhalt
>
> LAM 6.5.6/MPI 2 C++/ROMIO - University of Notre Dame



As the message says, it's an internal MPI error, not a Gromacs one. I
have never seen it myself; unless somebody else on the list knows what
it is, your best bet is probably to:

1) Recompile the latest versions of LAM-MPI (or MPICH), FFTW, and Gromacs.
2) Contact the LAM-MPI authors at www.lam-mpi.org if that doesn't work out :-)
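
For what it's worth, "GER" in LAM stands for guaranteed envelope
resources, as far as I know: each pair of processes gets a fixed quota
of message envelopes, and the overflow means a rank had more pending,
unmatched sends than the quota allows. Here is a minimal C sketch (not
Gromacs code; the NMSG count is made up and just needs to exceed the
quota) of the kind of pattern that can trigger it, where a burst of
MPI_Isend calls outruns the receiver:

/* Minimal sketch of a send burst that can exhaust LAM's guaranteed
 * envelope resources. NMSG is a made-up count; the real quota depends
 * on how LAM was configured. Compile with mpicc, run on >= 2 ranks. */
#include <mpi.h>

#define NMSG 10000

static int sendbuf[NMSG];
static MPI_Request req[NMSG];
static MPI_Status stat[NMSG];

int main(int argc, char *argv[])
{
    int rank, i;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* Post thousands of nonblocking sends; every send that the
         * receiver has not yet matched holds one envelope slot. */
        for (i = 0; i < NMSG; i++) {
            sendbuf[i] = i;
            MPI_Isend(&sendbuf[i], 1, MPI_INT, 1, i, MPI_COMM_WORLD,
                      &req[i]);
        }
        MPI_Waitall(NMSG, req, stat);
    } else if (rank == 1) {
        int recvbuf;
        MPI_Status st;
        /* Drain the messages one at a time; if the sender's burst
         * exceeds the envelope quota before these catch up, LAM can
         * report a GER overflow. */
        for (i = 0; i < NMSG; i++)
            MPI_Recv(&recvbuf, 1, MPI_INT, 0, i, MPI_COMM_WORLD, &st);
    }

    MPI_Finalize();
    return 0;
}

If I remember right, LAM's mpirun also has a -nger flag to turn the
envelope accounting off, but balancing the sends and receives is the
cleaner fix.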

Cheers,

Erik




