[gmx-users] mpirun error?
Justin Kat
justin.kat at mail.mcgill.ca
Wed Feb 16 22:37:30 CET 2011
Dear Gromacs,
My colleague has attempted to issue this command:
mpirun -np 8 (or 7) mdrun_mpi ...... (etc)
According to him, he gets the following error message:
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
-------------------------------------------------------
Program mdrun_mpi, VERSION 4.0.7
Source code file: domdec.c, line: 5888
Fatal error:
There is no domain decomposition for 7 nodes that is compatible with the
given box and a minimum cell size of 0.955625 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS
settings
However, when he uses, say, -np 6, he gets no error. Any insight into
why this might be happening?
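My rough guess is that mdrun_mpi has to split the box into one domain per
MPI rank, and every domain must be at least 0.955625 nm wide in each
direction. Since 7 is prime, the only possible grids are 7x1x1 (and its
permutations), which slices one box dimension very thinly, whereas 6 can be
split as, e.g., 3x2x1. Below is a small Python sketch of that check; the
3 nm box size and the function name are made up for illustration, and the
real setup in domdec.c obviously considers much more (triclinic boxes,
separate PME ranks, etc.):

    from itertools import product

    def can_decompose(box, nranks, min_cell):
        """List the (nx, ny, nz) grids with nx*ny*nz == nranks whose
        cells are at least min_cell wide along each box dimension."""
        grids = []
        for nx, ny, nz in product(range(1, nranks + 1), repeat=3):
            if nx * ny * nz != nranks:
                continue
            if all(length / n >= min_cell
                   for length, n in zip(box, (nx, ny, nz))):
                grids.append((nx, ny, nz))
        return grids

    # Hypothetical ~3 nm cubic box; 0.955625 nm is the minimum cell
    # size reported in the error message above.
    box = (3.0, 3.0, 3.0)
    print(can_decompose(box, 7, 0.955625))   # []  -- 7 only splits as 7x1x1
    print(can_decompose(box, 6, 0.955625))   # contains (3, 2, 1), etc.

Is that roughly the right picture?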
Also, when he saves the output to a file, sometimes he sees the following:
NOTE: Turning on dynamic load balancing
Could this be what is causing the crash? What does that line mean?
Thanks!
Justin