[gmx-users] mpirun error?
Justin A. Lemkul
jalemkul at vt.edu
Wed Feb 16 22:40:44 CET 2011
Justin Kat wrote:
> Dear Gromacs,
>
> My colleague has attempted to issue this command:
>
>
> mpirun -np 8 (or 7) mdrun_mpi ...... (etc)
>
>
> According to him, he gets the following error message:
>
>
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 4.0.7
> Source code file: domdec.c, line: 5888
>
> Fatal error:
> There is no domain decomposition for 7 nodes that is compatible with the
> given box and a minimum cell size of 0.955625 nm
> Change the number of nodes or mdrun option -rcon or -dds or your LINCS
> settings
>
>
> However, when he uses, say, -np 6, he seems to get no error. Any insight
> into why this might be happening?
>
When any error comes up, the first port of call should be the Gromacs site,
followed by a mailing list search. In this case, the website works quite nicely:
http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm
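
In short, mdrun could not split the box into 7 domain-decomposition cells of
at least 0.955625 nm each, so it aborts before the run starts. Purely as an
illustration (the -deffnm name below is just a placeholder for his own files),
the usual ways around it are to pick a rank count that factors into a grid
that fits the box, to specify the grid by hand, or to adjust the limits the
error message mentions:

  mpirun -np 8 mdrun_mpi -deffnm topol              # let mdrun pick the grid
  mpirun -np 8 mdrun_mpi -deffnm topol -dd 4 2 1    # force a 4x2x1 grid
  mpirun -np 7 mdrun_mpi -deffnm topol -rcon 0.9    # relax the constraint
                                                    # range (use with care)

Which of these is appropriate depends on the system; lowering -rcon too far
can cause LINCS problems, which is why the error message also points at the
LINCS settings.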
> Also, when he saves the output to a file, sometimes he sees the following:
>
>
> NOTE: Turning on dynamic load balancing
>
>
> Is this another flag that might be causing the crash? What does that
> line mean?
See the manual and/or Gromacs 4 paper for an explanation of dynamic load
balancing. This is a normal message.
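The note simply means mdrun detected load imbalance between the cells and
switched on dynamic load balancing to even it out; it is unrelated to the
crash. If he wants to rule it out as a test anyway, it can be disabled
explicitly (again with a placeholder -deffnm name):

  mpirun -np 6 mdrun_mpi -deffnm topol -dlb no

The default is -dlb auto, which is why the message appears once some
imbalance builds up.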
-Justin
--
========================================
Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
========================================