[gmx-users] job error in cluster
José Adriano da Silva
jose.adriano.ds at gmail.com
Tue Jan 28 19:30:46 CET 2014
Apologies, friends, I posted this email by mistake.
2014-01-28 Mark Abraham <mark.j.abraham at gmail.com>:
> That's extremely strange. There's a bug there to fix (48 cannot be correct
> in both places it is used). Albert, can you please upload your .tpr to a
> new issue at http://redmine.gromacs.org? Please also add what value of
> OMP_NUM_THREADS is set (perhaps implicitly by the script or your mpirun
> setup).
>
> Mark
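
One way to see what value is actually in force is to launch a trivial command
through the same mpirun line as the failing job; the snippet below is only a
rough sketch, assuming the same Open MPI launcher and batch environment (the
rank count is purely illustrative):

    # Report the host and the OpenMP thread count each MPI rank inherits from
    # the batch environment; "unset" means mdrun will choose its own default.
    mpirun -np 4 sh -c 'echo "host=$(hostname) OMP_NUM_THREADS=${OMP_NUM_THREADS:-unset}"'
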
>
>
> On Tue, Jan 28, 2014 at 11:12 AM, Albert <mailmd2011 at gmail.com> wrote:
>
> > Hello:
> >
> > I am submitting a gromacs job in a cluster with command:
> >
> >
> > module load gromacs/4.6.5-intel13.1
> > export MDRUN=mdrun_mpi
> > #steepest MINI
> > mpirun -np 1 grompp_mpi -f em.mdp -c ion-em.gro -p topol.top -o em.tpr -n
> > mpirun -np 64 mdrun_mpi -s em.tpr -c em.gro -v -g em.log &>em.info
> >
> >
> > but it failed with messages:
> >
> >
> > ------------------------------------------------------------------------------------------
> > Reading file em.tpr, VERSION 4.6.5 (single precision)
> >
> > Will use 48 particle-particle and 16 PME only nodes
> > This is a guess, check the performance at the end of the log file
> > Using 64 MPI processes
> > Using 12 OpenMP threads per MPI process
> >
> > -------------------------------------------------------
> > Program mdrun_mpi, VERSION 4.6.5
> > Source code file: /icm/home/magd/gromacs-4.6.5/src/mdlib/nbnxn_search.c,
> > line: 2523
> >
> > Fatal error:
> > 48 OpenMP threads were requested. Since the non-bonded force buffer
> > reduction is prohibitively slow with more than
> > 32 threads, we do not allow this. Use 32 or less OpenMP threads.
> > For more information and tips for troubleshooting, please check the GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > -------------------------------------------------------
> >
> > "Everybody Lie Down On the Floor and Keep Calm" (KLF)
> >
> > Error on node 1, will try to stop all the nodes
> > Halting parallel program mdrun_mpi on CPU 1 out of 64
> >
> > gcq#78: "Everybody Lie Down On the Floor and Keep Calm" (KLF)
> >
> >
> > --------------------------------------------------------------------------
> > MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
> > with errorcode -1.
> >
> > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > You may or may not see output from other processes, depending on
> > exactly when Open MPI kills them.
> >
> > --------------------------------------------------------------------------
> >
> > --------------------------------------------------------------------------
> >
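
Until that 48-vs-32 inconsistency is resolved, the usual workaround for this
class of error is to stop mdrun from guessing and to set the OpenMP thread
count explicitly, so that MPI ranks x OpenMP threads matches the cores
actually allocated. A minimal sketch, assuming 64 allocated cores, one OpenMP
thread per rank, and a PP/PME split that simply mirrors the 48/16 guess mdrun
printed above:

    # One OpenMP thread per MPI rank: 64 ranks x 1 thread = 64 cores, instead
    # of letting mdrun inherit an oversized OMP_NUM_THREADS from the module or
    # MPI environment.
    export OMP_NUM_THREADS=1
    mpirun -np 64 mdrun_mpi -ntomp 1 -npme 16 -s em.tpr -c em.gro -v -g em.log &> em.info

The -npme 16 value only reproduces the split mdrun guessed for itself; it can
be omitted to let mdrun choose the PP/PME split again once the thread count is
pinned down.
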