[gmx-users] Problem with the mdrun_openmpi on cluster
Smith, Micholas D.
smithmd at ornl.gov
Mon Mar 14 17:22:51 CET 2016
What is your box size (x, y, z)?
What happens if you use half that number of nodes?
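The question about box size matters because mdrun's domain decomposition must carve the box into one cell per MPI rank, each at least the minimum cell size (2.6255 nm in the error below). A rough upper bound on usable ranks can be sketched like this (the box lengths are hypothetical examples, not taken from the original post, and real DD also accounts for PME ranks and grid shape):

```python
import math

def max_dd_ranks(box, min_cell):
    """Rough upper bound on DD ranks: product of the number of cells of
    at least min_cell that fit along each box dimension."""
    return math.prod(math.floor(length / min_cell) for length in box)

min_cell = 2.6255  # nm, from the fatal error message
# Hypothetical 7 nm cubic box: only 2 cells fit per axis, so at most 8 ranks.
print(max_dd_ranks((7.0, 7.0, 7.0), min_cell))
```

If this bound is far below 64, no decomposition exists for 64 ranks and the run must use fewer ranks or a larger box.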
===================
Micholas Dean Smith, PhD.
Post-doctoral Research Associate
University of Tennessee/Oak Ridge National Laboratory
Center for Molecular Biophysics
________________________________________
From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of James Starlight <jmsstarlight at gmail.com>
Sent: Monday, March 14, 2016 12:19 PM
To: Discussion list for GROMACS users
Subject: [gmx-users] Problem with the mdrun_openmpi on cluster
Hello,
I am trying to submit a job on 64 nodes on my local cluster using the
combination of software below:
DO="mpiexec -np 64"
PROG="g_mdrun_openmpi"
$DO $PROG -deffnm sim
and obtain this error:
Program g_mdrun_openmpi, VERSION 4.5.7
Source code file:
/builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec.c, line: 6436
Fatal error:
There is no domain decomposition for 64 nodes that is compatible with
the given box and a minimum cell size of 2.6255 nm
Change the number of nodes or mdrun option -rcon or your LINCS settings
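As a sketch of the fixes the error message itself suggests (the flag names exist in GROMACS 4.5 mdrun; the rank counts and grid here are illustrative, not computed from this system):

```shell
# Option 1: run on fewer MPI ranks so each domain cell stays >= 2.6255 nm.
mpiexec -np 8 g_mdrun_openmpi -deffnm sim

# Option 2: request an explicit decomposition grid with -dd
# (rank count must equal the product of the grid dimensions).
mpiexec -np 8 g_mdrun_openmpi -deffnm sim -dd 2 2 2
```

Loosening -rcon or the LINCS settings (lincs_order/lincs_iter in the .mdp) can shrink the minimum cell size, but that trades off constraint accuracy and is worth checking carefully.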
Could someone provide a trivial solution?
Thanks!
J.
--
Gromacs Users mailing list
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.