[gmx-users] Re: nodes error
Albert
mailmd2011 at gmail.com
Fri Jan 6 08:45:15 CET 2012
Thank you very much for the kind reply.
I changed my command as follows:
mpirun -exe /opt/gromacs/4.5.5/bin/mdrun_mpi_bg -args "-nosum -dlb yes
-v -s npt.tpr -nt 1" -mode VN -np 256
the "-nt 1" option has been added above. but it still doesn't work and
here is the log file
Initializing Domain Decomposition on 256 nodes
Dynamic load balancing: yes
Will sort the charge groups at every domain (re)decomposition
Initial maximum inter charge-group distances:
two-body bonded interactions: 0.435 nm, LJ-14, atoms 1853 1861
multi-body bonded interactions: 0.435 nm, Proper Dih., atoms 1853 1861
Minimum cell size due to bonded interactions: 0.478 nm
Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.819 nm
Estimated maximum distance required for P-LINCS: 0.819 nm
This distance will limit the DD cell size, you can override this with -rcon
Guess for relative PME load: 0.21
Will use 192 particle-particle and 64 PME only nodes
This is a guess, check the performance at the end of the log file
Using 64 separate PME nodes
Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
Optimizing the DD grid for 192 cells with a minimum initial size of 1.024 nm
The maximum allowed number of cells is: X 7 Y 7 Z 7
-------------------------------------------------------
Program mdrun_mpi_bg, VERSION 4.5.5
Source code file: ../../../src/mdlib/domdec.c, line: 6436
Fatal error:
There is no domain decomposition for 192 nodes that is compatible with
the given box and a minimum cell size of 1.02425 nm
Change the number of nodes or mdrun option -rcon or -dds or your LINCS
settings
Look in the log file for details on the domain decomposition
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------
"It's So Fast It's Slow" (F. Black)