[gmx-users] distance restrained MD simulations
jayant james
jayant.james at gmail.com
Thu Sep 30 01:35:40 CEST 2010
Hi!
I am trying to perform distance-restrained MD simulations of a protein with
GROMACS 4.0.5. I have a set of FRET-derived distances, ranging from 10 Å to
40 Å, that I am incorporating in the same way as NOE distance restraints
from NMR.
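For reference, the restraints go into the topology roughly as below, with
matching settings in the .mdp file; the atom indices, bounds and force
constant are only placeholders, not my actual values (distances in nm):

    [ distance_restraints ]
    ; placeholder atom pairs and bounds (nm), not my real restraints
    ; ai    aj  type  index  type'   low   up1   up2   fac
      10    55     1      0      1   0.8   1.0   1.2   1.0
      31   120     1      1      1   3.5   4.0   4.5   1.0

    ; relevant .mdp settings (placeholder values)
    disre        = simple
    disre_fc     = 1000    ; kJ mol^-1 nm^-2
    disre_tau    = 0       ; no time averaging
    nstdisreout  = 100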
When I run on a single processor everything is fine, but when I use multiple
processors I get the errors shown below. Let me start with the "NOTE". I do
not want to increase the cut-off distance, but I do want mdrun to run on
multiple processors. How can I overcome this problem?
I would appreciate your input.
Thanks,
JJ
NOTE: atoms involved in distance restraints should be within the longest
cut-off distance, if this is not the case mdrun generates a fatal error, in
that case use particle decomposition (mdrun option -pd)
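If I understand that NOTE correctly, the particle-decomposition fallback
would be launched with something along these lines (the run name here is
just a placeholder):

    mpirun -np 4 mdrun_mpi -pd -deffnm md_disre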
WARNING: Can not write distance restraint data to energy file with domain
decomposition
Loaded with Money
-------------------------------------------------------
Program mdrun_mpi, VERSION 4.0.5
Source code file: ../../../src/mdlib/domdec.c, line: 5873
Fatal error:
There is no domain decomposition for 4 nodes that is compatible with the
given box and a minimum cell size of 8.89355 nm
Change the number of nodes or mdrun option -rdd or -dds
Look in the log file for details on the domain decomposition
-------------------------------------------------------
"What Kind Of Guru are You, Anyway ?" (F. Zappa)
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 0 out of 4
gcq#21: "What Kind Of Guru are You, Anyway ?" (F. Zappa)
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 28700 on
node compute-3-73.local exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
-------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.
NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
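For completeness, the -rdd option mentioned in the fatal error would
presumably be passed like this, with the minimum cell size for the
restraints given in nm (the value and run name below are placeholders,
not something I have tested):

    mpirun -np 4 mdrun_mpi -rdd 4.0 -deffnm md_disre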
--
Jayasundar Jayant James
www.chick.com/reading/tracts/0096/0096_01.asp