[gmx-users] distance restrained MD simulations

jayant james jayant.james at gmail.com
Thu Sep 30 02:51:25 CEST 2010


Yes, you are right, particle decomposition does not work either!

On Thu, Sep 30, 2010 at 12:19 AM, XAvier Periole <x.periole at rug.nl> wrote:

>
> What is happening is that you have restraint bonds that are too long, and
> the domain decomposition (DD) cannot manage to cut the system into 4 subsystems ...
>
> Try particle decomposition, but you might end up with the same
> problem :((
>
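For reference, particle decomposition is requested with the -pd flag of mdrun; a minimal sketch of such an invocation (the binary name, process count, and the topol.tpr run input are placeholders for this setup):

    # same .tpr, 4 processes, particle decomposition instead of domain decomposition
    mpirun -np 4 mdrun_mpi -pd -s topol.tpr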
> On Sep 29, 2010, at 5:35 PM, jayant james wrote:
>
>
> Hi!
> I am trying to perform distance restrained MD simulations of a protein with
> GROMACS 4.0.5.
> I have a bunch of FRET distances ranging from 10 Å to 40 Å that I am
> incorporating similarly to NOE distance restraints in NMR.
> When I use one processor for the simulations everything is fine, but when I
> use multiple processors I get a bunch of errors.
> Let me start with the "NOTE" found below. I do not want to increase the
> cut-off distance, but I do want the program to use multiple processors. How
> can I overcome this problem?
> I would appreciate your input.
> Thanks,
> JJ
>
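For context, restraints of this kind normally go into the topology as a [ distance_restraints ] section and are switched on with the disre options in the .mdp file; a minimal sketch, with placeholder atom indices, bounds (nm) and force constant (kJ mol^-1 nm^-2):

    ; topology, inside the restrained [ moleculetype ]
    [ distance_restraints ]
    ;  ai   aj  type  label  type'   low   up1   up2   fac
        1   40     1      0      1  0.80  1.00  4.00   1.0

    ; .mdp file
    disre     = simple    ; enable (non-ensemble-averaged) distance restraints
    disre_fc  = 1000      ; restraint force constant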
> NOTE: atoms involved in distance restraints should be within the longest
> cut-off distance, if this is not the case mdrun generates a fatal error, in
> that case use particle decomposition (mdrun option -pd)
>
>
> WARNING: Can not write distance restraint data to energy file with domain
> decomposition
> Loaded with Money
>
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 4.0.5
> Source code file: ../../../src/mdlib/domdec.c, line: 5873
>
> Fatal error:
> There is no domain decomposition for 4 nodes that is compatible with the
> given box and a minimum cell size of 8.89355 nm
> Change the number of nodes or mdrun option -rdd or -dds
> Look in the log file for details on the domain decomposition
> -------------------------------------------------------
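As the error message suggests, the alternatives to -pd are to run on fewer MPI processes, so that each DD cell can stay large, or to override the cell-size estimate with the -rdd option (a distance in nm); a minimal sketch, with placeholder file names and values, which may still hit the restraint error quoted in the NOTE above:

    # fewer domains, larger cells
    mpirun -np 2 mdrun_mpi -s topol.tpr

    # or keep 4 processes and set the bonded/restraint distance limit by hand
    mpirun -np 4 mdrun_mpi -s topol.tpr -rdd 1.4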
>
> "What Kind Of Guru are You, Anyway ?" (F. Zappa)
>
> Error on node 0, will try to stop all the nodes
> Halting parallel program mdrun_mpi on CPU 0 out of 4
>
> gcq#21: "What Kind Of Guru are You, Anyway ?" (F. Zappa)
>
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 2 with PID 28700 on
> node compute-3-73.local exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> -------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
>
>
>
>
>
>
>
>
> --
> Jayasundar Jayant James
>
> www.chick.com/reading/tracts/0096/0096_01.asp)
>



-- 
Jayasundar Jayant James

www.chick.com/reading/tracts/0096/0096_01.asp)