[gmx-users] query for gromacs-4.5.4

Chaitali Chandratre chaitujoshi at gmail.com
Thu Mar 14 13:25:34 CET 2013


Hello Sir,

The job runs with 8 processes on 1, 2 or 8 nodes, but not with more than that.
With 16 processes it gives a segmentation fault, and with 32 processes it gives:
"Fatal error: 467 particles communicated to PME node 4 are more than 2/3
times the cut-off out of the domain decomposition cell of their charge
group in dimension x.
This usually means that your system is not well equilibrated"

What could be the reason?
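
For reference, I am launching the runs roughly like this (the file names are
placeholders; the user's actual .tpr is named differently):

  mpirun -np 8  mdrun_mpi -s user.tpr -deffnm run08   # works
  mpirun -np 16 mdrun_mpi -s user.tpr -deffnm run16   # segmentation fault
  mpirun -np 32 mdrun_mpi -s user.tpr -deffnm run32   # PME/DD fatal error above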

Thanks,
Chaitali

On Tue, Mar 12, 2013 at 3:50 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:

> It could be anything. But until we see some GROMACS diagnostic messages,
> nobody can tell.
>
> Mark
>
> On Tue, Mar 12, 2013 at 10:08 AM, Chaitali Chandratre
> <chaitujoshi at gmail.com> wrote:
>
> > Sir,
> >
> > Thanks for your reply....
> > But the same script runs on another cluster with approximately the same
> > configuration, just not on the cluster that I am setting up.
> >
> > Also, the job hangs after some 16000 steps rather than failing immediately.
> > Could it be a problem with the configuration, or something else?
> >
> > Thanks...
> >
> > Chaitali
> >
> > On Tue, Mar 12, 2013 at 2:18 PM, Mark Abraham
> > <mark.j.abraham at gmail.com> wrote:
> >
> > > They're just MPI error messages and don't provide any useful GROMACS
> > > diagnostics. Look at the end of the .log file, stderr and stdout for
> > > clues.
> > >
> > > One possibility is that your user's system is too small to scale
> > > effectively. Below about 1000 atoms/core you're wasting your time unless
> > > you've balanced the load really well. There is a
> > > simulation-system-dependent point below which fatal GROMACS errors are
> > > assured.
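> > >
> > > For example (exact file and tool names depend on your setup; these are
> > > just the usual defaults):
> > >
> > >   tail -n 50 md.log                      # DD/PME setup and any fatal
> > >                                          # GROMACS error are printed here
> > >   gmxdump -s topol.tpr | grep -i atoms   # look for the total atom count
> > >
> > > As a rough illustration, a 24000-atom system on 32 cores is only 750
> > > atoms/core, which is already below that guideline.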
> > >
> > > Mark
> > >
> > > On Tue, Mar 12, 2013 at 6:17 AM, Chaitali Chandratre
> > > <chaitujoshi at gmail.com> wrote:
> > >
> > > > Hello Sir,
> > > >
> > > > Actually, I have been asked to set up gromacs-4.5.4 on our cluster,
> > > > which is used by other users. I am not a gromacs user myself and am not
> > > > aware of its internal details. I have only got a .tpr file from a user,
> > > > and I need to test my installation using that .tpr file.
> > > >
> > > > It works fine with 8 processes on 1 node or on 2 nodes.
> > > > But when the number of processes is 16 it gives a segmentation fault,
> > > > and when the number of processes is 32 it gives an error message like:
> > > > "HYD_pmcd_pmiserv_send_signal (./pm/pmiserv/pmiserv_cb.c:221): assert
> > > > (!closed) failed
> > > > ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:128): unable to send SIGUSR1
> > > > downstream
> > > > HYDT_dmxu_poll_wait_for_event (./tools/demux/demux_poll.c:77): callback
> > > > returned error status
> > > > HYD_pmci_wait_for_completion (./pm/pmiserv/pmiserv_pmci.c:388): error
> > > > waiting for event
> > > > [ main (./ui/mpich/mpiexec.c:718): process manager error waiting for
> > > > completion"
> > > >
> > > > I am not clear whether the problem is in my installation or elsewhere.
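> > > >
> > > > Would checks like the following help to separate an installation/MPI
> > > > problem from a problem with the simulation system itself? (Generic
> > > > commands; paths would be adjusted to our environment.)
> > > >
> > > >   mpirun -np 32 hostname    # does plain MPI start 32 ranks at all?
> > > >   ldd $(which mdrun_mpi)    # is mdrun_mpi linked against the MPI
> > > >                             # library that this mpirun belongs to?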
> > > >
> > > > Thanks and Regards,
> > > >    Chaitalij
> > > >
> > > > On Wed, Mar 6, 2013 at 5:41 PM, Justin Lemkul <jalemkul at vt.edu>
> > > > wrote:
> > > >
> > > > >
> > > > >
> > > > > On 3/6/13 4:20 AM, Chaitali Chandratre wrote:
> > > > >
> > > > >> Dear Sir,
> > > > >>
> > > > >> I am new to this installation and setup area. I need some
> > > > >> information on the -stepout option for
> > > > >>
> > > > >
> > > > > What more information do you need?
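> > > > >
> > > > > As for -stepout, as far as I recall it only sets how often mdrun
> > > > > prints its progress and remaining-time estimate, e.g. something like
> > > > > (file names here are made up):
> > > > >
> > > > >   mpirun -np 16 mdrun_mpi -s topol.tpr -deffnm md -stepout 1000
> > > > >
> > > > > so it is unlikely to be related to the crash itself.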
> > > > >
> > > > >
> > > > >> mdrun_mpi, and also probable causes of segmentation faults in
> > > > >> gromacs-4.5.4.
> > > > >> (my node has 64 GB of memory, running with 16 processes, nsteps =
> > > > >> 20000000)
> > > > >>
> > > > >>
> > > > > There are too many causes to name.  Please consult
> > > > > http://www.gromacs.org/Documentation/Terminology/Blowing_Up.
> > > > > If you need further help, please be more specific, including a
> > > > > description of the system, steps taken to minimize and/or equilibrate
> > > > > it, and any complete .mdp file(s) that you are using.
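> > > > >
> > > > > By "complete .mdp" I mean the full parameter file used to create the
> > > > > .tpr, along the lines of the sketch below (all values are
> > > > > placeholders except nsteps, which you quoted yourself):
> > > > >
> > > > >   integrator  = md
> > > > >   dt          = 0.002
> > > > >   nsteps      = 20000000
> > > > >   coulombtype = PME
> > > > >   rcoulomb    = 1.0
> > > > >   rvdw        = 1.0
> > > > >   tcoupl      = V-rescale
> > > > >   pcoupl      = Parrinello-Rahman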
> > > > >
> > > > > -Justin
> > > > >
> > > > > --
> > > > > ========================================
> > > > >
> > > > > Justin A. Lemkul, Ph.D.
> > > > > Research Scientist
> > > > > Department of Biochemistry
> > > > > Virginia Tech
> > > > > Blacksburg, VA
> > > > > jalemkul[at]vt.edu | (540) 231-9080
> > > > > http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> > > > >
> > > > > ========================================


