[gmx-users] problems with non pbc simulations in parallel

Gavin Melaugh gmelaugh01 at qub.ac.uk
Wed Mar 10 17:09:38 CET 2010


Hi Berk

Cheers for your help

Gavin

Berk Hess wrote:
> Hi,
>
> This is a silly bug with nose-hoover and pbc=no.
> I have fixed it for 4.0.8 (if we ever release that).
>
> To fix it, you only need to move a brace up 4 lines in src/mdlib/init.c,
> or you can use the v-rescale thermostat.
>
> Berk
>
> --- a/src/mdlib/init.c
> +++ b/src/mdlib/init.c
> @@ -119,9 +119,9 @@ static void set_state_entries(t_state *state,t_inputrec *ir, int nnodes)
>      if (ir->epc != epcNO) {
>        state->flags |= (1<<estPRES_PREV);
>      }
> -    if (ir->etc == etcNOSEHOOVER) {
> -      state->flags |= (1<<estNH_XI);
> -    }
> +  }
> +  if (ir->etc == etcNOSEHOOVER) {
> +    state->flags |= (1<<estNH_XI);
>    }
>    if (ir->etc == etcNOSEHOOVER || ir->etc == etcVRESCALE) {
>      state->flags |= (1<<estTC_INT);
>
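For readers skimming the diff, the following is a small, self-contained C sketch of what the brace move changes. It is not GROMACS source: the enum and flag names are borrowed from the diff above, while the enum values and the helper function are made up purely for illustration. The point is that, after the fix, estNH_XI is set whenever the thermostat is Nose-Hoover, even when pressure coupling is off, as it is in a pbc = no run.

#include <stdio.h>

/* Illustrative stand-ins for the GROMACS enums referenced in the diff;
 * the real definitions live elsewhere in the GROMACS source tree. */
enum { epcNO, epcSOME };                      /* pressure coupling off/on   */
enum { etcNO, etcNOSEHOOVER, etcVRESCALE };   /* temperature coupling types */
enum { estPRES_PREV, estNH_XI, estTC_INT };   /* state-entry flag bits      */

/* Mirrors the patched logic in set_state_entries(): the Nose-Hoover
 * check is no longer nested inside the pressure-coupling block. */
static int state_flags_after_fix(int epc, int etc)
{
    int flags = 0;
    if (epc != epcNO) {
        flags |= (1 << estPRES_PREV);
    }
    if (etc == etcNOSEHOOVER) {
        flags |= (1 << estNH_XI);
    }
    if (etc == etcNOSEHOOVER || etc == etcVRESCALE) {
        flags |= (1 << estTC_INT);
    }
    return flags;
}

int main(void)
{
    /* A pbc = no run: no pressure coupling, Nose-Hoover thermostat. */
    int flags = state_flags_after_fix(epcNO, etcNOSEHOOVER);
    printf("estNH_XI set: %s\n", (flags & (1 << estNH_XI)) ? "yes" : "no");
    return 0;
}

With the pre-fix nesting, the same inputs would leave estNH_XI unset, which is presumably why nosehoover_tcoupl() segfaults in the backtrace below.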
>
> > Date: Wed, 10 Mar 2010 14:16:38 +0000
> > From: gmelaugh01 at qub.ac.uk
> > To: gmx-users at gromacs.org
> > Subject: [gmx-users] problems with non pbc simulations in parallel
> >
> > Hi all
> >
> > I have installed gromacs-4.0.7-parallel with Open MPI. I have
> > successfully run a few short simulations on 2, 3 and 4 nodes using pbc. I
> > am now interested in simulating a cluster of 32 molecules with no pbc in
> > parallel, but the simulation does not proceed. I have set my box vectors
> > to 0 0 0 in the conf.gro file, set pbc = no in the mdp file, and used
> > particle decomposition. The feedback I get from the following command
> >
> > nohup mpirun -np 2 /local1/gromacs-4.0.7-parallel/bin/mdrun -pd -s &
> >
> > is
> >
> > Back Off! I just backed up md.log to ./#md.log.1#
> > Reading file topol.tpr, VERSION 4.0.7 (single precision)
> > starting mdrun 'test of 32 hexylcage molecules'
> > 1000 steps, 0.0 ps.
> > [emerald:22662] *** Process received signal ***
> > [emerald:22662] Signal: Segmentation fault (11)
> > [emerald:22662] Signal code: Address not mapped (1)
> > [emerald:22662] Failing at address: (nil)
> > [emerald:22662] [ 0] /lib64/libpthread.so.0 [0x7fbc17eefa90]
> > [emerald:22662] [ 1] /local1/gromacs-4.0.7-parallel/bin/mdrun(nosehoover_tcoupl+0x74) [0x436874]
> > [emerald:22662] [ 2] /local1/gromacs-4.0.7-parallel/bin/mdrun(update+0x171) [0x4b2311]
> > [emerald:22662] [ 3] /local1/gromacs-4.0.7-parallel/bin/mdrun(do_md+0x2608) [0x42dd38]
> > [emerald:22662] [ 4] /local1/gromacs-4.0.7-parallel/bin/mdrun(mdrunner+0xe33) [0x430973]
> > [emerald:22662] [ 5] /local1/gromacs-4.0.7-parallel/bin/mdrun(main+0x5b8) [0x431128]
> > [emerald:22662] [ 6] /lib64/libc.so.6(__libc_start_main+0xe6) [0x7fbc17ba6586]
> > [emerald:22662] [ 7] /local1/gromacs-4.0.7-parallel/bin/mdrun [0x41e1e9]
> > [emerald:22662] *** End of error message ***
> >
> > --------------------------------------------------------------------------
> > mpirun noticed that process rank 1 with PID 22662 on node emerald exited
> > on signal 11 (Segmentation fault).
> >
> > P.S. I have run several of these non-pbc simulations with the same system
> > in serial and have never experienced a problem. Has anyone ever come
> > across this sort of problem before? If so, could you please provide
> > some advice?
> >
> > Many Thanks
> >
> > Gavin
> >
>



