[gmx-users] Vesicle simulation crashed with dry martini force field

Szilárd Páll pall.szilard at gmail.com
Fri Jan 15 23:41:59 CET 2016


Why do you think that it's the domain decomposition load balancing causing
the crash rather than the time-step? You say you ran successfully on fewer
CPU cores with a shorter time-step. What about fewer cores with the 10 fs
time-step?

It would help if you shared less mis-formatted information, or even a full
log file.

--
Szilárd

On Fri, Jan 15, 2016 at 10:12 PM, Shule Liu <shuleliu1985 at yahoo.com> wrote:

> Hi,
> I'm trying to simulate a large lipid vesicle (~100 nm in diameter) with
> the Dry Martini force field. The system consists of about 1.4 million
> particles. I'm trying to equilibrate the system in the NVT ensemble in a
> simulation box of length 120 nm, using a time step of 10 fs. The
> simulation was running on 144 cores (6 nodes with 24 cores each).
> Below is my .mdp input file.
> define                   = -DPOSRES -DPOSRES_FC=1000 -DBILAYER_LIPIDHEAD_FC=200
> integrator               = sd
> tinit                    = 0.0
> dt                       = 0.01
> nsteps                   = 8000000
> nstxout                  = 100000
> nstvout                  = 10000
> nstfout                  = 10000
> nstlog                   = 10
> nstenergy                = 10000
> nstxtcout                = 1000
> xtc_precision            = 100
> nstlist                  = 10
> ns_type                  = grid
> pbc                      = xyz
> rlist                    = 1.4
> epsilon_r                = 15
> coulombtype              = Shift
> rcoulomb                 = 1.2
> vdw_type                 = Shift
> rvdw_switch              = 0.9
> rvdw                     = 1.2
> DispCorr                 = No
> tc-grps                  = system
> tau_t                    = 4.0
> ref_t                    = 295
> ; Pressure coupling:
> Pcoupl                   = no
> ; GENERATE VELOCITIES FOR STARTUP RUN:
> ;gen_vel                 = yes
> ;gen_temp                = 295
> ;gen_seed                = 1452274742
> refcoord_scaling         = all
> cutoff-scheme            = group
> The simulation crashed with the following error message.
> Step 6105820:
> Atom 164932 moved more than the distance allowed by the domain
> decomposition (4.000000) in direction Z
> distance out of cell 127480.656250
> Old coordinates:   38.785   21.966  103.077
> New coordinates: -477239.938 16192.882 127588.617
> Old cell boundaries in direction Z:   60.580  107.937
> New cell boundaries in direction Z:   60.632  107.958
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 5.0.4
> Source code file:
> /scratch/build/git/chemistry-roll/BUILD/sdsc-gromacs-5.0.4/gromacs-5.0.4/src/gromacs/mdlib/domdec.c,
> line: 4390
>
> Fatal error:
> An atom moved too far between two domain decomposition steps
> This usually means that your system is not well equilibrated
> For more information and tips for troubleshooting, please check the
> GROMACS website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Error on rank 58, will try to stop all ranks
> Halting parallel program mdrun_mpi on CPU 58 out of 144
>
> gcq#25: "This Puke Stinks Like Beer" (LIVE)
>
> [cli_58]: aborting job:
> application called MPI_Abort(MPI_COMM_WORLD, -1) - process 58
> I think my simulation may have crashed because of the large load imbalance
> generated by the domain decomposition. My system (a lipid vesicle in
> implicit solvent) is highly inhomogeneous, so the domain decomposition
> algorithm produces very uneven domains, with some domains empty and others
> full of particles. I tried running the simulation on fewer cores (96) with
> a smaller time step (1 fs), and there was no problem for over 6 million
> steps.
> However, I would still like to use more cores and the larger time step to
> equilibrate my system. Is there a better way to control the load balancing
> and domain decomposition so that I can equilibrate the system more
> efficiently? The Dry Martini paper says that for this kind of vesicle
> simulation the domain decomposition scheme should be chosen carefully. Is
> there any guidance on how to do so?
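> For illustration, I imagine something along these lines on the mdrun
> command line, where the 6x6x4 grid, the -rdd value, and the .tpr file
> name are just my own guesses for this system rather than a tested recipe:
>
>   # hypothetical invocation: fix the DD grid by hand, keep dynamic load
>   # balancing on, and set the bonded-interaction DD distance explicitly
>   mpirun -np 144 mdrun_mpi -s vesicle_nvt.tpr -dd 6 6 4 -dlb yes -rdd 2.0
>
> but I don't know which of these options actually matter for a system this
> inhomogeneous.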
> Thanks very much.
> Shule

