[gmx-users] micelle disaggregated in serial, but not parallel, runs using sd integrator

Berk Hess gmx3 at hotmail.com
Wed Feb 4 20:28:03 CET 2009


Hi,

SD with tau_t=0.1 will make your dynamics a lot slower.
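
(In sd, tau_t is the inverse friction constant, so tau_t=0.1 corresponds
to a friction of 10 ps^-1 on every atom. A larger value perturbs the
dynamics much less, e.g.:

tau_t               =  1.0    ; weaker friction, dynamics closer to md

in the mdp file.)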

I don't see a reason why there should be a difference between serial and parallel.
Are all simulations single runs, or do you do restarts?

Did you compare the temperatures, to check that there is no strong energy loss
or heating, and whether they differ between the simulations?
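
For example, assuming the default energy file name, you could extract the
temperature of each run with something like:

echo Temperature | g_energy -f ener.edr -o temperature.xvg

and compare the resulting temperature.xvg files.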

Berk

> Date: Wed, 4 Feb 2009 13:35:30 -0500
> From: chris.neale at utoronto.ca
> To: gmx-users at gromacs.org
> Subject: [gmx-users] micelle disaggregated in serial, but not parallel, runs using sd integrator
> 
> Hello,
> 
> I have been experiencing problems with a detergent micelle falling  
> apart. This micelle spontaneously aggregated in tip4p and was stable  
> for >200 ns. I then took the .gro file from 100 ns after stable  
> micelle formation and began some free energy calculations, during  
> which the micelle partially disaggregated, even at lambda=0. At first  
> I thought that this was related to the free energy code, and indeed  
> the energygrps solution posted by Berk did stop my to-be-annihilated  
> detergent monomer from flying around the box even at lambda=0.00.  
> However, I have been able to reproduce this disaggregation in the  
> absence of the free-energy code, so I believe that something else is  
> going on; my tests using GMX_NO_SOLV_OPT, separate energygrps,  
> and the code change all indicate that this is a separate issue.
> 
> I have been trying to locate the error for a few days, but each round  
> of tests takes about 24 h, so progress is slow. Here is a summary of  
> what I have learned so far.
> 
> A. Does not fall apart by 2.5 ns:
> GMX 3.3.1, 4.0.2, or 4.0.3
> integrator          =  md
> energygrps          =  DPC SOL
> tcoupl              =  Berendsen
> tc_grps             =  DPC     SOL
> tau_t               =  0.1     0.1
> ref_t               =  300.    300.
> 
> B. Partial disaggregation or irregular micelle shape observed by 2.5 ns:
> GMX 3.3.1, 4.0.2, or 4.0.3
> integrator          =  sd
> energygrps          =  System  --- or ---  DPC SOL
> tc_grps             =  System
> tau_t               =  0.1
> ref_t               =  300.
> * GMX 4.0.3 gives same result with "export GMX_NO_SOLV_OPT=1"
> * GMX 4.0.3 gives same result when compiled with the tip4p  
> optimization code fix.
> * GMX 4.0.3 Using tip3p in place of tip4p gives same result.
> 
> C. Does not fall apart by 7.5 ns when running section B options in parallel.
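> 
> (To be concrete about what I mean by serial vs. parallel, the runs amount
> to something like the following; binary and file names are placeholders
> for whatever your setup uses:
> 
> grompp -f run.mdp -c conf.gro -p topol.top -o topol.tpr
> mdrun -deffnm topol                    # serial
> mpirun -np 4 mdrun_mpi -deffnm topol   # parallel, 4 cores
> )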
> 
> Common MDP options:
> comm_mode           =  linear
> nstcomm             =  1
> comm_grps           =  System
> nstlist             =  5
> ns_type             =  grid
> pbc                 =  xyz
> coulombtype         =  PME
> rcoulomb            =  0.9
> fourierspacing      =  0.12
> pme_order           =  4
> vdwtype             =  cut-off
> rvdw_switch         =  0
> rvdw                =  1.4
> rlist               =  0.9
> DispCorr            =  EnerPres
> Pcoupl              =  Berendsen
> pcoupltype          =  isotropic
> compressibility     =  4.5e-5
> ref_p               =  1.
> tau_p               =  4.0
> gen_temp            =  300.
> gen_seed            =  9896
> constraints         =  all-bonds
> constraint_algorithm=  lincs
> lincs-iter          =  1
> lincs-order         =  6
> gen_vel             =  no
> unconstrained-start =  yes
> dt                  =  0.004
> 
> ##################
> 
> My current hypothesis is that the sd integrator somehow functions  
> differently in serial than in parallel in gromacs versions 3.3.1,  
> 4.0.2, and 4.0.3. I suspect that this is not limited to tip4p, since I  
> see disaggregation with tip3p also, although I did not run a control  
> for the tip3p system, so that result may not be related to the md/sd  
> difference.
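> 
> (In case it helps anyone trying to reproduce this: a simple way to follow
> the disaggregation quantitatively is the radius of gyration of the
> detergent group over the trajectory, e.g. with g_gyrate; the group and
> file names below are placeholders:
> 
> echo DPC | g_gyrate -f traj.xtc -s topol.tpr -o gyrate.xvg
> 
> A stable micelle gives a roughly constant Rg, while disaggregation shows
> up as a sharp increase.)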
> 
> I realize that I may have other problems, for example perhaps I should  
> have used tau_t=1.0 instead of tau_t=0.1 while using the sd integrator,  
> but the fact that a parallel run resolved the problem makes me think  
> that it is something else.
> 
> I am currently working to find a smaller test system, but would  
> appreciate it if a developer could comment on the likelihood of my above  
> hypothesis being correct. Also, any suggestions on sets of mdp options  
> that might narrow down the possibilities would be greatly appreciated.
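> 
> (One further test I can think of, to separate the code path from the
> number of cores, is to run the parallel binary on a single core, e.g.
> 
> mpirun -np 1 mdrun_mpi -deffnm topol
> 
> again with binary and file names as placeholders, and see which
> behaviour it reproduces.)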
> 
> I have included the entire .mdp file from the 4-core job that ran  
> without disaggregation:
> 
> integrator          =  sd
> comm_mode           =  linear
> nstcomm             =  1
> comm_grps           =  System
> nstlog              =  50000
> nstlist             =  5
> ns_type             =  grid
> pbc                 =  xyz
> coulombtype         =  PME
> rcoulomb            =  0.9
> fourierspacing      =  0.12
> pme_order           =  4
> vdwtype             =  cut-off
> rvdw_switch         =  0
> rvdw                =  1.4
> rlist               =  0.9
> DispCorr            =  EnerPres
> Pcoupl              =  Berendsen
> pcoupltype          =  isotropic
> compressibility     =  4.5e-5
> ref_p               =  1.
> tau_p               =  4.0
> tc_grps             =  System
> tau_t               =  0.1
> ref_t               =  300.
> annealing           =  no
> gen_temp            =  300.
> gen_seed            =  9896
> constraints         =  all-bonds
> constraint_algorithm=  lincs
> lincs-iter          =  1
> lincs-order         =  6
> energygrps          =  SOL DPC DPN
> ; Free energy control stuff
> free_energy              = no
> init_lambda              = 0.00
> delta_lambda             = 0
> sc_alpha                 = 0.0
> sc-power                 = 1.0
> sc-sigma                 = 0.3
> couple-moltype           = DPN
> couple-lambda0           = vdw-q
> couple-lambda1           = vdw
> couple-intramol          = no
> nsteps              =  50000
> tinit               =  7600
> dt                  =  0.004
> nstxout             =  50000
> nstvout             =  50000
> nstfout             =  50000
> nstenergy           =  2500
> nstxtcout           =  2500
> gen_vel             =  no
> unconstrained-start =  yes
> 
> ####
> 
> Note that the free energy code was turned on in the above (with  
> lambda=0). This is because I started the debugging when I thought that  
> the free-energy code / tip4p combination was causing the differences.  
> I also ran this exact mdp file in serial and observed disaggregation  
> in ~2 ns.
> 
> Thank you,
> Chris.
> 
