[gmx-users] Re: amber force field in Gromacs

servaas servaas.michielssens at student.kuleuven.be
Fri Dec 4 09:55:59 CET 2009


Hi Alan,

Thank you for all your efforts. This is getting strange: I tried
following your procedure below, but it still does not work for me:

step 3500, will finish Fri Dec  4 09:56:51 2009
Warning: 1-4 interaction between 6 and 10 at distance 1.454 which is
larger than the 1-4 table size 1.000 nm
These are ignored for the rest of the simulation
This usually means your system is exploding,
if not, you should increase table-extension in your mdp file
or with user tables increase the table size
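
(I assume the setting the warning refers to would go into md.mdp roughly
like this; the 2.0 nm value is only a guess on my part:

table-extension          = 2.0 ; nm; the default of 1.0 matches the 1-4 table size in the warning

but that again feels more like fighting the symptom than the cause.)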

kind regards,

Servaas


> Message: 1
> Date: Thu, 3 Dec 2009 11:23:14 +0000
> From: Alan <alanwilter at gmail.com>
> Subject: [gmx-users] Re: amber force field in Gromacs
> To: gmx-users at gromacs.org
> Message-ID:
>         <cf58c8d00912030323v97c0556oa6aa1942c46253e2 at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
> 
> Dear Servaas,
> 
> Tested again in 'vacuum' and I saw no problems. Here is what I did:
> 
> 
> #----------------------------------
> cat << EOF >| leap.in
> verbosity 1
> source leaprc.ff99SB
> ad = sequence { DA5 DA DA3 }
> saveamberparm ad da_amber.top da_amber.crd
> savepdb ad DA.pdb
> quit
> EOF
> tleap -f leap.in >| leap.out
> 
> acpypi -x da_amber.crd -p da_amber.top -d # acpypi generates em.mdp and
> md.mdp
> 
> cat << EOF >| md.mdp
> cpp                      = /usr/bin/cpp
> define                   = ;-DFLEXIBLE
> integrator               = md
> nsteps                   = 250000
> constraints              = none
> emtol                    = 1000.0
> emstep                   = 0.01
> comm_mode                = angular
> ns_type                  = simple
> nstlist                  = 0
> rlist                    = 0
> rcoulomb                 = 0
> rvdw                     = 0
> Tcoupl                   = no
> Pcoupl                   = no
> gen_vel                  = no
> nstxout                  = 100
> pbc                      = no
> EOF
> 
> editconf -bt cubic -d 1.0 -f da_amber_GMX.gro -o da_amber_GMX.gro
> 
> #Single precision
> grompp -f em.mdp -c da_amber_GMX.gro -p da_amber_GMX.top -o em.tpr
> mdrun -v -deffnm em
> 
> grompp -f md.mdp -c em.gro -p da_amber_GMX.top -o md.tpr
> mdrun -v -deffnm md
> 
> vmd md.gro md.trr
> #----------------------------------
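> 
> As a quick sanity check after the run, something along these lines should
> work (file names as above; piping the term name in just avoids the
> interactive selection):
> 
> echo Potential | g_energy -f md.edr -o potential.xvg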
> 
> As you may have suspected from the beginning, it may be something in your mdp file.
> If the example above works, I would suggest you try the mdp for the solvent
> box I sent before in a longer simulation.
> 
> Good luck.
> 
> Regards,
> 
> Alan
> 
> On Wed, Dec 2, 2009 at 11:10, Alan <alanwilter at gmail.com> wrote:
> 
> > Dear Servaas,
> >
> > In tleap, did you really do:
> >
> > TLEAP
> > tleap -f leaprc.ff99SB
> > ad = sequence { DA5 DA DA3 }
> > saveamberparm da da_amber.top da_amber.crd
> >
> >
> > If so, it's wrong, it should be:
> >
> > saveamberparm ad da_amber.top da_amber.crd
> >                        ^^^
> > and not 'da'
> >
> > Besides, I tried to reproduce what you did using what I think should be
> > a sound setup and... everything went fine! Energies after minimisation in
> > single and double precision were almost identical and the trajectories
> > diverged normally.
> >
> > Please check what I did.
> >
> > # begin commands
> >
> > cat << EOF >| em.mdp
> > define                   = -DFLEXIBLE
> > integrator               = cg ; steep
> > nsteps                   = 200
> > constraints              = none
> > emtol                    = 1000.0
> > nstcgsteep               = 10 ; do a steep every 10 steps of cg
> > emstep                   = 0.01 ; used with steep
> > nstcomm                  = 1
> > coulombtype              = PME
> > ns_type                  = grid
> > rlist                    = 1.0
> > rcoulomb                 = 1.0
> > rvdw                     = 1.4
> > Tcoupl                   = no
> > Pcoupl                   = no
> > gen_vel                  = no
> > nstxout                  = 0 ; write coords every # step
> > optimize_fft             = yes
> > EOF
> >
> >
> > cat << EOF >| md.mdp
> > integrator               = md
> > nsteps                   = 1000
> > dt                       = 0.002
> > constraints              = all-bonds
> > nstcomm                  = 1
> > ns_type                  = grid
> > rlist                    = 1.2
> > rcoulomb                 = 1.1
> > rvdw                     = 1.0
> > vdwtype                  = shift
> > rvdw-switch              = 0.9
> > coulombtype              = PME-Switch
> > Tcoupl                   = v-rescale
> > tau_t                    = 0.1 0.1
> > tc-grps                  = protein non-protein
> > ref_t                    = 300 300
> > Pcoupl                   = parrinello-rahman
> > Pcoupltype               = isotropic
> > tau_p                    = 0.5
> > compressibility          = 4.5e-5
> > ref_p                    = 1.0
> > gen_vel                  = yes
> > nstxout                  = 2 ; write coords every # step
> > lincs-iter               = 2
> > DispCorr                 = EnerPres
> > optimize_fft             = yes
> > EOF
> >
> >
> > cat << EOF >| leap.in
> > verbosity 1
> > source leaprc.ff99SB
> > ad = sequence { DA5 DA DA3 }
> > solvatebox ad TIP3PBOX 10.0
> > addions ad Na+ 5
> > addions ad Cl- 3
> > saveamberparm ad da_amber.top da_amber.crd
> > savepdb ad DA.pdb
> > quit
> > EOF
> > tleap -f leap.in >| leap.out
> >
> > acpypi -x da_amber.crd -p da_amber.top -d
> >
> > #Single precision
> > grompp -f em.mdp -c da_amber_GMX.gro -p da_amber_GMX.top -o em.tpr
> > mdrun -v -deffnm em
> >
> > #Polak-Ribiere Conjugate Gradients converged to Fmax < 1000 in 22 steps
> > #Potential Energy  = -6.2280516e+04
> > #Maximum force     =  7.5868494e+02 on atom 98
> > #Norm of force     =  1.0447179e+02
> >
> > grompp -f md.mdp -c em.gro -p da_amber_GMX.top -o md.tpr
> > mdrun -v -deffnm md
> >
> > #Double precision
> > grompp_d -f em.mdp -c da_amber_GMX.gro -p da_amber_GMX.top -o em.tpr
> > mdrun_d -v -deffnm em
> >
> > #Polak-Ribiere Conjugate Gradients converged to Fmax < 1000 in 22 steps
> > #Potential Energy  = -6.22813514022256e+04
> > #Maximum force     =  7.58238100790309e+02 on atom 98
> > #Norm of force     =  1.04358667410458e+02
> >
> > grompp_d -f md.mdp -c em.gro -p da_amber_GMX.top -o md.tpr
> > mdrun_d -v -deffnm md
> >
> > # end commands
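> >
> > If you want to compare the single and double precision runs numerically
> > (assuming you keep copies of the energy files under different names, e.g.
> > md_single.edr and md_double.edr; those names are just for illustration),
> > gmxcheck should do it:
> >
> > gmxcheck -e md_single.edr -e2 md_double.edr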
> >
> > Regards,
> >
> > Alan
> >
> > On Tue, Dec 1, 2009 at 13:56, Alan <alanwilter at gmail.com> wrote:
> > > Dear Servaas,
> > >
> > > I've been following your thread. I am the developer of acpypi, and
> > > thanks for giving it a try.
> > >
> > > So, as you may already know, so far you are using acpypi as a
> > > replacement for amb2gmx.pl, but you also seem to have read the acpypi
> > > wikis and realised that acpypi can help you generate the whole topology
> > > for a ligand.
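> > >
> > > (For completeness, and not what you need here: the ligand route would be
> > > roughly the following, though I'm writing the flag from memory, so please
> > > check the wiki or acpypi --help:
> > >
> > > acpypi -i ligand.pdb
> > >
> > > where ligand.pdb is just a placeholder name.)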
> > >
> > > However, AFAIU you have only regular nucleic acids, not modified ones or
> > > ligands, right? But then why are you using RED?
> > >
> > > I understand your approach of using tleap to create your whole
> > > system and then converting it to GMX. It should work in principle, but
> > > it clearly does not, as you reported.
> > >
> > > So, here are some of my recommendations:
> > >
> > > 1) GMX in vacuum is unrealistic and prone to errors. There's no GB
> > > implementation as far as I know.
> > >
> > > 2) Have you tried using pdb2gmx to generate your files directly from
> > > your pdb in GMX? (A sketch follows after this list.)
> > >
> > > 3) When you say that gmx double precision works, is your system in
> > > vacuum or with solvent?
> > >
> > > 4) If using tleap, create your system with solvent and ions, and then
> > > use acpypi to convert it to gmx.
> > >
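> > > Regarding point 2, a minimal sketch of the pdb2gmx route (this assumes
> > > one of the AMBER ports, e.g. ffamber99sb, is installed for your GROMACS
> > > version, and the file names are only examples) would be:
> > >
> > > pdb2gmx -f DA.pdb -o da_pdb2gmx.gro -p da_pdb2gmx.top -ignh
> > >
> > > and then pick the AMBER force field from the list it offers.
> > >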
> > > The purpose of amb2gmx or acpypi is to give you a system that can be run
> > > immediately in gromacs with just a grompp and an mdrun. Using editconf
> > > will change the parameters of your box, which may have serious
> > > implications; besides, in amber we don't have a dodecahedron box, so by
> > > doing what you're doing you're not replicating in gmx the conditions you
> > > have in amber (although it puzzles me that gmx double precision works
> > > with the commands you gave in gmx?).
> > >
> > > I would ask you to give more details, and even a detailed step-by-step
> > > list of the commands you're running, including tleap.
> > >
> > > Regards,
> > > Alan
> > >
> > >
> > >
> > > On Tue, Dec 1, 2009 at 11:00,  <gmx-users-request at gromacs.org> wrote:
> > >>
> > >> Thanks for your suggestion. I tried it without success, and I also tried
> > >> SHAKE. But this is also rather fighting the symptoms than the cause...
> > >> And amber simulations in vacuum do work fine... My personal guess was
> > >> that another parameter in my mdp file was not compatible with the amber
> > >> force field, but I could not figure out which one. I also tried
> > >> different settings, e.g. the ones I found on the acpypi wiki.
> > >>
> > >
> > > --
> > > Alan Wilter Sousa da Silva, D.Sc.
> > > PDBe group, PiMS project http://www.pims-lims.org/
> > > EMBL - EBI, Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD, UK
> > > +44 (0)1223 492 583 (office)
> > >
> >
> >
> >
> > --
> > Alan Wilter Sousa da Silva, D.Sc.
> > PDBe group, PiMS project http://www.pims-lims.org/
> > EMBL - EBI, Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD, UK
> > +44 (0)1223 492 583 (office)
> >
> 
> 
> 
> --
> Alan Wilter Sousa da Silva, D.Sc.
> PDBe group, PiMS project http://www.pims-lims.org/
> EMBL - EBI, Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD, UK
> +44 (0)1223 492 583 (office)