[gmx-users] gromacs.org_gmx-users Digest, Vol 159, Issue 3
amitabh jayaswal
amitabhjayaswal at gmail.com
Sun Jul 2 08:14:26 CEST 2017
Dear Friends,
While performing a simulation of our protein of interest, how can we check
the overall charge on the protein, and then proceed to add ions to make the
system neutral?
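
For reference, one common way to do this (a minimal sketch with placeholder
file names, assuming GROMACS 5.x-style command names): running

gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr

prints a note such as "System has non-zero total charge: ..." when the system
is not neutral (the running qtot column in the [ atoms ] section of topol.top
gives the same number), and

echo SOL | gmx genion -s ions.tpr -o solvated_ions.gro -p topol.top -pname NA -nname CL -neutral

then replaces solvent molecules with enough counter-ions to neutralise it.
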
Regards
Amitabh Jayaswal
PhD Bioinformatics Scholar
Banaras Hindu University
Varanasi, U.P., India | PIN-221005
City Coordinator
United Nations
Rio+22 Power India Program
M: +91 (9868 330088 and 7376 019 155)
On Sat, Jul 1, 2017 at 10:50 PM, <gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:
> Send gromacs.org_gmx-users mailing list submissions to
> gromacs.org_gmx-users at maillist.sys.kth.se
>
> To subscribe or unsubscribe via the World Wide Web, visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or, via email, send a message with subject or body 'help' to
> gromacs.org_gmx-users-request at maillist.sys.kth.se
>
> You can reach the person managing the list at
> gromacs.org_gmx-users-owner at maillist.sys.kth.se
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gromacs.org_gmx-users digest..."
>
>
> Today's Topics:
>
> 1. Re: gmx wham problem (edesantis)
> 2. MDrun -maxh option (Akshay)
> 3. Re: MDrun -maxh option (Mark Abraham)
> 4. Checkpoint files error (Apramita Chand)
> 5. Regarding grid spacing (Apramita Chand)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Sat, 01 Jul 2017 14:01:42 +0200
> From: edesantis <edesantis at roma2.infn.it>
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] gmx wham problem
> Message-ID: <4169af7892add52b5bbaa784647f92bd at imap.roma2.infn.it>
> Content-Type: text/plain; charset=UTF-8; format=flowed
>
> Dear Matthew,
> thanks for your opinion.
> How can you establish whether the histograms overlap sufficiently? Is
> there any rule of thumb?
> As for the negative sign of the reaction coordinate, I think it could
> derive from the choice of the order of the pulling groups and of the
> vector along which the pulling is done.
>
> Thank you again
> Best regards,
> Emiliano
>
>
> On 2017-06-28 17:49, Thompson, Matthew White wrote:
> > It looks like you need to sample more states; 13 is not enough.
> > Probably more like 20-30+ would be needed to get a smooth PMF, as is
> > discussed in that tutorial. The weird features in your PMF come from
> > insufficiently overlapping histograms; for example, the bump near -2.2
> > nm corresponds to having no histogram there. You can also see that you
> > only have one window near -1.2 nm, so that region is probably not being
> > sampled enough for WHAM to produce meaningful results.
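> >
> > As a quick check of the overlap (a sketch; histo.xvg is the default name
> > of the gmx wham -hist output):
> >
> > xmgrace -nxy histo.xvg
> >
> > Each column is one umbrella window's histogram; neighbouring histograms
> > should overlap visibly, and a gap along the reaction coordinate tends to
> > show up as an artefact in the PMF at that position.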
> >
> > Also, I don't understand the meaning of a negative distance as a reaction
> > coordinate. If it is the distance between two things, it should presumably
> > be positive; as it stands, it is difficult to tell which values correspond
> > to the groups being close together and which to them being far apart.
> > ________________________________________
> > From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se
> > [gromacs.org_gmx-users-bounces at maillist.sys.kth.se] on behalf of
> > edesantis [edesantis at roma2.infn.it]
> > Sent: Wednesday, June 28, 2017 10:26 AM
> > To: Gmx users
> > Subject: [gmx-users] gmx wham problem
> >
> > dear all,
> >
> > I am studying the affinity between an antibody and an amyloid peptide,
> > and I am interested in evaluating the PMF. I have a problem with the
> > shape of the PMF.
> > I followed the protocol described in the umbrella-sampling tutorial
> > (http://bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/umbrella/01_pdb2gmx.html).
> > Below is the pulling part of the .mdp:
> > ; Pull code
> > pull = yes
> > pull_ngroups = 2
> > pull_ncoords = 1
> > pull_group1_name = Chain_Abeta
> > pull_group2_name = Chains_Antibody
> > pull_coord1_type = umbrella ; harmonic biasing force
> > pull_coord1_geometry = direction
> > pull_coord1_groups = 1 2
> > pull_coord1_vec = 38.207 68.611 29.8493
> > pull_coord1_rate = 0.002 ; 0.002 nm per ps = 2 nm per ns
> > pull_coord1_k = 1000 ; kJ mol^-1 nm^-2
> > pull_coord1_start = yes ; define initial COM distance > 0
> >
> > After the pulling simulation, I've extracted 13 configurations; for each
> > of them, 36 ns of equilibration were performed. These are the .mdp
> > directives:
> > ; Pull code
> > pull = yes
> > pull_ngroups = 2
> > pull_ncoords = 1
> > pull_group1_name = Chain_Abeta
> > pull_group2_name = Chains_Antibody
> > pull_coord1_type = umbrella ; harmonic biasing force
> > pull_coord1_geometry = direction
> > pull_coord1_groups = 1 2
> > pull_coord1_vec = 38.207 68.611 29.8493
> > pull_coord1_rate = 0.00
> > pull_coord1_k = 1000 ; kJ mol^-1 nm^-2
> > pull_coord1_start = yes ; define initial COM distance > 0
> >
> > Then I ran the wham command:
> > gmx wham -it list_tpr.dat -if list_pullf.dat -v -b 20000 -o -hist
> > and I've obtained the following pictures:
> > http://i66.tinypic.com/11t5zdv.png
> > http://i67.tinypic.com/30u8g8x.png
> > Do you have any idea why the PMF profile has this strange shape?
> > Could it come from some error I've made during the simulations?
> > If there are no errors, it seems that the configurations in which the
> > peptide is far from the antibody are more energetically favoured than
> > those in contact with the antibody, but I have some doubts about that.
> >
> > Can you help me?
> > Thank you in advance,
> > best regards,
> > Emiliano
> >
> >
> > --
> > Emiliano De Santis
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-request at gromacs.org.
>
> --
> Emiliano De Santis
>
>
> ------------------------------
>
> Message: 2
> Date: Sat, 1 Jul 2017 14:44:12 +0100
> From: Akshay <akshays.sridhar at gmail.com>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] MDrun -maxh option
> Message-ID:
> <CAAjxEXobJw=PpckcHSW8_zs5hEAfWWFH8OVEsMufaTCvdtaCqg@
> mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello All,
>
> My university's cluster uses a queuing system with a maximum wall time of
> 12 hours, so I run mdrun with -maxh 11.9 and then restart the simulation
> iteratively from the output checkpoint files. However, the -maxh option has
> not been stopping the jobs when I run replica-exchange jobs across nodes
> (4 replicas, with 2 nodes per replica and 16 cores per node). Instead, the
> job scheduler kills the job at the 12-hour mark and I only get an output
> error.
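>
> For reference, the kind of restart command in question looks roughly like
> this (a sketch only; the MPI launcher, per-replica directory names and
> exchange interval are assumptions, and 128 ranks corresponds to
> 4 replicas x 2 nodes x 16 cores):
>
> mpirun -np 128 gmx_mpi mdrun -multidir rep0 rep1 rep2 rep3 -replex 1000 -maxh 11.9 -cpi state.cpt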
>
> I would love to have suggestions on how to begin my troubleshooting. Could
> it be an installation issue on specific nodes? Or should I reduce the -maxh
> value further to allow time for mdrun to write all the checkpoint files?
>
> Thanks,
> Akshay
>
>
> ------------------------------
>
> Message: 3
> Date: Sat, 01 Jul 2017 14:19:48 +0000
> From: Mark Abraham <mark.j.abraham at gmail.com>
> To: gmx-users at gromacs.org, gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: Re: [gmx-users] MDrun -maxh option
> Message-ID:
> <CAMNuMASv411QcOELE1nGj1sLzZud_FAF2qSFJ62srHnM2s7X8Q at mail.
> gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi,
>
> Inter-simulation signalling for coordinating things like mdrun -maxh across
> replica-exchange simulations has not had a great history. I believe it
> finally works properly in the 2016 release, but I haven't actually tried it.
>
> Mark
>
> On Sat, Jul 1, 2017 at 3:45 PM Akshay <akshays.sridhar at gmail.com> wrote:
>
> > Hello All,
> >
> > The cluster of my University uses a queuing system with a maximum wall-time
> > of 12 hours. So, I run mdrun with the option -maxh 11.9 and subsequently
> > restart the simulation using the output checkpoint files iteratively.
> > However, the -maxh option has not been killing the jobs when I run replica
> > exchange jobs across nodes (4 replicas with 2 nodes for each replica (16
> > cores per node)). I only get an output error with the job scheduler killing
> > the job at the 12 hour mark.
> >
> > I would love to have suggestions on how to begin my troubleshooting. Could
> > it be an installation issue on specific nodes? Or should I reduce the -maxh
> > value further to allow time for mdrun to write all the checkpoint files?
> >
> > Thanks,
> > Akshay
> > --
> > Gromacs Users mailing list
> >
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> >
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> >
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-request at gromacs.org.
> >
>
>
> ------------------------------
>
> Message: 4
> Date: Sat, 1 Jul 2017 22:42:40 +0530
> From: Apramita Chand <apramita.chand at gmail.com>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] Checkpoint files error
> Message-ID:
> <CA+gTzob39xfWdf2OE--ikckKk0zP5WEJWVBT18_
> 0gPhYEAQqyA at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Dear All,
> After equilibration, I want to pass the checkpoint file state.cpt on to the
> production run, for which I give the command
> g_mdrun -s md.tpr -c md.gro -o md.trr -cpi state.cpt -cpo state_md.cpt
>
> The job terminates with an error that the output files for the checkpoint
> are insufficient: just 1 out of 3 files is present
> (required: npt.trr, npt.edr).
>
> I have the above-mentioned files in the folder, yet the error message still
> appears.
>
> What is the correct procedure for passing checkpoint files?
> Also, for generating the .tpr file we use the .gro file from the last run;
> what are the advantages of using .cpt files? Are they essential?
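>
> For comparison, a minimal sketch of one common workflow (file names assumed;
> tool names written without the 'gmx' prefix to match the pre-5.0-style
> commands used above):
>
> grompp -f md.mdp -c npt.gro -t npt.cpt -p topol.top -o md.tpr
> mdrun -deffnm md
>
> Here -t passes the equilibration checkpoint to grompp so that full-precision
> coordinates and velocities are carried over, and a later
> 'mdrun -deffnm md -cpi md.cpt' would continue the production run from its
> own checkpoint.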
>
> Thanks in advance for your suggestions
>
> Yours sincerely
> Apramita
>
>
> ------------------------------
>
> Message: 5
> Date: Sat, 1 Jul 2017 22:50:14 +0530
> From: Apramita Chand <apramita.chand at gmail.com>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] Regarding grid spacing
> Message-ID:
> <CA+gTzoarqKGa3BB4B1E2HuumUpAU933xQqOeXDrJWLTqYxObwg at mail.gmail.
> com>
> Content-Type: text/plain; charset="UTF-8"
>
> Dear All,
> Which Fourier grid spacing is more appropriate for the GROMOS 53a6 force
> field with cutoffs of 0.9 and 1.4 nm: 0.16 or 0.12?
> I have not seen many papers in which 0.16 is used with this force field, and
> those that do use it tend to have cutoffs of about 1.0 nm for rcoulomb and
> rvdw.
>
> Is there any problem if I use a Fourier spacing of 0.16 with these
> cutoffs (0.9, 1.4 nm)?
>
> Also, should the grid spacing be the same for the equilibration and
> production runs?
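>
> For reference, the relevant .mdp lines would look something like this (a
> sketch only; the spacing value shown is illustrative, not a recommendation):
>
> coulombtype     = PME
> rcoulomb        = 0.9
> rvdw            = 1.4
> fourierspacing  = 0.12   ; 0.16 gives a coarser, cheaper PME grid
> pme_order       = 4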
>
> Yours sincerely,
> Apramita
>
>
> ------------------------------
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
>
> End of gromacs.org_gmx-users Digest, Vol 159, Issue 3
> *****************************************************
>