[gmx-users] g_tune_pme for multiple nodes

Chandan Choudhury iitdckc at gmail.com
Tue Dec 4 15:10:43 CET 2012


On Tue, Dec 4, 2012 at 7:18 PM, Carsten Kutzner <ckutzne at gwdg.de> wrote:

>
> On Dec 4, 2012, at 2:45 PM, Chandan Choudhury <iitdckc at gmail.com> wrote:
>
> > Hi Carsten,
> >
> > Thanks for the reply.
> >
> > If the number of PME nodes that g_tune_pme tests goes up to half of np,
> > and that exceeds the ppn of a single node, how does g_tune_pme behave?
> > What I mean is: if $NPROCS = 36, half of that is 18, but 18 cores are not
> > available on a single node (max. ppn = 12 per node). How would g_tune_pme
> > function in such a scenario?
> Typically mdrun allocates the PME and PP nodes in an interleaved way,
> meaning you would end up with 9 PME nodes on each of your two nodes.
>
> Check the -ddorder option of mdrun.
>
> Interleaving is normally fastest unless you could have all PME processes
> exclusively on a single node.
>
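For the archive: once g_tune_pme has reported the best number of separate PME nodes, that value can be fixed for the production run and the PME/PP placement chosen explicitly. A minimal sketch (the 36-process / 18-PME numbers simply reuse the example above, and the binary path is the one used elsewhere in this thread):

export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
# 18 separate PME nodes out of 36 MPI processes; PME and PP ranks placed
# in the interleaved order that mdrun uses by default
mpirun -np 36 $MDRUN -npme 18 -ddorder interleave -s md0-200.tpr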

Thanks Carsten for the explanation.

Chandan

>
> Carsten
>
> >
> > Chandan
> >
> >
> > --
> > Chandan kumar Choudhury
> > NCL, Pune
> > INDIA
> >
> >
> > On Tue, Dec 4, 2012 at 6:39 PM, Carsten Kutzner <ckutzne at gwdg.de> wrote:
> >
> >> Hi Chandan,
> >>
> >> the number of separate PME nodes in Gromacs must be larger than two and
> >> smaller or equal to half the number of MPI processes (=np). Thus,
> >> g_tune_pme
> >> checks only up to npme = np/2 PME nodes.
> >>
> >> Best,
> >>  Carsten
> >>
> >>
> >> On Dec 4, 2012, at 1:54 PM, Chandan Choudhury <iitdckc at gmail.com> wrote:
> >>
> >>> Dear Carsten and Florian,
> >>>
> >>> Thanks for your useful suggestions. It did work. I still have a doubt
> >>> regarding the execution:
> >>>
> >>> export MPIRUN=`which mpirun`
> >>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> >>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> >>> tune.edr -g tune.log
> >>>
> >>> I am supplying $NPROCS as 24 [2 (nodes) * 12 (ppn)] so that g_tune_pme
> >>> tunes the number of PME nodes. As I am executing it on a single node,
> >>> mdrun never seems to check PME settings for more than 12 cores. So how
> >>> do I verify that the PME is tuned for 24 cores spanning the two nodes?
> >>>
> >>> Chandan
> >>>
> >>>
> >>> --
> >>> Chandan kumar Choudhury
> >>> NCL, Pune
> >>> INDIA
> >>>
> >>>
> >>> On Thu, Nov 29, 2012 at 8:32 PM, Carsten Kutzner <ckutzne at gwdg.de> wrote:
> >>>
> >>>> Hi Chandan,
> >>>>
> >>>> On Nov 29, 2012, at 3:30 PM, Chandan Choudhury <iitdckc at gmail.com> wrote:
> >>>>
> >>>>> Hi Carsten,
> >>>>>
> >>>>> Thanks for your suggestion.
> >>>>>
> >>>>> I did try to pass the total number of cores with the -np flag to
> >>>>> g_tune_pme, but it did not help. Hopefully I am doing something silly.
> >>>>> I have pasted a snippet of the PBS script below.
> >>>>>
> >>>>> #!/bin/csh
> >>>>> #PBS -l nodes=2:ppn=12:twelve
> >>>>> #PBS -N bilayer_tune
> >>>>> ....
> >>>>> ....
> >>>>> ....
> >>>>>
> >>>>> cd $PBS_O_WORKDIR
> >>>>> export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
> >>>> From here on your job file should read:
> >>>>
> >>>> export MPIRUN=`which mpirun`
> >>>> g_tune_pme_4.5.5 -np $NPROCS -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> >>>> tune.edr -g tune.log
> >>>>
> >>>>> mpirun -np $NPROCS g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb
> >>>>> -x tune.xtc -e tune.edr -g tune.log -nice 0
> >>>> This way you will get $NPROCS g_tune_pme instances, each trying to run
> >>>> an mdrun job on 24 cores, which is not what you want. g_tune_pme itself
> >>>> is a serial program; it just spawns the mdrun runs.
> >>>>
> >>>> Carsten
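For anyone reading this in the archive, here is a minimal sketch of a job file along the lines Carsten describes (the queue properties and installation paths are site-specific; note also that with a #!/bin/csh shebang the bash-style 'export VAR=value' lines would fail, so the csh form setenv is used here):

#!/bin/csh
#PBS -l nodes=2:ppn=12:twelve
#PBS -N bilayer_tune

cd $PBS_O_WORKDIR

# g_tune_pme looks up the MPI launcher and the parallel mdrun binary
# in these two environment variables (csh syntax to match the shebang)
setenv MPIRUN `which mpirun`
setenv MDRUN  /cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5

# g_tune_pme itself is serial and only spawns the parallel mdrun test runs;
# -np is the total number of MPI processes over both nodes (2 x 12 = 24)
g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log -nice 0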
> >>>>>
> >>>>>
> >>>>> Then I submit the script using qsub.
> >>>>> When I log in to the compute nodes, I do not find any mdrun executable
> >>>>> running.
> >>>>>
> >>>>> I also tried using nodes=1 and -np 12. It did not work through qsub.
> >>>>>
> >>>>> Then I logged in to the compute nodes and executed g_tune_pme_4.5.5
> >>>>> -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log
> >>>>> -nice 0
> >>>>>
> >>>>> It worked.
> >>>>>
> >>>>> Also, if I just use
> >>>>> g_tune_pme_4.5.5 -np 12 -s md0-200.tpr -c tune.pdb -x tune.xtc -e
> >>>>> tune.edr -g tune.log -nice 0
> >>>>> g_tune_pme executes on the head node and writes various files.
> >>>>>
> >>>>> Kindly let me know what I am missing when I submit through qsub.
> >>>>>
> >>>>> Thanks
> >>>>>
> >>>>> Chandan
> >>>>> --
> >>>>> Chandan kumar Choudhury
> >>>>> NCL, Pune
> >>>>> INDIA
> >>>>>
> >>>>>
> >>>>> On Mon, Sep 3, 2012 at 3:31 PM, Carsten Kutzner <ckutzne at gwdg.de> wrote:
> >>>>>
> >>>>>> Hi Chandan,
> >>>>>>
> >>>>>> g_tune_pme also finds the optimal number of PME cores if the cores
> >>>>>> are distributed on multiple nodes. Simply pass the total number of
> >>>>>> cores to the -np option. Depending on the MPI and queue environment
> >>>>>> that you use, the distribution of the cores over the nodes may have
> >>>>>> to be specified in a hostfile / machinefile. Check g_tune_pme -h
> >>>>>> on how to set that.
> >>>>>>
> >>>>>> Best,
> >>>>>> Carsten
> >>>>>>
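As a sketch of the hostfile route mentioned above, assuming an Open MPI style launcher (the hostnames and slot counts are placeholders, and whether extra launcher arguments can be folded into MPIRUN like this should be verified against g_tune_pme -h for the installed version):

# hosts.txt -- one line per allocated node (Open MPI hostfile syntax)
node001 slots=12
node002 slots=12

# hand the hostfile to the launcher that g_tune_pme will call
export MPIRUN="mpirun --hostfile hosts.txt"
export MDRUN="/cm/shared/apps/gromacs/4.5.5/single/bin/mdrun_mpi_4.5.5"
g_tune_pme_4.5.5 -np 24 -s md0-200.tpr -c tune.pdb -x tune.xtc -e tune.edr -g tune.log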
> >>>>>>
> >>>>>> On Aug 28, 2012, at 8:33 PM, Chandan Choudhury <iitdckc at gmail.com> wrote:
> >>>>>>
> >>>>>>> Dear gmx users,
> >>>>>>>
> >>>>>>> I am using GROMACS 4.5.5.
> >>>>>>>
> >>>>>>> I was trying to use g_tune_pme for a simulation. I intend to execute
> >>>>>>> mdrun on multiple nodes with 12 cores each. Therefore, I would like
> >>>>>>> to optimize the number of PME nodes. I could execute g_tune_pme -np
> >>>>>>> 12 md.tpr, but this will only find the optimal PME nodes for a
> >>>>>>> single-node run. How do I find the optimal PME nodes for multiple
> >>>>>>> nodes?
> >>>>>>>
> >>>>>>> Any suggestion would be helpful.
> >>>>>>>
> >>>>>>> Chandan
> >>>>>>>
> >>>>>>> --
> >>>>>>> Chandan kumar Choudhury
> >>>>>>> NCL, Pune
> >>>>>>> INDIA
> >>>>>>
> >>>>>>
> >>>>
> >>>>
> >>
> >>
>
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/grubmueller/kutzner
> http://www.mpibpc.mpg.de/grubmueller/sppexa
>
>


