[gmx-users] parallelizing gromacs2018.4
praveen kumar
praveenche at gmail.com
Mon Nov 26 07:39:21 CET 2018
Dear all,
As per the suggestions given, I am now able to run the simulations on 1 node
with 20 CPUs:
"export OMP_NUM_THREADS=4
-np should now become 5 in the mpirun command."
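That advice amounts to keeping MPI ranks times OpenMP threads equal to the
cores on each node. A minimal sketch of the arithmetic (assuming 20 cores per
node, as on my cluster):

```shell
# ranks * threads_per_rank should equal the cores available per node
cores_per_node=20
omp_threads=4                               # export OMP_NUM_THREADS=4
mpi_ranks=$((cores_per_node / omp_threads))
echo "$mpi_ranks"                           # 5, the value passed to mpirun -np
```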
However, when I use two nodes instead of one, the performance drops sharply:
1 node with 20 CPUs gives ~42 ns per day
2 nodes with 40 CPUs give ~3 ns per day
Below is the modified script for two nodes.
Script for 2 nodes (~3 ns per day):
#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=2:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=5
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/
#source /opt/software/intel/initpaths intel64
/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 8 \
    -machinefile $PBS_NODEFILE \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v \
    -s NVT1.tpr -deffnm 2
Script for one node (~40 ns per day):
#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=2:ppn=20
#PBS -j oe
#$ -e err.$JOB_ID.$JOB_NAME
#$ -o out.$JOB_ID.$JOB_NAME
cd $PBS_O_WORKDIR
export OMP_NUM_THREADS=5
export I_MPI_FABRICS=shm:dapl
export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/
#source /opt/software/intel/initpaths intel64
/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 4 \
    -machinefile $PBS_NODEFILE \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v \
    -s NVT1.tpr -deffnm 2
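For reference, here is a sketch of a two-node script that keeps the rank/thread
decomposition consistent. Assumptions (please verify for your cluster): 20
physical cores per node, $PBS_NODEFILE lists one line per allocated core, and
the same queue and installation paths as above.

```shell
#!/bin/bash
#PBS -N test
#PBS -q mini
#PBS -l nodes=2:ppn=20
#PBS -j oe
cd $PBS_O_WORKDIR

export OMP_NUM_THREADS=4

# With nodes=2:ppn=20 the nodefile has 40 lines; 40 cores / 4 threads = 10 ranks.
NCORES=$(wc -l < "$PBS_NODEFILE")
NRANKS=$((NCORES / OMP_NUM_THREADS))

# --map-by node spreads ranks round-robin over the nodes, so 5 ranks land on
# each node instead of all 10 filling node 1 first.
/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np "$NRANKS" --map-by node \
    -machinefile "$PBS_NODEFILE" \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v \
    -ntomp "$OMP_NUM_THREADS" -s NVT1.tpr -deffnm 2
```

Note that I_MPI_FABRICS and I_MPI_MPD_TMPDIR are Intel MPI settings; under the
Open MPI mpirun used here they are most likely ignored, so the interconnect
mdrun actually reports in its log file is worth checking when a multi-node run
is this much slower than a single-node one.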
Any help in this regard is much appreciated.
Thanks
Praveen
On Sat, Nov 24, 2018 at 2:12 AM <
gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:
>
> Today's Topics:
>
> 1. Re: parallelizing gromacs2018.4 (Abhishek Acharya)
> 2. Re: Error: Cannot set thread affinities on the current
> platform (Neena Susan Eappen)
> 3. Re: Error: Cannot set thread affinities on the current
> platform (Benson Muite)
> 4. free binding energy calculation (marzieh dehghan)
> 5. Re: free binding energy calculation (Benson Muite)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Fri, 23 Nov 2018 23:03:12 +0530
> From: Abhishek Acharya <abhi117acharya at gmail.com>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Subject: Re: [gmx-users] parallelizing gromacs2018.4
> Message-ID:
> <CAB1aw3wCgN=
> vr_HedanUYOe+p-4y8eo7abgT2J-pORkV0TZM8A at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hi.
>
> You can add the following line to the PBS script.
>
> export OMP_NUM_THREADS=4
>
> -np should now become 5 in the mpirun command.
>
> Abhishek
>
> On Fri 23 Nov, 2018, 22:20 Mark Abraham, <mark.j.abraham at gmail.com> wrote:
>
> > Hi,
> >
> > Looks like nodes=1:ppn=20 sets the number of OpenMP threads per rank to
> > be 20 on your cluster. Check the documentation for the cluster and/or
> > talk to your admins.
> >
> > Mark
> >
> > On Fri, Nov 23, 2018 at 3:45 PM praveen kumar <praveenche at gmail.com>
> > wrote:
> >
> > > Dear all
> > > I have successfully installed gromacs 2018.4 in local PC and HPC center
> > > (Without GPU)
> > > using these commands
> > > CMAKE_PREFIX_PATH=/home/sappidi/software/fftw-3.3.8
> > > /home/sappidi/software/cmake-3.13.0/bin/cmake ..
> > > -DCMAKE_INCLUDE_PATH=/home/sappidi/software/fftw-3.3.8/include
> > > -DCMAKE_LIBRARY_PATH=/home/sappidi/software/fftw-3.3.8/lib
> > > -DGMX_GPU=OFF
> > > -DGMX_MPI=ON
> > > -DGMX_OPENMP=ON
> > > -DGMX_X11=ON
> -DCMAKE_INSTALL_PREFIX=/home/sappidi/software/gromacs-2018.4
> > > -DCMAKE_CXX_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicxx
> > > -DCMAKE_C_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicc
> > > make && make install
> > > the sample job runs perfectly without using mpirun.
> > > but when I want to run on multiple processors on single node or multi
> > > nodes, I am getting following error message
> > >
> > > "Fatal error:
> > > Your choice of number of MPI ranks and amount of resources results in
> > using
> > > 20
> > > OpenMP threads per rank, which is most likely inefficient. The optimum
> is
> > > usually between 1 and 6 threads per rank. If you want to run with this
> > > setup,
> > > specify the -ntomp option. But we suggest to change the number of MPI
> > > ranks."
> > >
> > > I have tried several ways to rectify the problem but could not
> > > succeed.
> > > The sample job script file for my HPC run is shown below.
> > >
> > > #!/bin/bash
> > > #PBS -N test
> > > #PBS -q mini
> > > #PBS -l nodes=1:ppn=20
> > > #PBS -j oe
> > > #$ -e err.$JOB_ID.$JOB_NAME
> > > #$ -o out.$JOB_ID.$JOB_NAME
> > > cd $PBS_O_WORKDIR
> > > export I_MPI_FABRICS=shm:dapl
> > > export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/
> > >
> > >
> > > /home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 20 -machinefile
> > > $PBS_NODEFILE /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun
> -v
> > > -s NVT1.tpr -deffnm test9
> > >
> > > I am wondering what could be the reason.
> > >
> > > Thanks in advance
> > > Praveen
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-request at gromacs.org.
> > >
> >
>
>
> ------------------------------
>
> Message: 2
> Date: Fri, 23 Nov 2018 19:03:06 +0000
> From: Neena Susan Eappen <neena.susaneappen at mail.utoronto.ca>
> To: "gromacs.org_gmx-users at maillist.sys.kth.se"
> <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: Re: [gmx-users] Error: Cannot set thread affinities on the
> current platform
> Message-ID:
> <
> YQXPR0101MB2104CB6171395403D92253FECDD40 at YQXPR0101MB2104.CANPRD01.PROD.OUTLOOK.COM
> >
>
> Content-Type: text/plain; charset="iso-8859-1"
>
> What do these thread affinities refer to?
> Does that error have an impact on simulations? My simulations still
> completed without any interruption.
>
> Thank you,
> Neena
>
>
> ------------------------------
>
> Message: 3
> Date: Fri, 23 Nov 2018 19:27:20 +0000
> From: Benson Muite <benson.muite at ut.ee>
> To: "gromacs.org_gmx-users at maillist.sys.kth.se"
> <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: Re: [gmx-users] Error: Cannot set thread affinities on the
> current platform
> Message-ID: <1e931899-9b8d-9510-8013-11ca2789beeb at ut.ee>
> Content-Type: text/plain; charset="utf-8"
>
> Generally thread affinities are how software threads are mapped to
> hardware cores:
> https://en.wikipedia.org/wiki/Processor_affinity
> https://computing.llnl.gov/tutorials/openMP/ProcessThreadAffinity.pdf
>
> This may have some impact on speed (but is dependent on the computer
> chip, program being run and data that is being processed), see for example:
> https://software.intel.com/en-us/node/522691
>
> http://developer.amd.com/wp-content/resources/56263-Performance-Tuning-Guidelines-PUB.pdf
>
> It usually should not change results significantly - only expect changes
> in rounding errors.
>
> On 11/23/18 8:03 PM, Neena Susan Eappen wrote:
> > What do these thread affinities refer to?
> > Does that error have an impact on simulations? My simulations were still
> completed without any halt in between.
> >
> > Thank you,
> > Neena
>
>
>
> ------------------------------
>
> Message: 4
> Date: Fri, 23 Nov 2018 11:57:59 -0800
> From: marzieh dehghan <dehghanmarzieh at gmail.com>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] free binding energy calculation
> Message-ID:
> <CA+6Z3GmGHrfmwSFZ-9bFoSeVQSuKjky=
> ESHHPumbDJmLfLpJZQ at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Dear all
>
> I want to perform a free energy calculation with GROMACS 5.1.4, following
> this tutorial:
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/free_energy/04_EM.html
>
> When I run ./job.sh, I encounter the following error:
> "Right hand side '1.0' for parameter 'sc-power' in parameter file is not
> an integer value"
>
> please let me know how to solve this problem.
> Thanks a lot
> Marzieh
> --
>
>
>
>
> Marzieh Dehghan
> PhD of Biochemistry
> Institute of Biochemistry and Biophysics (IBB)
> University of Tehran, Tehran, Iran.
>
>
> ------------------------------
>
> Message: 5
> Date: Fri, 23 Nov 2018 20:41:02 +0000
> From: Benson Muite <benson.muite at ut.ee>
> To: "gromacs.org_gmx-users at maillist.sys.kth.se"
> <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: Re: [gmx-users] free binding energy calculation
> Message-ID: <e5f1886d-2e5e-5a6f-a084-6266aeea9c72 at ut.ee>
> Content-Type: text/plain; charset="utf-8"
>
> Probably not so helpful, but have you tried the latest version of the
> tutorial, for GROMACS 2018:
>
> http://www.mdtutorials.com/gmx/free_energy/03_workflow.html
>
> On 11/23/18 8:57 PM, marzieh dehghan wrote:
> > Dear all
> >
> > I want to perform a free energy calculation with GROMACS 5.1.4, following
> > this tutorial:
> > http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin/gmx-tutorials/free_energy/04_EM.html
> >
> > When I run ./job.sh, I encounter the following error:
> > "Right hand side '1.0' for parameter 'sc-power' in parameter file is not
> > an integer value"
> >
> > please let me know how to solve this problem.
> > Thanks a lot
> > Marzieh
>
>
> ------------------------------
>
>
> End of gromacs.org_gmx-users Digest, Vol 175, Issue 84
> ******************************************************
>