[gmx-users] parallelizing gromacs2018.4

Abhishek Acharya abhi117acharya at gmail.com
Fri Nov 23 18:33:29 CET 2018


Hi.

You can add the following line to the PBS script.

export OMP_NUM_THREADS=4

-np should then become 5 in the mpirun command, since 5 MPI ranks x 4 OpenMP
threads use all 20 cores.
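
With the paths from your job script, the relevant lines would look roughly
like this (a sketch only; adjust if anything differs on your system):

export OMP_NUM_THREADS=4
/home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 5 -machinefile $PBS_NODEFILE \
    /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi mdrun -v -s NVT1.tpr -deffnm test9

Alternatively, you can pass -ntomp 4 to mdrun instead of exporting
OMP_NUM_THREADS; any split with ranks x threads = 20 and 1-6 threads per
rank avoids the warning.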

Abhishek

On Fri 23 Nov, 2018, 22:20 Mark Abraham, <mark.j.abraham at gmail.com> wrote:

> Hi,
>
> Looks like nodes=1:ppn=20 sets the number of OpenMP threads per rank to 20
> on your cluster. Check the documentation for the cluster and/or talk to
> your admins.
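>
> A quick way to see what the batch system is setting is to echo the
> variable in the job script just before mpirun, for example
>
>   echo "OMP_NUM_THREADS=${OMP_NUM_THREADS:-unset}"
>
> and then override it yourself (or pass -ntomp to mdrun) if it comes out
> as 20.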
>
> Mark
>
> On Fri, Nov 23, 2018 at 3:45 PM praveen kumar <praveenche at gmail.com>
> wrote:
>
> > Dear all
> > I have successfully installed GROMACS 2018.4 on my local PC and at the
> > HPC center (without GPU), using these commands:
> > CMAKE_PREFIX_PATH=/home/sappidi/software/fftw-3.3.8 \
> > /home/sappidi/software/cmake-3.13.0/bin/cmake .. \
> >     -DCMAKE_INCLUDE_PATH=/home/sappidi/software/fftw-3.3.8/include \
> >     -DCMAKE_LIBRARY_PATH=/home/sappidi/software/fftw-3.3.8/lib \
> >     -DGMX_GPU=OFF \
> >     -DGMX_MPI=ON \
> >     -DGMX_OPENMP=ON \
> >     -DGMX_X11=ON \
> >     -DCMAKE_INSTALL_PREFIX=/home/sappidi/software/gromacs-2018.4 \
> >     -DCMAKE_CXX_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicxx \
> >     -DCMAKE_C_COMPILER=/home/sappidi/software/openmpi-2.0.1/bin/mpicc
> > make && make install
> > The sample job runs perfectly without using mpirun, but when I try to
> > run on multiple processors on a single node or across multiple nodes, I
> > get the following error message:
> >
> > "Fatal error:
> > Your choice of number of MPI ranks and amount of resources results in
> using
> > 20
> > OpenMP threads per rank, which is most likely inefficient. The optimum is
> > usually between 1 and 6 threads per rank. If you want to run with this
> > setup,
> > specify the -ntomp option. But we suggest to change the number of MPI
> > ranks."
> >
> > I have tried to rectify the problem in several ways but could not
> > succeed.
> > The sample job script file for my HPC run is shown below.
> >
> > #!/bin/bash
> > #PBS -N test
> > #PBS -q mini
> > #PBS -l nodes=1:ppn=20
> > #PBS -j oe
> > #$ -e err.$JOB_ID.$JOB_NAME
> > #$ -o out.$JOB_ID.$JOB_NAME
> > cd $PBS_O_WORKDIR
> > export I_MPI_FABRICS=shm:dapl
> > export I_MPI_MPD_TMPDIR=/scratch/sappidi/largefile/
> >
> >
> > /home/sappidi/software/openmpi-2.0.1/bin/mpirun -np 20 -machinefile
> > $PBS_NODEFILE /home/sappidi/software/gromacs-2018.4/bin/gmx_mpi  mdrun -v
> > -s NVT1.tpr -deffnm test9
> >
> > I am wondering what could be the reason.
> >
> > Thanks in advance
> > Praveen