[gmx-users] parallel job

Mark Abraham mark.j.abraham at gmail.com
Sat Jun 18 18:25:54 CEST 2016


Hi,

We can't tell how many ranks mdrun ended up using, which, for example,
equals the number of domains it reports. You might also want to think
about using a compiler version that was written this decade. Particularly
with AMD hardware, you very much want your domain structure to match what
the hardware looks like. I don't know what AMD-based node might possibly
have 56 cores, but you definitely do not want the thread-MPI ranks
spanning more than one of the hardware's internal structural units.
GROMACS OpenMP scaling isn't fantastic, and it is at its weakest when
threads span such a unit.
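The quoted script asks the scheduler for 56 slots and runs mdrun with
`-ntmpi 7 -ntomp 8`, i.e. 7 × 8 = 56 threads. A quick sanity check of that
arithmetic before submitting can catch mismatches early; a minimal sketch
(variable names and the warning text are illustrative, the values come from
the script quoted below):

```shell
#!/bin/sh
# Sketch: check that thread-MPI ranks x OpenMP threads matches the
# scheduler slots before launching mdrun (values from the post below).
NTMPI=7      # thread-MPI ranks  (-ntmpi)
NTOMP=8      # OpenMP threads per rank (-ntomp)
SLOTS=56     # slots requested with '#$ -pe smp 56'

TOTAL=$((NTMPI * NTOMP))
if [ "$TOTAL" -ne "$SLOTS" ]; then
    echo "warning: $TOTAL threads != $SLOTS slots" >&2
fi
echo "$TOTAL"    # prints 56
```

After the run, the log file reports the decomposition mdrun actually used;
grepping it for lines mentioning "threads" shows the rank and thread counts
in effect.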

Mark

On Sat, Jun 18, 2016 at 5:48 PM Alexander Alexander <
alexanderwien2k at gmail.com> wrote:

> Hello,
>
> Thanks for your response. I guess it is thread-MPI, and I do not know why
> I see only a single gmx process even though I request, for example, 56
> slots.
>
> Please find below some information printed out in log file as well as part
> of the submission scripts:
> ------
> GROMACS:      gmx mdrun, VERSION 5.1.2
> Executable:   /home/itman/bin/gromacs-5.1.2/bin/gmx
> Data prefix:  /home/itman/bin/gromacs-5.1.2
> Command line:
>   gmx mdrun -deffnm prd -s prd.tpr -ntomp 8 -ntmpi 7
>
> GROMACS version:    VERSION 5.1.2
> Precision:          single
> Memory model:       64 bit
> MPI library:        thread_mpi
> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
> GPU support:        disabled
> OpenCL support:     disabled
> invsqrt routine:    gmx_software_invsqrt(x)
> SIMD instructions:  AVX_128_FMA
> FFT library:        fftw-3.3.4-sse2-avx
> RDTSCP usage:       enabled
> C++11 compilation:  disabled
> TNG support:        enabled
> Tracing support:    disabled
> Built on:           Mon Feb 15 17:14:35 CET 2016
> Built by:           itman at univ.m [CMAKE]
> Build OS/arch:      Linux 2.6.32-431.29.2.el6.x86_64 x86_64
> Build CPU vendor:   AuthenticAMD
> Build CPU brand:    AMD Opteron(TM) Processor 6276
> Build CPU family:   21   Model: 1   Stepping: 2
> Build CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
> misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2
> sse3 sse4a sse4.1 sse4.2 ssse3 xop
> C compiler:         /usr/lib64/ccache/cc GNU 4.4.7
> C compiler flags:    -mavx -mfma4 -mxop    -Wundef -Wextra
> -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
> -Wno-unused -Wunused-value -Wunused-parameter  -O3 -DNDEBUG
> -funroll-all-loops  -Wno-array-bounds
> C++ compiler:       /usr/lib64/ccache/c++ GNU 4.4.7
> C++ compiler flags:  -mavx -mfma4 -mxop    -Wundef -Wextra
> -Wno-missing-field-initializers -Wpointer-arith -Wall -Wno-unused-function
> -O3 -DNDEBUG -funroll-all-loops  -Wno-array-bounds
> Boost version:      1.55.0 (internal)
>
> ---------
>
> #$ -A gromacs_parallel
> #$ -pe smp 56
> trap '' usr1
> trap '' usr2
> FB_CHEMIE=/home/fb_chem
> export PATH=/home/itman/bin/gromacs-5.1.2/bin:$PATH
> export LD_LIBRARY_PATH=/home/itman/bin/gromacs-5.1.2${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
>
> gmx mdrun -deffnm prd -s prd.tpr -ntomp 8 -ntmpi 7 >D.log 2>&1
>
> joberror=$?
> exit $joberror
> ------
> Thanks,
> Regards,
> Alex
>
> On Sat, Jun 18, 2016 at 4:53 PM, Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
> > Hi,
> >
> > Depends what you mean by parallel. The top utility will show you
> > processes, and whether via MPI or thread-MPI, there will generally be
> > multiple GROMACS processes started from one call of gmx mdrun.
> >
> > Mark
> >
> > On Sat, Jun 18, 2016 at 4:30 PM Alexander Alexander <
> > alexanderwien2k at gmail.com> wrote:
> >
> > > Dear Gromacs user,
> > >
> > > For a GROMACS parallel job, I was wondering whether gmx would show up
> > > as just a single "gmx" process when one invokes "top", or whether its
> > > distribution over the CPUs would show up as a series of "gmx"
> > > processes in top inside the node.
> > >
> > > Regards,
> > > Alex
> > > --
> > > Gromacs Users mailing list
> > >
> > > * Please search the archive at
> > > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > > posting!
> > >
> > > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >
> > > * For (un)subscribe requests visit
> > > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > > send a mail to gmx-users-request at gromacs.org.
> > >

