[gmx-users] parallel job

Szilárd Páll pall.szilard at gmail.com
Sat Jun 18 22:47:11 CEST 2016


Hi,

Assuming the build was made on the same CPUs as the ones used at
runtime and the 7x8=56 threads were chosen intentionally, the machine
is presumably a quad-socket AMD Opteron 6276 node. That's 64 cores
and 8 NUMA regions (the internal structural units Mark is referring
to). If that's the case, this run is using OpenMP thread groups that
fit exactly within NUMA regions and don't span across them. However,
I see no "-pin on", which means it's up to the OS to place threads.
That's not a good sign and is likely causing a severe performance loss.
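
(If you want to double-check the topology, standard tools will show
it; e.g., assuming the Opteron 6276 guess above is right:

  # expect "NUMA node(s): 8" on a quad-socket 6276 box
  lscpu | grep -i numa

and enabling pinning is just a matter of adding "-pin on" to the
mdrun command line.)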

Coming back to the original question: you see a single "gmx" binary
because its 7 thread-MPI ranks (which are pthreads) are each using 8
OpenMP threads. Those are all threads, not processes.
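
(You can confirm this by asking top for threads rather than
processes, e.g. something like:

  # -H lists individual threads; expect ~56 of them under the one gmx process
  top -H -p $(pgrep -x gmx)

ps -eLf | grep gmx would show the same thing.)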

So to conclude:
- I suggest you share a log file;
- Pin your threads!
- Use fewer threads per rank (unless your DD is limited); on AMD,
1-2, at most 4 threads per rank typically works best.
- If you are not using all cores anyway, consider shifting your mdrun
job over to cores 8-63 (instead of 0-55); it may work out better
since, IIRC, core 0 handles interrupts. See the sketch below.
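
As a concrete sketch combining the last three points (untested, and
assuming your DD tolerates 28 domains; adjust the -ntmpi/-ntomp split
to whatever your system allows):

  # 28 ranks x 2 OpenMP threads = 56 threads, pinned to cores 8-63
  gmx mdrun -deffnm prd -s prd.tpr -ntmpi 28 -ntomp 2 -pin on -pinoffset 8

-pin, -pinoffset and -pinstride are standard mdrun options;
-pinoffset 8 makes mdrun start pinning at logical core 8 instead of
core 0.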

Cheers,
--
Szilárd


On Sat, Jun 18, 2016 at 4:25 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
> Hi,
>
> We can't tell how many ranks mdrun ended up using, which e.g. equals the
> number of domains it reports. You might also want to think about using a
> compiler version that was written this decade. Particularly with AMD
> hardware, you very much want your domain structure to match what the
> hardware looks like. I dunno what AMD-based node might possibly have 56
> cores, but you definitely do not want the thread-MPI ranks spanning more
> than one of the internal structural units. GROMACS OpenMP scaling isn't
> fantastic in general, and it is at its least effective in exactly that case.
>
> Mark
>
> On Sat, Jun 18, 2016 at 5:48 PM Alexander Alexander <
> alexanderwien2k at gmail.com> wrote:
>
>> Hello,
>>
>> Thanks for your response. I guess it is thread-MPI, and I do not know why I
>> get only a single gmx even though I use, for example, 56 slots.
>>
>> Please find below some information printed out in log file as well as part
>> of the submission scripts:
>> ------
>> GROMACS:      gmx mdrun, VERSION 5.1.2
>> Executable:   /home/itman/bin/gromacs-5.1.2/bin/gmx
>> Data prefix:  /home/itman/bin/gromacs-5.1.2
>> Command line:
>>   gmx mdrun -deffnm prd -s prd.tpr -ntomp 8 -ntmpi 7
>>
>> GROMACS version:    VERSION 5.1.2
>> Precision:          single
>> Memory model:       64 bit
>> MPI library:        thread_mpi
>> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
>> GPU support:        disabled
>> OpenCL support:     disabled
>> invsqrt routine:    gmx_software_invsqrt(x)
>> SIMD instructions:  AVX_128_FMA
>> FFT library:        fftw-3.3.4-sse2-avx
>> RDTSCP usage:       enabled
>> C++11 compilation:  disabled
>> TNG support:        enabled
>> Tracing support:    disabled
>> Built on:           Mon Feb 15 17:14:35 CET 2016
>> Built by:           itman at univ.m [CMAKE]
>> Build OS/arch:      Linux 2.6.32-431.29.2.el6.x86_64 x86_64
>> Build CPU vendor:   AuthenticAMD
>> Build CPU brand:    AMD Opteron(TM) Processor 6276
>> Build CPU family:   21   Model: 1   Stepping: 2
>> Build CPU features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm
>> misalignsse mmx msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2
>> sse3 sse4a sse4.1 sse4.2 ssse3 xop
>> C compiler:         /usr/lib64/ccache/cc GNU 4.4.7
>> C compiler flags:    -mavx -mfma4 -mxop    -Wundef -Wextra
>> -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
>> -Wno-unused -Wunused-value -Wunused-parameter  -O3 -DNDEBUG
>> -funroll-all-loops  -Wno-array-bounds
>> C++ compiler:       /usr/lib64/ccache/c++ GNU 4.4.7
>> C++ compiler flags:  -mavx -mfma4 -mxop    -Wundef -Wextra
>> -Wno-missing-field-initializers -Wpointer-arith -Wall -Wno-unused-function
>> -O3 -DNDEBUG -funroll-all-loops  -Wno-array-bounds
>> Boost version:      1.55.0 (internal)
>>
>> ---------
>>
>> #$ -A gromacs_parallel
>> #$ -pe smp 56
>> trap '' usr1
>> trap '' usr2
>> FB_CHEMIE=/home/fb_chem
>> export PATH=/home/itman/bin/gromacs-5.1.2/bin:$PATH
>> export LD_LIBRARY_PATH=/home/itman/bin/gromacs-5.1.2${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
>>
>> gmx mdrun -deffnm prd -s prd.tpr -ntomp 8 -ntmpi 7 >D.log 2>&1
>>
>> joberror=$?
>> exit $joberror
>> ------
>> Thanks,
>> Regards,
>> Alex
>>
>> On Sat, Jun 18, 2016 at 4:53 PM, Mark Abraham <mark.j.abraham at gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > Depends what you mean by parallel. The top utility will show you
>> processes,
>> > and whether via MPI or thread-MPI, there will generally be multiple
>> GROMACS
>> > processes started from one call of gmx mdrun.
>> >
>> > Mark
>> >
>> > On Sat, Jun 18, 2016 at 4:30 PM Alexander Alexander <
>> > alexanderwien2k at gmail.com> wrote:
>> >
>> > > Dear Gromacs user,
>> > >
>> > > For a GROMACS parallel job, I was wondering whether gmx would show up as
>> > > just a single "gmx" when one invoked "top" on the node, or whether its
>> > > distribution over the CPUs would show up as a series of "gmx" entries.
>> > >
>> > > Regards,
>> > > Alex

