[gmx-users] PRACE benchmarks

Dimitris Dellis ntelll at gmail.com
Thu Jul 14 19:23:04 CEST 2016


A few cents on this.
On our cluster, GROMACS 5.1.2 runs smoothly with well-configured OpenMPI
1.8.8, 1.10.2 and 1.10.3 (and 2.0.0 in testing), using the same PRACE
benchmark case (Case B).
Performance with these OpenMPI versions (for a run with maxh of 6 minutes,
for whatever that is worth as a reported figure) is:
1.8.8  : 1.981 ns/day
1.10.2 : 1.979 ns/day
1.10.3 : 1.957 ns/day

Check that the /tmp partition on your compute nodes is larger than ~100 MB.
If /tmp is smaller than that, consider excluding the sm BTL.
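
For example (a minimal sketch, assuming a standard Open MPI installation;
the mdrun arguments are simply those from the benchmark command earlier in
this thread), the shared-memory BTL can be excluded on the mpirun command
line:

  # disable the sm BTL; Open MPI then falls back to the remaining transports
  mpirun --mca btl ^sm gmx_mpi mdrun -s ion_channel.tpr -maxh 0.50 -resethway -noconfout

The same MCA parameter can also be set via the environment
(export OMPI_MCA_btl=^sm) or in openmpi-mca-params.conf.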

Dimitris


On Thu, Jul 14, 2016 at 3:32 PM, Mark Abraham <mark.j.abraham at gmail.com>
wrote:

> Hi,
>
> Any MPI-1.1 conformant implementation is great. The only time GROMACS has
> problems is when the MPI library does... OpenMPI 1.8.6 leaks memory; we use
> 1.8.10 locally.
>
> Mark
>
> On Thu, Jul 14, 2016 at 12:45 PM Adam Huffman <adam.huffman at gmail.com>
> wrote:
>
> > Hi Mark,
> >
> > Hmm. Other applications have been working with OpenMPI, but that
> > doesn't invalidate your point.
> >
> > Is there a particular implementation you recommend for GROMACS?
> >
> > Cheers,
> > Adam
> >
> >
> > On Thu, Jul 14, 2016 at 11:00 AM, Mark Abraham <mark.j.abraham at gmail.com>
> > wrote:
> > > Hi,
> > >
> > > Looks like your MPI setup is broken. So far, I have only heard of people
> > > having problems when using OpenMPI 1.10.x, even the latest patch release.
> > >
> > > Mark
> > >
> > > On Thu, Jul 14, 2016 at 11:52 AM Adam Huffman <adam.huffman at gmail.com>
> > > wrote:
> > >
> > >> Hello
> > >>
> > >> I've been trying to run the PRACE benchmarks on a new cluster:
> > >>
> > >> http://www.prace-ri.eu/ueabs/#GROMACS
> > >>
> > >> and this is the command-line I've been running:
> > >>
> > >> mpirun --mca plm_base_verbose 10 --debug-daemons gmx_mpi mdrun -s
> > >> ion_channel.tpr -maxh 0.50 -resethway -noconfout -nsteps 10000 -g
> > >> logfile-${SLURM_JOB_ID}
> > >>
> > >> This had been running for several days, so I tried again with 1000 steps,
> > >> and then even with just 10 steps, but the jobs run without appearing to
> > >> do anything:
> > >>
> > >> sstat -j 1511 --format Nodelist,NTasks,AveCPU,AveDiskRead,AveDiskWrite,AveRSS,AvePages
> > >>             Nodelist   NTasks     AveCPU  AveDiskRead AveDiskWrite     AveRSS   AvePages
> > >> -------------------- -------- ---------- ------------ ------------ ---------- ----------
> > >>          ca[127-157]        0  00:00.000            0            0          0          0
> > >>
> > >> I'm certainly no expert on GROMACS so I would appreciate it if someone
> > >> could point out any glaring errors in that mdrun invocation. (I added
> > >> the MPI debug flags for separate reasons).
> > >>
> > >> The logfile shows:
> > >>
> > >> Command line:
> > >>   gmx_mpi mdrun -v -s lignocellulose-rf.BGQ.tpr -maxh 0.50 -resethway
> > >> -noconfout -nsteps 10 -g logfile-B-1-1511
> > >>
> > >> GROMACS version:    VERSION 5.1.2
> > >> Precision:          single
> > >> Memory model:       64 bit
> > >> MPI library:        MPI
> > >> OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 32)
> > >> GPU support:        disabled
> > >> OpenCL support:     disabled
> > >> invsqrt routine:    gmx_software_invsqrt(x)
> > >> SIMD instructions:  AVX_256
> > >> FFT library:        fftw-3.3.4-sse2
> > >> RDTSCP usage:       enabled
> > >> C++11 compilation:  enabled
> > >> TNG support:        enabled
> > >> Tracing support:    disabled
> > >> Built on:           Thu 30 Jun 07:39:10 BST 2016
> > >> Built by:           huffmaa at login000.camp.thecrick.org [CMAKE]
> > >> Build OS/arch:      Linux 3.10.0-327.18.2.el7.x86_64 x86_64
> > >> Build CPU vendor:   GenuineIntel
> > >> Build CPU brand:    Intel Xeon E312xx (Sandy Bridge)
> > >> Build CPU family:   6   Model: 42   Stepping: 1
> > >> Build CPU features: aes apic avx clfsh cmov cx8 cx16 lahf_lm mmx msr
> > >> pclmuldq popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
> > >> C compiler:
> > >> /camp/apps/eb/software/OpenMPI/1.10.2-GCC-4.9.3-2.25/bin/mpicc GNU
> > >> 4.9.3
> > >> C compiler flags:    -mavx   -fopenmp -O2 -march=native -Wundef
> > >> -Wextra -Wno-missing-field-initializers -Wno-sign-compare
> > >> -Wpointer-arith -Wall -Wno-unused -Wunused-value -Wunused-parameter
> > >> -O3 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
> > >> -Wno-array-bounds
> > >> C++ compiler:
> > >> /camp/apps/eb/software/OpenMPI/1.10.2-GCC-4.9.3-2.25/bin/mpicxx GNU
> > >> 4.9.3
> > >> C++ compiler flags:  -mavx   -std=c++0x -fopenmp -O2 -march=native
> > >> -Wundef -Wextra -Wno-missing-field-initializers -Wpointer-arith -Wall
> > >> -Wno-unused-function  -O3 -DNDEBUG -funroll-all-loops
> > >> -fexcess-precision=fast  -Wno-array-bounds
> > >> Boost version:      1.60.0 (external)
> > >>
> > >> and nothing after that.
> > >>
> > >> strace of one of the processes shows:
> > >>
> > >> [pid  1374] poll([{fd=5, events=POLLIN}, {fd=12, events=POLLIN},
> > >> {fd=15, events=POLLIN}], 3, 0) = 0 (Timeout)
> > >>
> > >>
> > >> Cheers,
> > >> Adam