[gmx-developers] Gromacs benchmark set

Alexey Shvetsov alexxy at omrb.pnpi.spb.ru
Thu Jul 3 10:39:56 CEST 2014


Hi!

In a message of 3 July 2014 09:26:57, David van der Spoel wrote:
> On 02/07/14 09:28, Alexey Shvetsov wrote:
> > Hi Szilárd,
> > 
> > There are actually a few goals related to such benchmarking:
> > 
> > 1. Check whether this hardware is suitable to run GROMACS (I have
> > already done some tests with my own systems, such as RecA protein
> > filaments and nucleosomes, all ca. 800k-1.4M atoms in size). Check
> > scaling and compare it to existing systems (that's why I asked about
> > some kind of standard benchmark set)
> > 
> > 2. Another thing is to check the scalability of algorithms (PME, RF)
> > 
> > RSC PetaStream is a special system (its design is very similar to the
> > next-generation Xeon Phi KNL systems). It uses Xeon Phi cards as regular
> > compute nodes connected by multiple InfiniBand links, so the Xeon Phi to
> > Xeon Phi bandwidth is ~6 GB/s for medium and large MPI messages.
> > 
> > From what I have seen on some systems, GMX 5.0 on Xeon Phi scales quite
> > well (down to ~100-200 atoms per Xeon Phi thread).
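
That per-thread limit gives a quick ceiling on how far a given system can
usefully scale out. A back-of-the-envelope sketch in Python (the ~240
hardware threads per card is my assumption for a 60-core KNC part):

# Strong-scaling ceiling implied by the ~100-200 atoms/thread observation.
# Assumption: ~240 hardware threads per Xeon Phi card (60 cores x 4 threads).
THREADS_PER_PHI = 240

for atoms in (70000, 200000, 800000, 1400000):
    for atoms_per_thread in (100, 200):
        threads = atoms // atoms_per_thread          # useful thread count
        cards = threads / float(THREADS_PER_PHI)     # cards at full occupancy
        print("%8d atoms @ %3d atoms/thread: ~%5d threads (~%.0f cards)"
              % (atoms, atoms_per_thread, threads, cards))

So a 1.4M-atom system would saturate at roughly 7000-14000 threads, i.e. on
the order of 30-60 cards, while a 70k system tops out at a handful.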
> 
> However, a benchmark should not be all about 1M-atom systems (nothing
> against those) but should also cover smaller systems, such that we can
> show scaling versus system size too. I could contribute some relatively
> simple liquids.

Yep. Small systems are also an interesting case for this type of benchmark.
I have a few 70k-200k protein and protein-DNA systems. Lipid systems would
also be interesting.
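
For the scaling-versus-size runs, something like the driver below is what I
have in mind; a minimal sketch, assuming an MPI-enabled GROMACS 5.0 build
("gmx_mpi mdrun"), a plain mpirun launcher, and placeholder .tpr names:

# Sweep each input system over several rank counts and pull the ns/day
# figure out of the mdrun log. Adjust binary/launcher names to your site.
import re
import subprocess

SYSTEMS = ["liquid_70k.tpr", "protein_200k.tpr", "reca_800k.tpr"]  # placeholders
RANKS = [16, 32, 64, 128]

for tpr in SYSTEMS:
    for n in RANKS:
        log = "%s.%d.log" % (tpr, n)
        subprocess.check_call(
            ["mpirun", "-np", str(n), "gmx_mpi", "mdrun", "-s", tpr,
             "-nsteps", "20000",    # fixed-length benchmark run
             "-resethway",          # drop load-balancing warm-up from timings
             "-noconfout", "-g", log])
        m = re.search(r"Performance:\s+(\S+)", open(log).read())  # ns/day
        print(tpr, n, m.group(1) if m else "n/a")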

> 
> We should also focus on accuracy. In 5.0 we now have LJ-PME, and we
> should emphasize that we can achieve higher accuracy in our calculations
> with it (I'm about to submit a force field paper on this). Hence we
> should not waste (too much) time on low-accuracy solutions that are of
> limited practical use - read: reaction fields.

Benchmarking the algorithms and their accuracy is also interesting, so it's
a good idea to try =D
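
To keep the variants comparable, I'd generate the per-algorithm .mdp
settings from one common template, so only the long-range treatment differs
between runs. A minimal sketch (the option names are how I read the GROMACS
5.0 .mdp format; the cut-off values are placeholders):

# Write one .mdp fragment per long-range variant (PME, LJ-PME, RF),
# to be appended to a full template .mdp shared by all runs.
VARIANTS = {
    "pme":    {"coulombtype": "PME", "vdwtype": "Cut-off"},
    "lj-pme": {"coulombtype": "PME", "vdwtype": "PME",
               "lj-pme-comb-rule": "Geometric"},
    "rf":     {"coulombtype": "Reaction-Field", "epsilon-rf": "0",
               "vdwtype": "Cut-off"},
}
COMMON = {"cutoff-scheme": "Verlet", "rcoulomb": "1.0", "rvdw": "1.0"}

for name, opts in VARIANTS.items():
    settings = dict(COMMON)
    settings.update(opts)
    with open("bench_%s.mdp" % name, "w") as f:
        for key, val in sorted(settings.items()):
            f.write("%-24s = %s\n" % (key, val))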

> 
> > In a message of 1 July 2014 17:03:23, Szilárd Páll wrote:
> >> Hi Alexey,
> >> 
> >> There is no official benchmark set yet.
> >> 
> >> The right benchmark set will greatly depend on what your goal is.
> >> There is a wide range of possible ways to set up benchmarks and there
> >> is no single right way to do it. Most importantly, unless the goal is
> >> to i) show off hardware or ii) benchmark algorithms, the input systems
> >> should be representative of the production simulations that are/will
> >> be running on the hardware.
> >> 
> >> More concretely: for instance, if you want to show decent performance
> >> with Xeon Phi (especially strong scaling), you will probably need huge
> >> input systems, preferably homogeneous and even better without PME
> >> (whose 3D FFTs across multiple Phis will probably run very poorly).
> >> In contrast, if you use an input system like a 50-70k membrane protein
> >> simulated with PME, you will probably find it hard to show good
> >> performance compared to an IVB Xeon, let alone scaling.
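
One mitigation worth benchmarking explicitly here is dedicating a subset of
MPI ranks to PME, so the 3D FFTs stay confined to fewer cards. A minimal
sketch of how such launch lines could be built (-npme is the mdrun option;
the binary and launcher names are assumptions for this site):

# Build a launch command that puts a fraction of the ranks on PME only,
# confining the 3D-FFT all-to-all traffic to fewer Xeon Phi cards.
def pme_split_cmd(total_ranks, pme_fraction=0.25, tpr="bench.tpr"):
    npme = max(1, int(total_ranks * pme_fraction))  # dedicated PME ranks
    return ["mpirun", "-np", str(total_ranks),
            "gmx_mpi", "mdrun", "-s", tpr,
            "-npme", str(npme),
            "-resethway", "-noconfout"]

print(" ".join(pme_split_cmd(256)))  # e.g. 64 of 256 ranks do PME only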
> >> 
> >> IMHO the STFC benchmarks are very disadvantageous for GROMACS (all
> >> inputs use the CHARMM FF and related peculiar settings) and are
> >> therefore not very representative. Moreover, they are outdated.
> >> 
> >> Cheers,
> >> --
> >> Szilárd
> >> 
> >> 
> >> On Mon, Jun 30, 2014 at 3:50 PM, Alexey Shvetsov
> >> 
> >> <alexxy at omrb.pnpi.spb.ru> wrote:
> >>> Hi all!
> >>> 
> >>> We're going to run a series of benchmarks on the RSC PetaStream
> >>> system. It's based on Xeon Phi and designed to run native-mode codes.
> >>> Is there some kind of representative benchmark set? I have found this
> >>> one so far: http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx.
> >>> Maybe there are some other sets?
> >>> 
> >>> --
> >>> Best Regards,
> >>> Alexey 'Alexxy' Shvetsov, PhD
> >>> Department of Molecular and Radiation Biophysics
> >>> FSBI Petersburg Nuclear Physics Institute, NRC Kurchatov Institute,
> >>> Leningrad region, Gatchina, Russia
> >>> mailto:alexxyum at gmail.com
> >>> mailto:alexxy at omrb.pnpi.spb.ru

-- 
Best Regards,
Alexey 'Alexxy' Shvetsov, PhD
Department of Molecular and Radiation Biophysics
FSBI Petersburg Nuclear Physics Institute, NRC Kurchatov Institute,
Leningrad region, Gatchina, Russia
mailto:alexxyum at gmail.com
mailto:alexxy at omrb.pnpi.spb.ru