[gmx-developers] Gromacs benchmark set
Carsten Kutzner
ckutzne at gwdg.de
Thu Jul 3 10:50:27 CEST 2014
Hi,
we have done quite extensive benchmarking on SuperMUC (up to 32,000 cores)
and on Hydra (up to 5,000 cores and 512 GPUs) with three different
systems: an aquaporin plus membrane in water (~80k atoms), a ribosome (~2 M
atoms), and a big 12 M atom system. All of these use PME (no LJ-PME however…).
It could be interesting to run at least one of those systems on the Phi cluster
to see how it compares to the other clusters.
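
For a cluster-to-cluster comparison like this, a minimal sketch of how one
could drive a single run and pull the ns/day figure out of the log is below.
The .tpr/log file names, step count and the parsing are assumptions; the
mdrun options themselves are standard ones.

# Benchmark driver sketch: run a short mdrun on a prepared .tpr and
# report the ns/day number that GROMACS prints at the end of the log.
import subprocess

def run_benchmark(tpr="topol.tpr", log="bench.log", nsteps=10000):
    cmd = [
        "gmx", "mdrun",
        "-s", tpr,              # benchmark input (hypothetical file name)
        "-nsteps", str(nsteps), # keep the run short
        "-resethway",           # reset timers halfway to skip load-balancing startup
        "-noconfout",           # do not write the final configuration
        "-g", log,
    ]
    subprocess.run(cmd, check=True)
    with open(log) as fh:
        for line in fh:
            # the log ends with e.g. "Performance:   12.345   1.944" (ns/day, hour/ns)
            if line.strip().startswith("Performance:"):
                return float(line.split()[1])
    raise RuntimeError("no Performance line found in " + log)

if __name__ == "__main__":
    print("ns/day:", run_benchmark())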
Best,
Carsten
On 03 Jul 2014, at 09:26, David van der Spoel <spoel at xray.bmc.uu.se> wrote:
> On 02/07/14 09:28, Alexey Shvetsov wrote:
>> Hi Szilárd,
>>
>> There are actually a few goals related to such benchmarking:
>>
>> 1. Check whether this hardware is suitable for running GROMACS (I already did
>> some tests on my own systems, such as RecA protein filaments and nucleosomes;
>> all of them are roughly 800k - 1.4M atoms in size). Check scaling and compare
>> it to existing systems (that's why I asked about some kind of standard
>> benchmark set).
>>
>> 2. Another goal is to check the scalability of the algorithms (PME, RF).
>>
>> RSC PetaStream is a special system (its design is very similar to the
>> next-generation Xeon Phi KNL systems). It uses Xeon Phis as regular compute
>> nodes connected with multiple InfiniBand links, so the Xeon Phi to Xeon Phi
>> bandwidth is ~6 GB/s for medium and large MPI messages.
>>
>> From what I have seen on some systems, GMX 5.0 on Xeon Phi scales quite well
>> (down to ~100-200 atoms per Xeon Phi thread).
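
As a rough illustration of that limit, the sketch below estimates how far a
system of a given size can be spread over Phi threads. Only the ~100-200
atoms/thread figure comes from the observation above; the system sizes, card
counts and 240 threads/card are example numbers.

# Rough atoms-per-thread estimate for strong scaling on Xeon Phi.
# The ~100-200 atoms/thread limit is the empirical figure quoted above;
# the system sizes, card counts and 240 threads/card are example numbers.

def atoms_per_thread(n_atoms, n_cards, threads_per_card=240):
    return n_atoms / float(n_cards * threads_per_card)

for n_atoms in (800_000, 1_400_000):
    for n_cards in (4, 16, 64):
        app = atoms_per_thread(n_atoms, n_cards)
        note = "ok" if app >= 200 else "near or past the ~100-200 atoms/thread limit"
        print(f"{n_atoms:>9} atoms on {n_cards:>3} cards: {app:8.0f} atoms/thread ({note})")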
>
> However, a benchmark should not be all about 1 M atom systems (nothing against those) but should also cover smaller systems, so that we can show scaling versus system size too. I could contribute some relatively simple liquids.
>
> We should also focus on accuracy. In 5.0 we now have LJ-PME, and we should emphasize that we can achieve higher accuracy in our calculations with it (I'm about to submit a force field paper on this). Hence we should not waste (too much) time on low-accuracy solutions that are of limited practical use - read: reaction fields.
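
For reference, switching a 5.0 benchmark input over to LJ-PME amounts to an
.mdp fragment along these lines (a sketch; the cut-offs and combination rule
have to match the force field in question):

; LJ-PME sketch for GROMACS 5.0
cutoff-scheme    = Verlet
coulombtype      = PME
vdwtype          = PME         ; treat Lennard-Jones dispersion with PME as well
lj-pme-comb-rule = Geometric   ; combination rule for the dispersion grid part
rcoulomb         = 1.0         ; example cut-offs, adjust to the force field
rvdw             = 1.0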
>
>
>>
>> On 1 July 2014 at 17:03:23, Szilárd Páll wrote:
>>> Hi Alexey,
>>>
>>> There is no official benchmark set yet.
>>>
>>> The right benchmark set will greatly depend on what your goal is.
>>> There is a wide range of possible ways to set up benchmarks and there
>>> is no single right way to do it. Most importantly, unless the goal is
>>> to i) show off hardware or ii) benchmark algorithms, the input systems
>>> should be representative of the production simulations that are/will
>>> be running on the hardware.
>>>
>>> More concretely, if you want to show decent performance with Xeon Phi
>>> (especially strong scaling), you will probably need huge input systems,
>>> preferably homogeneous, and even better without PME (the 3D FFTs across
>>> multiple Phis will probably run very poorly). In contrast, if you use an
>>> input system like a 50-70k atom membrane protein simulated with PME, you
>>> will probably find it hard to show good performance compared to an IVB
>>> Xeon, let alone scaling.
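
To put "good performance, let alone scaling" into a number, the usual
quantity is the strong-scaling (parallel) efficiency relative to the smallest
run that fits the system. A short sketch; the (cores, ns/day) pairs are
placeholders, not measurements:

# Strong-scaling efficiency relative to the smallest run.
# The (cores, ns/day) pairs below are placeholders, not real measurements.

def efficiencies(results):
    # results: list of (cores, ns_per_day), sorted by core count
    base_cores, base_perf = results[0]
    for cores, perf in results:
        speedup = perf / base_perf
        ideal = cores / base_cores
        yield cores, perf, speedup / ideal

measured = [(64, 5.0), (128, 9.5), (256, 17.0), (512, 26.0)]
for cores, perf, eff in efficiencies(measured):
    print(f"{cores:4d} cores: {perf:6.1f} ns/day, parallel efficiency {eff:5.1%}")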
>>>
>>> IMHO the STFC benchmarks are very disadvantageous for GROMACS (all the
>>> inputs use the CHARMM FF and its peculiar settings), and therefore they are
>>> not very representative. Moreover, they are outdated.
>>>
>>> Cheers,
>>> --
>>> Szilárd
>>>
>>>
>>> On Mon, Jun 30, 2014 at 3:50 PM, Alexey Shvetsov
>>> <alexxy at omrb.pnpi.spb.ru> wrote:
>>>> Hi all!
>>>>
>>>> We're going to run a series of benchmarks on the RSC PetaStream system. It's
>>>> based on Xeon Phi and designed to run native-mode codes. Is there some kind
>>>> of representative benchmark set? So far I have found this one:
>>>> http://www.stfc.ac.uk/CSE/randd/cbg/Benchmark/25241.aspx. Maybe there are
>>>> some other sets?
>>>>
>>>> --
>>>> Best Regards,
>>>> Alexey 'Alexxy' Shvetsov, PhD
>>>> Department of Molecular and Radiation Biophysics
>>>> FSBI Petersburg Nuclear Physics Institute, NRC Kurchatov Institute,
>>>> Leningrad region, Gatchina, Russia
>>>> mailto:alexxyum at gmail.com
>>>> mailto:alexxy at omrb.pnpi.spb.ru
>>
>>
>>
>
>
> --
> David van der Spoel, Ph.D., Professor of Biology
> Dept. of Cell & Molec. Biol., Uppsala University.
> Box 596, 75124 Uppsala, Sweden. Phone: +46184714205. Fax: +4618511755.
> spoel at xray.bmc.uu.se spoel at gromacs.org http://folding.bmc.uu.se
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa