[gmx-users] Testing gromacs on IBM POWER8

Mark Abraham mark.j.abraham at gmail.com
Mon Sep 7 10:59:58 CEST 2015


Hi,

On Wed, Sep 2, 2015 at 9:25 PM Fabricio Cannini <fcannini at gmail.com> wrote:

> On 27-08-2015 04:17, Mark Abraham wrote:
> > Hi,
> >
> > I have no idea what you're trying to show with these graphs. Your
> vertical
> > axis of time makes it looks like a 2.5 year old AMD chip is walking all
> > over POWER8?
> >
> > Other points
> > 1) GROMACS doesn't use configure or fortran, so don't mention that
>
> This is part of a report in which we're testing Quantum Espresso as
> well. So that's why there's configure and fortran there.
>

OK, but do you mention the CMake settings for Espresso? :-)
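(For the record, GROMACS is configured with CMake rather than autoconf. A
minimal sketch of a POWER8 build follows; the compiler wrappers, SIMD
setting and install prefix are assumptions, so check the available options
for your GROMACS version:

  # Sketch only: MPI + OpenMP build of GROMACS, using VSX SIMD on POWER8
  cmake .. \
    -DCMAKE_C_COMPILER=mpicc -DCMAKE_CXX_COMPILER=mpicxx \
    -DGMX_MPI=on -DGMX_OPENMP=on \
    -DGMX_SIMD=IBM_VSX \
    -DGMX_BUILD_OWN_FFTW=ON \
    -DCMAKE_INSTALL_PREFIX=$HOME/gromacs
  make -j && make install
)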


> > 2) these simulations do not use lapack or blas, so don't mention that
>
> We were not aware of this. Can you please point to benchmarks that use
> blas/lapack, or how can we tell if it does or not ?
>

It's classical MD, so it does not. (mdrun can do some niche calculation
types that do use linear algebra, but those are not relevant to these
benchmarks.)


> > 3) you need to clarify what a "CPU" is... core, hardware thread?
>
> We've edited the graphics to make it more clear.
>
> > 4) when you're using fewer cores than the whole node, then you need to
> > report how you are managing thread affinity
>
> We're setting 1 MPI process per core, then n OpenMP threads mapped to
> that core's hardware threads, using OpenMP's OMP_PROC_BIND and mpirun's
> bind-to-core.
>

That's a poor regime for our current OpenMP scaling. For best performance,
you probably want one MPI rank per core (or per pair of cores), with only a
small number of OpenMP threads per rank bound to that core's hardware
threads.

> > 5) the splines on the graphs are misleading when reporting discrete
> > quantities
>
> Would it be clearer using solid bars?
>

It's a point, so I'd use a point :-)


> > 6) you need to report times that are measured after auto-tuning completes
>
> Should I discard the time spent before auto-tune completes?
>

Yes, see mdrun -h about -resetstep (and others)
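
For example (the counter-reset options are what matter here; the .tpr name
and the step value are just placeholders):

  # Exclude start-up and auto-tuning from the reported performance by
  # resetting the timing counters halfway through the run...
  gmx_mpi mdrun -s topol.tpr -resethway
  # ...or at an explicit step of your choosing:
  gmx_mpi mdrun -s topol.tpr -resetstep 10000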

Mark


> > 7) you need to report whether you are using MPI, thread-MPI or OpenMP to
> > distribute work to threads.
>
> We only used MPI and OpenMP.
>
>
>
> TIA,
> Fabricio

