[gmx-users] Affinity setting for 1/16 threads failed. Version 5.0.2

Mark Abraham mark.j.abraham at gmail.com
Wed Nov 18 17:03:41 CET 2015


Hi,

Hard to say - where's the information on what processors are in this
cluster?
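
For example (only a sketch - assuming you can get a shell on a compute node;
the hardware-detection lines near the top of md.log report much the same
thing), something like

  lscpu
  grep -m1 'model name' /proc/cpuinfo

would tell us the CPU model and the core/thread layout.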

Mark

On Wed, Nov 18, 2015 at 4:46 PM Siva Dasetty <sdasett at g.clemson.edu> wrote:

> Thank you, Mark. Yes, I have already taken this issue up with Gordon support,
> and while I am awaiting their response I am wondering whether it has anything
> to do with bug #1184: http://redmine.gromacs.org/issues/1184.
>
>
> Thank you again for your quick response.
>
> On Wed, Nov 18, 2015 at 10:31 AM, Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
> > Hi,
> >
> > These are good issues to take up with the support staff of Gordon. mdrun
> > tries to be a good citizen and by default stays out of the way if some
> > other part of the software stack is already managing process affinity. As
> > you can see, doing it right is crucial for good performance. But
> > mdrun -pin on always works everywhere we know about.
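> >
> > As a rough sketch (MV2_ENABLE_AFFINITY is MVAPICH2's own binding switch;
> > the launcher line and file names are assumptions about how Gordon is set
> > up, not a tested recipe), one way to make sure only mdrun manages affinity
> > would be:
> >
> >   export MV2_ENABLE_AFFINITY=0   # tell MVAPICH2 not to bind processes itself
> >   mpirun -np 16 mdrun_mpi -s topol.tpr -pin on -pinoffset 0 -pinstride 1
> >
> > and md.log should then show whether the affinity warning has gone away.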
> >
> > Mark
> >
> > On Wed, Nov 18, 2015 at 3:28 PM Siva Dasetty <sdasett at g.clemson.edu>
> > wrote:
> >
> > > Dear all,
> > >
> > > I am running simulations using version 5.0.2 (the default on Gordon) and
> > > I am seeing a performance loss from 180 ns/day to 8 ns/day compared to
> > > the same simulations that I previously ran on a different cluster.
> > >
> > > On both clusters I am using a single node and 16 CPUs (no GPUs), and this
> > > is the command line I used:
> > >
> > > mdrun_mpi -s <tpr file> -v -deffnm <output file> -nb cpu -cpi <cpt file>
> > > -append -pin on
> > >
> > >
> > > The following is reported in the log file:
> > >
> > >
> > > WARNING: Affinity setting for 1/16 threads failed.
> > >
> > >          This can cause performance degradation! If you think your setting
> > >          are correct, contact the GROMACS developers.
> > >
> > >
> > > I even tried running a simulation without the -pin on flag, and there is
> > > no change in the performance.
> > >
> > >
> > > Are there any other options that I can try to recover the performance?
> > >
> > >
> > >
> > > Additional Information:
> > >
> > >
> > > The other difference I could see is in the compilers:
> > >
> > >
> > > On Gordon (8 ns/day):
> > >
> > >
> > > C compiler: /opt/mvapich2/intel/ib/bin/mpicc Intel 13.0.0.20121010
> > >
> > > C compiler flags: -mavx -std=gnu99 -w3 -wd111 -wd177 -wd181 -wd193 -wd271
> > > -wd304 -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419
> > > -wd1572 -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074
> > > -wd11076 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const -ansi-alias
> > >
> > > C++ compiler: /opt/mvapich2/intel/ib/bin/mpicxx Intel 13.0.0.20121010
> > >
> > > C++ compiler flags: -mavx -w3 -wd111 -wd177 -wd181 -wd193 -wd271 -wd304
> > > -wd383 -wd424 -wd444 -wd522 -wd593 -wd869 -wd981 -wd1418 -wd1419 -wd1572
> > > -wd1599 -wd2259 -wd2415 -wd2547 -wd2557 -wd3280 -wd3346 -wd11074 -wd11076
> > > -wd1782 -wd2282 -O3 -DNDEBUG -ip -funroll-all-loops -alias-const
> > > -ansi-alias
> > >
> > >
> > > On our cluster (180 ns/day):
> > >
> > >
> > > C compiler: /software/openmpi/bin/mpicc GNU 4.8.1
> > >
> > > C compiler flags: -msse4.1 -Wno-maybe-uninitialized -Wextra
> > > -Wno-missing-field-initializers -Wno-sign-compare -Wpointer-arith -Wall
> > > -Wno-unused -Wunused-value -Wunused-parameter -O3 -DNDEBUG
> > > -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast
> > > -Wno-array-bounds
> > >
> > > C++ compiler: /software/openmpi/bin/mpicxx GNU 4.8.1
> > >
> > > C++ compiler flags: -msse4.1 -Wextra -Wno-missing-field-initializers
> > > -Wpointer-arith -Wall -Wno-unused-function -O3 -DNDEBUG
> > > -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast
> > > -Wno-array-bounds
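> > >
> > > (As a quick cross-check - the grep pattern is only an illustration, and an
> > > MPI build may need to be run through the launcher - the SIMD level each
> > > binary was built with is also printed in the version output:
> > >
> > >   mdrun_mpi -version | grep -i 'SIMD instructions'
> > >
> > > which should be consistent with the -mavx / -msse4.1 flags above.)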
> > >
> > >
> > >
> > >
> > > Thanks in advance for your help,
> > >
> > > --
> > > Siva Dasetty
>
>
>
> --
> Siva Dasetty

