[gmx-users] Thread affinity setting failed

Mark Abraham mark.j.abraham at gmail.com
Mon Mar 4 13:45:52 CET 2013


On Mon, Mar 4, 2013 at 6:02 AM, Reid Van Lehn <rvanlehn at mit.edu> wrote:

> Hello users,
>
> I ran into a bug I do not understand today upon upgrading from v. 4.5.5 to
> v 4.6. I'm using older 8 core Intel Xeon E5430 machines, and when I
> submitted a job for 8 cores to one of the nodes I received the following
> error:
>
> NOTE: In thread-MPI thread #3: Affinity setting failed.
>       This can cause performance degradation!
>
> NOTE: In thread-MPI thread #2: Affinity setting failed.
>       This can cause performance degradation!
>
> NOTE: In thread-MPI thread #1: Affinity setting failed.
>       This can cause performance degradation!
>
> I ran mdrun simply with the flags:
>
> mdrun -v -ntmpi 8 -deffnm em
>
> Using the top command, I confirmed that no other programs were running and
> that mdrun was in fact only using 5 cores. Reducing -ntmpi to 7, however,
> resulted in no error (only a warning about not using all of the logical
> cores) and mdrun used 7 cores correctly. Since it warned about thread
> affinity settings, I tried setting -pin on -pinoffset 0 even though I was
> using all the cores on the machine. This resulted in the same error.
> However, turning pinning off explicitly with -pin off (rather than -pin
> auto) did correctly give me all 8 cores again.
>
> While I figured out a solution in this particular instance, my question is
> whether I should have known from my hardware/settings that pinning
> should be turned off (for future reference), or if this is a bug?
>
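
For reference, the work-around above in command form (a sketch reusing the
same flags, just with pinning switched off explicitly instead of left on
auto):

  mdrun -v -ntmpi 8 -pin off -deffnm em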

I'm not sure whether it's a bug - those are 2007-era processors, so there
may be limitations in what they can do (or in how well the kernel and
system libraries support setting thread affinity on them), and investing
time in tracking down the real problem is probably not worthwhile. Thanks
for reporting your work-around, though; others might benefit from it. If
you plan on doing lengthy simulations, you might like to verify that you
get linear scaling with increasing -ntmpi, and/or compare performance with
the MPI version on the same hardware.
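
Something like the following would do for a quick check (a rough sketch;
the log-file names are only a suggestion, and mdrun_mpi is assumed to be
the name of an MPI-enabled build on your machines):

  # does performance scale with the number of thread-MPI ranks?
  for n in 1 2 4 8; do
      mdrun -ntmpi $n -deffnm em -g scaling_ntmpi_${n}.log
  done

  # the same system with a real MPI build, for comparison
  mpirun -np 8 mdrun_mpi -deffnm em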

Mark


