[gmx-developers] Thread affinity in log

Mark Abraham mark.j.abraham at gmail.com
Mon Apr 25 21:23:06 CEST 2016


Hi,

On Mon, Apr 25, 2016 at 8:56 PM Szilárd Páll <pall.szilard at gmail.com> wrote:

> On Mon, Apr 25, 2016 at 8:46 PM, Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
>> Hi,
>>
>> On Mon, Apr 25, 2016 at 7:19 PM Szilárd Páll <pall.szilard at gmail.com>
>> wrote:
>>
>>> Confirmed, we ran into it last Friday and 5820 seemed to fix the
>>> issue, but as the author of the change noted, it is unclear what the
>>> source of the crash is.
>>>
>>> BTW: we need to add a "-pin on" test to the verification matrix to make
>>> sure the thread pinning code gets tested. It can be post-submit too, but we
>>> have none of those on the horizon, so better to add an option to the current
>>> ones IMO.
>>>
>>
>> Yes and no. Doing it that way
>> * tests no more than that the code doesn't crash,
>> * creates another degree of freedom for the coverage matrix to manage,
>> and
>> * creates a situation where Jenkins could thrash if enough threads get
>> pinned to a common core.
>>
>> Much better is a unit test
>>
>
> I partially agree, a unit test would be suitable to verify the
> functionality of the pinning code in isolation from the "outer world".
> However, it will not be able to control external conditions (e.g. affinity
> set outside mdrun) - which I was going to mention, but it slipped my mind.
>

I'm not sure what case you're referring to. I don't think we can reasonably
test that some external method has set affinity and that it doesn't work, in
a way that a user or developer could act upon. We only have to test that,
when the user asks for pinning, it works.

That -pin auto and -pin on have different behaviours under different
combinations of inputs is its own standalone unit test that requires no
external dependency - our job in such a unit test is to check that our
logic does what we expect. It's a separate test that, given the output of
such logic, we can implement internal pinning - observing success there
needs hardware capable of implementing pinning (or not).
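
To sketch what I mean (just an illustration - the enum, the decidePinning()
helper and the -pin auto rule it encodes are hypothetical stand-ins for
whatever pure routine ends up encapsulating the choice, not existing mdrun
code):

#include <gtest/gtest.h>

// Hypothetical stand-in for the -pin decision logic (not GROMACS API):
// -pin on always pins, -pin off never pins, and -pin auto is assumed to
// pin only when mdrun's threads fill the whole node.
enum class PinRequest { Auto, On, Off };

static bool decidePinning(PinRequest request, int hwThreads, int mdrunThreads)
{
    switch (request)
    {
        case PinRequest::On:   return true;
        case PinRequest::Off:  return false;
        case PinRequest::Auto: return mdrunThreads == hwThreads;
    }
    return false;
}

TEST(ThreadAffinityDecision, AutoPinsOnlyWhenNodeIsFull)
{
    EXPECT_TRUE(decidePinning(PinRequest::Auto, 8, 8));
    EXPECT_FALSE(decidePinning(PinRequest::Auto, 8, 4));
}

TEST(ThreadAffinityDecision, ExplicitRequestIsAlwaysHonoured)
{
    EXPECT_TRUE(decidePinning(PinRequest::On, 8, 4));
    EXPECT_FALSE(decidePinning(PinRequest::Off, 8, 8));
}

Such a test exercises only the decision, so it runs on every configuration
in milliseconds and needs no particular hardware.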

>> that puts a bunch of unpinned threads doing some simple computation and
>> observes the expected behaviour of similar pinned threads.
>>
>
> Why is doing computation relevant to testing stuff? Pinned or not, threads
> will execute code correctly.
>

The relevant behaviour is that they don't migrate in practice. One can't
test for that unless the conditions are such that the kernel might actually
try to migrate them, i.e. there's plenty of activity and lots of threads. Of
course, on PowerPC that aspect of the test will automatically pass.
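
To make that concrete, something along these lines would do (a rough,
Linux-only sketch using raw pthread_setaffinity_np()/sched_getcpu() for
illustration; a real test would of course drive our own affinity-setting
code instead):

#include <pthread.h>
#include <sched.h>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

static void busyWork(long iterations)
{
    volatile double x = 0;
    for (long i = 0; i < iterations; ++i) { x += i * 1e-9; }
}

int main()
{
    const int numCores = std::thread::hardware_concurrency();
    std::atomic<bool> migrated{false};
    std::vector<std::thread> threads;

    // Unpinned "noise" threads give the kernel a reason to move things around.
    for (int i = 0; i < numCores; ++i)
    {
        threads.emplace_back([] { busyWork(200000000); });
    }
    // Pinned threads: each periodically checks that it is still on its core.
    for (int core = 0; core < numCores; ++core)
    {
        threads.emplace_back([core, &migrated]
        {
            cpu_set_t mask;
            CPU_ZERO(&mask);
            CPU_SET(core, &mask);
            pthread_setaffinity_np(pthread_self(), sizeof(mask), &mask);
            for (int chunk = 0; chunk < 100; ++chunk)
            {
                busyWork(2000000);
                if (sched_getcpu() != core) { migrated = true; }
            }
        });
    }
    for (auto &t : threads) { t.join(); }
    std::printf(migrated ? "FAIL: a pinned thread migrated\n"
                         : "PASS: pinned threads stayed on their cores\n");
    return migrated ? 1 : 0;
}

The noise threads and the busy loops matter: without contention the scheduler
has little reason to migrate anything, and the test would pass even if the
pinning were a no-op.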

Mark


>> That can run on every configuration because it can be a unit test that
>> runs in milliseconds. Of course, we wrote this test before we changed the
>> old working code, right?
>>
>
>>
>> Mark
>>
>>
>>> --
>>> Szilárd
>>>
>>> On Mon, Apr 25, 2016 at 7:07 PM, Vedran Miletić <rivanvx at gmail.com>
>>> wrote:
>>>
>>>> Gladly. This one? https://gerrit.gromacs.org/#/c/5820/
>>>>
>>>> V.
>>>>
>>>> On Mon, 25 Apr 2016 at 19:06, Mark Abraham <mark.j.abraham at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> Unsure offhand, but there's a fix in gerrit in this area if you want
>>>>> to try that?
>>>>>
>>>>> Mark
>>>>>
>>>>> On Mon, 25 Apr 2016 19:05 Vedran Miletić <rivanvx at gmail.com> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> since commit fa1360610d6fcf7eb263ce1181d9954074fd5151 "Make thread affinity
>>>>>> failures always end up in log", I get crashes in mdrun when using tMPI in
>>>>>> any simulation I have tried (OpenMPI is not affected). I am seeing this on
>>>>>> two machines running Fedora 23 and 24, with GCC 5.3 and 6, respectively.
>>>>>>
>>>>>> Backtrace is
>>>>>>
>>>>>> #0 0x00007ffff78c966f in tMPI_Thread_getspecific (key=...) at
>>>>>> /home/miletivn/workspace/gromacs/src/external/thread_mpi/src/pthreads.c:571
>>>>>> #1 0x00007ffff78cff34 in tMPI_Reduce (sendbuf=0x7fffffffa4dc,
>>>>>> recvbuf=0x7fffffffa4d8, count=1, datatype=0x7ffff7dd6660 <tmpi_int>,
>>>>>> op=TMPI_LAND, root=0, comm=0x0) at
>>>>>> /home/miletivn/workspace/gromacs/src/external/thread_mpi/src/reduce.c:247
>>>>>> #2 0x00007ffff63038a5 in invalidWithinSimulation (cr=0x681bd0,
>>>>>> invalidLocally=false) at
>>>>>> /home/miletivn/workspace/gromacs/src/gromacs/mdrunutility/threadaffinity.cpp:73
>>>>>> #3 0x00007ffff6303c0b in get_thread_affinity_layout (fplog=0x689410,
>>>>>> cr=0x681bd0, hwinfo=0x680230, threads=8, pin_offset=0,
>>>>>> pin_stride=0x7fffffffc634, localityOrder=0x7fffffffc638) at
>>>>>> /home/miletivn/workspace/gromacs/src/gromacs/mdrunutility/threadaffinity.cpp:142
>>>>>> ...
>>>>>>
>>>>>> Variable key looks like
>>>>>>
>>>>>> $1 = {initialized = {value = 0, padding = '\000' <repeats 59 times>},
>>>>>> key = 0x0}
>>>>>>
>>>>>> So key is uninitialized. Any idea why?
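
For reference, the hazard being described can be reproduced with plain POSIX
threads: calling pthread_getspecific() on a key that pthread_key_create() has
never initialized is undefined behaviour, which is consistent with the zeroed
key printed above. A minimal sketch (plain pthreads, not thread_mpi code):

#include <pthread.h>
#include <cstdio>

static pthread_key_t key;    // zero-initialized, but not a valid key yet

int main()
{
    // Using the key before pthread_key_create() has run on it is undefined
    // behaviour; depending on the implementation it may return garbage or
    // crash, much like the tMPI_Thread_getspecific() frame in the backtrace.
    // void *bad = pthread_getspecific(key);   // UB if done here

    if (pthread_key_create(&key, nullptr) != 0)
    {
        std::perror("pthread_key_create");
        return 1;
    }
    pthread_setspecific(key, (void *) 0x1234);
    std::printf("value = %p\n", pthread_getspecific(key));
    pthread_key_delete(key);
    return 0;
}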
>>>>>>
>>>>>> Regards,
>>>>>> Vedran
>>>>>>