[gmx-developers] Hardware threads vs. OpenMP threads

Szilárd Páll pall.szilard at gmail.com
Thu Jun 4 16:13:27 CEST 2015


On Thu, Jun 4, 2015 at 3:13 PM, David van der Spoel
<spoel at xray.bmc.uu.se> wrote:
> On 04/06/15 15:00, Szilárd Páll wrote:
>>
>> On Thu, Jun 4, 2015 at 1:46 PM, David van der Spoel
>> <spoel at xray.bmc.uu.se> wrote:
>>>
>>> On 04/06/15 12:51, Berk Hess wrote:
>>>>
>>>>
>>>> PS There is something strange on that machine. If GROMACS detects 16
>>>> threads, omp_get_num_procs should return 16, not 8.
>>>
>>>
>>> Nope.
>>> The queue system allocates 8 of the 16 physical cores to my job.
>>> GROMACS sees both values, reports a conflict, and follows the hardware
>>> rather than the OpenMP settings. I would think it should do the reverse.
>>
>>
>> You passed neither OMP_NUM_THREADS nor "-ntomp", and the job scheduler
>> set the CPUSET, right? In that case there is no reliable way to know
>> what the user wanted, unless mdrun stops trying to fill the node with
>> threads and instead defaults to nranks=1 and ntomp=1 unless told
>> otherwise.
>
> Indeed, no options to mdrun, in which case I expect mdrun to use the number
> of cores that the queue system allows (8). However, it takes 16 cores. Only if
> I explicitly specify 8 cores (-ntomp 8) does it behave nicely. Of course it
> is really the job of the OS and queue system to limit my number of physical
> cores.

I think the OpenMP library's scope is to tell an OpenMP-parallel
application how many threads it should start per process, but not in
total. If one were using pthreads (= thread-MPI) or TBB instead of
OpenMP, the question would not even come up.
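A minimal illustration of that scope (a hypothetical setup, not GROMACS code): the OpenMP runtime in each process answers only the per-process question, so a job-wide total across ranks is not something it can report.

```python
# Each rank's OpenMP runtime reads OMP_NUM_THREADS independently; the
# variable has per-process meaning and carries no notion of job totals.
import os

os.environ["OMP_NUM_THREADS"] = "8"  # what a user or scheduler might set

def threads_per_process():
    # What an OpenMP runtime would start in ONE process.
    return int(os.environ["OMP_NUM_THREADS"])

# No matter how many ranks the job has, each one sees the same answer:
assert all(threads_per_process() == 8 for _ in range(4))
```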

On the other hand, my guess is that the queue system set a CPUSET; as
a result, the OpenMP library set its internal maximum CPU count to the
number of bits set in that mask. But without being told, mdrun would have
to go to considerable lengths to know whether this is, e.g., a
per-process CPUSET or whether it is meant as the total number of threads
across all ranks. And even then, a user only needs to pass "-pin on" to
ruin the node-sharing setup and pin the mdrun threads to cores
that were reserved for someone else.
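For concreteness, the CPUSET situation can be probed like this (a Linux-only sketch; the os.sched_getaffinity call is an assumption about the platform and an illustration, not what mdrun actually does):

```python
# Compare the raw hardware-thread count with the CPUs this process may
# actually run on; under a queue-system CPUSET the two differ.
import os

hw_threads = os.cpu_count()             # all hardware threads on the node
allowed = len(os.sched_getaffinity(0))  # CPUs in this process's affinity mask

if allowed < hw_threads:
    print(f"CPUSET active: {allowed} of {hw_threads} hardware threads allowed")
```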

BTW, if my above guess regarding CPUSETs is true, you should also have
been notified about the external affinity settings, which mdrun does
honor.

>
>>
>> Without such pessimistic behavior, I think the only reasonable
>> solution is to expect the queue system/user to set the correct
>> OMP_NUM_THREADS value.
>>
>> Note that respecting omp_get_num_procs() could still lead to using 16
>> threads in total, since with thread-MPI we can (and do) simply start two
>> ranks when e.g. 16 hardware threads are detected but
>> OMP_NUM_THREADS=8.
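That rank-filling behavior can be sketched as follows (a simplified model for illustration; the function name is made up and this is not GROMACS's actual code):

```python
# With thread-MPI, mdrun may start several ranks to fill the detected
# hardware threads, so OMP_NUM_THREADS=8 on a 16-thread node can still
# yield 16 threads in total (2 ranks x 8 OpenMP threads each).
def launch_config(hw_threads, omp_num_threads):
    """Return (nranks, ntomp) filling hw_threads, omp_num_threads per rank."""
    nranks = max(1, hw_threads // omp_num_threads)
    return nranks, omp_num_threads

nranks, ntomp = launch_config(16, 8)
print(nranks * ntomp)  # prints 16
```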
>>
>> For more technical details around this issue, see:
>> http://permalink.gmane.org/gmane.science.biology.gromacs.user/76761
>>
>> --
>> Szilárd
>>
>>
>>
>>>>
>>>> Berk
>>>>
>>>> On 2015-06-04 12:49, Berk Hess wrote:
>>>>>
>>>>>
>>>>> Hi,
>>>>>
>>>>> I don't think anything changed in the master branch.
>>>>>
>>>>> But we do adhere to the OpenMP environment. The value reported in the
>>>>> message comes from omp_get_num_procs, which should report the
>>>>> hardware available. OMP_NUM_THREADS sets the number of OpenMP
>>>>> threads to use, and that is respected.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Berk
>>>>>
>>>>> On 2015-06-04 11:21, David van der Spoel wrote:
>>>>>>
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> why does GROMACS in the master branch not adhere to the OpenMP
>>>>>> environment?
>>>>>>
>>>>>> Number of hardware threads detected (16) does not match the number
>>>>>> reported by OpenMP (8).
>>>>>> Consider setting the launch configuration manually!
>>>>>> Reading file md.tpr, VERSION 5.1-beta1-dev-20150603-99a1e1f-dirty
>>>>>> (single precision)
>>>>>> Changing nstlist from 10 to 40, rlist from 1.1 to 1.1
>>>>>>
>>>>>> Using 1 MPI process
>>>>>> Using 16 OpenMP threads
>>>>>>
>>>>>> Cheers,
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> David van der Spoel, Ph.D., Professor of Biology
>>> Dept. of Cell & Molec. Biol., Uppsala University.
>>> Box 596, 75124 Uppsala, Sweden. Phone:  +46184714205.
>>> spoel at xray.bmc.uu.se    http://folding.bmc.uu.se
>>> --
>>> Gromacs Developers mailing list
>>>
>>> * Please search the archive at
>>> http://www.gromacs.org/Support/Mailing_Lists/GMX-developers_List before
>>> posting!
>>>
>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>
>>> * For (un)subscribe requests visit
>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-developers
>>> or
>>> send a mail to gmx-developers-request at gromacs.org.
>
