[gmx-developers] Hardware threads vs. OpenMP threads
Berk Hess
hess at kth.se
Thu Jun 4 16:05:23 CEST 2015
On 06/04/2015 03:13 PM, David van der Spoel wrote:
> On 04/06/15 15:00, Szilárd Páll wrote:
>> On Thu, Jun 4, 2015 at 1:46 PM, David van der Spoel
>> <spoel at xray.bmc.uu.se> wrote:
>>> On 04/06/15 12:51, Berk Hess wrote:
>>>>
>>>> PS There is something strange on that machine. If Gromacs detects 16
>>>> threads, omp_get_num_procs should return 16, not 8.
>>>
>>> Nope.
>>> The queue system allocates 8 cores out of 16 physical cores to my job.
>>> GROMACS sees both values, reports a conflict, and follows the
>>> hardware rather than the OpenMP setting. I would think it should do
>>> the reverse.
>>
>> You did not pass OMP_NUM_THREADS or "-ntomp", and the job scheduler
>> set the CPUSET, right? In that case there is no reliable way to know
>> what the user wanted, unless mdrun stops trying to fill the node with
>> threads and instead defaults to nranks=1 and ntomp=1 when not told
>> otherwise.
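As a hedged illustration of that pessimistic fallback (hypothetical names, not the actual mdrun launch-configuration code), the decision being described would look roughly like this:

    #include <stdlib.h>

    /* Hypothetical sketch of the "pessimistic" default discussed above;
     * not the actual mdrun launch-configuration code. */
    static void choose_launch_config(int ntomp_option, int *nranks, int *ntomp)
    {
        const char *env = getenv("OMP_NUM_THREADS");

        *nranks = 1;
        if (ntomp_option > 0)        /* user passed -ntomp explicitly */
        {
            *ntomp = ntomp_option;
        }
        else if (env != NULL)        /* queue system or user exported the variable */
        {
            *ntomp = atoi(env);
        }
        else                         /* no hint at all: do not try to fill the node */
        {
            *ntomp = 1;
        }
    }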
> Indeed, no options to mdrun, in which case I expect mdrun to use the
> number of cores that the queue system allows (8). However, it takes 16
> cores. Only if I explicitly specify 8 cores (-ntomp 8) does it behave
> nicely. Of course, it is really the job of the OS and queue system to
> limit my number of physical cores.
The OpenMP 4.0 manual says:
"The omp_get_num_procs routine returns the number of processors that are
available to the device at the time the routine is called."
So the sysadmin decided to change the device from a full node to half a
node. I suppose the intent is clear, but things get inconsistent. If we
run without MPI or with thread-MPI, we could consider honoring the
processor count reported by omp_get_num_procs. But with real MPI
processes there is no way to uniquely identify the sysadmin's intent
(i.e. is it the maximum number of threads per MPI process or per node?).
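To make the two numbers concrete, here is a minimal sketch (assuming a Linux host and a compiler with OpenMP support; not GROMACS code) that prints the machine-wide processor count next to what omp_get_num_procs returns; under a cpuset of 8 cores on a 16-core node the two should differ exactly as in the mdrun note quoted further down.

    #include <omp.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* All online logical CPUs in the machine, ignoring any affinity mask */
        long online = sysconf(_SC_NPROCESSORS_ONLN);
        /* Processors available to this process; a cpuset of 8 cores on a
         * 16-core node makes this return 8 */
        int  avail  = omp_get_num_procs();

        printf("online CPUs: %ld, omp_get_num_procs: %d\n", online, avail);
        return 0;
    }

Building with e.g. cc -fopenmp and running it inside and outside the job's cpuset should show the two values diverge only in the restricted case.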
Berk
>
>>
>> Without such pessimistic behavior, I think the only reasonable
>> solution is to expect that the queue system/user sets the correct
>> OMP_NUM_THREADS value.
>>
>> Note that respecting omp_get_num_procs() could still lead to using 16
>> threads in total, since with thread-MPI we can (and do) simply start
>> two ranks when e.g. 16 hardware threads are detected but
>> OMP_NUM_THREADS=8.
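Spelling out that arithmetic with a hypothetical helper (again, not the actual thread-MPI code): with 16 detected hardware threads and OMP_NUM_THREADS=8, choosing the rank count to cover the detected threads gives 2 ranks x 8 threads, i.e. still the whole node.

    /* Hypothetical illustration of the rank-count choice described above;
     * not the actual thread-MPI code. */
    int choose_ntmpi(int hw_threads_detected, int omp_threads_per_rank)
    {
        /* 16 detected hardware threads / OMP_NUM_THREADS=8  ->  2 ranks,
         * so 2 x 8 = 16 threads in total are still used */
        return hw_threads_detected / omp_threads_per_rank;
    }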
>>
>> For more technical details around this issue, see:
>> http://permalink.gmane.org/gmane.science.biology.gromacs.user/76761
>>
>> --
>> Szilárd
>>
>>
>>
>>>>
>>>> Berk
>>>>
>>>> On 2015-06-04 12:49, Berk Hess wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> I don't think anything changed in the master branch.
>>>>>
>>>>> But we do adhere to the OpenMP environment. The value reported in the
>>>>> message comes from omp_get_num_procs, which should report the
>>>>> hardware available. OMP_NUM_THREADS sets the number of OpenMP
>>>>> threads to use, and that is respected.
>>>>>
>>>>> Cheers,
>>>>>
>>>>> Berk
>>>>>
>>>>> On 2015-06-04 11:21, David van der Spoel wrote:
>>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> why does GROMACS in the master branch not adhere to the OpenMP
>>>>>> environment?
>>>>>>
>>>>>> Number of hardware threads detected (16) does not match the number
>>>>>> reported by OpenMP (8).
>>>>>> Consider setting the launch configuration manually!
>>>>>> Reading file md.tpr, VERSION 5.1-beta1-dev-20150603-99a1e1f-dirty
>>>>>> (single precision)
>>>>>> Changing nstlist from 10 to 40, rlist from 1.1 to 1.1
>>>>>>
>>>>>> Using 1 MPI process
>>>>>> Using 16 OpenMP threads
>>>>>>
>>>>>> Cheers,
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> David van der Spoel, Ph.D., Professor of Biology
>>> Dept. of Cell & Molec. Biol., Uppsala University.
>>> Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
>>> spoel at xray.bmc.uu.se http://folding.bmc.uu.se
>
>