[gmx-developers] Hardware threads vs. OpenMP threads
Berk Hess
hess at kth.se
Thu Jun 4 14:52:47 CEST 2015
Hi,
I would say that OMP_NUM_THREADS goes above everything else, since that
explicitly tells us to use that many threads. Even if omp_get_num_procs
says there are fewer cores, you might want to oversubscribe. I assume
OMP_NUM_THREADS was not set in this case (or it was set to 16);
otherwise 8 threads would have been used.
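As a rough sketch (this is not the current mdrun code, the names are
illustrative), the priority I have in mind would be: an explicit
OMP_NUM_THREADS wins, then the OpenMP report, then the raw hardware
count:

    #include <cstdlib>
    #include <omp.h>
    #include <thread>

    // Sketch of the proposed priority, assuming compilation with OpenMP.
    static int chooseNumThreads()
    {
        // 1. An explicit OMP_NUM_THREADS overrides everything, even if it
        //    oversubscribes the available cores.
        if (const char *env = std::getenv("OMP_NUM_THREADS"))
        {
            int n = std::atoi(env);
            if (n > 0)
            {
                return n;
            }
        }
        // 2. Otherwise trust what the OpenMP runtime reports as available
        //    (a cpuset-restricted allocation shows up here).
        int nomp = omp_get_num_procs();
        // 3. Fall back to the raw hardware thread count.
        int nhw = static_cast<int>(std::thread::hardware_concurrency());
        return (nomp > 0 && nomp < nhw) ? nomp : nhw;
    }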
We could restrict the number of available hardware threads to the value
of omp_get_num_procs if it conflicts with the number of hardware threads
detected by GROMACS. But I guess that could still be problematic. What
would happen if you ask for half of a node, but start 2 MPI processes
that both use OpenMP threads?
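For that half-node case the user can at least state the intent
explicitly on the command line; something like the following (the flag
values are only illustrative, for a 16-core node where the job owns 8
cores):

    gmx mdrun -s md.tpr -ntmpi 2 -ntomp 4 -pin on -pinoffset 0

Whether the offset should be 0 or 8 depends on which half of the node
the scheduler actually granted, which is exactly the information mdrun
cannot guess by itself.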
Berk
On 2015-06-04 14:27, Mark Abraham wrote:
> Hi,
>
> Node sharing cannot be automagically supported, because there's no
> "reliable" source of information except the user. This is nothing new
> (e.g.
> http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Pinning_threads_to_physical_cores).
> mdrun can't know whether omp_get_num_procs or OMP_NUM_THREADS is more
> reliable in the general case (naturally, every job scheduler is
> different, and we can't even assume that there is a job scheduler that
> might do it right, e.g. the case of users sharing an in-house
> machine). However, if only omp_get_num_procs provides a value (i.e.
> OMP_NUM_THREADS is not set), then maybe we can use that rather than
> assume that the number of detected hardware threads is appropriate to
> use? We'd still report the difference to the user.
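> A rough sketch of that fallback (illustrative only, not current mdrun
> code; std::thread::hardware_concurrency stands in for mdrun's own
> hardware detection): prefer the OpenMP report when there is no explicit
> request, and tell the user about the difference:
>
>     #include <cstdio>
>     #include <cstdlib>
>     #include <omp.h>
>     #include <thread>
>
>     int threadsToUse()
>     {
>         int nhw  = static_cast<int>(std::thread::hardware_concurrency());
>         int nomp = omp_get_num_procs();
>         if (std::getenv("OMP_NUM_THREADS") == nullptr && nomp > 0 && nomp != nhw)
>         {
>             std::fprintf(stderr, "Note: using the %d processors reported by "
>                          "OpenMP instead of the %d hardware threads detected\n",
>                          nomp, nhw);
>             return nomp;
>         }
>         return nhw;
>     }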
>
> Agree with Berk that a scheduler that only used this mechanism to
> declare the number of available physical cores would be flawed, e.g.
> consider a pthreads or TBB code.
>
> Mark
>
> On Thu, Jun 4, 2015 at 1:46 PM David van der Spoel
> <spoel at xray.bmc.uu.se> wrote:
>
> On 04/06/15 12:51, Berk Hess wrote:
> > PS There is something strange on that machine. If Gromacs detects 16
> > threads, omp_get_num_procs should return 16, not 8.
> Nope.
> The queue system allocates 8 cores out of the 16 physical cores to my job.
> GROMACS sees both values, reports a conflict, and then follows the
> hardware count rather than the OpenMP settings. I would think it should
> do the reverse.
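> For illustration (this is not GROMACS code), on Linux the cpuset granted
> by the queue system can be compared against the raw hardware thread
> count like this; here it would print 8 and 16 respectively:
>
>     // Linux/glibc only: compare the allowed cpuset with the hardware count.
>     #include <sched.h>
>     #include <thread>
>     #include <cstdio>
>
>     int main()
>     {
>         cpu_set_t set;
>         CPU_ZERO(&set);
>         sched_getaffinity(0, sizeof(set), &set);
>         std::printf("cores allowed by the cpuset: %d\n", CPU_COUNT(&set));
>         std::printf("hardware threads detected:   %u\n",
>                     std::thread::hardware_concurrency());
>         return 0;
>     }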
>
> >
> > Berk
> >
> > On 2015-06-04 12:49, Berk Hess wrote:
> >> Hi,
> >>
> >> I don't think anything changed in the master branch.
> >>
> >> But we do adhere to the OpenMP environment. The value reported in the
> >> message comes from omp_get_num_procs, which should be a report about
> >> the available hardware. OMP_NUM_THREADS sets the number of OpenMP
> >> threads to use, and that is respected.
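> >> As a tiny illustration (not mdrun code), the two numbers come from
> >> different places: OMP_NUM_THREADS shows up in omp_get_max_threads(),
> >> not in omp_get_num_procs():
> >>
> >>     #include <omp.h>
> >>     #include <cstdio>
> >>
> >>     int main()
> >>     {
> >>         // What the OpenMP runtime thinks the machine/cpuset offers:
> >>         std::printf("omp_get_num_procs()   = %d\n", omp_get_num_procs());
> >>         // Reflects OMP_NUM_THREADS (or the runtime default):
> >>         std::printf("omp_get_max_threads() = %d\n", omp_get_max_threads());
> >>         return 0;
> >>     }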
> >>
> >> Cheers,
> >>
> >> Berk
> >>
> >> On 2015-06-04 11:21, David van der Spoel wrote:
> >>> Hi,
> >>>
> >>> why does GROMACS in the master branch not adhere to the OpenMP
> >>> environment?
> >>>
> >>> Number of hardware threads detected (16) does not match the number
> >>> reported by OpenMP (8).
> >>> Consider setting the launch configuration manually!
> >>> Reading file md.tpr, VERSION 5.1-beta1-dev-20150603-99a1e1f-dirty
> >>> (single precision)
> >>> Changing nstlist from 10 to 40, rlist from 1.1 to 1.1
> >>>
> >>> Using 1 MPI process
> >>> Using 16 OpenMP threads
> >>>
> >>> Cheers,
> >>
> >
>
>
> --
> David van der Spoel, Ph.D., Professor of Biology
> Dept. of Cell & Molec. Biol., Uppsala University.
> Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
> spoel at xray.bmc.uu.se
> http://folding.bmc.uu.se