[gmx-users] Hyper-threading Gromacs 5.0.1
Mark Abraham
mark.j.abraham at gmail.com
Thu Sep 11 14:40:46 CEST 2014
Hi,
Hyper-threading is generally not useful for applications that are compute-
or network-bound, as GROMACS is. You should expect to see maximum
performance when using one real thread per x86 core (so find out how many
physical cores really exist, rather than inferring it from the number of
hardware threads the OS reports). You should start with one MPI rank per
core (thus one OpenMP thread per rank), and then consider reducing the
number of ranks by using more OpenMP threads per rank - but that is
generally only useful for non-GPU runs on a lot of Intel x86 hardware.
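For example, on a single 24-core node with no GPU, something along these
lines is a sensible starting point. This is only a sketch: topol.tpr is a
placeholder name, and gmx_mpi is whatever your MPI-enabled binary happens
to be called.

  # one thread-MPI rank per real core, one OpenMP thread per rank
  gmx mdrun -ntmpi 24 -ntomp 1 -pin on -s topol.tpr

  # the same idea launched with a real MPI library
  mpirun -np 24 gmx_mpi mdrun -ntomp 1 -pin on -s topol.tpr

  # fewer ranks with more OpenMP threads per rank; keep the product at 24
  # (the number of real cores), not 48 hardware threads
  gmx mdrun -ntmpi 4 -ntomp 6 -pin on -s topol.tpr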
On Thu, Sep 11, 2014 at 3:52 AM, Johnny Lu <johnny.lu128 at gmail.com> wrote:
> Is it a good idea to use 48 OpenMP threads under 1 MPI rank on 24 Xeon
> processors?
>
No.
> The mailing list says such a practice gives about an 8-20% performance increase
>
If it did, that might have been in the context of managing the work done on
the CPU while using a GPU, which is not what you are doing. But without a
link, the reference is useless...
> Should I try g_tune_pme, given that I searched for "imbalance" in the log
> file and found nothing (24 OpenMP threads under 1 MPI rank on 24 Xeon
> processors)? Or is that done automatically?
>
You're not using more than one rank, so there's not really any load
imbalance to tune - the single-rank setup is just bad.
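If you do switch to a multi-rank run as suggested above, the split between
PP and PME ranks is what g_tune_pme (gmx tune_pme in 5.0) can scan for you.
Again just a sketch - topol.tpr is a placeholder, and how the tool finds
your MPI launcher and mdrun depends on your installation, so check
gmx tune_pme -h first.

  # benchmark a range of PME-rank settings for a 24-rank run and report
  # the fastest combination
  gmx tune_pme -np 24 -s topol.tpr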
> Does GROMACS support double-precision calculation on the GPU if the
> hardware supports that?
>
No.
> The optimize fft option is also obsolete.
>
Yes, it got removed before 5.0, but there were a few things left in the
docs which I have now removed. Thanks.
Mark
> Thanks again.