[gmx-developers] Gromacs with GPU

Mark Abraham mark.j.abraham at gmail.com
Fri Sep 22 13:19:27 CEST 2017


Hi,

Currently the hwloc support doesn't do much except inspect the hardware and
help produce the report in the log file. Obviously it should be used to help
with such placement tasks, and that's why we're adopting it, but nobody has
prioritized specializing the task-assignment code yet. For the moment, such
things are still up to the user.
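
For example, with two PP ranks per node you can pass mdrun an explicit
rank-to-device mapping yourself. A minimal sketch, assuming the current
-gpu_id syntax (one digit per PP rank per node) and that CUDA enumerates
your two K80s as devices 0-3:

gmx_mpi mdrun -npme 4 -ntomp 7 -pin on -dlb yes -gpu_id 02 \
    -s ion_channel_bench00.tpr

That puts each node's first PP rank on device 0 and its second on device 2,
i.e. one engine per K80, provided the PP rank order matches the socket
pinning.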

Mark

On Fri, Sep 22, 2017 at 1:10 PM Åke Sandgren <ake.sandgren at hpc2n.umu.se>
wrote:

> Hi!
>
> I am seeing a possible performance enhancement when running Gromacs on
> nodes with multiple GPU cards.
> (And yes, I know this is perhaps a moot point since current GPU cards
> don't have dual engines per card.)
>
> System:
> dual-socket 14-core Broadwell CPUs
> 2 K80 cards, one on each socket.
>
> Gromacs built with hwloc support.
>
> When running a dual-node (56-core) job (Slurm + cgroups) with
>
> gmx_mpi mdrun -npme 4 -s ion_channel_bench00.tpr -resetstep 20000 -o
> bench.trr -x bench.xtc -cpo bench.cpt -c bench.gro -e bench.edr -g
> bench.log -ntomp 7 -pin on -dlb yes
>
> Gromacs doesn't fully take the hwloc info into account. The job is
> correctly allocated on cores, but looking at nvidia-smi and hwloc-ps I
> can see that the PP processes are using a suboptimal selection of GPU
> engines.
>
> The PP processes are placed one on each CPU socket (judging from which
> process ids are using the GPUs and where hwloc-ps places those pids),
> but both of them use GPU engines from the same (first) K80 card.
>
> It would be better if mdrun looked at the hwloc info and selected CUDA
> devices 0,2 (or 1,3) instead of 0,1.
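>
> A possible workaround for now (just a sketch; it assumes SLURM_LOCALID
> runs 0-3 over the four ranks on each node and that ranks 0-1 land on
> socket 0 and ranks 2-3 on socket 1 - adjust to the real layout) is a
> per-rank wrapper that exposes only one engine per K80:
>
> #!/bin/bash
> # gpu_wrap.sh - hypothetical wrapper: one GK210 engine per card/socket
> case $SLURM_LOCALID in
>   0|1) export CUDA_VISIBLE_DEVICES=0 ;;  # socket 0 ranks -> first K80
>   2|3) export CUDA_VISIBLE_DEVICES=2 ;;  # socket 1 ranks -> second K80
> esac
> exec "$@"
>
> launched as "srun ./gpu_wrap.sh gmx_mpi mdrun ...", so that each rank
> sees a single CUDA device. I haven't checked how mdrun's per-node GPU
> assignment handles ranks seeing different devices, though.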
>
>
> Any comments on that?
>
> Attached nvidia-smi + hwloc-ps output
>
> --
> Ake Sandgren, HPC2N, Umea University, S-90187 Umea, Sweden
> Internet: ake at hpc2n.umu.se   Phone: +46 90 7866134
> Fax: +46 90-580 14
> Mobile: +46 70 7716134   WWW: http://www.hpc2n.umu.se