[gmx-users] Can we set the number of pure PME nodes when using GPU&CPU?

Szilárd Páll pall.szilard at gmail.com
Mon Aug 25 19:40:42 CEST 2014


On Mon, Aug 25, 2014 at 7:12 PM, Xingcheng Lin
<linxingcheng50311 at gmail.com> wrote:
> Theodore Si <sjyzhxw at ...> writes:
>
>>
>> Hi,
>>
>> https://onedrive.live.com/redir?resid=990FCE59E48164A4!2572&authkey=!AP82sTNxS6MHgUk&ithint=file%2clog
>> https://onedrive.live.com/redir?resid=990FCE59E48164A4!2482&authkey=!APLkizOBzXtPHxs&ithint=file%2clog
>>
>> These are two log files. The first run used 64 CPU cores (64 / 16 = 4
>> nodes) and 4 nodes * 2 = 8 GPUs; the second used 512 CPU cores and no
>> GPUs. In the 64-core log file, in the  R E A L   C Y C L E   A N D
>> T I M E   A C C O U N T I N G  table, the total wall time is the sum
>> of every line, that is 37.730 = 2.201 + 0.082 + ... + 1.150. So we
>> think that while the CPUs are doing PME, the GPUs are doing nothing;
>> that is why we say they are working sequentially.
>> As for the 512-core log file, the total wall time is approximately the
>> sum of PME mesh and PME wait for PP. We think this is because the
>> PME-dedicated nodes finished early, so the total wall time is the time
>> spent on the PP nodes and the time spent on PME is hidden within it.
>>
> Hi,
>
> I have a naive question:
>
> In your log file there are only 2 GPUs being detected:
>
> 2 GPUs detected on host gpu42:
>   #0: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
>   #1: NVIDIA Tesla K20m, compute cap.: 3.5, ECC: yes, stat: compatible
>
> In the end you selected 8 GPUs
>
> 8 GPUs user-selected for this run: #0, #0, #0, #0, #1, #1, #1, #1
>
> Did you choose 8 GPUs or 2 GPUs? What is your mdrun command?

That message format is outdated and is only correct when a single
rank/GPU is used. With a more up-to-date 4.6.x or 5.0.x version you
would get something like this instead:

2 GPUs user-selected for this run.
Mapping of GPUs to the 8 PP ranks in this node: #0, #0, #0, #0, #1, #1, #1, #1
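
Such a per-node mapping normally comes from an explicit -gpu_id string
passed to mdrun, with one digit per PP rank on the node. A minimal
sketch, assuming 8 PP ranks per node (2 OpenMP threads each on your 16
cores per node), the two K20m cards, and hypothetical binary and input
names:

  mpirun -np 32 mdrun_mpi -ntomp 2 -gpu_id 00001111 -s topol.tpr

With that string, ranks 0-3 on each node share GPU #0 and ranks 4-7
share GPU #1, which is exactly the mapping your log reports.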

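As for the question in the subject line: yes, you can request dedicated
PME ranks explicitly with mdrun's -npme option; note that in 4.6/5.0
PME runs on the CPU only, so those ranks will not use the GPUs. A
sketch with hypothetical rank counts:

  # 32 PP ranks plus 8 dedicated PME ranks
  mpirun -np 40 mdrun_mpi -npme 8 -s topol.tpr

g_tune_pme (gmx tune_pme in 5.0) can help find a good PP/PME split.
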
Cheers,
--
Szilárd

> Thank you,

