[gmx-users] gmx 2019 performance issues

Tamas Hegedus tamas at hegelab.org
Tue Jan 15 14:28:21 CET 2019


Thanks for the input!
* I will go with the cheaper CPU
* I am impatiently looking forward to the gpu-only gromacs


On 01/15/2019 01:55 PM, Mark Abraham wrote:
> Hi,
> 
> On Tue, Jan 15, 2019 at 1:30 PM Tamas Hegedus <tamas at hegelab.org> wrote:
> 
>> Hi,
>>
>> I do not really see increased performance with gmx 2019 using -bonded
>> gpu. I do not see what I am missing or misunderstanding.
>>
> 
> Unfortunately that is expected in some cases, see
> http://manual.gromacs.org/documentation/current/user-guide/mdrun-performance.html#gpu-accelerated-calculation-of-bonded-interactions-cuda-only.
> Much of the gain is that it becomes feasible to spend less on the CPU, so
> the gain is in performance per dollar rather than in raw performance.
> 
>> The only thing I see is that all CPU cores run at ~100% with gmx 2018,
>> while some of the cores run at only ~60% with gmx 2019.
>>
> 
> QED, probably :-)
> 
> 
>> There are 196382 atoms.
>> Speeds come from 500 ps runs.
>>
>> From one of the log files:
>> Mapping of GPU IDs to the 4 GPU tasks in the 4 ranks on this node:
>>     PP:0,PP:0,PP:2,PME:2
>> PP tasks will do (non-perturbed) short-ranged and most bonded
>> interactions on the GPU
>> PME tasks will do all aspects on the GPU
>>
>> ------------------------------
>> 16 cores 4 GPUs
>> gmx 2018 48 ns/day
>> gmx 2019 54 ns/day
>>
>> gmx mdrun -nt 16 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -pme gpu
>> -npme 1 -gputasks 0123
>>
>> gmx mdrun -nt 16 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -bonded gpu
>> -pme gpu -npme 1 -gputasks 0123
>>
>> Since the GPUs are not utilized well (some of them are below 50%), my
>> objective is to run 2 jobs per node, each with 8 CPU cores and 2 GPUs, at
>> higher usage.
>>
>> ------------------------------
>> 8 cores 2 GPUs
>> gmx 2018 33 ns/day
>> gmx 2019 35 ns/day
>>
>> gmx mdrun -nt 8 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -pme gpu
>> -npme 1 -gputasks 0033
>>
>> gmx mdrun -nt 8 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -bonded gpu
>> -pme gpu -npme 1 -gputasks 0022
>>
>> gmx mdrun -ntomp 2 -ntmpi 4 -pin on -v -deffnm md_test -nb gpu -bonded
>> gpu -pme gpu -npme 1 -gputasks 0022
>> Changing -nt to -ntomp did not help to increase performance.
>>
>> And the GPUs are not utilized much better; the 1080Ti runs at 60-75% max.
>>
> 
> Single simulations are unlikely to get much higher utilization, except
> perhaps paired with high-clock CPUs. Multi-simulations are still the way to
> make optimal use of your resources, if throughput-style runs are
> appropriate for the science.
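> 
> For what it's worth, a minimal sketch of what that could look like with
> your thread-MPI setup (the -deffnm names are placeholders, the -gputasks
> strings assume four GPUs numbered 0-3, and the -pinoffset/-pinstride
> values assume the first 16 logical cores map to the 16 physical cores of
> the 2950X - check the pinning report in md.log):
> 
>    # job 1: logical cores 0-7, GPUs 0 and 1
>    gmx mdrun -deffnm md_a -ntmpi 4 -ntomp 2 -pin on -pinoffset 0 -pinstride 1 \
>        -nb gpu -bonded gpu -pme gpu -npme 1 -gputasks 0011 &
>    # job 2: logical cores 8-15, GPUs 2 and 3
>    gmx mdrun -deffnm md_b -ntmpi 4 -ntomp 2 -pin on -pinoffset 8 -pinstride 1 \
>        -nb gpu -bonded gpu -pme gpu -npme 1 -gputasks 2233 &
>    wait    # return only after both jobs have finished
> 
> With a real-MPI build, mdrun's -multidir option runs such a set of
> simulations as one multi-simulation instead of separate processes.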
> 
> 
>> ------------------------------
>> The main question:
>> * I use a 16-core AMD 2950X with 4 high-end GPUs (1080Ti, 2080Ti).
>> * The GPUs do not run at 100%, so I would like to load more onto them and
>> possibly run 2 gmx jobs on the same node.
>>
>> I see two options:
>> * cheaper: decrease the cores per job from 16 to 8 and push bonded
>> calculations to the GPU using gmx 2019
>> * expensive: replace the 16-core 2950X with a 32-core 2990WX
>>
>> 2950X 16 cores 2 GPUs
>> gmx 2018 43 ns/day
>> gmx 2019 43 ns/day
>>
>> 33 ns/day (8 cores/2 GPUs) <<<< 54 ns/day (16 cores/4 GPUs)
>> 43 ns/day (16 cores/2 GPUs) << 54 ns/day (16 cores/4 GPUs)
>>
>> So this could be a compromise, if using 16 of the 32 cores works similarly
>> to 16 of 16 cores. E.g. the 2990WX has slower memory access than the 2950X;
>> I do not expect this to influence gmx runs too much. However, if it
>> decreases performance by 10-15 percent, then it is most likely not worth
>> investing in the 32-core processor.
>>
> 
> I would suggest the cheaper CPU. We are actively working on a pure GPU
> implementation for an upcoming version (but no promises yet!)
> 
> Mark
> 
> 
>> Thanks for your feedback.
>> Tamas
>>


-- 
Tamas Hegedus, PhD
Senior Research Fellow
Department of Biophysics and Radiation Biology
Semmelweis University     | phone: (36) 1-459 1500/60233
Tuzolto utca 37-47        | mailto:tamas at hegelab.org
Budapest, 1094, Hungary   | http://www.hegelab.org

