[gmx-users] Is it possible to control GPU utilizations when running two simulations in one workstation?

sunyeping sunyeping at aliyun.com
Mon Aug 5 11:41:08 CEST 2019


Hello Dallas,

Thank you for your reply. According to "lscpu", the CPU information of the workstation is:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                56
On-line CPU(s) list:   0-55
Thread(s) per core:    2
Core(s) per socket:    14
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
Stepping:              1
CPU MHz:               3199.929
BogoMIPS:              5206.33
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-13,28-41
NUMA node1 CPU(s):     14-27,42-55

The first system I ran on the workstation has 79,863 atoms (a protein-DNA complex). The mdp file is here (https://drive.google.com/file/d/1ti5renwgIGb7jy9VXbqskPueH-QFGwW2/view?usp=sharing). The mdrun command used to run this simulation was:
     gmx mdrun -v -deffnm prod_1 -nt 24 -gpu_id 0,1
This simulation was assigned to GPUs 0 and 1. When no other simulation was running on the workstation, the utilizations of the two GPUs were both over 70%.

The second simulation system has 76,843 atoms (protein only). The mdp file is here (https://drive.google.com/file/d/1bfIXaDdrwbepssVY777iDhH7fAJnasR0/view?usp=sharing). The mdrun command used to run this simulation was:
     gmx mdrun -v -deffnm nvt2 -nt 24 -gpu_id 2,3
This simulation was assigned to GPUs 2 and 3, whose utilizations were both higher than 80%. Running this simulation did not affect the utilizations of GPUs 0 and 1, which were running the first simulation.

After the second simulation finished, I ran a third simulation on the same system as the second. The only difference between the second and third simulations was that the former was NVT and the latter was NPT; its mdp file is here (https://drive.google.com/file/d/1WggyzY8nFnCZ2Ap1eAOFQcqKTmj5GKj_/view?usp=sharing). I started this simulation with the command:
     gmx mdrun -v -deffnm nvt2 -nt 24 -gpu_id 2,3
Strangely, the utilizations of GPUs 0 and 1 (which were running the first simulation) dropped to 50%, and those of GPUs 2 and 3 (which were running the third simulation) were only 43%. From the "top" command I learned that the CPU utilization of each simulation was about 2390%.
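(Since both runs report ~2390% CPU, it may be worth checking whether the two mdrun processes have actually been pinned to disjoint cores. A quick diagnostic sketch, assuming a Linux box with util-linux's "taskset" installed and that both jobs were launched as "gmx mdrun":

```shell
# Print the CPU affinity list of every running mdrun process.
# If the two processes show overlapping (or identical, e.g. 0-55)
# affinity lists, their threads are contending for the same cores.
for pid in $(pgrep -f "gmx mdrun"); do
    taskset -cp "$pid"
done
```

With no explicit pinning, both processes typically show the full 0-55 range, which would explain the mutual slowdown.)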

Do you know how to run two simulations without their affecting each other's GPU utilization?
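(One thing I am considering is explicit thread pinning with mdrun's -pin/-pinoffset/-pinstride options, so the two runs cannot migrate onto each other's cores. A sketch of what I mean, with offsets that are only my guess for this 2 x 14-core, 56-hardware-thread machine, not a tested recipe:

```shell
# Run 1: 24 threads pinned to hardware threads 0-23, GPUs 0 and 1
gmx mdrun -v -deffnm prod_1 -nt 24 -pin on -pinoffset 0 -pinstride 1 -gpu_id 0,1 &

# Run 2: 24 threads pinned to hardware threads 24-47, GPUs 2 and 3
gmx mdrun -v -deffnm nvt2 -nt 24 -pin on -pinoffset 24 -pinstride 1 -gpu_id 2,3 &
```

Given the NUMA layout above (node0: 0-13,28-41; node1: 14-27,42-55), the offsets would probably need adjusting so each run stays on one socket, but is this the right general approach?)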

Best regards.




------------------------------------------------------------------
From:Dallas Warren <dallas.warren at monash.edu>
Sent At:2019 Aug. 5 (Mon.) 05:21
To:gromacs <gmx-users at gromacs.org>; 孙业平 <sunyeping at aliyun.com>
Subject:Re: [gmx-users] Is it possible to control GPU utilizations when running two simulations in one workstation?

What is the difference between the four systems you are referring to? How many atoms is each of them? Do they have exactly the same mdp parameters? What is the CPU utilization like?

Catch ya,

Dr. Dallas Warren
Drug Delivery, Disposition and Dynamics
Monash Institute of Pharmaceutical Sciences, Monash University
381 Royal Parade, Parkville VIC 3052
dallas.warren at monash.edu
---------------------------------
When the only tool you own is a hammer, every problem begins to resemble a nail.

On Sat, 3 Aug 2019 at 19:11, sunyeping <sunyeping at aliyun.com> wrote:
Dear all,

 I am trying to run two MD simulations on one workstation equipped with four GPUs. First I started a simulation with the following command:

 gmx mdrun -v -deffnm md -ntmpi 12 -gpu_id 0,1

 From the nvidia-smi command I found that the utilizations of GPUs 0 and 1 were 74% and 80%, respectively. Then I started another simulation with:

  gmx mdrun -v -deffnm md -ntmpi 12 -gpu_id 2,3

 then the utilizations of GPUs 0 and 1 decreased to 20% and 23%, and the utilizations of GPUs 2 and 3, which ran the second simulation, were only 12% and 15%. Both simulations ran at an unbearably low speed.

 This seems very strange to me, because a few days ago I ran two simulations on the same workstation with the same mdrun commands, and the utilizations of all four GPUs were higher than 70%. Do you know what may affect the GPU utilizations and how to correct it?

 Best regards. 
 -- 
 Gromacs Users mailing list

 * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

 * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

 * For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.


More information about the gromacs.org_gmx-users mailing list