[gmx-users] optimize cpu with gpu node for gromacs
rahul dhakne
rahuldhakne89 at gmail.com
Fri Jun 6 09:15:28 CEST 2014
Dear all GROMACS users,
I am using one GPU node (NVIDIA Tesla C2050, 2.5 GB, 6 cores)
for simulations on an Intel(R) Core i7 (3.0 GHz, 16 GB) system. I tested
this workstation with GROMACS 4.6, and the output below suggests that the
GPU load is imbalanced with the CPU: mdrun recommends a shorter cut-off,
but I am already using close to the shortest one I can.
Using 1 MPI thread
Using 8 OpenMP threads
1 GPU detected:
#0: NVIDIA Tesla C2050, compute cap.: 2.0, ECC: yes, stat: compatible
1 GPU auto-selected for this run.
Mapping of GPU to the 1 PP rank in this node: #0
starting mdrun 'Protein '
500000 steps, 1000.0 ps.
Writing final coordinates.
NOTE: The GPU has >20% more load than the CPU. This imbalance causes
performance loss, consider using a shorter cut-off and a finer PME
grid.
               Core t (s)   Wall t (s)        (%)
       Time:    26585.222     3827.438      694.6
                             1h03:47
                 (ns/day)    (hour/ns)
Performance:       22.574        1.063
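The NOTE in the log suggests shifting work from the GPU (which handles the
short-range non-bonded interactions) to the CPU (which handles PME). A minimal
sketch of what that tuning might look like in the .mdp file — the file name and
the specific values here are illustrative assumptions, not recommendations:

```shell
# Illustrative .mdp changes to shift load from GPU to CPU (values are assumptions):
#   rcoulomb / rvdw : shorter cut-off  -> less short-range work on the GPU
#   fourierspacing  : finer PME grid   -> more long-range work on the CPU
cat >> rebalance.mdp <<'EOF'
rcoulomb        = 1.0   ; shorter cut-off offloads the GPU (value is an assumption)
rvdw            = 1.0   ; with the Verlet scheme, keep rvdw equal to rcoulomb
fourierspacing  = 0.11  ; finer PME grid adds CPU work (value is an assumption)
EOF
```

With PME, scaling the cut-off and the grid spacing together keeps the overall
electrostatics accuracy roughly constant, which is why the NOTE pairs a shorter
cut-off with a finer grid.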
How can I optimize this mdrun to make maximum use of the GPU and improve
performance? How many threads should I give it, and what else should I tune
for my production simulations? Please let me know if this is enough
information.
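For reference, the kind of mdrun invocation these questions are about might
look like this with GROMACS 4.6 — the run name and thread count are
illustrative assumptions, not measured recommendations:

```shell
# Sketch of a GROMACS 4.6 mdrun launch with explicit thread and GPU mapping.
# The run name (-deffnm) and thread count (-ntomp) are assumptions; match
# -ntomp to the cores available on your machine.
mdrun -deffnm protein_md \
      -ntmpi 1  `# one MPI (PP) rank, as in the log above` \
      -ntomp 8  `# eight OpenMP threads on that rank` \
      -gpu_id 0 `# map the PP rank to GPU #0` \
      -pin on   `# pin threads to cores for more stable performance`
```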
-- With regards,
Rahul
More information about the gromacs.org_gmx-users mailing list