[gmx-users] Gromacs Benchmarks for NVIDIA GeForce RTX 2080

Szilárd Páll pall.szilard at gmail.com
Wed Apr 24 12:55:10 CEST 2019


The benchmark systems are the ones commonly used in GROMACS performance
evaluation: ADH is a 90k/134k-atom system (dodec/cubic) and RNAse is
19k/24k atoms (dodec/cubic), both set up with standard AMBER force field
settings (references can be found on this admittedly dated page:
http://www.gromacs.org/GPU_acceleration).
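
If you want to rerun the same inputs, the benchmark input archives have
historically been distributed from the GROMACS FTP server; a minimal
sketch, assuming those historical file names are still valid:

  wget ftp://ftp.gromacs.org/pub/benchmarks/ADH_bench_systems.tar.gz
  wget ftp://ftp.gromacs.org/pub/benchmarks/rnase_bench_systems.tar.gz
  tar xzf ADH_bench_systems.tar.gz
  tar xzf rnase_bench_systems.tar.gz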

--
Szilárd


On Thu, Apr 18, 2019 at 8:24 PM Soham Sarkar <soham9038 at gmail.com> wrote:

> Could you please tell me, how big is your system? How many atoms are
> there?
> Soham
>
> On Thu, 18 Apr 2019, 10:51 pm Jason Hogrefe, <jason.hogrefe at exxactcorp.com>
> wrote:
>
> > Dear Gromacs Users,
> >
> > Exxact Corporation has conducted benchmarks for Gromacs using NVIDIA RTX
> > 2080 GPUs. We ran them a few months back, but thought the community would
> > be interested in the numbers.
> >
> > System: Exxact TensorEX Gromacs Certified Workstation
> > <https://www.exxactcorp.com/GROMACS-Certified-GPU-Systems>
> > CPU: Intel Xeon Scalable Family Silver 4114 (Skylake) x2
> > GPU: NVIDIA GeForce RTX 2080 x4
> > CUDA: 9.2
> > Gromacs Version: Gromacs 2018.3
> >
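> > For reference, runs like the ones below are typically launched along
> > these lines (a sketch only; the .tpr names and exact mdrun option
> > values are assumptions, not the literal commands used):
> >
> >   # sequential single-GPU run: one rank, all 40 threads, GPU 0
> >   gmx mdrun -s adh_cubic.tpr -ntmpi 1 -ntomp 40 -gpu_id 0 -pin on \
> >       -nsteps 20000 -resethway -noconfout
> >   # one of four concurrent single-GPU runs: 10 threads on GPU 2,
> >   # pinned after the threads of the runs using GPUs 0 and 1
> >   gmx mdrun -s adh_cubic.tpr -ntmpi 1 -ntomp 10 -gpu_id 2 -pin on \
> >       -pinoffset 20 -nsteps 20000 -resethway -noconfout
> >   # sequential multi-GPU run: 4 thread-MPI ranks across GPUs 0-3
> >   gmx mdrun -s adh_cubic.tpr -ntmpi 4 -ntomp 10 -gpu_id 0123 -pin on \
> >       -nsteps 20000 -resethway -noconfout
> >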
> > ==========================================
> >       ##### Running ADH Benchmarks #####
> >
> >       ----- ADH cubic PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 60.385  ns/day
> > 40 CPUs + [1] 1 x GPU: 70.547  ns/day
> > 40 CPUs + [2] 1 x GPU: 60.444  ns/day
> > 40 CPUs + [3] 1 x GPU: 70.753  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 53.474  ns/day
> > 10 CPUs + [1] 1 x GPU: 44.991  ns/day
> > 10 CPUs + [2] 1 x GPU: 45.034  ns/day
> > 10 CPUs + [3] 1 x GPU: 45.853  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 38.128  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 39.226  ns/day
> >
> >        ----- ADH cubic RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 74.364  ns/day
> > 40 CPUs + [1] 1 x GPU: 73.903  ns/day
> > 40 CPUs + [2] 1 x GPU: 74.022  ns/day
> > 40 CPUs + [3] 1 x GPU: 74.105  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 96.189  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 102.489  ns/day
> >
> >    ----- ADH cubic vsites PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 132.120  ns/day
> > 40 CPUs + [1] 1 x GPU: 129.414  ns/day
> > 40 CPUs + [2] 1 x GPU: 129.661  ns/day
> > 40 CPUs + [3] 1 x GPU: 133.058  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 108.044  ns/day
> > 10 CPUs + [1] 1 x GPU: 90.935  ns/day
> > 10 CPUs + [2] 1 x GPU: 103.922  ns/day
> > 10 CPUs + [3] 1 x GPU: 95.532  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 75.409  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 86.649  ns/day
> >
> >     ----- ADH cubic vsites RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 156.230  ns/day
> > 40 CPUs + [1] 1 x GPU: 155.725  ns/day
> > 40 CPUs + [2] 1 x GPU: 155.798  ns/day
> > 40 CPUs + [3] 1 x GPU: 156.289  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 194.495  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 203.785  ns/day
> >
> >         ----- ADH dodec PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 85.505  ns/day
> > 40 CPUs + [1] 1 x GPU: 84.418  ns/day
> > 40 CPUs + [2] 1 x GPU: 84.560  ns/day
> > 40 CPUs + [3] 1 x GPU: 85.463  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 55.158  ns/day
> > 10 CPUs + [1] 1 x GPU: 54.666  ns/day
> > 10 CPUs + [2] 1 x GPU: 49.706  ns/day
> > 10 CPUs + [3] 1 x GPU: 52.324  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 44.456  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 39.953  ns/day
> >
> >          ----- ADH dodec RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 77.585  ns/day
> > 40 CPUs + [1] 1 x GPU: 77.924  ns/day
> > 40 CPUs + [2] 1 x GPU: 78.122  ns/day
> > 40 CPUs + [3] 1 x GPU: 78.215  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 102.690  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 112.896  ns/day
> >
> >      ----- ADH dodec vsites PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 149.222  ns/day
> > 40 CPUs + [1] 1 x GPU: 148.763  ns/day
> > 40 CPUs + [2] 1 x GPU: 150.029  ns/day
> > 40 CPUs + [3] 1 x GPU: 149.848  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 124.922  ns/day
> > 10 CPUs + [1] 1 x GPU: 108.062  ns/day
> > 10 CPUs + [2] 1 x GPU: 108.633  ns/day
> > 10 CPUs + [3] 1 x GPU: 110.386  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 83.872  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 98.177  ns/day
> >
> >        ----- ADH dodec vsites RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 165.883  ns/day
> > 40 CPUs + [1] 1 x GPU: 165.753  ns/day
> > 40 CPUs + [2] 1 x GPU: 165.282  ns/day
> > 40 CPUs + [3] 1 x GPU: 165.279  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 209.722  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 227.262  ns/day
> >
> >       ##### Running RNASE Benchmarks #####
> >
> >         ----- RNASE cubic PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 254.480  ns/day
> > 40 CPUs + [1] 1 x GPU: 263.490  ns/day
> > 40 CPUs + [2] 1 x GPU: 257.038  ns/day
> > 40 CPUs + [3] 1 x GPU: 261.090  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 195.725  ns/day
> > 10 CPUs + [1] 1 x GPU: 193.187  ns/day
> > 10 CPUs + [2] 1 x GPU: 210.165  ns/day
> > 10 CPUs + [3] 1 x GPU: 213.843  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 111.866  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 126.186  ns/day
> >
> >          ----- RNASE cubic RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 150.436  ns/day
> > 40 CPUs + [1] 1 x GPU: 149.031  ns/day
> > 40 CPUs + [2] 1 x GPU: 146.335  ns/day
> > 40 CPUs + [3] 1 x GPU: 148.853  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 241.682  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 301.034  ns/day
> >
> >         ----- RNASE dodec PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 272.546  ns/day
> > 40 CPUs + [1] 1 x GPU: 282.214  ns/day
> > 40 CPUs + [2] 1 x GPU: 270.625  ns/day
> > 40 CPUs + [3] 1 x GPU: 275.279  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 248.857  ns/day
> > 10 CPUs + [1] 1 x GPU: 238.144  ns/day
> > 10 CPUs + [2] 1 x GPU: 250.780  ns/day
> > 10 CPUs + [3] 1 x GPU: 253.910  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 114.938  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 129.019  ns/day
> >
> >          ----- RNASE dodec RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 149.866  ns/day
> > 40 CPUs + [1] 1 x GPU: 146.679  ns/day
> > 40 CPUs + [2] 1 x GPU: 146.757  ns/day
> > 40 CPUs + [3] 1 x GPU: 149.034  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 242.927  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 313.999  ns/day
> >
> >      ----- RNASE dodec vsites PME -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 587.660  ns/day
> > 40 CPUs + [1] 1 x GPU: 589.539  ns/day
> > 40 CPUs + [2] 1 x GPU: 614.706  ns/day
> > 40 CPUs + [3] 1 x GPU: 603.061  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU: 413.630  ns/day
> > 10 CPUs + [1] 1 x GPU: 504.516  ns/day
> > 10 CPUs + [2] 1 x GPU: 429.247  ns/day
> > 10 CPUs + [3] 1 x GPU: 521.992  ns/day
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 244.953  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 289.134  ns/day
> >
> >      ----- RNASE dodec vsites RF -----
> > Sequential Single GPU Run Performance
> > 40 CPUs + [0] 1 x GPU: 310.622  ns/day
> > 40 CPUs + [1] 1 x GPU: 307.186  ns/day
> > 40 CPUs + [2] 1 x GPU: 305.542  ns/day
> > 40 CPUs + [3] 1 x GPU: 308.180  ns/day
> > Multiple Single GPU Run Performance
> > 10 CPUs + [0] 1 x GPU:
> > 10 CPUs + [1] 1 x GPU:
> > 10 CPUs + [2] 1 x GPU:
> > 10 CPUs + [3] 1 x GPU:
> > Sequential Multi GPU Run Performance
> > 40 CPUs + [0,1] 2 x GPU: 486.990  ns/day
> > 40 CPUs + [0,1,2,3] 4 x GPU: 591.463  ns/day
> >
> > Benchmark Complete
> > =============================================================
> >
> > Best regards,
> >
> > Jason H.
> >