[gmx-users] Workstation choice

Szilárd Páll pall.szilard at gmail.com
Tue Sep 11 16:50:56 CEST 2018


Hi,

On Fri, Sep 7, 2018 at 8:40 PM Olga Selyutina <olga.gluschenko at gmail.com>
wrote:

> Hi,
> A lot of thanks for valuable information.
> If it isn’t too much trouble, could you say how the performance gain from
> using a second GPU for a single simulation has changed in GROMACS 2018 vs
> older versions (in 2016 and 5.1 it was 20-30% higher)?
>

Short answer: it greatly depends on the simulation setup and system. Under
circumstances ideal for scaling (large input, little constraint work, etc.)
expect ~1.7x without PME offload and at most ~1.5x with PME offload. Note
that the PME offload in the current release was optimized for single-GPU
performance rather than for scaling.
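For reference, with GROMACS 2018 the two setups would be launched roughly
like this (a sketch only; the rank/thread counts, GPU ids, and -deffnm name
are illustrative and need tuning to your hardware):

  # single GPU, PME offloaded to the GPU (best single-GPU throughput)
  gmx mdrun -deffnm md -nb gpu -pme gpu -ntmpi 1 -ntomp 8 -pin on

  # two GPUs, PME kept on the CPU (typically the better-scaling option)
  gmx mdrun -deffnm md -nb gpu -pme cpu -ntmpi 2 -ntomp 4 -gpu_id 01 -pin on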

It's worth noting a peculiarity of the scaling behavior: going from 1 to 2
GPUs is less efficient than going from 2 to 4 (assuming you have enough work
per GPU to scale). This is because domain decomposition (DD) incurs a
"one-time performance hit" due to the additional work involved in
decomposing the system, and that cost is paid as soon as you go beyond a
single domain.
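If you want to see where that cost shows up, a hypothetical side-by-side on
the same input (step counts and thread/GPU settings are illustrative) is to
compare a single-rank run against a multi-rank one and look at the
"Domain decomp." row in the cycle-accounting table at the end of the log,
which should only appear once DD is active:

  # one rank -> no DD
  gmx mdrun -deffnm md1 -ntmpi 1 -ntomp 8 -nb gpu -nsteps 20000 -resethway -noconfout
  # two ranks -> DD active; compare ns/day and the "Domain decomp." row in md2.log
  gmx mdrun -deffnm md2 -ntmpi 2 -ntomp 4 -nb gpu -gpu_id 01 -nsteps 20000 -resethway -noconfout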

2018-09-07 23:25 GMT+07:00 Szilárd Páll <pall.szilard at gmail.com>:
>
> >
> > Are you intending to use it mostly/only for running simulations or also
> as
> > a desktop computer?
> >
> > Yes, it will be mostly used for simulations.
>

If so, I'd recommend getting decent cooling and consider a chassis with
soundproofing if you'll have the machine in an office, especially if you'll
have 2x GPUs.


> > I'm not on the top of pricing details so you should probably look at some
> > configs and get back with concrete CPU + GPU (+price) combinations and we
> > might be able to guesstimate what's best.
> >
> >
> These sets of CPU and GPU are suitable for price (in our region):
> *GPU*
> GTX 1070, ~1700 MHz, 1920 CUDA cores - $514
> GTX 1080, ~1700 MHz, 2560 CUDA cores - $615
> GTX 1070Ti, ~1700 MHz, 2432 CUDA cores - $615
> GTX 1080Ti, ~1600 MHz, 3584 CUDA cores - $930
>

1080 and 1070Ti should be about the same in performance, but the latter
should (in theory) be a little cheaper.


> *CPU*
> Ryzen 7 2700X - $357
> 4200MHz, 8/16 cores/threads, cache L1/L2/L3 768KB/4MB/16MB, 105W, max.T 85C
>

I've personally not tried the 2nd-gen Ryzen, but based on the performance
of the 1800X and the compute benchmarks I've seen published, these will
likely be the best value for money (possibly the 2600X, if that allows
getting better GPUs?).


> Threadripper 1950X - $930
> 4000MHz, 16/32 cores/threads, cache  L1/L2/L3 1.5/8/32MB, 180W, max.T 68C
>

The 1920X still has 12 cores and should cost significantly less.
Side-note: one more thing you're getting with Threadripper vs Ryzen or Intel
Coffee Lake is more PCIe lanes, which will matter somewhat if you use two
GPUs.
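If you do go dual-GPU, one way to check what the board actually gives each
card is to query the PCIe link with nvidia-smi (the exact output depends on
the driver version):

  nvidia-smi --query-gpu=name,pcie.link.gen.current,pcie.link.width.current,pcie.link.width.max --format=csv
  # or look at the GPU/PCIe topology as a whole
  nvidia-smi topo -m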

> i7 8086K - $515
> 4800MHz, 6/12 cores/threads, cache L2/L3 1.5/12MB, 95W, max.T 100C
>

Special edition CPU, not worth the money unless you like the name ;)


> i7 8700K - $442
> 4600MHz, 6/12 cores/threads, cache L2/L3 1.5/12MB, 95W, max.T 100C
>

Not sure about the perf/price difference, but the 8700 (non-K) might be a
reasonable option too.


> The most suitable combinations CPU+GPU are as follows:
> 1) Ryzen 7 2700X + two GTX 1080 - $1587
> 1.1) Ryzen 7 2700X + one GTX 1080 + one GTX 1080*Ti* - $1900 (maybe?)
> 2) Threadripper 1950X + one GTX 1080Ti - $1860
> 3) i7 8700K + two GTX 1080 - $1672
> 4) Ryzen 7 2700X + three GTX 1070 - $1900
> My suggestions:
> Variant 1 seems to be the most suitable.
> Variant 2 seems to be suitable only if the single simulation is running on
> workstation
>

Don't forget that you need a few other components too; in particular, note
that the motherboard will be more expensive for Threadripper and cheaper for
Ryzen 7 (and, unless I'm mistaken, Intel boards are also a bit pricier than
the Ryzen ones).


> It’s a bit confusing that in synthetic tests/games performance of i7 8700
> is higher than Ryzen 7 2700.
>

Consumer and especially gaming benchmarks rarely reflect the performance of
an HPC workload. Also note that CPU-only GROMACS benchmarks are _not_ a good
indicator of performance with GPU offload: the most compute-intensive parts
of the code, which generally run very efficiently on CPUs too (and would
favor Intel Skylake with its AVX-512 units), run on the GPU in these cases.
The code that remains on the CPU is generally less arithmetically intensive.
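So the most reliable comparison is a short benchmark with your own input and
the offload setup you actually plan to use, e.g. something along these lines
(step count and thread settings are illustrative):

  gmx mdrun -deffnm bench -nb gpu -pme gpu -ntmpi 1 -ntomp 8 -pin on \
      -nsteps 20000 -resethway -noconfout
  grep Performance bench.log   # ns/day and hours/ns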


> Thanks a lot again for your advice, it has already clarified a lot!
>

Glad to hear. Feel free to follow up if you have further questions (but do
allow some time for replies ;).


> --
> Best regards, Olga Selyutina