[gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?

Szilárd Páll pall.szilard at gmail.com
Tue Feb 24 15:37:49 CET 2015


On Tue, Feb 24, 2015 at 12:32 PM, David McGiven <davidmcgivenn at gmail.com> wrote:
> Hi Szilard,
>
> Thank you very much for your great advice.
>
> 2015-02-20 19:03 GMT+01:00 Szilárd Páll <pall.szilard at gmail.com>:
>
>> On Fri, Feb 20, 2015 at 2:17 PM, David McGiven <davidmcgivenn at gmail.com>
>> wrote:
>> > Dear Gromacs users and developers,
>> >
>> > We are thinking about buying a new cluster of ten or twelve 1U/2U
>> machines
>> > with two Intel Xeon CPUs, 8-12 cores each, from the 2600 v2 or v3 series.
>> > The details aren't clear yet; we'll see.
>>
>> If you can afford them, get the 14-, 16-, or 18-core v3 Haswells; those are
>> *really* fast, but a pair can cost as much as a decent car.
>>
>> Get IVB (v2) if it saves you a decent amount of money compared to v3.
>> The AVX2 with FMA of the Haswell chips is great, but if you run
>> GROMACS with GPUs on them my guess is that a higher frequency v2 will
>> be more advantageous than the v3's AVX2 support. Won't swear on this
>> as I have not tested thoroughly.
>>
>>
> According to an email exchange I had with Carsten Kutzner, for the kind of
> simulations we would like to run (see below), lower frequency v2's give
> better performance-to-price ratio.

That's quite likely the case. Plot the price vs #cores x base
frequency and that will give you a reasonably good idea about
_expected_ performance vs price.

> For instance, we can get from a national reseller:
>
> 2U server (Supermicro rebranded, I guess)
> 2 x E5-2699 v3, 18c, 2.3 GHz
> 64 GB DDR4
> 2 x GTX 980 (certified for the server)
> -
> 13,400 EUR (excl. VAT)
>
>
> 2U server (Supermicro rebranded, I guess)
> 2 x E5-2695 v2, 12c, 2.4 GHz
> 64 GB DDR3
> 2 x GTX 980 (certified for the server)
> -
> 9,140 EUR (excl. VAT)
>
> Does that qualify as "saving a decent amount of money" to go for the v2? I
> don't think so, also because we care about rack space: fewer servers, but
> potent ones. The latest Haswells are way too overpriced for us.

Well, if you think that almost 50% extra cost is worth it, go for it!

However, let me add a few notes/warnings:
* The Xeon v3's advertised clock is deceiving (borderline lie from Intel):
in AVX mode those 2699 v3s run at around 1.9 GHz; at that point the
difference between the two CPUs quite likely becomes <=25%, and if
you took an E5-2697 v2, which should be only a couple of hundred more
than the 2695 v2, the difference would likely become even smaller;
* Instead of the E5-2699 v3 I think you may be better off with the
E5-2697 v3 - especially if both drop their clocks by 400 MHz in AVX mode.
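The cores x clock heuristic from earlier, combined with the ~1.9 GHz AVX-clock
estimate above, can be sketched in a few lines of Python. The prices and specs
come from the quotes in this thread; the results are rough estimates, not
benchmarks:

```python
# Rough price/performance comparison of the two quoted servers using the
# "#cores x clock per euro" heuristic. The 1.9 GHz figure is the estimated
# AVX clock of the E5-2699 v3 mentioned above; the v2 is assumed to hold
# closer to its 2.4 GHz base clock.
servers = {
    "2x E5-2699 v3": {"cores": 2 * 18, "clock_ghz": 1.9, "price_eur": 13400},
    "2x E5-2695 v2": {"cores": 2 * 12, "clock_ghz": 2.4, "price_eur": 9140},
}

for name, s in servers.items():
    core_ghz = s["cores"] * s["clock_ghz"]
    per_keur = 1000 * core_ghz / s["price_eur"]
    print(f"{name}: {core_ghz:.1f} core-GHz, {per_keur:.2f} core-GHz per 1000 EUR")
```

Under these assumptions the v2 box delivers more aggregate core-GHz per euro,
which is consistent with the advice above.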

> We want to run molecular dynamics simulations of transmembrane proteins
> inside a POPC lipid bilayer, in a system with ~100000 atoms, from which
> almost 1/3 correspond to water molecules and employing usual conditions
> with PME for electrostatics and cutoffs for LJ interactions.
>
> I think we'll go for the V3 version.
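For reference, the "usual conditions" described above (PME electrostatics,
plain LJ cutoffs) would correspond to a GROMACS .mdp fragment roughly like
this; the 1.0 nm cutoffs and grid spacing are typical values assumed for
illustration, not taken from the thread:

```
; Electrostatics: PME, as described above
coulombtype      = PME
rcoulomb         = 1.0      ; assumed cutoff, nm
fourierspacing   = 0.12     ; typical PME grid spacing, nm

; Lennard-Jones: plain cutoff
vdwtype          = Cut-off
rvdw             = 1.0      ; assumed cutoff, nm
```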

Will be a sweet setup, let us know the performance when you have the machine!

>> > I've been told on this list that NVIDIA GTX cards offer the best
>> > performance/price ratio for GROMACS 5.0.
>>
>> Yes, that is the case.
>>
>> > However, I am wondering... how do you guys use GTX cards in rackable
>> > servers?
>> >
>> > GTX cards are consumer grade, for personal workstations, gaming, and so
>> > on, and it's nearly impossible to find any server manufacturer like HP,
>> > Dell, SuperMicro, etc. to certify that those cards will function
>> > properly on their servers.
>>
>> Certification can be an issue - unless you buy many and you can cut a
>> deal with a company. There are some companies that do certify servers,
>> but AFAIK most/all are US-based. I won't post a long
>> advertisement here, but you can find many names if you browse NVIDIA's
>> GPU computing site (and as a matter of fact the AMBER GPU site is
>> quite helpful in this respect too).
>>
>> You can consider getting vanilla server nodes and plug the GTX cards
>> in yourself. In general, I can recommend Supermicro, they have pretty
>> good value servers from 1U to 4U. The easiest is 4U, because GTX cards
>> will just fit vertically, but that is a serious waste of rack space.
>> With a bit of tinkering you may be able to get GTX cards into 3U, but
>> you'll either need cards with power connectors on the back or 90-degree
>> angled 4-pin PCIe power cables. Otherwise you can only fit the cards
>> with PCIe risers; I have no experience with that setup, but I know some
>> people build denser machines with GTX cards.
>>
>> Cheers,
>>
>> --
>> Szilárd
>>
>> > What are your views about this ?
>> >
>> > Thanks.
>> >
>> > Best Regards
>> > --
>> > Gromacs Users mailing list
>> >
>> > * Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
>> posting!
>> >
>> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>> >
>> > * For (un)subscribe requests visit
>> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
>> send a mail to gmx-users-request at gromacs.org.


More information about the gromacs.org_gmx-users mailing list