[gmx-users] NVIDIA GTX cards in Rackable servers, how do you do it ?

Szilárd Páll pall.szilard at gmail.com
Tue Feb 24 15:41:59 CET 2015


On Tue, Feb 24, 2015 at 1:17 PM, Harry Mark Greenblatt
<harry.greenblatt at weizmann.ac.il> wrote:
> BS"D
>
> Dear David,
>
>   We did some tests with Gromacs and other programs on CPUs with core counts up to 16 per socket, and found that after about 12 cores jobs/threads begin to interfere with each other. In other words, there was a performance penalty when using core counts above 12. I don't have the details in front of me, but you should at the very least get a test machine and try running your simulations for short periods with 10, 12, 14, 16 and 18 cores in use to see how Gromacs behaves with these processors (unless someone has done these tests and can confirm that Gromacs has no issues with 16- or 18-core CPUs).


Please share the details, because it is in our interest to understand
and address such issues if they are reproducible.
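
In case it helps make the comparison reproducible, a core-count scan
along the lines you suggest could look roughly like this - just a
sketch, with the .tpr name and the step count as placeholders:

  gmx mdrun -s topol.tpr -ntmpi 1 -ntomp 12 -pin on \
      -nsteps 20000 -resethway -noconfout -g bench_omp12.log
  gmx mdrun -s topol.tpr -ntmpi 1 -ntomp 16 -pin on \
      -nsteps 20000 -resethway -noconfout -g bench_omp16.log

-ntmpi 1 keeps everything in a single rank so you see the pure OpenMP
scaling without DD, -resethway resets the counters halfway through so
the reported ns/day is not skewed by startup and load balancing, and
the bench_*.log files are exactly the kind of logs that would help here.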

However, note that I've run on CPUs with up to 18 cores (and up to
36-96 threads per socket) and in most cases the multi-threaded code
scales quite well - as long as it is not combined with DD/MPI. There
are some known multi-threaded scaling issues that are being addressed
for 5.1, but without log files it's hard to tell what the nature of the
"performance penalty" you mention is.

Note: HyperThreading and SMT in general change the situation, but
that's a different topic.
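
(If you do want to check the HT effect at some point: on, say, a
2 x 12-core node with HT enabled you can compare one thread per
physical core against using all hardware threads roughly like this -
the thread counts are placeholders for your machine:

  gmx mdrun -s topol.tpr -ntmpi 1 -ntomp 24 -pin on -pinstride 2
  gmx mdrun -s topol.tpr -ntmpi 1 -ntomp 48 -pin on -pinstride 1

where -pinstride 2 pins one thread per physical core and -pinstride 1
uses both hardware threads of each core.)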

--
Szilárd

>
> Harry
>
>
> On Feb 24, 2015, at 1:32 PM, David McGiven wrote:
>
> Hi Szilard,
>
> Thank you very much for your great advice.
>
> 2015-02-20 19:03 GMT+01:00 Szilárd Páll <pall.szilard at gmail.com>:
>
> On Fri, Feb 20, 2015 at 2:17 PM, David McGiven <davidmcgivenn at gmail.com>
> wrote:
> Dear Gromacs users and developers,
>
> We are thinking about buying a new cluster of ten or twelve 1U/2U
> machines with 2 Intel Xeon CPUs of 8-12 cores each, some of the
> 2600 v2 or v3 series. The details are not yet clear; we'll see.
>
> If you can afford them, get the 14-, 16- or 18-core v3 Haswells; those
> are *really* fast, but a pair can cost as much as a decent car.
>
> Get IVB (v2) if it saves you a decent amount of money compared to v3.
> The AVX2 with FMA of the Haswell chips is great, but if you run
> GROMACS with GPUs on them, my guess is that a higher-frequency v2 will
> be more advantageous than the v3's AVX2 support. I won't swear on
> this, as I have not tested it thoroughly.
>
>
> According to an email exchange I had with Carsten Kutzner, for the kind
> of simulations we would like to run (see below), lower-frequency v2's
> give a better performance-to-price ratio.
>
> For instance, we can get from a national reseller:
>
> 2U server (Supermicro rebranded, I guess)
> 2 x E5-2699 v3, 18 cores, 2.3 GHz
> 64 GB DDR4
> 2 x GTX980 (certified for the server)
> -
> 13,400 EUR (excl. VAT)
>
>
> 2U server (Supermicro rebranded, I guess)
> 2 x E5-2695 v2, 12 cores, 2.4 GHz
> 64 GB DDR3
> 2 x GTX980 (certified for the server)
> -
> 9,140 EUR (excl. VAT)
>
> Does that qualify as "saving a decent amount of money" to go for the
> v2? I don't think so, also because we care about rack space: fewer
> servers, but more potent ones. The latest Haswells are way too
> overpriced for us.
>
> We want to run molecular dynamics simulations of transmembrane proteins
> inside a POPC lipid bilayer, in a system of ~100000 atoms, of which
> almost 1/3 are water molecules, employing the usual conditions with PME
> for electrostatics and cutoffs for LJ interactions.
>
> I think we'll go for the V3 version.
>
> I've been told on this list that NVIDIA GTX cards offer the best
> performance/price ratio for Gromacs 5.0.
>
> Yes, that is the case.
>
> However, I am wondering... how do you guys use GTX cards in rackable
> servers?
>
> GTX cards are consumer-grade, for personal workstations, gaming, and so
> on, and it's nearly impossible to get any server manufacturer like HP,
> Dell, SuperMicro, etc. to certify that those cards will function
> properly in their servers.
>
> Certification can be an issue - unless you buy many and can cut a deal
> with a company. There are some companies that do certify servers, but
> AFAIK most/all are US-based. I won't post a long advertisement here,
> but you can find many names if you browse NVIDIA's GPU computing site
> (and as a matter of fact, the AMBER GPU site is quite helpful in this
> respect too).
>
> You can consider getting vanilla server nodes and plugging the GTX
> cards in yourself. In general, I can recommend Supermicro; they have
> pretty good value servers from 1 to 4U. The easiest is to use the
> latter, because GTX cards will just fit vertically, but it will be a
> serious waste of rack space.
>
> With a bit of tinkering you may be able to get GTX cards into 3U, but
> you'll either need cards with connectors on the back or 90-degree
> angled 4-pin PCIe power cables. Otherwise you can only fit the cards
> with PCIe risers; I have no experience with that setup, but I know
> some people build denser machines with GTX cards.
>
> Cheers,
>
> --
> Szilárd
>
> What are your views about this?
>
> Thanks.
>
> Best Regards
>
>
> -------------------------------------------------------------------------
>
> Harry M. Greenblatt
>
> Associate Staff Scientist
>
> Dept of Structural Biology
>
> Weizmann Institute of Science        Phone:  972-8-934-3625
>
> 234 Herzl St.                        Facsimile:   972-8-934-4159
>
> Rehovot, 76100
>
> Israel
>
>
> Harry.Greenblatt at weizmann.ac.il
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.


More information about the gromacs.org_gmx-users mailing list