[gmx-users] Cluster recommendations

David McGiven davidmcgivenn at gmail.com
Fri Jan 16 12:28:14 CET 2015


Hi Carsten,

Thanks for your answer.

2015-01-16 11:11 GMT+01:00 Carsten Kutzner <ckutzne at gwdg.de>:

> Hi David,
>
> we are just finishing an evaluation to find out which is the optimal
> hardware for Gromacs setups. One of the input systems is an 80,000 atom
> membrane channel system and thus nearly exactly what you want
> to compute.
>
> The biggest benefit comes from adding one or two consumer-class GPUs
> to your nodes (e.g. NVIDIA GTX 980). That will typically double your
> performance-to-price ratio. This is true for Intel as well as for AMD
> nodes; however, the best ratio in our tests was observed with 10-core
> Intel CPUs (2670v2, 2680v2) in combination with a GTX 780Ti or 980,
> ideally two of those CPUs with two GPUs on a node.
>
>
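
For reference, on a node like that (two 10-core CPUs plus two GTX cards) I
guess one would launch something along these lines, with one thread-MPI rank
per GPU (this is just my assumption of the mapping, in Gromacs 4.6/5.0
syntax; the rank/thread split would of course need tuning):

    mdrun -deffnm md -ntmpi 2 -ntomp 10 -gpu_id 01

Is that roughly how you mapped ranks to GPUs in your benchmarks?
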
Was there a difference between the 2670v2 (2.5 GHz) and the 2680v2 (2.8 GHz)? I'm
wondering whether those 0.3 GHz are significant, or the 0.5 GHz compared to
the 2690v2, for that matter. There is certainly a significant difference in price.

I'm also wondering whether performance would be better with 16-core Intels
instead of 10-core ones, e.g. the E5-2698 v3.

I would like to know which other tests you have done. What about AMD?

> Unless you want to buy expensive FDR14 InfiniBand, scaling across two
> or more of those nodes won’t be good (~0.65 parallel efficiency across 2,
> ~0.45 across 4 nodes using QDR InfiniBand), so I would advise against
> it and go for more sampling on single nodes.
>
>
Well, that puzzles me. Why is it that you get poor scaling? Are you
talking about pure CPU jobs over InfiniBand, or about CPU+GPU jobs over
InfiniBand?

How come performance won't be good, when a large fraction of the
supercomputer centers in the world use InfiniBand? And I'm sure plenty of
users here on the list run Gromacs over InfiniBand.
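
Just to check that I read those numbers correctly: 0.65 parallel efficiency
across 2 nodes would mean roughly

    2 nodes x 0.65 = 1.3x single-node throughput for one job,
    4 nodes x 0.45 = 1.8x single-node throughput for one job,

whereas two (or four) independent single-node runs give 2x (or 4x) the
aggregate sampling. So the argument is about aggregate throughput, not that a
multi-node job would be slower in absolute terms, correct?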

Thanks again.

Best Regards,
D


> Best,
>   Carsten
>
>
>
>
> On 15 Jan 2015, at 17:35, David McGiven <davidmcgivenn at gmail.com> wrote:
>
> > Dear Gromacs Users,
> >
> > We’ve got some funding to build a new cluster. It’s going to be used mainly
> > for Gromacs simulations (80% of the time). We run molecular dynamics
> > simulations of transmembrane proteins inside a POPC lipid bilayer. In a
> > typical system we have ~100,000 atoms, of which almost 1/3 correspond to
> > water molecules. We employ the usual conditions, with PME for electrostatics
> > and cutoffs for LJ interactions.
> >
> > I would like to hear your advice on which kind of machines give the best
> > bang for the buck for that kind of simulation. For instance:
> >
> > - Intel or AMD? My understanding is that Intel is faster but more expensive,
> > and AMD is slower but cheaper, so in the end you get almost the same
> > performance per buck. Right?
> >
> > - Many CPUs/cores per machine, or fewer? My understanding is that the more
> > cores per machine, the lower the cost: one machine is always cheaper to buy
> > and maintain than several. Plus, maybe you can save the cost of InfiniBand
> > if you use high core densities?
> >
> > - Should we invest in an InfiniBand network to run jobs across multiple
> > nodes? Will the kind of simulations we run benefit from multiple nodes?
> >
> > - Would we benefit from adding GPUs to the cluster? If so, which ones?
> >
> > We now have a cluster with 48- and 64-core AMD Opteron machines (4
> > processors per machine), and we run our Gromacs simulations there. We don’t
> > use MPI because our jobs mostly run on a single node, i.e. with
> > 48 or 64 cores per simulation on a single machine. So far, we’re quite
> > satisfied with the performance we get.
> >
> > Any advice will be greatly appreciated.
> >
> >
> > Best Regards,
> > D.
>
>
> --
> Dr. Carsten Kutzner
> Max Planck Institute for Biophysical Chemistry
> Theoretical and Computational Biophysics
> Am Fassberg 11, 37077 Goettingen, Germany
> Tel. +49-551-2012313, Fax: +49-551-2012302
> http://www.mpibpc.mpg.de/grubmueller/kutzner
> http://www.mpibpc.mpg.de/grubmueller/sppexa
>

