[gmx-users] Cluster recommendations

Carsten Kutzner ckutzne at gwdg.de
Fri Jan 16 11:11:58 CET 2015


Hi David,

we are just finishing an evaluation to determine the optimal hardware
for Gromacs setups. One of the input systems is an 80,000-atom
membrane channel system, and thus nearly exactly what you want
to compute.

The biggest benefit you will get is from adding one or two consumer-class
GPUs to your nodes (e.g. an NVIDIA GTX 980); that will typically double your
performance-to-price ratio. This is true for Intel as well as for AMD
nodes; however, the best ratio in our tests was observed with 10-core
Intel Xeon CPUs (E5-2670v2, E5-2680v2) in combination with a GTX 780 Ti
or 980, ideally two of those CPUs with two GPUs on a node.
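
To put a number on "doubling", here is a minimal Python sketch of the
comparison. Note that the node prices and ns/day throughputs below are
made-up placeholders, not our benchmark results; only the roughly twofold
improvement in the ratio reflects what we observed:

# Back-of-the-envelope performance-to-price comparison. All prices (EUR)
# and throughputs (ns/day) are HYPOTHETICAL placeholders; substitute your
# own vendor quotes and benchmark numbers.

nodes = {
    "2x 10-core Xeon, CPU only":    (5000.0, 20.0),   # (price, ns/day)
    "2x 10-core Xeon + 2x GTX 980": (6200.0, 48.0),
}

for name, (price, ns_day) in nodes.items():
    ratio = ns_day / price                      # ns/day per EUR spent
    print(f"{name:30s} {1000.0 * ratio:6.2f} ns/day per 1000 EUR")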

Unless you want to buy expensive FDR-14 InfiniBand, scaling across two
or more of those nodes won’t be good (~0.65 parallel efficiency across 2
nodes, ~0.45 across 4 nodes using QDR InfiniBand), so I would advise
against it and instead go for more sampling on single nodes.
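
To see why, consider aggregate sampling throughput: with parallel
efficiency E(n) on n nodes, one coupled run yields n * E(n) times the
single-node trajectory length per day, whereas n independent single-node
runs yield the full n times as much sampling. A small Python sketch using
the efficiencies from our tests (single-node performance normalized to 1.0):

# Coupled multi-node run vs. independent single-node runs.
# Parallel efficiencies are the QDR InfiniBand values quoted above.

single_node_perf = 1.0                  # normalized baseline, e.g. ns/day
efficiency = {1: 1.0, 2: 0.65, 4: 0.45}

for n, eff in efficiency.items():
    coupled = n * eff * single_node_perf    # one trajectory across n nodes
    independent = n * single_node_perf      # n separate trajectories
    print(f"{n} node(s): one coupled run: {coupled:.2f}x; "
          f"{n} independent runs: {independent:.2f}x aggregate sampling")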

Best,
  Carsten




On 15 Jan 2015, at 17:35, David McGiven <davidmcgivenn at gmail.com> wrote:

> Dear Gromacs Users,
> 
> We’ve got some funding to build a new cluster. It’s going to be used mainly
> for Gromacs simulations (80% of the time). We run molecular dynamics
> simulations of transmembrane proteins inside a POPC lipid bilayer. In a
> typical system we have ~100,000 atoms, of which almost 1/3 correspond to
> water molecules. We employ usual conditions with PME for electrostatics and
> cutoffs for LJ interactions.
> 
> I would like to hear your advice on which kind of machines give the best
> bang for the buck for that kind of simulation. For instance:
> 
> - Intel or AMD? My understanding is that Intel is faster but expensive,
> and AMD is slower but cheaper, so in the end you get almost the same
> performance per buck. Right?
> 
> - Many CPUs/cores per machine, or fewer? My understanding is that the more
> cores per machine, the lower the cost: one machine is always cheaper to buy
> and maintain than several. Plus, maybe you can save the cost of InfiniBand
> if you use high core densities?
> 
> - Should we invest in an InfiniBand network to run jobs across multiple
> nodes? Will the kind of simulations we run benefit from multiple nodes?
> 
> - Would we benefit from adding GPUs to the cluster? If so, which ones?
> 
> We now have a cluster with 48 and 64 AMD Opteron cores per machine (4
> processors per machine), and we run our Gromacs simulations there. We don’t
> use MPI because our jobs are mostly run on a single node; as I said, with
> 48 or 64 cores per simulation on a single machine. So far, we’re quite
> satisfied with the performance we get.
> 
> Any advice will be greatly appreciated.
> 
> 
> Best Regards,
> D.


--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa


