[gmx-users] GROMACS performance on 10 Gbit/s or 20 Gbit/s InfiniBand

Erik Lindahl lindahl at cbr.su.se
Wed Oct 8 19:41:49 CEST 2008


Hi,

On Oct 6, 2008, at 6:46 AM, Himanshu Khandelia wrote:

> We are buying a new cluster with 8-core nodes and InfiniBand, and have a
> choice between 10 Gbit/s and 20 Gbit/s transfer rates between nodes. I do
> not immediately see the need for 20 Gbit/s between nodes, but thought it
> might be worthwhile to ask for the experts' opinions regarding this?
>
> Is there any foreseeable advantage of having 20 Gbit/s connections as
> opposed to 10 Gbit/s?

Yes, but not a whole lot (by 20 Gbit/s I assume you mean DDR InfiniBand).

Most cluster nodes only have 8x PCI-E slots, which have lower bandwidth
than DDR IB. In practice you will probably only see about a 30% improvement
in bandwidth.
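
For a rough sense of where that estimate comes from, here is a
back-of-envelope sketch (Python) comparing 4x SDR and 4x DDR InfiniBand
behind an 8-lane PCIe 1.x slot. The 8b/10b encoding factor is standard,
but the ~65% effective PCIe efficiency is only an assumption chosen for
illustration, not a measured value.

    # Back-of-envelope: usable bandwidth of SDR vs DDR 4x InfiniBand
    # when the HCA sits in an 8-lane PCIe 1.x slot.
    ENCODING = 8.0 / 10.0              # 8b/10b line encoding (IB and PCIe 1.x)

    def data_rate_gbps(signal_rate_gbps):
        """Usable data rate after 8b/10b encoding, in Gbit/s."""
        return signal_rate_gbps * ENCODING

    sdr_ib  = data_rate_gbps(10.0)     # 4x SDR IB: 10 Gbit/s signaling
    ddr_ib  = data_rate_gbps(20.0)     # 4x DDR IB: 20 Gbit/s signaling
    pcie_x8 = data_rate_gbps(8 * 2.5)  # PCIe 1.x, 8 lanes at 2.5 GT/s

    # Assumed (hypothetical) fraction of the PCIe data rate left after
    # protocol overhead; substitute your own measured number here.
    pcie_effective = pcie_x8 * 0.65

    sdr_usable = min(sdr_ib, pcie_effective)
    ddr_usable = min(ddr_ib, pcie_effective)

    print("SDR usable : %.2f GB/s" % (sdr_usable / 8))
    print("DDR usable : %.2f GB/s" % (ddr_usable / 8))
    print("Gain       : %.0f%%" % (100 * (ddr_usable / sdr_usable - 1)))

With those assumptions the DDR link is capped by the slot at roughly
1.3 GB/s versus 1.0 GB/s for SDR, i.e. on the order of a 30% gain rather
than the 2x the raw link rate would suggest.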
There are also newer "ConnectX" IB chipsets that provide significantly
lower latency, and I'd say that has about the same performance advantage
(many DDR IB solutions are not ConnectX).

Ultimately I'd say it depends on how you will use the cluster. If it is
important to be able to scale to hundreds of cores, it is probably worth
having both DDR and ConnectX. If total throughput and value for money are
a higher priority, you can probably get a pretty good deal from a cluster
vendor if you accept "slower" IB.
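
If you want to put numbers on that trade-off, one simple approach is to
benchmark a representative system at a few node counts on a test machine
and look at the parallel efficiency, for example with a small script like
the sketch below (the ns/day figures are hypothetical placeholders, not
measured results):

    # Parallel efficiency from benchmark throughput at different node counts.
    # Replace the placeholder ns/day values with your own mdrun benchmarks.
    benchmarks = {
        1: 4.0,     # nodes: ns/day (hypothetical values)
        2: 7.6,
        4: 13.8,
        8: 22.0,
    }

    base_nodes = min(benchmarks)
    base_perf  = benchmarks[base_nodes]

    for nodes in sorted(benchmarks):
        ns_day     = benchmarks[nodes]
        speedup    = ns_day / base_perf
        efficiency = speedup / (nodes / float(base_nodes))
        print("%2d nodes: %5.1f ns/day, speedup %4.1fx, efficiency %3.0f%%"
              % (nodes, ns_day, speedup, 100 * efficiency))

Once the efficiency drops well below, say, 70-80% you are mostly paying
for idle cores, and that is where the faster interconnect starts to matter.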

Right now I would particularly look at "dual" nodes that have two
motherboards in 1U (Supermicro, but they are also sold by other vendors)
with built-in InfiniBand adapters!

Cheers,

Erik



