[gmx-users] Minimal PCI Bandwidth for Gromacs and Infiniband?

Simon Kit Sang Chu simoncks1994 at gmail.com
Mon Mar 12 09:09:13 CET 2018


Hi everyone,

Our group is also interested in renting a cloud GPU cluster. Amazon only
offers GPU instances connected by a 10 Gb/s network. I noticed this post,
but there has been no reply so far. It would be nice if someone could give
any clue.
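
For a rough sense of scale, the nominal link rates convert to GB/s in a few
lines; a minimal Python sketch, using the nominal figures only (encoding and
protocol overhead make real throughput somewhat lower):

    #!/usr/bin/env python3
    # Nominal link rates converted to GB/s (theoretical peaks only;
    # real throughput is reduced by encoding and protocol overhead).
    links_gbit = {
        "10 Gb/s Ethernet": 10,
        "FDR InfiniBand (4x)": 56,
        "EDR InfiniBand (4x)": 100,
    }
    for name, gbit in links_gbit.items():
        print(f"{name}: {gbit / 8:.2f} GB/s")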

Regards,
Simon

2018-03-06 1:31 GMT+08:00 Daniel Bauer <bauer at cbs.tu-darmstadt.de>:

> Hello,
>
> In our group, we have multiple identical Ryzen 1700X / Nvidia GeForce
> GTX 1080 compute nodes and are thinking about interconnecting them via
> InfiniBand.
>
> Does anyone have information on what bandwidth GROMACS needs for
> communication via InfiniBand (MPI + trajectory writing) and how it
> scales with the number of nodes?
>
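One way to get a concrete number for a given system is to run the same
input on 1, 2 and 4 nodes and compare the ns/day that mdrun reports at the
end of md.log. A minimal Python sketch, assuming three otherwise identical
benchmark runs (the log file names are only placeholders):

    #!/usr/bin/env python3
    # Parse the final "Performance: <ns/day> <hour/ns>" line from GROMACS
    # md.log files and report parallel efficiency relative to one node.
    # The file names are placeholders for otherwise identical runs.
    import re

    logs = {1: "md_1node.log", 2: "md_2nodes.log", 4: "md_4nodes.log"}

    def ns_per_day(path):
        with open(path) as fh:
            for line in fh:
                m = re.match(r"Performance:\s+([\d.]+)", line)
                if m:
                    return float(m.group(1))
        raise ValueError(f"no Performance line in {path}")

    base = ns_per_day(logs[1])
    for nodes, path in sorted(logs.items()):
        perf = ns_per_day(path)
        eff = perf / (base * nodes)  # 1.0 would be ideal linear scaling
        print(f"{nodes} node(s): {perf:7.2f} ns/day, efficiency {eff:5.1%}")

Efficiency dropping sharply already at two nodes would point to the
interconnect (or the domain decomposition) rather than per-node speed.
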
> The mainboards we are currently using can run only one PCIe slot with
> 16 lanes. When both PCIe slots are populated (GPU + InfiniBand adapter),
> they run in dual x8 mode, so the bandwidth of both the GPU and the
> InfiniBand link drops to about 8 GB/s instead of 16 GB/s. We now wonder
> whether the reduced bandwidth will hurt GROMACS performance through
> bottlenecks in GPU/CPU communication and/or communication via
> InfiniBand. If so, we might have to upgrade to new mainboards with dual
> x16 support.
>
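For the PCIe side, a back-of-the-envelope comparison can be made in a few
lines of Python; the per-lane figure follows from the PCIe 3.0 signalling
rate, while the atom count and step rate below are assumed, illustrative
values rather than measurements:

    #!/usr/bin/env python3
    # Compare theoretical PCIe 3.0 x8 vs x16 bandwidth with a crude estimate
    # of per-step coordinate/force traffic between CPU and GPU.

    # PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s/lane.
    per_lane = 8e9 * 128 / 130 / 8 / 1e9
    for lanes in (8, 16):
        print(f"PCIe 3.0 x{lanes}: ~{lanes * per_lane:.1f} GB/s theoretical peak")

    # Assumed example system: 100,000 atoms in single precision, coordinates
    # sent to the GPU and forces returned every step (3 * 4 bytes each way).
    n_atoms = 100_000
    bytes_per_step = 2 * n_atoms * 3 * 4

    # Assumed throughput: ~30 ns/day with a 2 fs time step.
    steps_per_second = (30e-9 / 2e-15) / 86400
    traffic = bytes_per_step * steps_per_second / 1e9
    print(f"~{traffic:.2f} GB/s coordinate/force traffic at "
          f"{steps_per_second:.0f} steps/s")

This ignores PME-related transfers, trajectory output and per-transfer
latency, so it is only a lower bound on what the bus has to handle; still,
it gives a feel for how far the steady per-step volume sits below even the
x8 peak.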
>
> Best regards,
>
> Daniel
>
> --
> Daniel Bauer, M.Sc.
>
> TU Darmstadt
> Computational Biology & Simulation
> Schnittspahnstr. 2
> 64287 Darmstadt
> bauer at cbs.tu-darmstadt.de
>
> Don't trust atoms, they make up everything.
>
>

