[gmx-users] Minimal PCI Bandwidth for Gromacs and Infiniband?
Daniel Bauer
bauer at cbs.tu-darmstadt.de
Mon Mar 5 18:41:16 CET 2018
Hello,
In our group, we have multiple identical Ryzen 1700X / Nvidia GeForce
GTX 1080 compute nodes and are thinking about interconnecting them via
InfiniBand.
Does anyone have information on what bandwidth GROMACS requires for
communication over InfiniBand (MPI + trajectory writing) and how it
scales with the number of nodes?
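For the trajectory-writing part, here is a rough back-of-the-envelope I
did with placeholder numbers (system size, output interval, and
throughput are just assumptions, not measurements from our nodes); it
suggests file output is tiny compared to typical interconnect rates:

    # Rough estimate of trajectory-writing bandwidth.
    # All numbers below are assumptions, not measured values.
    n_atoms = 100_000            # hypothetical system size
    bytes_per_atom = 3 * 4       # x, y, z as 32-bit floats (uncompressed, TRR-like)
    steps_per_frame = 10_000     # assumed output interval (nstxout)
    dt_fs = 2                    # assumed time step in femtoseconds
    ns_per_day = 50              # assumed per-node throughput

    frames_per_day = ns_per_day * 1e6 / (dt_fs * steps_per_frame)
    bytes_per_day = frames_per_day * n_atoms * bytes_per_atom
    mb_per_s = bytes_per_day / (24 * 3600) / 1e6
    print(f"~{mb_per_s:.3f} MB/s for trajectory writing")
    # With these assumptions: well under 1 MB/s, so the MPI traffic
    # (domain decomposition / PME) is presumably what matters.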
The mainboards we are currently using can only run one PCIe slot with 16
lanes. When both PCIe slots are used (GPU + InfiniBand), they run in
dual x8 mode, so the bandwidth for both the GPU and the InfiniBand
adapter drops to about 8 GB/s instead of 16 GB/s (rough numbers below).
Now we wonder whether the reduced bandwidth will hurt GROMACS
performance due to bottlenecks in CPU/GPU communication and/or
communication over InfiniBand. If so, we might have to upgrade to new
mainboards with dual x16 support.
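For reference, the theoretical per-direction figures behind the 8 vs.
16 GB/s numbers, assuming PCIe 3.0 (8 GT/s per lane, 128b/130b
encoding) and ignoring packet/protocol overhead:

    # Theoretical per-direction PCIe 3.0 bandwidth.
    GT_PER_S = 8.0               # PCIe 3.0 transfer rate per lane
    ENCODING = 128 / 130         # 128b/130b line encoding overhead

    def pcie3_bandwidth_gbs(lanes: int) -> float:
        """Raw per-direction bandwidth in GB/s for a PCIe 3.0 link."""
        return lanes * GT_PER_S * ENCODING / 8  # GT/s -> GB/s

    for lanes in (8, 16):
        print(f"x{lanes}: {pcie3_bandwidth_gbs(lanes):.2f} GB/s per direction")
    # x8: ~7.88 GB/s, x16: ~15.75 GB/s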
Best regards,
Daniel
--
Daniel Bauer, M.Sc.
TU Darmstadt
Computational Biology & Simulation
Schnittspahnstr. 2
64287 Darmstadt
bauer at cbs.tu-darmstadt.de
Don't trust atoms, they make up everything.