[gmx-users] Gromacs and Software RDMA over Converged Ethernet -- is there a point ?

Mark Abraham mark.j.abraham at gmail.com
Sat Jul 23 16:33:35 CEST 2016


Hi,

On Sat, 23 Jul 2016 05:03 Christopher Neale <chris.neale at alum.utoronto.ca>
wrote:

> Dear Gromacs users:
>
> I have access to a new cluster that has a GigE interconnect (selected over
> IB for reasons other than cost). As expected, systems that scale nicely to
> two nodes with IB end up running faster on one node than on two nodes when
> using GigE. The sysadmins are wondering if software RoCE (Software RDMA over
> Converged Ethernet) would help. Does anybody have experience with this?
>
> Here is what the sysadmin said:
>
> "
> For large message sizes (>64k), SoftRoCE can provide performance
> comparable to hardware RoCE. Latency improvements are more modest: ~50%
> better than straight Ethernet, but still about 3x higher than hardware RoCE.
>

For context, DD has to do a halo exchange of positions (and then forces
back) on (roughly) an rlist-sized slab of particles per neighbouring domain
(27 in general, sometimes in two pulses). At four bytes per float and three
floats per atom, that's more than 5000 atoms just to reach a 64K message,
and that's a big ask for more than a few domains.
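
As a back-of-envelope check (a sketch only; the 64K threshold is the
sysadmin's number and the per-atom payload is the one above):

    # Atoms needed before one DD halo message reaches 64 KiB,
    # the size at which SoftRoCE reportedly catches up with hardware RoCE.
    BYTES_PER_FLOAT = 4            # single-precision positions/forces
    FLOATS_PER_ATOM = 3            # x, y, z per atom
    THRESHOLD = 64 * 1024          # 64 KiB message size

    atoms = THRESHOLD / (BYTES_PER_FLOAT * FLOATS_PER_ATOM)
    print(f"atoms per halo message: {atoms:.0f}")   # ~5461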

> Some references:
>
>
> http://www.lanl.gov/projects/national-security-education-center/information-science-technology/_assets/docs/2010-si-docs/Team_CYAN_Implementation_and_Comparison_of_RDMA_Over_Ethernet_Presentation.pdf
>
>
> http://www.iosrjournals.org/iosr-jce/papers/Vol15-issue4/N01548187.pdf?id=7557
> "
>
> I found this: http://quick.hcs.ufl.edu/pubs/UF_HPIDC.pdf but it suggests
> that there is a speedup when going to multiple nodes even with GigE, which
> is not what I see.
>

Yeah. Note that they chose to use only one of the two "CPUs" per node,
which lowers compute performance relative to the network. And if they didn't
take care to set thread affinity manually, that will slow down the compute
part dramatically further. On a modern Xeon node, your observations seem
much more relevant.
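
(For what it's worth, a recent mdrun can handle the affinity itself; a
sketch, assuming a two-socket node with 8 cores per socket, with topol as
a placeholder file name:

    gmx mdrun -deffnm topol -pin on -ntmpi 2 -ntomp 8

-pin on pins the threads, -ntmpi sets the thread-MPI rank count, and
-ntomp sets the OpenMP threads per rank.)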

Mark

> Thank you,
> Chris.