[gmx-users] Parallel Gromacs Benchmarking with Opteron Dual-Core & Gigabit Ethernet
ckutzne at gwdg.de
Tue Jul 24 11:56:19 CEST 2007
Kazem Jahanbakhsh wrote:
> Dear Erik,
>> Remember - compared to the benchmark numbers at www.gromacs.org, your
>> bandwidth is 1/4 and the latency 4 times higher, since you have four
>> cores sharing a single network connection.
> I agree with you that sharing the GbE link between 4 cores degrades the
> performance. Fortunately, every cluster node has two GbE ports. I
> want to know, can I configure lamd in such a manner that every processor
> on every node (with two cores) uses one of these ports for its
> communication purposes? And if this is possible, could you please give me
> some hints or guidance to start this configuration on the cluster?
maybe you want to try OpenMPI instead of LAM. In OpenMPI you can easily
use multiple network adapters, and you can even exploit the different
characteristics of the adapters: e.g. if one of the two has a lower
latency, it is preferred for small messages. At run time, you will
have to tell OpenMPI something like
mpirun -np 16 --mca btl_tcp_if_include eth0,eth1 <mpi-program>
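A slightly fuller sketch of the same idea, assuming OpenMPI's TCP BTL and the eth0/eth1 interface names from above (parameter names can vary between OpenMPI versions, so check your build first):

```shell
# List the TCP BTL parameters your OpenMPI build actually supports:
ompi_info --param btl tcp

# Restrict OpenMPI to the TCP BTL (plus "self" for loopback) and
# stripe traffic over both GbE interfaces:
mpirun -np 16 \
    --mca btl tcp,self \
    --mca btl_tcp_if_include eth0,eth1 \
    <mpi-program>
```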
> And finally, can you expect any scalability improvements by taking this
> approach?
In principle, the data throughput can be twice as high as with a
single Ethernet adapter. If you have two identical NICs, latency is
unaffected, so you will only see a bandwidth effect, i.e.
only when your system is big enough. But you could also suffer from
additional bottlenecks within the node, so in my opinion the performance
you can get from a (4 cores + 2 NICs) setting will at most equal that of a
2 * (2 cores + 1 NIC) setting.
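To make the bandwidth argument concrete, here is a back-of-the-envelope sketch; the ~112 MB/s usable throughput per GbE link is my assumption for illustration, not a number from this thread:

```python
# Assumed usable throughput of one Gigabit Ethernet link, in MB/s
# (an illustrative figure, not a measurement).
GBE_MB_PER_S = 112.0

def per_core_bandwidth(cores: int, nics: int) -> float:
    """MB/s of network bandwidth per core, if the node's NICs are
    shared evenly among its cores."""
    return nics * GBE_MB_PER_S / cores

# 4 cores sharing 1 NIC vs. 4 cores sharing 2 NICs:
one_nic = per_core_bandwidth(4, 1)   # 28 MB/s per core
two_nics = per_core_bandwidth(4, 2)  # 56 MB/s per core

# Two NICs double the per-core bandwidth, matching a 2-core/1-NIC
# node -- but latency stays that of a single GbE link.
```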