[gmx-users] Parallel Gromacs Benchmarking with Opteron Dual-Core & Gigabit Ethernet
David van der Spoel
spoel at xray.bmc.uu.se
Mon Jul 23 15:40:00 CEST 2007
Erik Lindahl wrote:
> Hi,
>
>> I read on the Gmx site that the DPPC system is
>> composed of 121,856 atoms. Looking at the Gmx topology files, it
>> seems that Gmx performs data decomposition on the input data to run in
>> parallel (in our case, running with "-np 12" on 3 nodes, each
>> process gets about 10156 atoms).
>> I think the DPPC system is not big enough to show the scalability of
>> parallel execution over Gigabit Ethernet. To see the cluster
>> scalability with our configuration, we should set up a bigger
>> simulation. Please correct me if I'm mistaken.
>
> Well, you can always try different systems (genconf + edit topology),
> but the fact that you see low user CPU usage likely means the nodes are
> busy waiting for the communication (which probably counts as
> kernel/system usage).
>
> Remember - compared to the benchmark numbers at www.gromacs.org, your
> bandwidth is 1/4 and the latency 4 times higher, since you have four
> cores sharing a single network connection.
>
> Gromacs 4 should scale better on any hardware (significantly better with
> PME), but you'll probably never see great scaling with only 4-way shared
> gigabit ethernet. It's available in the head branch of CVS for expert
> users/voluntary guinea-pigs, but entirely unsupported until we release it.
>
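To try a bigger system as suggested above, you can replicate the DPPC box
with genconf and scale the topology by hand. A rough sketch (the file names
conf.gro, topol.top and grompp.mdp are just the defaults here; adjust them to
the actual benchmark files):

  # replicate the box 2x2x1, giving roughly 4 x 121,856 atoms
  genconf -f conf.gro -o big.gro -nbox 2 2 1

  # multiply the counts in the [ molecules ] section of topol.top by 4,
  # then preprocess for 12 processes as before
  grompp -f grompp.mdp -c big.gro -p topol.top -np 12 -o big.tpr
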
In addition, with GROMACS 3.3 you will want to use the -shuffle option for grompp.
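For example (same assumed file names as above):

  # shuffle molecules over the 12 processes when generating the run input
  grompp -f grompp.mdp -c conf.gro -p topol.top -np 12 -shuffle -o topol.tpr

-shuffle redistributes the molecules over the processors, which generally
improves load balance and scaling with the particle decomposition used in 3.3.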
--
David van der Spoel, Ph.D.
Molec. Biophys. group, Dept. of Cell & Molec. Biol., Uppsala University.
Box 596, 75124 Uppsala, Sweden. Phone: +46184714205. Fax: +4618511755.
spoel at xray.bmc.uu.se spoel at gromacs.org http://folding.bmc.uu.se