[gmx-users] Re: Gromacs - LAM/MPI

Ken Chase math at velocet.ca
Mon Nov 11 09:27:26 CET 2002


On Mon, Nov 11, 2002 at 07:46:29AM +0100, K.A. Feenstra wrote:
> Ken Chase wrote:
> > 
> [...]
> > If you don't scale well, say only 30% efficiency at 12 nodes, what happens
> > when you run, say, 3 LAM/MPI meshes instead, to soak up all the extra CPU?
> > I realise there's interrupt time spent servicing the requests, as well as a
> > bit more latency from putting lots of data down the link (I saw 8-10
> > Mbps on fast Ethernet on dual 1.33 GHz Thunderbird machines for the d.dppc
> > benchmark with 4 dual nodes, and 18-25 Mbps for the same config with GbE),
> > but you may end up with a factor of 2 better throughput - assuming you
> > have more than 1 job to run at a time, of course!
> 
> I'd rather run three 4-CPU jobs. I'd suggest you run a series of benchmarks
> of a few hours each to see what would actually give you the best overall
> performance. My guess is, from the scaling behavior of Gromacs, that
> 3x12CPU on 12 CPUs would be slower than 3x4CPU on 12 CPUs.

This is probably true, from what I've seen as well. Soaking up the
CPU with non-network-intensive jobs (like Gaussian98 or the like) at
lower priority makes the most sense - Gromacs isn't using it, so why not.

If you played with nice levels in Gromacs, and really could politically/
organizationally prioritize jobs that way, then soaking up extra CPU
with a job at nice level 19 underneath a job running at nice 0 could work.
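
Something like this is what I have in mind - a rough Python sketch (the
commands and input files are made up, substitute your real invocations):

    import os
    import subprocess

    def launch(cmd, niceness=0):
        # Start a job; the child renices itself before exec, so the
        # scheduler deprioritizes it relative to the main MD run.
        return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(niceness))

    # Hypothetical invocations - not real input files.
    main_job = launch(["mpirun", "-np", "12", "mdrun_mpi", "-s", "topol.tpr"])
    soak_job = launch(["g98", "input.com"], niceness=19)  # filler at nice 19
    main_job.wait()

The filler only gets the cycles the MD run leaves on the table, so it
shouldn't hurt the main job much, as long as it stays off the network.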

I'd really have to test this to see how well it works. Total speed
isn't always the goal - especially when there are many people fighting
for the CPUs - total throughput of jobs is also very important.
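
A quick back-of-the-envelope makes the throughput point concrete. Python
sketch; the efficiency numbers are invented, plug in your own benchmarks:

    # Assumed parallel efficiencies from a hypothetical benchmark series.
    eff = {4: 0.80, 12: 0.30}
    base = 1.0  # ns/day for one job on one CPU (made-up unit rate)

    # Option A: three 12-CPU jobs timesharing 12 CPUs (each gets ~1/3).
    a = 3 * (base * 12 * eff[12]) / 3.0
    # Option B: three independent 4-CPU jobs, one per group of 4 CPUs.
    b = 3 * (base * 4 * eff[4])

    print("3x12CPU shared:   %.1f ns/day total" % a)  # 3.6
    print("3x4CPU dedicated: %.1f ns/day total" % b)  # 9.6

With numbers anywhere near those, the three small jobs win on total
throughput by a wide margin, exactly as Anton guessed.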

> > Paulo: remember too, scaling depends on the ratio of CPU speed to
> > network speed, not just the raw network speed. I'd suspect P3-450s would
> > scale way better on fast Ethernet than 2.2 GHz Xeons.
> 
> Right! 
> You could also use e.g. Amber, which scales better than Gromacs, for
> a similar reason: Amber's single-CPU performance is about a factor of
> ten lower than Gromacs's!

haha! very true.

Doesn't make any sense, until you start measuring how much it costs to run
faster nodes in electricity and cooling. Dual Athlons are coming in around
110-140 W; put up those 1 GHz EDEN boards from VIA at 6 W and I think your
FLOPS per Watt is going to win, not to mention floorspace (the boards are
17 cm square! no fans on the CPU required). But furthermore, because it's a
bit slower of a CPU, your networking gear will carry you much further. Of
course things will be slower than the same number of 2 GHz Xeons, but if
you are seeing only 80% scaling efficiency with those at 48 CPUs, the
C3/EDENs at 1 GHz should scale considerably better. (You may even be able
to get away with using GbE instead of Myrinet or Scali, which means you
can buy more nodes, which means more speed, and on it goes...)
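
For the power argument, here's a toy FLOPS-per-Watt comparison (the wattages
are the ones above; the relative speed number for the EDEN is a pure guess):

    # (name, watts per board, relative MD speed per board)
    boards = [
        ("dual Athlon",   125.0, 1.00),  # call the 110-140 W range 125 W
        ("VIA EDEN 1GHz",   6.0, 0.10),  # assume ~1/10th the speed - a guess
    ]
    for name, watts, speed in boards:
        print("%-14s %6.1f W   %.4f speed/W" % (name, watts, speed / watts))

Even if the EDEN is only a tenth the speed of the dual Athlon board, that
works out to roughly twice the work per Watt (0.0167 vs 0.0080).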

Comparing 2 GHz Xeon scaling performance against the posted d.dppc times on
the website isn't really fair either - those benchmarks were run on much
older CPUs in the 300-800 MHz range IIRC, and it's much easier to scale
slower CPUs efficiently to higher numbers of CPUs.
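
To put a number on that intuition, a toy efficiency model - per-step time is
a compute share that shrinks with CPU count and speed, plus a fixed
communication cost the network imposes (all constants invented):

    def efficiency(n, cpu_speed, comm_cost=1.0, work=100.0):
        # Fraction of ideal speedup actually achieved on n CPUs.
        t_parallel = work / (n * cpu_speed) + comm_cost
        t_serial = work / cpu_speed
        return t_serial / (n * t_parallel)

    for speed, label in [(10.0, "Xeon-2GHz-ish"), (2.0, "P3-450-ish")]:
        print(label, ["%.0f%%" % (100 * efficiency(n, speed))
                      for n in (4, 12, 48)])

The fast CPU drops to ~17% efficiency at 48 CPUs while the slow one is still
around 51%, because the slow CPU spends proportionally more of each step
computing and less waiting on the wire.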

/kc


> 
> -- 
> Greetings,
> 
> Anton
>  ________ ___________________________________________________________
> |        | Anton Feenstra                                            |
> | .      | Dept. of Pharmacochemistry - Vrije Universiteit Amsterdam |
> | |----  | De Boelelaan 1083 - 1081 HV Amsterdam - The Netherlands   |
> | |----  | Tel: +31 20 44 47608 - Fax: +31 20 44 47610               |
> | ' __   | Feenstra at chem.vu.nl - http://www.chem.vu.nl/afdelingen/FAR|
> |  /  \  |-----------------------------------------------------------|
> | (    ) | Dept. of Biophysical Chemistry - University of Groningen  |
> |  \__/  | Nijenborgh 4 - 9747 AG Groningen - The Netherlands        |
> |   __   | Tel +31 50 363 4327 - Fax +31 50 363 4800                 |
> |  /  \  | K.A.Feenstra at chem.rug.nl - http://md.chem.rug.nl/~anton   |
> | (    ) |-----------------------------------------------------------|
> |  \__/  | "If You See Me Getting High, Knock Me Down"               |
> |        | (Red Hot Chili Peppers)                                   |
> |________|___________________________________________________________|

-- 
Ken Chase, math at velocet.ca  *  Velocet Communications Inc.  *  Toronto, CANADA 


