[gmx-users] Horrendous scaling

David van der Spoel spoel at xray.bmc.uu.se
Wed Jun 25 17:18:00 CEST 2003


On Tue, 2003-06-24 at 16:33, Senthil Kandasamy wrote:
> I guess this topic has been beaten to death, but here it is again.
> My machines are dual-processor Athlon 2600+ nodes connected by ethernet.
> The system is a reasonably large lipid bilayer with proteins (~40,000
> atoms) with PME electrostatics. I use the fft_optimize and fft_order=4
> settings together with the shuffle and sort options (sketched below).
> 
> 
> 1 processor		: in 1 hour walltime, 13.3 ps of simulation
> 2 processors (1 node)	: in 1 hour walltime, 17.7 ps of simulation
> 4 processors (2 nodes)	: in 1 hour walltime,  8.7 ps of simulation!!
> 
> I can live with running on two processors if I can improve scaling by a
> little bit more.
> 
> I will try to increase the fft_order to 6 and see what happens.
> 
> Also, these machines are part of a super cluster using mpich-1.2.5.
> Are there any compilation options for mpich and fftw that would improve
> performance? I do see a lot of suggestions for LAM, but none for mpich.
> 
> 
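
For reference, a minimal sketch of the settings referred to above, assuming
the GROMACS 3.x names (optimize_fft and pme_order in the .mdp file, plus the
-shuffle, -sort and -np flags to grompp); the file names are just placeholders:

  ; .mdp excerpt
  coulombtype    = PME
  pme_order      = 4      ; interpolation order; 6 is the higher order mentioned above
  fourierspacing = 0.12   ; grid spacing in nm, example value only
  optimize_fft   = yes    ; let mdrun tune the FFTW plans

  # grompp call that shuffles and sorts the atoms over 4 nodes
  grompp -f run.mdp -c conf.gro -p topol.top -np 4 -shuffle -sort -o topol.tpr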
It looks like MPICH is not using shared memory for communication within a
dual-processor node; check whether you can find out some details about
that. We have actually seen considerable superscaling on two processors
(though not yet with PME). I would estimate that with well-tuned LAM you
could get up to a factor of 1.6, i.e. more than 21 ps/hour, for such a
system.
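
As a rough pointer (the exact switches depend on how your MPICH was built):
MPICH-1 with the ch_p4 device only uses shared memory between processes on
the same node if it was configured for it. If I remember correctly the
relevant option is -comm=shared, roughly along these lines:

  # rebuild MPICH 1.2.5 so that intra-node communication goes through
  # shared memory instead of TCP (ch_p4 device; option name from memory)
  ./configure --with-device=ch_p4 -comm=shared
  make && make install

  # the size of the shared segment can be raised through the environment;
  # 16 MB here is just an arbitrary example value
  export P4_GLOBMEMSIZE=16777216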

> Thanks.
> 
> Senthil
-- 
Groeten, David.
________________________________________________________________________
Dr. David van der Spoel, 	Dept. of Cell & Mol. Biology
Husargatan 3, Box 596,  	75124 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


