[gmx-users] forcefield and run in parallel

Erik Lindahl lindahl at stanford.edu
Tue Sep 3 19:24:35 CEST 2002

David wrote:
> On Mon, 2002-09-02 at 21:23, Rui Qiao wrote:
>>	I followed the approach David suggested and I think Gromacs is
>>using the new parameters in the simulation. Thanks for all the responses!
>>	I am now trying to run Gromacs in parallel and somehow the speedup
>>is not significant. The basic information is the following:
>>	# of atoms: ~5000 (around 1300 waters)
>>	Forces: PME, 4th-order interpolation, FFT grid spacing: 0.11 nm,
>>		vdW cut-off: 1.1 nm
>>	The performance is (on P-III Platinum):
>>	1 node :		23.8h/ns
>>	4 nodes:		13.9h/ns
>>	6 nodes: 		12.8h/ns
>>	8 nodes: 		11.5h/ns
> to be honest, it doesn't get any better than this for small systems,
> unless you have a high-speed network or an IBM SP2. Erik and I plan to
> work on it, if we get over our need for a couple of hours of sleep a day...
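For reference, the scaling implied by the timings quoted above can be worked out directly. This is a quick sketch added for illustration, not part of the original thread:

```python
# Parallel speedup and efficiency from the quoted timings
# (hours per nanosecond on 1, 4, 6, and 8 nodes).
timings = {1: 23.8, 4: 13.9, 6: 12.8, 8: 11.5}

base = timings[1]
for nodes, hours in sorted(timings.items()):
    speedup = base / hours          # relative to the single-node run
    efficiency = speedup / nodes    # fraction of ideal linear scaling
    print(f"{nodes} nodes: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
```

On 8 nodes this gives roughly a 2x speedup, i.e. about 26% parallel efficiency, which is why the poster sees diminishing returns.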

There is one way to improve it slightly: the problematic part is the 
grid communication, but if you increase the interpolation order 
(pme_order in the mdp file) to 6 you should be able to increase the grid 
spacing (fourier_spacing) by almost 50%. A smaller grid will communicate 
slightly better!
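Concretely, that change might look like the following in the .mdp file. This is an illustrative sketch; the spacing value is an assumption (roughly 50% coarser than the 0.11 nm quoted above), not a number from the thread:

```
; PME with a higher interpolation order and a coarser FFT grid,
; trading a little extra per-node work for less grid communication
coulombtype      = PME
pme_order        = 6       ; up from the 4th-order interpolation quoted above
fourier_spacing  = 0.16    ; ~50% coarser than 0.11 nm (illustrative value)
rvdw             = 1.1     ; vdW cut-off from the original setup
```

Higher-order interpolation spreads each charge over more grid points, so the grid can be coarser for the same accuracy, and a coarser grid means fewer data to exchange between nodes.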
