[gmx-users] forcefield and run in parallel

Rui Qiao ruiqiao at ews.uiuc.edu
Mon Sep 2 21:23:32 CEST 2002

	I followed the approach David suggested and I think Gromacs is
using the new parameters in the simulation. Thanks for all the responses!
	I am now trying to run Gromacs in parallel and somehow the speedup
is not significant. The basic information is the following:
	# of atoms: ~5000 (around 1300 water molecules)
	Forces: PME, 4th-order interpolation, FFT grid spacing: 0.11 nm,
		vdW cut-off: 1.1 nm
	The performance is (on the P-III Platinum cluster):
	1 node :		23.8h/ns
	4 nodes:		13.9h/ns
	6 nodes: 		12.8h/ns
	8 nodes: 		11.5h/ns

	It seems that the performance does not scale well with
increasing node count beyond 4 nodes. I realize that I am using PME and
that my system is small, but is there a way to boost the performance?
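For reference, the timings above work out to the following speedups and parallel efficiencies (a quick sketch; the h/ns figures are taken directly from the table):

```python
# Speedup and parallel efficiency relative to the 1-node run.
base = 23.8  # 1-node cost in h/ns
timings = {4: 13.9, 6: 12.8, 8: 11.5}  # nodes -> h/ns

for nodes, t in timings.items():
    speedup = base / t           # how much faster than 1 node
    efficiency = speedup / nodes # fraction of ideal linear scaling
    print(f"{nodes} nodes: speedup {speedup:.2f}x, efficiency {efficiency:.0%}")
```

This prints speedups of about 1.71x, 1.86x, and 2.07x, i.e. parallel efficiency drops from roughly 43% on 4 nodes to 26% on 8, which quantifies the poor scaling described above.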

	While checking the log file, I found the following information:

	Total NODE time on node 0: 922.68
	Average NODE time: 153.78
	Load imbalance reduced performance to 600% of max

	My simulation system has about 1200 atoms that are frozen during
the simulation, and they are all allocated to node 0. Could this cause
load imbalance?
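The "600%" in the log line above is consistent with node 0 taking six times the average node time (a quick check using only the two numbers quoted from the log):

```python
# Ratio of node 0's time to the average node time, from the log excerpt above.
node0_time = 922.68  # "Total NODE time on node 0"
avg_time = 153.78    # "Average NODE time"

ratio = node0_time / avg_time
print(f"node 0 took {ratio:.0%} of the average node time")
```

This prints 600%, matching the load-imbalance message, and supports the suspicion that concentrating the frozen atoms on node 0 leaves that node with far more work than the others.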

Rui Qiao
Research Assistant
Beckman Institute, UIUC
