[gmx-users] parallel problem
David
spoel at xray.bmc.uu.se
Wed Sep 3 23:55:01 CEST 2003
On Thu, 2003-09-04 at 02:48, Osmany Guirola Cruz wrote:
> hi
> What do you think about putting this option into the LAM/MPI configure,
> tcp-short=524288, to use 512 kB?
> Is that right?
Yes, that is fine. 512 kB divided by twelve bytes per coordinate (three
reals) means that you can send coordinates for over 43000 atoms in one
step.
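(As a quick check, assuming single-precision reals of 4 bytes each:
524288 bytes / (3 * 4 bytes per atom) = 43690 atoms per message.)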
>
>
> David wrote:
> > On Thu, 2003-09-04 at 01:52, Osmany Guirola Cruz wrote:
> >
> > > No, I have dual PIII 933 MHz machines coupled by a TCP/IP network.
> > >
> > > It is 100 Mbit/s.
> > >
> > > My cluster has a switch; I have 32 dual machines in a sub-net, and
> > > only one machine (the PBS server) is on my network.
> > >
> > > I am running a simulation with 9500 water molecules (SOL) and a
> > > 129-residue protein.
> > >
> > > No, I have not run the gromacs benchmarks. HOW COULD I DO IT?
> > >
> >
> > Download them from gromacs.org...
> > I have done the test with a switched 100 Mbit/s network with dual 800
> > MHz P3s, up to 10 nodes (i.e. 20 CPUs).
> >
> >
> > > I forgot something: my simulations with a cut-off run faster than with PME.
> > >
> >
> > To use the dual processors efficiently you have to select another LAM
> > option (rpi=usysv or rpi=sysv).
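> > Depending on your LAM version this is either a configure-time choice
> > (e.g. --with-rpi=usysv when building LAM) or a runtime setting (e.g.
> > mpirun -ssi rpi usysv ...); check the documentation of your LAM
> > release for the exact form.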
> >
> > Now the real problem, performance-wise, is PME. In the current 3.1.4
> > version PME does not behave well at all in parallel. On a Scali network
> > I use at most 4 dual Xeon nodes for my runs, which have 30000 waters.
> > Since your system is smaller, performance will be even worse. Note that
> > the gromacs scaling benchmark is done with a (twin-range) cut-off rather
> > than PME. If you can live with a cut-off (and after all, the GROMOS96
> > force field was developed for use with a cut-off) you may be able to
> > scale to somewhat more processors:
> > nstlist = 5
> > rlist = 0.9
> > rcoulomb = 1.4
> > rvdw = 1.4
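> > In .mdp terms that corresponds to something like the fragment below
> > (coulombtype and vdwtype are not part of the list above, so take this
> > as a sketch of the intended cut-off setup):
> > coulombtype = Cut-off
> > vdwtype     = Cut-off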
> >
> > See how far you can go with that. Furthermore, you want to control how
> > PBS/LAM allocates your processors. In principle the communication follows
> > a ring topology, so if you have two dual-processor nodes
> > N0-p0, N0-p1, N1-p0, N1-p1
> > you want the jobs to be allocated in this order (to use the shared
> > memory communication) rather than
> > N0-p0, N1-p0, N0-p1, N1-p1
> >
> > In the first example two of the four communications use shared memory,
> > in the other example none of them do.
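> > With LAM you can steer this through the boot schema you pass to
> > lamboot, something like (the hostnames are only placeholders):
> > node00 cpu=2
> > node01 cpu=2
> > "mpirun C ..." then fills the CPUs in schema order, i.e.
> > N0-p0, N0-p1, N1-p0, N1-p1.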
> >
> > > I really need help; I have 32 machines and only use one for my
> > > simulations :-(
> > >
> > > David wrote:
> > >
> > > > On Wed, 2003-09-03 at 22:16, Osmany Guirola Cruz wrote:
> > > >
> > > >
> > > > > This is not the first time I have asked this question: how do I get
> > > > > gromacs to work well with LAM on my Linux cluster? A simulation on one
> > > > > machine is faster than on two machines. My last step was to compile the
> > > > > LAM/MPI source with the option tcp-short=524288 (512 kB), and nothing changed.
> > > > >
> > > > > PLEASE HELP MEEEEEEEEEEEEEEEEEEEEEE
> > > > >
> > > > >
> > > >
> > > > I'll just assume you have single-processor machines coupled by a TCP/IP
> > > > network, is that correct?
> > > >
> > > > Is it 10 Mbit/s, 100 Mbit/s or better?
> > > >
> > > > Do you have a switch between the machines or a hub?
> > > >
> > > > How large is the system you want to simulate?
> > > >
> > > > Did you try to reproduce the gromacs benchmarks?
> > > >
--
Regards, David.
________________________________________________________________________
Dr. David van der Spoel, Dept. of Cell and Molecular Biology
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: 46 18 471 4205 fax: 46 18 511 755
spoel at xray.bmc.uu.se spoel at gromacs.org http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++