[gmx-users] Re: problem with gromacs on cluster

Florian Haberl Florian.Haberl at chemie.uni-erlangen.de
Tue Jul 25 14:52:43 CEST 2006


hi,

On Tuesday 25 July 2006 14:34, David van der Spoel wrote:
> Mr. M.N. Manoj wrote:
> > Dear Sir/Madam
> >
> > We have been using gromacs 3.3 on our workstation successfully
> > Now we have purchased a Rocks 4.2 based cluster (dual xeon processor
> > with 5 nodes)
> > We have been able to compile Gromacs 3.3 with fftw 2.1.5 and MPI-LAM
> > 7.1.2 on front node, its working there.
Update to gmx 3.3.1 and fftw 3.1.1.

> > Then we  have copied the installation folder /usr/local/gromacs to each
> > node
Easiest is to use NFS, so no copying is needed: export /usr/local/gromacs from 
the front node and mount it on all compute nodes.
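For example (hostname and network range are assumptions, adapt them to your 
Rocks setup):

```shell
# On the front node: export the gromacs tree (append to /etc/exports)
echo '/usr/local/gromacs 10.0.0.0/255.0.0.0(ro,no_root_squash,sync)' >> /etc/exports
exportfs -ra

# On each compute node: mount it at the same path
mkdir -p /usr/local/gromacs
mount frontnode:/usr/local/gromacs /usr/local/gromacs
```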
> >
> > But when we run it, the performance is not better than one single node
> > (master) in terms of ps/day
> > Also from the ganglia it can be seen that hardly anything is happening
> > on compute nodes, though it says its running mdrun_mpi
> >
> > Now we have few questions:
> >
> > 1. What is the exact procedure for installing gromacs on the FULL
> > CLUSTER ?
> > 2. Do we have to install it (gmx and fftw2) on each node?
> > 3. Any other way ?
see above
>
> your installation is probably fine.
> you have to tell mpirun to use more than one node, try "man mpirun"
>
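With LAM 7.1.2 that boils down to booting the LAM daemons on all nodes and then 
starting one process per CPU. A sketch (the node names and the 5 x dual-Xeon = 
10 CPU layout are assumptions; note that with gmx 3.3 the .tpr file must be 
generated for the same -np you run with):

```shell
# Host file listing the front node and the Rocks compute nodes (names assumed)
cat > lamhosts <<'EOF'
frontnode cpu=2
compute-0-0 cpu=2
compute-0-1 cpu=2
compute-0-2 cpu=2
compute-0-3 cpu=2
EOF

lamboot lamhosts                                    # start LAM daemons on all nodes
grompp -np 10 -f md.mdp -c conf.gro -p topol.top    # build a .tpr for 10 processes
mpirun -np 10 mdrun_mpi -np 10 -s topol.tpr         # run across all 10 CPUs
lamhalt                                             # shut LAM down afterwards
```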
Next step is to get a queueing system such as PBS with the Maui scheduler for 
your cluster, which distributes the jobs across the nodes.
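A minimal PBS job script could look like this (job name, file names and the 
5 nodes x 2 CPUs layout are assumptions for your cluster):

```shell
#!/bin/sh
#PBS -N gmx-md
#PBS -l nodes=5:ppn=2
#PBS -j oe

cd $PBS_O_WORKDIR
lamboot $PBS_NODEFILE        # boot LAM on the nodes PBS granted this job
mpirun -np 10 mdrun_mpi -np 10 -s topol.tpr -deffnm md
lamhalt                      # clean up the LAM daemons when done
```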

>
> please ask further questions on the mailing list.

Greetings,

Florian

-- 
-------------------------------------------------------------------------------
 Florian Haberl                        
 Computer-Chemie-Centrum   
 Universitaet Erlangen/ Nuernberg
 Naegelsbachstr 25
 D-91052 Erlangen
 Telephone:  	+49 (0) 9131 - 85 26581
 Mailto: florian.haberl AT chemie.uni-erlangen.de
-------------------------------------------------------------------------------
