[gmx-users] Regarding scaling factor of cluster and installation in cluster
Mark Abraham
Mark.Abraham at anu.edu.au
Thu Jun 14 05:40:29 CEST 2007
naga raju wrote:
>
> Dear gmx users,
> I have a problem regarding the scaling factor of my cluster. I have
> given the cluster specifications and installation procedure below; I
> request you to go through my query and suggest what to do.
>
> Here is the system specifications:
> Master node: Intel Pentium 4, 3.0 GHz, 64-bit with HT, 800 MHz FSB, 2 MB L2
> Slave nodes: Intel Xeon, 3.0 GHz with HT, 800 MHz FSB, 2 MB L2
> OS: Red Hat Enterprise Linux 4.0, 64-bit
>
> I downloaded fftw-3.0.1.tar.gz and gromacs-3.3.1.tar.gz from the GROMACS
> website.
> For parallel installation, I used the following commands (on the master
> node, as root).
>
> For the FFTW installation:
> ./configure --enable-float --enable-sse --enable-mpi
> make
> make install
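For what it's worth, GROMACS 3.x with FFTW 3.x only needs the serial
single-precision library, because GROMACS does its own MPI communication;
IIRC FFTW 3.0.x doesn't ship MPI transforms at all, so --enable-mpi there
is a no-op. A minimal sketch, with /opt/fftw as an example prefix (not
something from your post):

./configure --enable-float --enable-sse --prefix=/opt/fftw
make
make install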
>
> For the GROMACS installation:
> ./configure --enable-mpi
> make
> make mdrun
> make install
> make install-mdrun
> make links
> It was installed in /opt/gromacs/, and I didn't get any error messages.
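The GROMACS 3.3 installation instructions suggest doing this as two
builds: the serial tools first, then a separate MPI mdrun with a program
suffix so the parallel binary gets a distinct name. A sketch, assuming
FFTW was installed under /opt/fftw as in the example above:

export CPPFLAGS=-I/opt/fftw/include
export LDFLAGS=-L/opt/fftw/lib
./configure --prefix=/opt/gromacs
make
make install
make distclean
./configure --prefix=/opt/gromacs --enable-mpi --program-suffix=_mpi
make mdrun
make install-mdrun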
>
> To run GROMACS in parallel, I used the following commands:
> grompp -f equilibrium.mdp -n index.ndx -p dopc.top -c dopc.pdb -np 5 -o
> dopc-eq.tpr
> mpirun -np 5 -machinefile gmxguest -nolocal /opt/gromacs/bin/mdrunmpi
> -s dopc-eq.tpr -c dopc-eq.pdb -o dopc-eq.trr
> (Note: the gmxguest file contains five node names.) I checked all 5
> nodes; the job is running on all of them.
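With a program suffix like the one above, the run step would look
something like this (mdrun_mpi is an assumed name, and IIRC in the 3.x
series mdrun also wants -np to match what you gave grompp):

grompp -f equilibrium.mdp -n index.ndx -p dopc.top -c dopc.pdb -np 5 -o dopc-eq.tpr
mpirun -np 5 -machinefile gmxguest /opt/gromacs/bin/mdrun_mpi -np 5 \
    -s dopc-eq.tpr -c dopc-eq.pdb -o dopc-eq.trr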
>
> The cluster took 22 hours to finish the job, but the same job over the
> same time scale on a single Intel Pentium 4, 3.0 GHz, 64-bit machine
> running Red Hat Linux took 26 hours.
> My question is: why is this happening? Is there a problem with the
> installation, or does GROMACS not support this type of cluster? What do
> I have to do to improve the scaling factor of the cluster?
If you look at the bottom section of your log file, you'll see GROMACS
report the fraction of its time spent in each part of the calculation.
It's also going to be less painful to test this stuff over two
processors, rather than five, and over a short simulation, not a long one.
IIRC "./configure --enable-mpi" won't install
"/opt/gromacs/bin/mdrunmpi" either in that location or with that name, so
I suggest you look at config.log in the build directory and
report your actual configure line, not what you think you did :-)
Otherwise, thanks for providing the details you did.
You should test your cluster with some simple parallel code in which you
can see the speedup, and report what hardware is providing your
interconnects.
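A classic MPI ping-pong is enough to see whether the interconnect itself
is the bottleneck; here's a minimal sketch in plain C (generic MPI,
nothing GROMACS-specific):

#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Bounce a small buffer between ranks 0 and 1 and report the average
 * round-trip time, as a rough measure of interconnect latency. */
int main(int argc, char **argv)
{
    int rank, i, nreps = 1000;
    char buf[1024];
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    memset(buf, 0, sizeof(buf));

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < nreps; i++) {
        if (rank == 0) {
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     &status);
        } else if (rank == 1) {
            MPI_Recv(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     &status);
            MPI_Send(buf, sizeof(buf), MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round trip: %g us\n", (t1 - t0) / nreps * 1e6);

    MPI_Finalize();
    return 0;
}

Build it with mpicc -o pingpong pingpong.c and run it with
mpirun -np 2 -machinefile gmxguest ./pingpong. Round-trip times much over
100 microseconds usually mean commodity Ethernet, which by itself goes a
long way toward explaining poor GROMACS scaling.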
Mark