[gmx-users] Regarding scaling factor of cluster and installation in cluster
naga raju
nagaraju_cy at yahoo.co.in
Thu Jun 14 05:28:21 CEST 2007
Dear gmx users,
I have a problem regarding the scaling factor of my cluster. Below I have given the cluster specifications and the installation procedure; I request you to go through my query and offer suggestions.
Here are the system specifications:
Master node: Intel Pentium 4, 3.0 GHz (64-bit, HT), 800 MHz FSB, 2 MB L2 cache
Slave nodes: Intel Xeon, 3.0 GHz (HT), 800 MHz FSB, 2 MB L2 cache
OS: Red Hat Enterprise Linux 4.0, 64-bit
I downloaded fftw-3.0.1.tar.gz and gromacs-3.3.1.tar.gz from the GROMACS website.
For the parallel installation, I used the following commands (on the master node, as root).
For the FFTW installation:
./configure --enable-float --enable-sse --enable-mpi
make
make install
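Since I did not pass --prefix to configure, FFTW's make install should have put the headers and libraries under the default /usr/local prefix. As far as I understand, the GROMACS configure step can then be pointed at them with the standard autoconf environment variables, roughly like this (the /usr/local paths are my assumption about where make install placed the files):
export CPPFLAGS=-I/usr/local/include
export LDFLAGS=-L/usr/local/lib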
For the GROMACS installation:
./configure --enable-mpi
make
make mdrun
make install
make install-mdrun
make links
It was installed in /opt/gromacs/.
I did not get any error messages.
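For reference, I have also seen suggestions to give the MPI build of mdrun its own name suffix so it cannot be confused with a serial binary. A sketch of that alternative, which I did not use above (the _mpi suffix is only an example name; --program-suffix is the standard autoconf option):
./configure --enable-mpi --program-suffix=_mpi
make mdrun
make install-mdrun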
To run GROMACS in parallel, I used the following commands:
grompp -f equilibrium.mdp -n index.ndx -p dopc.top -c dopc.pdb -np 5 -o dopc-eq.tpr
mpirun -np 5 -machinefile gmxguest -nolocal /opt/gromacs/bin/mdrunmpi -s dopc-eq.tpr -c dopc-eq.pdb -o dopc-eq.trr
(Note: the gmxguest file contains five node names.) I checked on all five nodes, and the job is running on all of them.
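To make sure the binary I am running is really linked against MPI, I suppose one can check its shared libraries, something like this (the grep pattern is only a rough filter):
ldd /opt/gromacs/bin/mdrunmpi | grep -i mpi
If nothing MPI-related shows up, it might simply mean that the MPI library was linked statically, so this is only a rough check.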
The cluster took 22 hours to finish the job, but the same job with the same time scale on a single Intel Pentium 4, 3.0 GHz, 64-bit machine running Red Hat Linux took 26 hours. That is a speed-up of only about 26/22 ≈ 1.2 on five nodes.
My question is: why is this happening? Is there a problem in the installation, or does GROMACS not support this type of cluster?
What do I have to do to improve the scaling factor of the cluster?
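In case it is relevant to the scaling question, I could also measure the network latency between the master and one of the slave nodes listed in gmxguest with something like this (node01 is only a placeholder for an actual node name):
ping -c 10 node01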
Any suggestion is appreciated.
Thank you in advance.
with regards,
Nagaraju Mulpuri.