[gmx-users] energy minimization on a cluster
DEEPESH AGARWAL
deepesh.iitd at gmail.com
Thu Jul 10 10:36:13 CEST 2008
Dear all,
My simulated system specifications:
- protein (394 residues) in a cubic box
- water layer: 0.9 nm
- total number of atoms: ~113,000
- default density: 1007 g/l
I am running energy minimization with the steepest-descent method on an
Intel Core 2 Duo processor. It is taking a very long time just for the
minimization: for instance, 2700 steps took almost 40 hours. Is that
normal?
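In case it matters, my em1.mdp is essentially a standard steepest-descent
setup along the lines below; the exact cutoffs and tolerances shown here
are illustrative rather than my actual values:

integrator  = steep    ; steepest-descent minimization
nsteps      = 5000     ; maximum number of minimization steps
emtol       = 100.0    ; stop when the maximum force is below 100 kJ/mol/nm
emstep      = 0.01     ; initial step size, nm
nstlist     = 10
ns_type     = grid
rlist       = 0.9
coulombtype = PME
rcoulomb    = 0.9
rvdw        = 0.9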
I tried to run it on a cluster with the following command:
$ grompp_mpi_d -f em1.mdp -c Protein_wb.gro -p Protein.top -np 4 -o input1.tpr
This command divided the atoms over 4 nodes, but when I ran mdrun it gave
the error shown below:
$ mpirun -np 4 mdrun_mpi_d -np 4 -s input1.tpr -o Protein-em_wb \
    -c Protein-min_wb1.gro -e Protein_ener -v
Program mdrun_mpi_d, VERSION 3.3.2
Source code file: futil.c, line: 313
File input/output error:
md.log
-------------------------------------------------------
"Player Sleeps With the Fishes" (Ein Bekanntes Spiel Von ID Software)
Halting program mdrun_mpi_d
gcq#236: "Player Sleeps With the Fishes" (Ein Bekanntes Spiel Von ID Software)
[0] MPI Abort by user Aborting program !
[0] Aborting program!
p0_24572: p4_error: : -1
p4_error: latest msg from perror: No such file or directory
-----------------------------------------------------------------------------
It seems that [at least] one of the processes that was started with
mpirun did not invoke MPI_INIT before quitting (it is possible that
more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0).
mpirun can *only* be used with MPI programs (i.e., programs that
invoke MPI_INIT and MPI_FINALIZE). You can use the "lamexec" program
to run non-MPI programs over the lambooted nodes.
-----------------------------------------------------------------------------
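For reference, the complete sequence of commands I am attempting is roughly
the following; the /scratch path and the explicit -g log-file name are
placeholders added here for illustration only, not my actual settings:

$ cd /scratch/deepesh/em_run        # working directory (placeholder path)
$ grompp_mpi_d -np 4 -f em1.mdp -c Protein_wb.gro -p Protein.top -o input1.tpr
$ mpirun -np 4 mdrun_mpi_d -np 4 -s input1.tpr -o Protein-em_wb \
    -c Protein-min_wb1.gro -e Protein_ener -g Protein_em.log -v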
Could anybody please give me an idea of what is going wrong, and how to run
energy minimization on a cluster (or any other way to reduce the time
drastically)? Thanks in advance.
Deepesh