[gmx-users] parallel run problems--please help

Jee Eun Rim jrim at stanford.edu
Tue Feb 10 02:40:01 CET 2004


Hello,

I have installed GROMACS on an SGI Origin 3800 and tried running the gmxdemo tutorial.
The serial version runs without problems, but when I modify the script to run the parallel version with mpirun, I get the following errors:

input:
grompp_d -np 1 -f em -c ${MOL}_b4em -p ${MOL} -o ${MOL}_em >& ! output.grompp_em
mpirun -np 1 mdrun_mpi -np 1 -nice 4 -s ${MOL}_em -o ${MOL}_em -c ${MOL}_b4pr -v >& ! output.mdrun_em

output:
MPI: MPI_COMM_WORLD rank 0 has terminated without calling MPI_Finalize()
MPI: aborting job

Also, if I increase the number of processes to 4, I get the following error:

input:
grompp_d -np 4 -f pr -c ${MOL}_b4pr -r ${MOL}_b4pr -p ${MOL} -o ${MOL}_pr >& ! output.grompp_pr
mpirun -np 4 mdrun_mpi -np 4 -nice 4 -s ${MOL}_pr -o ${MOL}_pr -c ${MOL}_b4md -v >& ! output.mdrun_pr

output:
Fatal error: run input file cpeptide_pr.tpr was made for 4 nodes,
                 while mdrun_mpi expected it to be for 1 nodes.
Fatal error: run input file cpeptide_pr.tpr was made for 4 nodes,
                 while mdrun_mpi expected it to be for 1 nodes.
Fatal error: run input file cpeptide_pr.tpr was made for 4 nodes,
                 while mdrun_mpi expected it to be for 1 nodes.
Fatal error: run input file cpeptide_pr.tpr was made for 4 nodes,
                 while mdrun_mpi expected it to be for 1 nodes.

This happens even though grompp_d was run with the option -np 4.

Does this mean that GROMACS was not compiled or installed correctly? For what it's worth, GROMACS was compiled against MPICH, with --enable-shared. Any help would be greatly appreciated.
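
In case it helps to narrow things down, here is roughly how I plan to check that the mdrun_mpi binary is actually linked against MPI and that all three -np values agree (a rough sketch; whether ldd is available on IRIX and the exact MPICH library name are guesses on my part):

# check that mdrun_mpi links against the MPICH shared library
ldd `which mdrun_mpi` | grep -i mpi

# make sure grompp, mpirun and mdrun all get the same process count
set NP = 4
grompp_d -np $NP -f pr -c ${MOL}_b4pr -r ${MOL}_b4pr -p ${MOL} -o ${MOL}_pr
mpirun -np $NP mdrun_mpi -np $NP -nice 4 -s ${MOL}_pr -o ${MOL}_pr -c ${MOL}_b4md -v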

Thanks a lot.

Jee.