[gmx-users] Running GROMACS in parallel

Anna Marabotti anna.marabotti at isa.cnr.it
Thu Nov 9 11:38:02 CET 2006


Hi folks,
I need help running GROMACS in parallel on a cluster with x86_64
architecture, running the BioBrew 4.1.2-1 roll (GROMACS is one of the
programs included in the package). After launching lamboot (with no
problems) on 5 nodes of this machine (named lilligridgiga and
lilligridgiga2 through lilligridgiga5), each with 2 CPUs, I am trying to
run the final full MD of the speptide tutorial in parallel. I therefore ran
grompp with -np 10 (all other settings left as in the tutorial), and then
made several attempts to launch mdrun. Here is a brief summary of my
attempts and the error messages I received:
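
(For reference, the setup commands were roughly as follows; the boot schema
file name is arbitrary and the structure/topology names are placeholders,
since I left everything as in the tutorial:

$ cat lamhosts                  # LAM boot schema, one line per node
lilligridgiga cpu=2
lilligridgiga2 cpu=2
lilligridgiga3 cpu=2
lilligridgiga4 cpu=2
lilligridgiga5 cpu=2
$ lamboot -v lamhosts           # boots LAM on all five nodes (10 CPUs)
$ grompp -np 10 -f full.mdp -c <structure>.gro -p <topology>.top -o full.tpr
)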

1) As indicated in the section "Running GROMACS in parallel" of the manual:
$ mpirun -p lilligridgiga,lilligridgiga2,lilligridgiga3,lilligridgiga4,lilligridgiga5 2 mdrun -v -s full etc.
(if I understand correctly, this should run mdrun on the lilligridgiga
nodes, two processes on each)
ERROR MESSAGE: mpirun (locate_aschema): 2: No such file or directory
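
(For what it's worth, LAM's mpirun seems to have parsed the trailing "2" as
the name of an application schema file, hence "locate_aschema"; the
-p host-list syntax shown in the manual apparently belongs to a different
MPI implementation. If I read the LAM/MPI man page correctly, LAM addresses
nodes by the identifiers from the boot schema instead, so the LAM
equivalent would be something like:

$ mpirun C mdrun -v -s full etc.      # C = one process on every booted CPU (10 here)
$ mpirun n0-4 mdrun -v -s full etc.   # or one process on each of nodes n0..n4

the first of which is essentially what I try next.)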

2) As suggested in a previous post in the gmx-users archive:
$ mpirun C mdrun -v -s full etc. (as in the tutorial) -g flog >& full.job &
ERROR MESSAGES:
From file flog.log: Fatal error: run input file full.tpr was made for 10
nodes, while mdrun expected it to be for 1 nodes.
From file full.job: It seems that [at least] one of the processes that was
started with mpirun did not invoke MPI_INIT before quitting (it is possible
that more than one process did not invoke MPI_INIT -- mpirun was only
notified of the first one, which was on node n0). mpirun can *only* be used
with MPI programs (i.e., programs that invoke MPI_INIT and MPI_FINALIZE).
You can use the "lamexec" program to run non-MPI programs over the lambooted
nodes.
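
(Taking these two messages together, I wonder whether the mdrun binary
shipped with BioBrew was compiled with MPI support at all: a serial binary
would never call MPI_INIT, and it would always expect a 1-node .tpr file. A
quick check, and if necessary a rebuild, might look like the following;
this is only a sketch assuming a standard GROMACS 3.x source build, and the
_mpi suffix is just a convention:

$ ldd `which mdrun` | grep -i lam    # an MPI build should link the LAM libraries
$ ./configure --enable-mpi --program-suffix=_mpi    # in the GROMACS source tree
$ make mdrun && make install-mdrun

and then launch mpirun C mdrun_mpi -np 10 -v -s full etc.)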

3) As suggested in another previous post in the gmx-users archive:
$ mpirun -np 10 mdrun -v -s full etc. (as in the tutorial) -g flog1 >& full1.job &
ERROR MESSAGE:
From file flog1.log: Fatal error: run input file full.tpr was made for 10
nodes, while mdrun expected it to be for 1 nodes.
The file full1.job was saved in multiple copies (#full1.job.1#, etc.), so I
am not able to see its contents.

4) As suggested in yet another previous post in the gmx-users archive:
$ mpirun -np 10 mdrun -np 10 -v -s full etc. (as in the tutorial) -g flog2 >& full2.job &
ERROR MESSAGE:
Same as in 2)

5) Last attempt, following the suggestion in the error message from 2):
$ lamexec -np 10 mdrun -np 10 -v -s full etc. (as in the tutorial) -g flog3 >& full3.job &
ERROR MESSAGE:
From file flog3.log: no error messages
From file full3.job: Fatal error: run input file full.tpr was made for 10
nodes, while mdrun expected it to be for 1 nodes.
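
(As I understand it, lamexec starts non-MPI programs, so this presumably
launched 10 independent serial copies of mdrun, each of which then rejected
the 10-node full.tpr. As a sanity check I could regenerate a single-node
input and confirm that mdrun at least runs serially; apart from full.mdp,
the input names below are placeholders:

$ lamnodes    # confirm all five nodes (10 CPUs) are actually booted
$ grompp -np 1 -f full.mdp -c <structure>.gro -p <topology>.top -o full_serial.tpr
$ mdrun -v -s full_serial
)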

I really don't understand why mdrun is not running on 10 processors. As you
can see, I have searched the gmx-users archive for anything that could help
me, but without success. Could anybody give me a suggestion? For clarity, I
am attaching the flog.log file and the full.job file (renamed full.txt).
Many thanks and regards
Anna
______________________________________________
Anna Marabotti, Ph.D.
Laboratorio di Bioinformatica e Biologia Computazionale
Istituto di Scienze dell'Alimentazione, CNR
Via Roma 52 A/C
83100 Avellino (Italy)
Tel: +39 0825 299651
Fax: +39 0825 299813
Skype: annam1972
E-mail: amarabotti at isa.cnr.it
Web page: http://bioinformatica.isa.cnr.it/anna.htm
____________________________________________________
"If you think you are too small to make a difference, try sleeping with a
mosquito"
-------------- next part --------------
Attachments (scrubbed by the list archive):
flog.log (application/octet-stream, 1906 bytes):
<http://maillist.sys.kth.se/pipermail/gromacs.org_gmx-users/attachments/20061109/5e211cf8/attachment.obj>
full.txt:
<http://maillist.sys.kth.se/pipermail/gromacs.org_gmx-users/attachments/20061109/5e211cf8/attachment.txt>

