[gmx-users] problem with openmpi and gromacs4.0
gmx3 at hotmail.com
Thu Nov 6 20:33:30 CET 2008
Older tpr versions are compatible with 4.0.
For large systems, making a new tpr file with the 4.0 grompp
will improve the performance.
From: larsson at xray.bmc.uu.se
To: gmx-users at gromacs.org
Subject: Re: [gmx-users] problem with openmpi and gromacs4.0
Date: Thu, 6 Nov 2008 17:56:28 +0100
On Nov 6, 2008, at 15:04, hui sun wrote:

Dear all,

Recently we installed gromacs4.0 with the openmpi parallel library. When we started a simulation with np=1, the simulation finished normally. But when the same tpr file was run with np=n (n>1), I got the following error message:

MPI_ABORT invoked on rank 0 in communicator MPI_COMM_WORLD with errorcode -1

The command I used is:

mpirun -np 32 -hostfile mpd.hosts mdrun_mpi -s large.tpr -o large.trr -c large.gro -g large.log

The content of the mpd.hosts file is the following:

node1 slots=8
.
.
.
nodex slots=8

It must be noted that my tpr file was produced with gromacs 3.3.3. Is this related to the error?
Most probably you need to run the same version of grompp as mdrun.
(Another tip is to use the option "-deffnm large" instead of -s -o -c -g with large names.)
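The two suggestions above can be sketched as the following commands. This is only an illustration: the .mdp, .gro, and .top filenames are assumptions for the example, not taken from the original post.

```shell
# Regenerate the run input with the 4.0 grompp so it matches mdrun_mpi 4.0
# (large.mdp, large.gro, large.top are hypothetical input names)
grompp -f large.mdp -c large.gro -p large.top -o large.tpr

# Run in parallel; "-deffnm large" replaces the separate -s/-o/-c/-g options
mpirun -np 32 -hostfile mpd.hosts mdrun_mpi -deffnm large
```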
I don't know what to do now or how to overcome this. Could you help me? Thanks in advance!

Hui Sun
_______________________________________________
gmx-users mailing list gmx-users at gromacs.org
Please search the archive at http://www.gromacs.org/search before posting!
Please don't post (un)subscribe requests to the list. Use the
www interface or send it to gmx-users-request at gromacs.org.
Can't post? Read http://www.gromacs.org/mailing_lists/users.php