[gmx-users] 1replica/1cpu problem

francesco oteri francesco.oteri at gmail.com
Thu Jul 19 10:24:45 CEST 2012


Sorry for the multiple emails, but every time I tried to send the mail I
received a message like this:

"The message's content type was not explicitly allowed. Please send
your messages as plain text only. See
http://www.gromacs.org/Support/Mailing_Lists"

So I kept retrying until no such bounce message came back to me.


Concerning the bug fixes, I performed the following steps:

git clone git://git.gromacs.org/gromacs.git  gromacs-4.5.5-patches
cd gromacs-4.5.5-patches
git checkout --track -b release-4-5-patches origin/release-4-5-patches
git pull
./bootstrap
./configure --prefix=/ibpc/etna/oteri/PKG/gromacs/4.5.5/patched \
  --without-x --enable-mpi --program-suffix=_mpi --enable-all-static \
  CFLAGS="-I/ibpc/etna/oteri/PKG/openmpi/1.4.5/include -I/ibpc/etna/oteri/PKG/fftw/3.3.1/gcc/include" \
  LDFLAGS="-L/ibpc/etna/oteri/PKG/fftw/3.3.1/gcc/lib -L/ibpc/etna/oteri/PKG/openmpi/1.4.5/lib" \
  LIBSUFFIX=_mpi
make mdrun
make install-mdrun
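
For reference, one quick way to check exactly which fixes went into the
build is to inspect the checked-out branch before compiling; a minimal
sketch, assuming the clone above:

cd gromacs-4.5.5-patches
# Show the branch in use and the most recent commits that will be built
git branch
git log --oneline -5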


2012/7/19 Mark Abraham <Mark.Abraham at anu.edu.au>:
> On 19/07/2012 12:32 AM, francesco oteri wrote:
>>
>> Dear gromacs users,
>> I am trying to run a replica exchange simulation using the files you find
>> in http://dl.dropbox.com/u/40545409/gmx_mailinglist/inputs.tgz
>>
>> The 4 replicas were generated as follows:
>> grompp -p rest2.top -c 03md.gro -n index.ndx -o rest2_0  -f rest2_0.mdp
>> grompp -p rest2.top -c 03md.gro -n index.ndx -o rest2_1  -f rest2_1.mdp
>> grompp -p rest2.top -c 03md.gro -n index.ndx -o rest2_2  -f rest2_2.mdp
>> grompp -p rest2.top -c 03md.gro -n index.ndx -o rest2_3  -f rest2_3.mdp
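>>
>> Equivalently, a short bash loop generates the same four input files; a
>> sketch assuming the file names above:
>>
>> # One grompp call per replica index
>> for i in 0 1 2 3; do
>>   grompp -p rest2.top -c 03md.gro -n index.ndx -o rest2_${i} -f rest2_${i}.mdp
>> done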
>>
>> The simulation was started with the following command, using GROMACS
>> 4.5.5 with the latest bug fixes:
>
>
> Which bug fix? How did you apply it?
>
> Mark
>
>
>> mpirun -np 4  mdrun_mpi -s rest2_.tpr -multi 4 -replex 1000 >& out1
>>
>> giving the following error:
>>
>> [etna:10799] *** An error occurred in MPI_comm_size
>> [etna:10799] *** on communicator MPI_COMM_WORLD
>> [etna:10799] *** MPI_ERR_COMM: invalid communicator
>> [etna:10799] *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
>> --------------------------------------------------------------------------
>> mpirun has exited due to process rank 0 with PID 10796 on
>> node etna exiting without calling "finalize". This may
>> have caused other processes in the application to be
>> terminated by signals sent by mpirun (as reported here).
>> --------------------------------------------------------------------------
>> [etna:10795] 3 more processes have sent help message
>> help-mpi-errors.txt / mpi_errors_are_fatal
>> [etna:10795] Set MCA parameter "orte_base_help_aggregate" to 0 to see
>> all help / error messages
>>
>>
>> The curious thing is that the same error does not appear either if I
>> use the plain 4.5.5 without applying the patches:
>>
>> mpirun -np 4  mdrun_mpi -s rest2_.tpr -multi 4 -replex 1000 >& out2
>>
>> or if I use the bug-fixed version with multiple processors per replica:
>>
>> mpirun -np 8  mdrun_mpi -s rest2_.tpr -multi 4 -replex 1000 >& out3
>>
>> Since I have to use more than 4 replicas, I need to run 1 CPU per replica.
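>>
>> For illustration, the goal is a command of this form, with one MPI rank
>> per replica (the count of 16 replicas, and the corresponding
>> rest2_0.tpr through rest2_15.tpr inputs, are hypothetical here):
>>
>> mpirun -np 16 mdrun_mpi -s rest2_.tpr -multi 16 -replex 1000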
>>
>> Does anyone have any idea about the problem?
>>
>> Francesco



-- 
Kind regards, Dr. Oteri Francesco


