[gmx-users] Problems with installing mpi-version of Gromacs
Pinchas Aped
aped at mail.biu.ac.il
Tue Jul 6 10:51:58 CEST 2010
A 2nd attempt, hope this time it will go thru...
Dear Mark:
Thanks a lot for the quick response.
On Mon, 28 Jun 2010, Mark Abraham wrote:
>
>
> ----- Original Message -----
> From: Pinchas Aped <aped at mail.biu.ac.il>
> Date: Monday, June 28, 2010 3:43
> Subject: [gmx-users] Problems with installing mpi-version of Gromacs
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>
>
>
>
> > Dear All:
> >
> > I have installed Gromacs on a Linux (RedHat) cluster, using the
> > following csh script -
> >
> > -------------------------------------------------------------------------------------
> > #! /bin/csh
> >
> > set DIR=/private/gnss/Gromacs
> > setenv SOFT ${DIR}/software
> > setenv CPPFLAGS "-I$SOFT/include"
> > setenv LDFLAGS "-L$SOFT/lib"
> > setenv NCPU 4
> > setenv PATH "$PATH":$SOFT/bin
> > cd openmpi-1.2.8; ./configure --prefix=$SOFT; make -j$NCPU; make install
> > cd ../fftw-3.1.3; ./configure --prefix=$SOFT --enable-float; make -j $NCPU; make install
> > cd ../gsl-1.11; ./configure --prefix=$SOFT; make -j $NCPU; make install
> > cd ../gromacs-4.0.7; ./configure --prefix=$SOFT --with-gsl; make -j $NCPU; make install
> > make distclean; ./configure --prefix=$SOFT --program-suffix=_mpi --enable-mpi --with-gsl; make mdrun -j $NCPU; make install-mdrun
> > -------------------------------------------------------------------------------------
> >
> > It seemed to have worked OK, and we could run Gromacs on a single processor.
> >
> > When I tried to create a parallel version with the script -
> >
> > -------------------------------------------------------------------------------------
> > #! /bin/csh
> >
> > set DIR=/private/gnss/Gromacs
> > setenv SOFT ${DIR}/software
> > setenv CPPFLAGS "-I$SOFT/include"
> > setenv LDFLAGS "-L$SOFT/lib"
> > setenv NCPU 4
> > setenv PATH "$PATH":$SOFT/bin
> >
> > cd gromacs-4.0.7; ./configure --prefix=$SOFT --with-gsl --enable-mpi; make -j $NCPU mdrun; make install-mdrun
> > -------------------------------------------------------------------------------------
> >
> > - the installation log ended with -
> >
> > .........
> > checking whether the MPI cc command works... configure: error: Cannot compile and link MPI code with mpicc
> > make: *** No rule to make target `mdrun'. Stop.
> > make: *** No rule to make target `install-mdrun'. Stop.
> >
> > I can't figure out from this message what is wrong or missing with my MPI.
>
> It looks like you included the scripts the wrong way around, or something. Both scripts should build MPI-enabled mdrun, with the second not naming it with _mpi. See http://www.gromacs.org/index.php?title=Download_%26_Installation/Installation_Instructions for the usual procedure.
>
This is the site from which I took the above script content.
>
> You can inspect the last 100 or so lines of config.log to see the actual error.
>
The second installation log file (with MPI) is only 28 lines long, and
the lines I have quoted are the first that seem to indicate a problem.
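For reference, the relevant part of config.log (written by configure in the
gromacs-4.0.7 build directory) could be inspected with something like -

-------------------------------------------------------------------------------------
# show the end of the configure log, where the failing mpicc test is recorded
cd gromacs-4.0.7
tail -n 100 config.log
# or jump straight to the lines mentioning the MPI compiler wrapper
grep -n mpicc config.log
-------------------------------------------------------------------------------------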
>
> The issue has probably got nothing to do with GROMACS. Try compiling and running some MPI test program to establish this.
>
I will.
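For example, a minimal check along these lines (a sketch only, with a
hypothetical test file hello_mpi.c, assuming the Open MPI wrappers mpicc and
mpirun installed under $SOFT/bin are first in the PATH) should compile and run
without errors before retrying the GROMACS configure -

-------------------------------------------------------------------------------------
#! /bin/csh

set DIR=/private/gnss/Gromacs
setenv SOFT ${DIR}/software
setenv PATH ${SOFT}/bin:"$PATH"

# write a trivial MPI "hello world" program
cat > hello_mpi.c << EOF
#include <mpi.h>
#include <stdio.h>
int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
EOF

# compile with the same wrapper configure complains about, then run on 4 cores
mpicc hello_mpi.c -o hello_mpi
mpirun -np 4 ./hello_mpi
-------------------------------------------------------------------------------------

If this step already fails, the problem is in the MPI installation (or in the
environment that configure sees) rather than in GROMACS itself.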
By the way, our cluster has 8-core nodes (2 quad-core CPUs each). Is it
possible to run Gromacs in "shared memory" parallel mode, thus avoiding
the need for MPI?
>
> Mark
>
------------------------------------------------------------------------
Dr. Pinchas Aped               Tel.:   (+972-3) 531-7683
Department of Chemistry        FAX:    (+972-3) 738-4053
Bar-Ilan University            E-Mail: aped at mail.biu.ac.il
52900 Ramat-Gan, ISRAEL        WWW:    http://www.biu.ac.il/~aped
------------------------------------------------------------------------