[gmx-users] Problems with installing mpi-version of Gromacs on Cluster with Debian-Lenny
Christian Mücksch
muecksch at rhrk.uni-kl.de
Thu Jun 24 16:04:48 CEST 2010
Dear All,
I've been trying to compile Gromacs (version 4.0.7) and get it working on
a cluster running Debian Lenny.
I set the following variables:
export SOFT=$HOME/GROMACS
export PATH="$PATH":$SOFT/bin
export LDFLAGS="-L$SOFT/lib"
export CPPFLAGS="-I$SOFT/include"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SOFT/lib
export MPICC=$SOFT/bin/mpicc
and then did
./configure --prefix=$SOFT --disable-float --program-suffix=_mpi_d \
  --enable-mpi
after I had installed the Gromacs version without MPI. Before configuring
Gromacs I compiled the latest version of Open MPI with gcc-4.3.
I did exactly the same on another cluster that runs with Debian Etch and
everything worked fine and was pretty straightforward.
On this cluster, however, I get the following error during the configure step:
checking whether your compiler can handle assembly files (*.s)... no
configure: error: Upgrade your compiler (or disable assembly loops)
The exact error from the config.log looks like this:
configure:32822: checking whether your compiler can handle assembly
files (*.s)
configure:32841: /usr/bin/mpicc -O3 -fomit-frame-pointer
-finline-functions -Wall -Wno-unused -funroll-all-loops -c conftestasm.s
conftestasm.s: Assembler messages:
conftestasm.s:2: Error: bad register name `%rsp'
configure:32844: $? = 1
configure:32857: result: no
configure:32859: error: Upgrade your compiler (or disable assembly loops).
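Since %rsp only exists in 64-bit mode, my guess is that the mpicc wrapper
(configure picked up /usr/bin/mpicc rather than the one I exported as
MPICC) drives a 32-bit compiler or assembler. A quick check along these
lines should show what the wrapper actually produces (the test file name
is just a placeholder):

# print the target triple of the compiler behind the mpicc wrapper
/usr/bin/mpicc -dumpmachine
# build a trivial program and check whether the binary is 32- or 64-bit
echo 'int main(void){return 0;}' > conftest-arch.c
/usr/bin/mpicc conftest-arch.c -o conftest-arch
file conftest-arch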
When I compiled Gromacs with the --disable-x86-64-sse option instead, my
submitted jobs ran extremely slowly compared to the other cluster.
Although MPI is running and all CPU loads are at 100%, the speed is
nearly as slow as running on a single CPU.
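For completeness, the configure line for that slow build was essentially
the one above with the SSE loops switched off:

./configure --prefix=$SOFT --disable-float --program-suffix=_mpi_d \
  --enable-mpi --disable-x86-64-sse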
I also tried compiling with MVAPICH, which is already installed on the
cluster, but I get the same error.
Unfortunately I could not compile Open MPI with the following flags:
export CC="gcc-4.3 -m64"
export CXX="g++-4.3 -m64"
export F77="gfortran-4.3 -m64"
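With those variables set, the rest of the build was roughly the standard
sequence (using my $SOFT prefix again; the source directory depends on
the Open MPI version):

cd openmpi-<version>
./configure --prefix=$SOFT
make all install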
Do you have any idea what the cause of the problem could be, so that I
can tell the cluster admin which specific package to install?
Any help would be deeply appreciated since I'm new to this whole MPI
topic and I could not find a way around this problem.
Thanks a lot,
Christian