[gmx-users] Compile and Run on Xsede

Johnny Lu johnny.lu128 at gmail.com
Tue Sep 23 18:45:26 CEST 2014


Hi.

On Stampede at xsede.org <https://portal.xsede.org/tacc-stampede>, I
compiled CMake, and then GROMACS 4.6.7 both with and without GPU support,
using the following cmake configuration:

without gpu:
module load mkl
module load cuda/6.0
module load mvapich2
export MKLROOT=$TACC_MKL_DIR
export MKL_TARGET_ARCH=em64t
export CC=icc
export CXX=icc
/home1/02630/jlu128/software/cmake-3.0.2/bin/cmake .. \
  -DGMX_FFT_LIBRARY=mkl \
  -DCMAKE_INSTALL_PREFIX=/home1/02630/jlu128/software/gromacs-4.6.7

After compiling, I typed "module list", which gave:
login2.stampede(161)$ module list

Currently Loaded Modules:
  1) TACC-paths   2) Linux   3) cluster-paths   4) intel/13.0.2.146
  5) mvapich2/1.9a2   6) xalt/0.4.0   7) cluster   8) TACC   9) cuda/5.5

with gpu:
same as above, except that -DGMX_GPU=ON is added to the cmake command.
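
In other words, the two builds were configured roughly like this (my
paraphrase, with separate build directories assumed; only the -DGMX_GPU
flag differs):

CPU-only:
/home1/02630/jlu128/software/cmake-3.0.2/bin/cmake .. \
  -DGMX_FFT_LIBRARY=mkl -DGMX_GPU=OFF \
  -DCMAKE_INSTALL_PREFIX=/home1/02630/jlu128/software/gromacs-4.6.7

GPU:
/home1/02630/jlu128/software/cmake-3.0.2/bin/cmake .. \
  -DGMX_FFT_LIBRARY=mkl -DGMX_GPU=ON \
  -DCMAKE_INSTALL_PREFIX=/home1/02630/jlu128/software/gromacs-4.6.7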

But when I run the GROMACS that was compiled without GPU support, I get the
following job output (the long first line is the $LD_LIBRARY_PATH echoed by
the job script below):
/opt/apps/intel13/mvapich2/1.9/lib:/opt/apps/intel13/mvapich2/1.9/lib/shared:/opt/apps/intel/13/composer_xe_2013.2.146/tbb/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:/opt/intel/mic/coi/host-linux-release/lib:/opt/intel/mic/myo/lib:/opt/apps/intel/13/composer_xe_2013.2.146/mpirt/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/ipp/../compiler/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/ipp/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/tbb/lib/intel64:/opt/apps/xsede/gsi-openssh-5.7/lib64:/opt/apps/xsede/gsi-openssh-5.7/lib64

Lmod has detected the following error:
The following module(s) are unknown: "cuda/6.0"

   Please check the spelling or version number. Also try "module spider ..."

/opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64:
./mdrun: error while loading shared libraries: libiomp5.so: cannot open
shared object file: No such file or directory
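
(My guess is that the Intel OpenMP runtime directory is simply not on
LD_LIBRARY_PATH inside the job. If that is right, loading the same intel
module that was used at compile time, or exporting the directory by hand,
before ./mdrun should make libiomp5.so visible, roughly:

  module load intel/13.0.2.146
  # or, using a directory from the path dump above:
  export LD_LIBRARY_PATH=/opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:$LD_LIBRARY_PATH

though I have not checked which directory actually contains libiomp5.so on
Stampede.)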

Error output when I run the GROMACS that was compiled with GPU support:
/opt/apps/intel13/mvapich2/1.9/lib:/opt/apps/intel13/mvapich2/1.9/lib/shared:/opt/apps/intel/13/composer_xe_2013.2.146/tbb/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:/opt/intel/mic/coi/host-linux-release/lib:/opt/intel/mic/myo/lib:/opt/apps/intel/13/composer_xe_2013.2.146/mpirt/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/ipp/../compiler/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/ipp/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64:/opt/apps/intel/13/composer_xe_2013.2.146/tbb/lib/intel64:/opt/apps/xsede/gsi-openssh-5.7/lib64:/opt/apps/xsede/gsi-openssh-5.7/lib64

Lmod has detected the following error:
The following module(s) are unknown: "cuda/6.0"

   Please check the spelling or version number. Also try "module spider ..."


The following have been reloaded with a version change:
  1) intel/13.0.2.146 => intel/14.0.1.106
  2) mvapich2/1.9a2 => mvapich2/2.0b

./mdrun: error while loading shared libraries: libcudart.so.6.0: cannot
open shared object file: No such file or directory
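
(The Lmod error itself points at the next step: cuda/6.0 apparently does not
exist as a module here, and "module list" on the login node shows cuda/5.5,
so I suppose the job should first check what is actually available and then
load the same CUDA version the GPU build was linked against, for example:

  module spider cuda
  module avail cuda
  module load cuda/5.5   # if this is the version the build really used

or the GPU build should be reconfigured against a CUDA version that the
module system does provide.)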

The job script that I used (I put it in the bin folder of GROMACS):
#!/bin/bash
#----------------------------------------------------
# Example SLURM job script to run hybrid applications
# (MPI/OpenMP or MPI/pthreads) on TACC's Stampede
# system.
#----------------------------------------------------
#SBATCH -J openmp_job     # Job name
#SBATCH -o openmp_job.o%j # Name of stdout output file (%j expands to jobId)
#SBATCH -e openmp_job.o%j # Name of stderr output file (%j expands to jobId)
#SBATCH -p serial         # Serial queue for serial and OpenMP jobs
#SBATCH -N 1              # Total number of nodes requested (16 cores/node)
#SBATCH -n 1              # Total number of MPI tasks requested
#SBATCH -t 00:04:00       # Run time (hh:mm:ss) - 4 minutes
# The next line is required if the user has more than one project
# #SBATCH -A A-yourproject  # <-- Allocation name to charge job against

# This example will run an OpenMP application using 16 threads

echo $LD_LIBRARY_PATH

# Set the number of threads per task (default = 1)
export OMP_NUM_THREADS=16

# Load the run-time modules and run the application
module load cuda/6.0
module load mvapich2
module load intel/14.0.1.106
./mdrun
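
If the underlying problem is just a mismatch between the build-time and
run-time environments, I suppose the module loads at the end of the script
should mirror the ones used for the build, e.g. (versions are my guess from
the module list above):

module load intel/13.0.2.146
module load mvapich2/1.9a2
module load cuda/5.5    # only needed for the GPU build
./mdrun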

How can I fix this?

Thanks in advance.

