[gmx-users] Gromacs on Stampede

Arun Sharma arunsharma_cnu at yahoo.com
Sun Oct 13 00:30:28 CEST 2013


Hello,

I have a question about running GROMACS utilities on Stampede and hopefully someone can point me in the right direction. I compiled GROMACS using the instructions in this thread and mdrun works fine. Some utilities like g_energy and g_analyze (single-core utilities, I believe) also seem to be working fine.

I am interested in computing the lifetime of hydrogen bonds, and this calculation is quite expensive. Is there a way to submit this as a job using 32 or more cores? When I run g_hbond on my workstation (16 cores) it runs on 16 threads by default. However, I am not sure it is a good idea to run it on a Stampede login node without submitting it as a job.

I noticed that g_hbond is parallelized with OpenMP, while GROMACS was compiled for MPI according to these instructions. I am curious whether that is the reason, and whether there is a suitable workaround for this problem.
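My understanding is that OpenMP is shared-memory parallelism, so unlike MPI it cannot span nodes; a single 16-core Stampede node would then be the most g_hbond can use, but it could still go through the batch system rather than run on a login node. A minimal sketch of the kind of job script I have in mind (the path, group numbers, file names, and time limit below are placeholders):

#!/bin/bash
#SBATCH -J hbond
#SBATCH -o hbond.%j.out
#SBATCH -p normal
#SBATCH -N 1               # OpenMP cannot use more than one node
#SBATCH -n 16              # all 16 cores of that node
#SBATCH -t 04:00:00
#SBATCH -A TG-XXXXXX

export PATH=/path/to/gromacs-4.6.1/exec/bin:$PATH
export OMP_NUM_THREADS=16  # standard OpenMP control of the thread count

# group selections are piped in; -ac and -life write the hydrogen-bond
# autocorrelation and lifetime output
echo "1 1" | g_hbond -f md.xtc -s md.tpr -ac hbac.xvg -life hblife.xvg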

As always, help is greatly appreciated. 
Thanks,




On Friday, October 11, 2013 5:31 AM, Arun Sharma <arunsharma_cnu at yahoo.com> wrote:
 
Dear Chris,

Thank you so much for providing the scripts and such detailed instructions. I was trying to load the gromacs module that is already available and was unable to get it to run. 

Thanks to you, I now have a working gromacs installation.




On Thursday, October 10, 2013 2:59 PM, Christopher Neale <chris.neale at mail.utoronto.ca> wrote:

Dear Arun:

Here is how I compile FFTW and GROMACS on Stampede.
I have also included a job script and a script to submit a chain of jobs.
As Szilárd notes, this does not use the MICs, but it is still a rather fast machine.

# Compilation for single precision gromacs plus mdrun_mpi
#
####################################################################
# Compile fftw on stampede (after unpacking fftw-3.3.3.tar.gz):
cd fftw-3.3.3
mkdir exec
export FFTW_LOCATION=$(pwd)/exec
module purge
module load intel/13.0.2.146
export CC=icc
export CXX=icpc
./configure --enable-float --enable-threads --prefix=${FFTW_LOCATION} --enable-sse2
make -j4
make -j4 install
cd ../
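
# Optional sanity check: --enable-float should have installed the
# single-precision library and header under ${FFTW_LOCATION}:
ls ${FFTW_LOCATION}/lib/libfftw3f.a ${FFTW_LOCATION}/include/fftw3.h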

####################################################################
# Compile gromacs 4.6.1 on stampede:

cd gromacs-4.6.1
mkdir source
mv * source   # mv warns that it cannot move source into itself; that warning is harmless
mkdir exec
cd exec

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
# fftw-3.3.3 is a sibling of gromacs-4.6.1, and we are now in gromacs-4.6.1/exec,
# so go up two levels:
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec
export CXX=icpc
export CC=icc
cmake ../source/ \
      -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
      -DCMAKE_INSTALL_PREFIX=$(pwd) \
      -DGMX_X11=OFF \
      -DCMAKE_CXX_COMPILER=${CXX} \
      -DCMAKE_C_COMPILER=${CC} \
      -DGMX_PREFER_STATIC_LIBS=ON \
      -DGMX_MPI=OFF
make -j4
make -j4 install

cd ../
mkdir exec2
cd exec2

module purge
module load intel/13.0.2.146
module load cmake/2.8.9
module load mvapich2/1.9a2
export FFTW_LOCATION=$(pwd)/../../fftw-3.3.3/exec   # again two levels up, from gromacs-4.6.1/exec2
export CXX=mpicxx
export CC=mpicc
cmake ../source/ \
      -DCMAKE_PREFIX_PATH=$FFTW_LOCATION \
      -DCMAKE_INSTALL_PREFIX=$(pwd) \
      -DGMX_X11=OFF \
      -DCMAKE_CXX_COMPILER=${CXX} \
      -DCMAKE_C_COMPILER=${CC} \
      -DGMX_PREFER_STATIC_LIBS=ON \
      -DGMX_MPI=ON
make -j4 mdrun
make -j4 install-mdrun

cp bin/mdrun_mpi ../exec/bin
cd ../
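
# At this point exec/bin holds the serial tools plus mdrun_mpi.
# Optional quick check (launch mdrun_mpi itself only through ibrun
# inside a job):
ls exec/bin | grep -E 'mdrun|g_hbond'
exec/bin/mdrun -version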


####################################################################
####################################################################
####################################################################

# Here is a script that you can submit to run gromacs on stampede:
# Set SBATCH -A according to your allocation
# Set SBATCH -N to number of nodes
# Set SBATCH -n to number of nodes x 16 (= number of CPU cores)
# Set PATH and GMXLIB according to your compilation of gromacs
# Remove the -notunepme option if you are willing to let mdrun's automatic PME tuning (new in 4.6) adjust settings at run time

#!/bin/bash
#SBATCH -J test          # Job name
#SBATCH -o myjob.%j.out  # Name of stdout output file (%j expands to jobId)
#SBATCH -p normal        # Queue name
#SBATCH -N 7             # Total number of nodes requested (16 cores/node)
#SBATCH -n 112           # Total number of MPI tasks requested
#SBATCH -t 48:00:00      # Run time (hh:mm:ss)

#SBATCH -A TG-XXXXXX     # Allocation name to charge job against

export PATH=/home1/02417/cneale/exe/gromacs-4.6.1/exec/bin:$PATH
export GMXLIB=/home1/02417/cneale/exe/gromacs-4.6.1/exec/share/gromacs/top


# Run grompp beforehand (once) to create md3.tpr, e.g.:
# grompp -f md.mdp -p new.top -c crashframe.gro -o md3.tpr -r restr.gro

ibrun mdrun_mpi -notunepme -deffnm md3 -dlb yes -npme 16 -cpt 60 -cpi md3.cpt -nsteps 5000000000 -maxh 47.9 -noappend

# keep a timestamped backup of the checkpoint in case the next run corrupts it
cp md3.cpt backup_md3_$(date | sed "s/ /_/g").cpt

####################################################################
# submit the above script like this:

sbatch script.sh
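
# sbatch reports the job id on its last line of output, e.g.:
#   Submitted batch job 123456
# which is what the chain script below extracts with awk '{print $NF}'.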

####################################################################
# or create a chain of jobs like this:

N=8               # total number of jobs in the chain
script=stamp.sh
# submit the first job only if a chain is not already in progress
if [ ! -e last_job_in_chain ]; then
  id=$(sbatch ${script} | tail -n 1 | awk '{print $NF}')
  echo $id > last_job_in_chain
  let "N--"
fi
id=$(cat last_job_in_chain)
# each subsequent job starts only after the previous one ends (afterany)
for((i=1;i<=N;i++)); do
  id=$(sbatch -d afterany:${id} ${script} | tail -n 1 | awk '{print $NF}')
  echo $id > last_job_in_chain
done
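
# To start a fresh chain once the old one has finished or been
# cancelled, remove the bookkeeping file first:
rm -f last_job_in_chain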

