[gmx-users] Compile and Run on Xsede

Johnny Lu johnny.lu128 at gmail.com
Wed Sep 24 00:14:55 CEST 2014


It works now. I didn't know there were "gpudev" and "gpu" queues; you have to
use those to run CUDA programs.
There are 3 GPUs per node...

I loaded the correct Intel module, and then the build without CUDA ran fine.
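
For anyone who hits the same errors later, a job script along these lines is
roughly what worked for me. The queue name and module versions below are from
my setup and are only a sketch; the modules must match whatever was loaded
when mdrun was compiled (for me, intel/13.0.2.146 and cuda/5.5, per
"module list" at build time):

#!/bin/bash
#SBATCH -J gmx_gpu          # Job name
#SBATCH -o gmx_gpu.o%j      # Stdout file (%j expands to jobId)
#SBATCH -e gmx_gpu.e%j      # Stderr file
#SBATCH -p gpu              # GPU queue ("gpudev" for short test runs)
#SBATCH -N 1                # Total number of nodes
#SBATCH -n 1                # Total number of MPI tasks
#SBATCH -t 00:04:00         # Run time (hh:mm:ss)

# Load the SAME modules that were loaded when mdrun was compiled,
# so the loader can find matching libiomp5.so and libcudart.so.
module load intel/13.0.2.146
module load mvapich2/1.9a2
module load cuda/5.5
module load mkl

export OMP_NUM_THREADS=16
./mdrun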


On Tue, Sep 23, 2014 at 1:07 PM, Justin Lemkul <jalemkul at vt.edu> wrote:

>
>
> On 9/23/14 12:45 PM, Johnny Lu wrote:
>
>> Hi.
>>
>> On Stampede at xsede.org <https://portal.xsede.org/tacc-stampede>, I
>> compiled CMake and then GROMACS 4.6.7, with and without GPU support,
>> using the following cmake configuration:
>>
>> Without GPU support:
>> module load mkl
>> module load cuda/6.0
>> module load mvapich2
>> export MKLROOT=$TACC_MKL_DIR
>> export MKL_TARGET_ARCH=em64t
>> export CC=icc
>> export CXX=icc
>> /home1/02630/jlu128/software/cmake-3.0.2/bin/cmake .. \
>>   -DGMX_FFT_LIBRARY=mkl \
>>   -DGMX_GPU=OFF \
>>   -DCMAKE_INSTALL_PREFIX=/home1/02630/jlu128/software/gromacs-4.6.7
>>
>> After compiling, I typed "module list", which gave:
>> login2.stampede(161)$ module list
>>
>> Currently Loaded Modules:
>>   1) TACC-paths   2) Linux   3) cluster-paths   4) intel/13.0.2.146
>>   5) mvapich2/1.9a2   6) xalt/0.4.0   7) cluster   8) TACC   9) cuda/5.5
>>
>> With GPU support:
>> same as above, except with -DGMX_GPU=ON for cmake.
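>>
>> (To keep the two builds separate I used two build directories; a minimal
>> sketch, where the directory names and install paths are placeholders:)
>>
>> mkdir build-cpu && cd build-cpu
>> /home1/02630/jlu128/software/cmake-3.0.2/bin/cmake .. \
>>   -DGMX_FFT_LIBRARY=mkl -DGMX_GPU=OFF \
>>   -DCMAKE_INSTALL_PREFIX=$HOME/software/gromacs-4.6.7-cpu
>> make && make install
>>
>> cd .. && mkdir build-gpu && cd build-gpu
>> /home1/02630/jlu128/software/cmake-3.0.2/bin/cmake .. \
>>   -DGMX_FFT_LIBRARY=mkl -DGMX_GPU=ON \
>>   -DCMAKE_INSTALL_PREFIX=$HOME/software/gromacs-4.6.7-gpu
>> make && make install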
>>
>> But when I run the GROMACS build compiled without GPU support, I get the
>> following error (the first block is the $LD_LIBRARY_PATH echoed by the
>> job script):
>> /opt/apps/intel13/mvapich2/1.9/lib:/opt/apps/intel13/mvapich2/1.9/lib/shared:
>> /opt/apps/intel/13/composer_xe_2013.2.146/tbb/lib/intel64:
>> /opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:
>> /opt/intel/mic/coi/host-linux-release/lib:/opt/intel/mic/myo/lib:
>> /opt/apps/intel/13/composer_xe_2013.2.146/mpirt/lib/intel64:
>> /opt/apps/intel/13/composer_xe_2013.2.146/ipp/../compiler/lib/intel64:
>> /opt/apps/intel/13/composer_xe_2013.2.146/ipp/lib/intel64:
>> /opt/apps/intel/13/composer_xe_2013.2.146/compiler/lib/intel64:
>> /opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64:
>> /opt/apps/intel/13/composer_xe_2013.2.146/tbb/lib/intel64:
>> /opt/apps/xsede/gsi-openssh-5.7/lib64:/opt/apps/xsede/gsi-openssh-5.7/lib64
>>
>> Lmod has detected the following error:
>> The following module(s) are unknown: "cuda/6.0"
>>
>>     Please check the spelling or version number. Also try "module spider
>> ..."
>>
>> /opt/apps/intel/13/composer_xe_2013.2.146/mkl/lib/intel64:
>> ./mdrun: error while loading shared libraries: libiomp5.so: cannot open
>> shared object file: No such file or directory
>>
>> Error message when I run the GROMACS build compiled with GPU support:
>> (the same $LD_LIBRARY_PATH output as above)
>>
>> Lmod has detected the following error:
>> The following module(s) are unknown: "cuda/6.0"
>>
>>     Please check the spelling or version number. Also try "module spider
>> ..."
>>
>>
>> The following have been reloaded with a version change:
>>   1) intel/13.0.2.146 => intel/14.0.1.106
>>   2) mvapich2/1.9a2 => mvapich2/2.0b
>>
>> ./mdrun: error while loading shared libraries: libcudart.so.6.0: cannot
>> open shared object file: No such file or directory
>>
>> The job file that I used (I put it in the bin folder of GROMACS):
>> #!/bin/bash
>> #----------------------------------------------------
>> # Example SLURM job script to run hybrid applications
>> # (MPI/OpenMP or MPI/pthreads) on TACC's Stampede
>> # system.
>> #----------------------------------------------------
>> #SBATCH -J openmp_job     # Job name
>> #SBATCH -o openmp_job.o%j # Name of stdout output file (%j expands to jobId)
>> #SBATCH -e openmp_job.o%j # Name of stderr output file (%j expands to jobId)
>> #SBATCH -p serial         # Serial queue for serial and OpenMP jobs
>> #SBATCH -N 1              # Total number of nodes requested (16 cores/node)
>> #SBATCH -n 1              # Total number of MPI tasks requested
>> #SBATCH -t 00:04:00       # Run time (hh:mm:ss); here 4 minutes
>> # The next line is required if the user has more than one project
>> # #SBATCH -A A-yourproject  # <-- Allocation name to charge the job against
>>
>> # This example will run an OpenMP application using 16 threads
>>
>> # Set the number of threads per task (default = 1)
>> echo $LD_LIBRARY_PATH
>> export OMP_NUM_THREADS=16
>>
>> # Run the OpenMP application
>> module load cuda/6.0
>> module load mvapich2
>> module load intel/14.0.1.106
>> ./mdrun
>>
>> How to fix this?
>>
>>
> Your "module list" when you installed showed cuda/5.5 was loaded, but you
> try to load cuda/6.0 when you execute mdrun.  Start by compiling against,
> and running with, a consistent version.
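>
> For example, ldd shows which CUDA runtime a binary was linked against;
> given the error above, the output for your GPU build would presumably
> contain something like:
>
>   $ ldd ./mdrun | grep cudart
>           libcudart.so.6.0 => not found
>
> Load exactly that cuda module in the job script (or rebuild against the
> one you intend to load at run time).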
>
> -Justin
>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>
> Department of Pharmaceutical Sciences
> School of Pharmacy
> Health Sciences Facility II, Room 601
> University of Maryland, Baltimore
> 20 Penn St.
> Baltimore, MD 21201
>
> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
> http://mackerell.umaryland.edu/~jalemkul
>
> ==================================================

