[gmx-users] paralleling Gromacs and using GPU

Hooman Vahidi hooman.vahidi at yahoo.com
Sat Nov 29 14:41:39 CET 2014


Dear all,

I own a computing system with a TYAN S8230 dual-processor motherboard and two AMD Opteron 6238 (12-core) CPUs; in other words, my system has 24 CPU cores in total. Its graphics card is a GTX TITAN, and the installed operating system is Ubuntu. I am a beginner at parallelizing Gromacs and using the GPU, and I would like to ask the questions listed below.

Gromacs was installed using CMake with the following command:

    cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=on
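For completeness, the full build sequence I followed was roughly the one below (typed from memory; the version number in the archive name is only illustrative):

    tar xfz gromacs-5.0.tar.gz
    cd gromacs-5.0
    mkdir build
    cd build
    cmake .. -DGMX_BUILD_OWN_FFTW=ON -DREGRESSIONTEST_DOWNLOAD=ON -DGMX_MPI=on -DGMX_GPU=on
    make -j 24
    sudo make install

My questions: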
   - My system has two CPUs. Is each CPU considered a node, or is their combination, i.e. a single 24-core machine, considered one node?
   - When I run the simulation using the following command, all 24 cores are taken up, but only 24% to 30% of the GPU. For the sake of increasing computing speed, is it possible to increase the GPU usage, say to 100%? If yes, how is this possible?
     mdrun_mpi -s md.tpr -v -deffnm md
   - When carrying out a simulation, is it possible to set the number of CPU cores and customize how the GPU is used for the run? If yes, how is this possible? (The first sketch after this list shows the kind of command I have in mind.)
   - Is it possible to carry out two different simulations in two separate terminals and allocate a specific share of the CPU and GPU to each run? If yes, how is this possible? (See the second sketch after this list.)
   - Given my system's specifications, what commands would carry out a simulation at the highest speed?
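To make questions 2 and 3 concrete, the variation below is the kind of command I have in mind. It is only a sketch pieced together from mdrun options I have seen mentioned (-ntomp, -gpu_id), and I am not sure it is correct for my MPI build:

    # 4 MPI ranks with 6 OpenMP threads each (4 x 6 = 24 cores);
    # the -gpu_id string "0000" maps all 4 ranks to GPU id 0
    mpirun -np 4 mdrun_mpi -ntomp 6 -gpu_id 0000 -s md.tpr -v -deffnm md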
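For question 4, I imagine something along the lines below, splitting the 24 cores between two runs with -pin/-pinoffset and letting both runs share the single GPU. Again, this is only a guess, and md1.tpr/md2.tpr are placeholder input names:

    # terminal 1: one rank with 12 threads pinned to cores 0-11, on GPU 0
    mpirun -np 1 mdrun_mpi -ntomp 12 -pin on -pinoffset 0 -gpu_id 0 -s md1.tpr -deffnm md1

    # terminal 2: one rank with 12 threads pinned to cores 12-23, on GPU 0
    mpirun -np 1 mdrun_mpi -ntomp 12 -pin on -pinoffset 12 -gpu_id 0 -s md2.tpr -deffnm md2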
Thanks for the time you took to read my questions. I would really appreciate it if you could kindly help me.

