[gmx-users] GROMACS-4.6.3 CUDA version on multiple nodes each having 2 GPUs
Prajapati, Jigneshkumar Dahyabhai
j.prajapati at jacobs-university.de
Mon Nov 18 12:08:07 CET 2013
Hi Carsten,
Thanks for the reply. Everything works fine on a single node; the problem starts when I move to two nodes.
I have tried the option you mentioned earlier, and this is the error I got:
mpirun -np 4 mdrun
mismatching number of PP MPI processes and GPUs per node.
mdrun was started with 4 PP MPI processes per node, but only 2 GPUs were detected.
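I suspect all four MPI ranks may have been started on the first node, which would explain the "4 PP MPI processes per node" message. Assuming our MPI is Open MPI (other implementations use different flags, e.g. -ppn for MPICH/Intel MPI), I will next try pinning two ranks to each node explicitly:
mpirun -np 4 -npernode 2 mdrun -v -deffnm $configfile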
I have also tried many other options; on several occasions the job runs, but without using the GPUs on the second node. More specifically, when I use two nodes with two GPU cards each, GROMACS detects the GPUs on the first node only and fails to detect the cards on the second node (see the em.log file in the attachments). I am not sure what the reason is; as a next step I will check the rank placement and GPU visibility with the commands below. Please let me know if there is anything else I could try.
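To narrow this down, I will check which nodes the ranks land on and which GPUs each node can see, again assuming Open MPI's -npernode:
mpirun -np 2 -npernode 1 hostname
mpirun -np 2 -npernode 1 nvidia-smi -L
If the second command lists both Tesla M2090 cards on both nodes, the CUDA setup itself should be fine and the problem is likely in how the ranks are distributed.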
Thank you.
-Jignesh
________________________________________
From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se on behalf of Carsten Kutzner [ckutzne at gwdg.de]
Sent: Thursday, November 14, 2013 10:54 AM
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] GROMACS-4.6.3 CUDA version on multiple nodes each having 2 GPUs
Hi,
if you run on a single node with 2 GPUs, this command line should work:
mpirun -np 2 mdrun -v -deffnm $configfile
If you run on two nodes, try this:
mpirun -np 4 mdrun
Choosing -np equal to the total number of GPUs should work (although it might
not be the best option performance-wise).
For better performance you can try
mpirun -np 4 mdrun -gpu_id 0011 (on a single node)
or
mpirun -np 8 mdrun -gpu_id 0011 (on two nodes)
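On your 12-core, 2-GPU nodes the two-node case would then run 4 PP ranks per node, sharing each GPU between two ranks. As a sketch (assuming Open MPI, whose -npernode option spreads the ranks evenly across the nodes):
mpirun -np 8 -npernode 4 mdrun -ntomp 3 -gpu_id 0011 -v -deffnm $configfile
Here -ntomp 3 gives each of the four ranks per node 3 of the 12 cores, and -gpu_id is interpreted per node, so "0011" maps the four ranks on each node to GPUs 0 and 1.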
Carsten
On Nov 13, 2013, at 7:55 PM, "Prajapati, Jigneshkumar Dahyabhai" <j.prajapati at jacobs-university.de> wrote:
> Hello,
>
> I am trying to run an MPI-, OpenMP- and CUDA-enabled GROMACS 4.6.3 on nodes with 12 cores (2 CPUs) and 2 GPUs (Tesla M2090) each. The problem is that when I launch a job, GROMACS uses only the GPUs on the first node it comes across and fails to use the GPUs on the other nodes.
>
> The command I used for two GPU-enabled nodes was:
>
> mpirun -np 2 mdrun -v -deffnm $configfile
>
> I tried many other options, but none of them worked. One thing to note here is that on all the nodes the GPUs get ids 0 and 1, so the -gpu_id option also didn't work.
>
> This old thread gave me some ideas, but I didn't understand it completely.
> http://lists.gromacs.org/pipermail/gmx-users/2013-March/079802.html
>
> Please suggest possible solutions for this issue.
>
> Thank you
> --Jignesh
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics
Am Fassberg 11, 37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/grubmueller/kutzner
http://www.mpibpc.mpg.de/grubmueller/sppexa