[gmx-users] GROMACS-4.6.3 CUDA version on multiple nodes each having 2 GPUs
Szilárd Páll
pall.szilard at gmail.com
Thu Nov 14 14:43:00 CET 2013
Hi Jignesh,
I can't tell what the issue is; you need to be more specific than
"fails" and "none of them worked." Please provide the exact command
line, the stderr output, and the log files, as otherwise we can't
tell what error you are actually getting.
Previously you seemed to hint that you had inhomogeneous hardware
(i.e. nodes with different CPU/GPU setups), but now you're saying that
all nodes are the same, in which case it should all work just fine
with default settings!
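For what it's worth, on homogeneous nodes with two GPUs each, a typical launch would start two PP ranks per node and map them to the node-local GPU IDs. This is a sketch only: it assumes an Open MPI-style mpirun, and the binary name `mdrun_mpi` and the input name `topol` are placeholders.

```shell
# Sketch for 2 nodes x 2 GPUs each (assumes Open MPI mpirun;
# "mdrun_mpi" and "topol" are placeholder names).
# 4 MPI ranks total, 2 per node -> one rank per GPU.
mpirun -np 4 -npernode 2 mdrun_mpi -deffnm topol -gpu_id 01 -ntomp 6
# -gpu_id 01 is interpreted per node: the two ranks on each node
# are assigned that node's GPUs 0 and 1, so identical IDs across
# nodes are not a problem.
# -ntomp 6 splits the 12 cores per node between the 2 ranks.
```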
Cheers,
--
Szilárd
On Wed, Nov 13, 2013 at 7:55 PM, Prajapati, Jigneshkumar Dahyabhai
<j.prajapati at jacobs-university.de> wrote:
> Hello,
>
> I am trying to run MPI-, OpenMP- and CUDA-enabled GROMACS 4.6.3 on nodes that each have 12 cores (2 CPUs) and 2 GPUs (Tesla M2090). The problem is that when I launch a job, GROMACS uses only the GPUs on the first node it comes across and fails to use the GPUs on the other nodes.
>
> The command I used for two GPU-enabled nodes was:
>
> mpirun -np 2 mdrun -v -deffnm $configfile
>
> I tried many other options but none of them worked. One thing to note here is that on all the nodes the GPUs have IDs 0 and 1, so the -gpu_id option also didn't work.
>
> This old thread gave me some ideas, but I didn't understand it completely:
> http://lists.gromacs.org/pipermail/gmx-users/2013-March/079802.html
>
> Please suggest possible solutions for this issue.
>
> Thank you
> --Jignesh
> --
> gmx-users mailing list gmx-users at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users