[gmx-users] Launching MPI/OpenMP hybrid run
alex.bjorling
alex.bjorling at gmail.com
Wed Mar 5 09:25:08 CET 2014
Carsten Kutzner wrote
> On 04 Mar 2014, at 13:43, alex.bjorling <alex.bjorling@gmail.com> wrote:
>
>> Carsten Kutzner wrote
>>> On 04 Mar 2014, at 13:08, Alexander Björling <alex.bjorling@gmail.com> wrote:
>>>
>>>> Dear users,
>>>>
>>>> I'm trying to run simulations on a cluster of nodes, each sporting two
>>>> AMD Opteron 8-core CPUs. I would like to have one MPI process on each
>>>> CPU, with OpenMP threads on the 8 cores of each. I've compiled GROMACS
>>>> 5.0 with Intel MPI and OpenMP enabled.
>>>>
>>>> To get it running I'm just using two nodes, so 4 MPI processes. I've
>>>> been using various combinations of the type:
>>>>
>>>> mpirun -np 4 -perhost 2 gmx_mpi mdrun -ntomp 8 ...,
>>>>
>>>> but I never manage to get the MPI processes spread out over the two
>>>> nodes. Rather, in the example above, 4 MPI processes with 4 OpenMP
>>>> threads each run on the first node, and the second node does nothing.
>>> Did you provide a hostfile / machinefile containing the names of the
>>> nodes you want the MPI processes to be started on?
>>
>> No, I just asked SLURM for two nodes, the way I would if I were running
>> MPI-only jobs.
> Then SLURM is not positioning your MPI processes correctly.
> This problem should be independent of Gromacs.
You're right. One has to specifically ask SLURM to distribute the processes
(--ntasks-per-socket=1). I recompiled Gromacs with Open MPI (instead of
Intel MPI) and am almost there. With

mpirun -npersocket 1 -bysocket -cpus-per-proc 8 gmx_mpi mdrun -ntomp 8 ...,

I get two processes per node, but unfortunately both run on the same socket,
each at ~300%. Obviously this is an Open MPI issue, not a Gromacs one.
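
For reference, here is roughly the batch script I am converging on. This is
only a sketch: the SLURM options and the Open MPI 1.6-style binding flags
below are what I expect to give one rank per socket with 8 OpenMP threads
each, but the exact flags depend on the SLURM and Open MPI versions installed,
and the trailing "..." stands for the usual mdrun input/output options.

#!/bin/bash
#SBATCH --nodes=2                # two nodes
#SBATCH --ntasks-per-node=2      # two MPI ranks per node (one per socket)
#SBATCH --ntasks-per-socket=1    # make SLURM spread the ranks over both sockets
#SBATCH --cpus-per-task=8        # reserve one socket's 8 cores per rank for OpenMP

export OMP_NUM_THREADS=8

# Open MPI 1.6-style binding: one rank per socket, each bound to 8 cores.
# (Newer Open MPI versions would use --map-by ppr:1:socket:pe=8 --bind-to core.)
mpirun -np 4 -npersocket 1 -bysocket -cpus-per-proc 8 -bind-to-core \
       gmx_mpi mdrun -ntomp 8 ...

The idea is to let SLURM handle placement across nodes and sockets, and to
let Open MPI handle the core binding within each socket.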
Thanks for your response,
Alex