[gmx-users] Launching MPI/OpenMP hybrid run
ckutzne at gwdg.de
Tue Mar 4 14:07:38 CET 2014
On 04 Mar 2014, at 13:43, alex.bjorling <alex.bjorling at gmail.com> wrote:
> Carsten Kutzner wrote
>> On 04 Mar 2014, at 13:08, Alexander Björling <
>> > wrote:
>>> Dear users,
>>> I'm trying to run simulations on a cluster of nodes, each sporting two
>>> 8-core Opteron CPUs. I would like to have one MPI process on each CPU,
>>> with 8 OpenMP threads on its cores. I've compiled GROMACS 5.0 with
>>> Intel MPI and OpenMP enabled.
>>> To get it running I'm just using two nodes, so 4 MPI processes. I've been
>>> using various combinations of the type:
>>> mpirun -np 4 -perhost 2 gmx_mpi mdrun -ntomp 8 ...,
>>> but I never manage to get the MPI process spread out on the two nodes.
>>> Rather, in the example above, 4 MPI processes with 4 OpenMP threads each
>>> run on the first node, while the second node does nothing.
>> Did you provide a hostfile / machinefile containing the names of your
>> nodes you want the MPI processes to be started on?
> No, I just asked SLURM for two nodes, the way I would if I was running
> MPI-only jobs.
Then SLURM is not placing your MPI processes correctly.
This problem is independent of GROMACS.
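A minimal sketch of a SLURM batch script that requests that placement (node count, partition, and the exact launcher behavior on your cluster are assumptions, not something tested here):

```shell
#!/bin/bash
#SBATCH --nodes=2              # two nodes
#SBATCH --ntasks-per-node=2    # one MPI rank per socket/CPU
#SBATCH --cpus-per-task=8      # 8 OpenMP threads per rank

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK

# srun inherits the task layout from the #SBATCH directives above;
# with a bare mpirun you may instead need to pass a machinefile
# listing the allocated nodes so ranks are spread across them.
srun gmx_mpi mdrun -ntomp 8 ...
```

With this layout, gmx_mpi should report 4 ranks spread over the two nodes rather than all on the first one.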