[gmx-users] Multi-level parallelization: MPI + OpenMP

Éric Germaneau germaneau at sjtu.edu.cn
Tue Jul 23 03:25:11 CEST 2013


Dear Szilárd,

I'm running some tests using 2 ranks/node, which is what I was trying to do.
It seems to be working now, thank you.
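
For reference, here is a minimal sketch of that 2-ranks-per-node invocation on
our 16-core, 2-GPU nodes (the 4-node job size and the explicit -ntomp value are
only illustrative, not something Szilárd specified):

   # 4-node job: 8 MPI ranks in total, 2 per node, 8 OpenMP threads per rank;
   # GPUs 0 and 1 are mapped to the two PP ranks on each node
   mpirun -np 8 mdrun_mpi -ntomp 8 -gpu_id 01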

          Éric.

On 07/19/2013 08:56 PM, Szilárd Páll wrote:
> Depending on the level of parallelization (number of nodes and number
> of particles/core) you may want to try:
>
> - 2 ranks/node: 8 cores + 1 GPU, no separate PME (default):
>    mpirun -np 2*Nnodes mdrun_mpi [-gpu_id 01 -npme 0]
>
> - 4 ranks per node: 4 cores + 1 GPU (shared between two ranks), no separate PME
>    mpirun -np 4*Nnodes mdrun_mpi -gpu_id 0011 [-npme 0]
>
> - 4 ranks per node, 2 PP/2PME: 4 cores + 1 GPU (not shared), separate PME
>    mpirun -np 4*Nnodes mdrun_mpi [-gpu_id 01] -npme 2*Nnodes
>
> - at high parallelization you may want to try (especially with
> homogeneous systems) 8 ranks per node
>
> Cheers,
> --
> Szilárd
>
>
> On Fri, Jul 19, 2013 at 4:35 AM, Éric Germaneau <germaneau at sjtu.edu.cn> wrote:
>> Dear all,
>>
>> I'm not a GROMACS user. I've installed GROMACS 4.6.3 on our cluster and am
>> running some tests.
>> Each node of our machine has 16 cores and 2 GPUs.
>> I'm trying to figure out how to submit efficient multi-node LSF jobs that
>> use the maximum of the available resources.
>> After reading the documentation
>> <http://www.gromacs.org/Documentation/Acceleration_and_parallelization#Locking_threads_to_physical_cores>
>> on "Acceleration and parallelization" I got confused and inquire some help.
>> I'm just wondering whether someone with some experiences on this matter.
>> Thank you in advance,
>>
>>                                                  Éric.
>>

-- 
Be the change you wish to see in the world
 --- Mahatma Gandhi ---

Éric Germaneau <http://hpc.sjtu.edu.cn/index.htm>

Shanghai Jiao Tong University
Network & Information Center
room 205
Minhang Campus
800 Dongchuan Road
Shanghai 200240
China

View Éric Germaneau's profile on LinkedIn 
<http://cn.linkedin.com/pub/%C3%A9ric-germaneau/30/931/986>

Please, if possible, don't send me MS Word or PowerPoint attachments.
Why? See: http://www.gnu.org/philosophy/no-word-attachments.html



