[gmx-users] Can we set the number of pure PME nodes when using GPU&CPU?
Szilárd Páll
pall.szilard at gmail.com
Tue Aug 19 17:11:06 CEST 2014
On Tue, Aug 19, 2014 at 4:19 PM, Theodore Si <sjyzhxw at gmail.com> wrote:
> Hi,
>
> How can we designate which CPU-only nodes to be PME-dedicated nodes?
mpirun -np N mdrun_mpi -npme M
This starts N ranks, of which M will be PME-only ranks and (N-M) will be PP ranks.
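For illustration only (the counts are made up and the best split depends on
your system), running

  mpirun -np 20 mdrun_mpi -npme 4 -deffnm md

gives 16 PP ranks and 4 PME-only ranks; the g_tune_pme tool can help you find
a good -npme value for a given run input.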
> What
> mdrun options or what configuration should we use to make that happen?
You can change the rank ordering with -ddorder; there are three
available patterns. Otherwise, you can do manual rank ordering by
telling MPI how to order the ranks it presents to mdrun; e.g. with
MPICH you can use the MPICH_RANK_ORDER environment variable.
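As a sketch (again with made-up rank counts), the three -ddorder patterns are
interleave (the default), pp_pme and cartesian, so e.g.

  mpirun -np 64 mdrun_mpi -npme 16 -ddorder pp_pme

numbers the 48 PP ranks first (0-47) and the 16 PME-only ranks last (48-63),
which you can then line up with whatever node placement your MPI launcher
gives you.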
Cheers,
--
Szilárd
> BR,
> Theo
>
>
> On 8/11/2014 9:36 PM, Mark Abraham wrote:
>>
>> Hi,
>>
>> What Carsten said, if running on nodes that have GPUs.
>>
>> If running on a mixed setup (some nodes with GPUs, some without), then
>> arranging your MPI environment to place PME ranks on CPU-only nodes is
>> probably worthwhile. For example, put all your PP ranks first, mapped to
>> GPU nodes, then all your PME ranks, mapped to CPU-only nodes, and then use
>> mdrun -ddorder pp_pme.
>>
>> Mark
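As an illustration of the placement Mark describes above (hostnames, slot
counts and rank counts here are hypothetical, and GPU-to-rank mapping and
threading still need to be chosen to suit your nodes), with Open MPI one
option is a hostfile that lists the GPU nodes before the CPU-only nodes:

  # hosts.txt: hypothetical node names; GPU nodes first, CPU-only node last
  gpu01 slots=16
  gpu02 slots=16
  gpu03 slots=16
  cpu01 slots=16

  mpirun -np 64 --hostfile hosts.txt --map-by slot mdrun_mpi -npme 16 -ddorder pp_pme

With by-slot mapping the low ranks fill the GPU nodes first, and with
-ddorder pp_pme those low ranks are the 48 PP ranks, so the 16 PME-only
ranks land on the CPU-only node.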
>>
>>
>> On Mon, Aug 11, 2014 at 2:45 AM, Theodore Si <sjyzhxw at gmail.com> wrote:
>>
>>> Hi Mark,
>>>
>>> This is the information about our cluster. Could you give us some advice
>>> regarding our setup so that we can make GMX run faster on our system?
>>>
>>> Each CPU node has 2 CPUs, and each GPU node has 2 CPUs and 2 NVIDIA K20M GPUs.
>>>
>>>
>>> Device Name | Device Type | Specifications | Number
>>> CPU Node | Intel H2216JFFKR nodes | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3-1600 Samsung | 332
>>> Fat Node | Intel H2216WPFKR nodes | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 256 GB (16×16 GB) ECC Registered DDR3-1600 Samsung | 20
>>> GPU Node | Intel R2208GZ4GC | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3-1600 Samsung | 50
>>> MIC Node | Intel R2208GZ4GC | CPU: 2× Intel Xeon E5-2670 (8 cores, 2.6 GHz, 20 MB cache, 8.0 GT/s); Mem: 64 GB (8×8 GB) ECC Registered DDR3-1600 Samsung | 5
>>> Computing Network Switch | Mellanox InfiniBand FDR Core Switch | 648-port FDR core switch MSX6536-10R, Mellanox Unified Fabric Manager | 1
>>> Computing Network Switch | Mellanox SX1036 40Gb Switch | 36-port 40Gb Ethernet switch SX1036, 36× QSFP interfaces | 1
>>> Management Network Switch | Extreme Summit X440-48t-10G layer-2 switch | 48-port 1Gb switch Summit X440-48t-10G, with ExtremeXOS license | 9
>>> Management Network Switch | Extreme Summit X650-24X layer-3 switch | 24-port 10Gb layer-3 Ethernet switch Summit X650-24X, with ExtremeXOS license | 1
>>> Parallel Storage | DDN Parallel Storage System | DDN SFA12K storage system | 1
>>> GPU | GPU Accelerator | NVIDIA Tesla Kepler K20M | 70
>>> MIC | MIC | Intel Xeon Phi 5110P Knights Corner | 10
>>> 40Gb Ethernet Card | MCX314A-BCBT | Mellanox ConnectX-3 chip 40Gb Ethernet card, 2× 40Gb Ethernet ports, with QSFP cables | 16
>>> SSD | Intel SSD910 | Intel SSD910 disk, 400 GB, PCIe | 80
>>>
>>>
>>> On 8/10/2014 5:50 AM, Mark Abraham wrote:
>>>
>>>> That's not what I said.... "You can set..."
>>>>
>>>> -npme behaves the same whether or not GPUs are in use. Using separate
>>>> ranks for PME helps minimize the cost of the all-to-all communication of
>>>> the 3D FFT. That's still relevant when using GPUs, but if separate PME
>>>> ranks are used, any GPUs on nodes that only have PME ranks are left idle.
>>>> The most effective approach depends critically on the hardware and
>>>> simulation setup, and on whether you pay money for your hardware.
>>>>
>>>> Mark
>>>>
>>>>
>>>> On Sat, Aug 9, 2014 at 2:56 AM, Theodore Si <sjyzhxw at gmail.com> wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> You mean that whether we use GPU acceleration or not, -npme is just a
>>>>> hint? Why can't we set it to an exact value?
>>>>>
>>>>>
>>>>> On 8/9/2014 5:14 AM, Mark Abraham wrote:
>>>>>
>>>>>> You can set the number of PME-only ranks with -npme. Whether it's
>>>>>> useful is another matter :-) The CPU-based PME offload and the
>>>>>> GPU-based PP offload do not combine very well.
>>>>>>
>>>>>> Mark
>>>>>>
>>>>>>
>>>>>> On Fri, Aug 8, 2014 at 7:24 AM, Theodore Si <sjyzhxw at gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> Can we set the number manually with -npme when using GPU
>>>>>>> acceleration?
>>>>>>>
More information about the gromacs.org_gmx-users mailing list