[gmx-developers] Regression test GmxapiMpiTests fails for GROMACS 2020.1

Paul Bauer paul.bauer.q at gmail.com
Wed Apr 15 11:50:48 CEST 2020


Hello,

This is simply an issue of mdrun trying to use all the resources on your 
test machine.
You can avoid this by setting the OpenMP environment variable 
OMP_NUM_THREADS so that the tests only use part of the resources.
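For example, something along these lines before invoking the test (just a 
sketch; the thread count and the exact ctest invocation depend on your 
build directory and machine):

    export OMP_NUM_THREADS=2
    ctest -R GmxapiMpiTests --output-on-failure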

Cheers

Paul

On 15/04/2020 11:34, Eric Irrgang wrote:
> I feel like we dealt with this two years ago and then again some time in the last year... I'm not sure how this keeps being a problem. I'll double check the arguments to the CMake test macro. Does this ring any bells for other developers?
>
>> On Apr 15, 2020, at 8:18 AM, Christoph Pospiech <cpospiech at lenovo.com> wrote:
>>
>> Hi,
>>
>> The regression test GmxapiMpiTests fails for GROMACS 2020.1 when run on a node
>> with more than 12 (virtual) cores. Apparently, the test is run with two MPI
>> ranks and chooses the number of threads to fill up the available resources.
>>
>> The error message is the following.
>>
>> On host cmp2645.hpc.eu.lenovo.com 2 GPUs selected for this run.
>> Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
>>   PP:0,PP:1
>> PP tasks will do (non-perturbed) short-ranged interactions on the GPU
>> PP task will update and constrain coordinates on the CPU
>> Using 2 MPI processes
>>
>> Non-default thread affinity set, disabling internal thread affinity
>>
>> Using 32 OpenMP threads per MPI process
>>
>>
>> -------------------------------------------------------
>> Program:     gmxapi-mpi-test, version 2020.1-dev-20200407-9b056e2-unknown
>> Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 626)
>> MPI rank:    0 (out of 2)
>>
>> -------------------------------------------------------
>> Program:     gmxapi-mpi-test, version 2020.1-dev-20200407-9b056e2-unknown
>> Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 626)
>> MPI rank:    1 (out of 2)
>>
>> Fatal error:
>> Your choice of number of MPI ranks and amount of resources results in using 32
>> OpenMP threads per rank, which is most likely inefficient. The optimum is
>> usually between 2 and 6 threads per rank. If you want to run with this setup,
>> specify the -ntomp option. But we suggest to change the number of MPI ranks.
>>
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> -------------------------------------------------------
>> [repeated for the other rank]
>>
>> Setting OMP_NUM_THREADS to a fixed value < 6 makes other tests fail, such as
>> MdrunMpiCoordinationTestsOneRank
>> MdrunMpiCoordinationTestsTwoRanks
>>
>> Please advise! Thanks!
>> -- 
>> Dr. Christoph Pospiech
>> Senior HPC & AI Performance Engineer
>>
>> T +49 (351) 86269826
>> M +49 (171) 7655871
>> E cpospiech at lenovo.com
>>
>> Lenovo Global Technology (Deutschland) GmbH
>> Meitnerstr. 9
>> 70563 Stuttgart
>>
>> Managing directors: Christophe Philippe Marie Laurent and Colm Brendan Gleeson
>> (each individually authorized to represent the company)
>> Authorized signatories (Prokura): Dieter Stehle & Henrik Bächle (individual power of attorney)
>> Registered office: Stuttgart
>> Commercial register no.: HRB 758298, Stuttgart Local Court
>> WEEE reg. no.: DE79679404
>>


-- 
Paul Bauer, PhD
GROMACS Development Manager
KTH Stockholm, SciLifeLab
0046737308594


