[gmx-developers] Regression test GmxapiMpiTests fails for GROMACS 2020.1
Eric Irrgang
ericirrgang at gmail.com
Wed Apr 15 11:55:02 CEST 2020
GmxapiMpiTests is registered with OPENMP_THREADS 2, but that option takes effect by appending a command-line argument pair, -ntomp 2, to the test command.
This relies on the test binary to process its command-line arguments, which is presumably handled through some test fixture that I'm not familiar with.
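As a minimal sketch of that mechanism (the macro name and body below are hypothetical, for illustration only; this is not the actual gmx_register_gtest_test() implementation):

    # Hypothetical sketch of how OPENMP_THREADS <N> becomes CLI arguments.
    # It only illustrates the behavior described above, not the real macro.
    function(demo_register_gtest_test NAME EXENAME)
      cmake_parse_arguments(ARG "" "OPENMP_THREADS" "" ${ARGN})
      set(_test_args "")
      if (ARG_OPENMP_THREADS)
        # Passed on the command line; the test binary must parse it itself.
        list(APPEND _test_args -ntomp ${ARG_OPENMP_THREADS})
      endif()
      add_test(NAME ${NAME} COMMAND ${EXENAME} ${_test_args})
    endfunction()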
I imagine that if gmx_register_gtest_test() set OMP_NUM_THREADS in the test's environment, instead of or in addition to setting the CLI argument, that would work. More generally, though, it highlights the open question of how to pass run-time options, or configure hardware resources, through the API.
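Concretely, something along these lines, using the standard CTest ENVIRONMENT property (a suggestion, not current GROMACS code; it could equally be done inside the macro):

    # Suggested sketch: export the thread count to the test's environment
    # so it takes effect even if the binary never parses -ntomp.
    set_tests_properties(GmxapiMpiTests PROPERTIES
                         ENVIRONMENT "OMP_NUM_THREADS=2")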
I'm open to short-term and long-term suggestions. Maybe this is a workshop topic for next week...
I suppose the workaround for now is to set the OMP_NUM_THREADS environment variable before invoking the test suite, or just this test, since Christoph notes below that a fixed global value breaks other tests.
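For example, assuming CTest drives the suite and the test is named as reported, from the build directory:

    # Pin the thread count and run only the affected test.
    OMP_NUM_THREADS=2 ctest -R GmxapiMpiTests --output-on-failure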
> On Apr 15, 2020, at 12:34 PM, Eric Irrgang <ericirrgang at gmail.com> wrote:
>
> I feel like we dealt with this two years ago, and then again some time in the last year... I'm not sure how this keeps becoming a problem. I'll double-check the arguments to the CMake test macro. Does this ring any bells for other developers?
>
>> On Apr 15, 2020, at 8:18 AM, Christoph Pospiech <cpospiech at lenovo.com> wrote:
>>
>> Hi,
>>
>> The regression test GmxapiMpiTests fails for GROMACS 2020.1 when run on a node
>> with more than 12 (virtual) cores. Apparently, the test runs with two MPI
>> ranks and chooses the number of OpenMP threads per rank to fill the
>> available resources.
>>
>> The error message is as follows:
>>
>> On host cmp2645.hpc.eu.lenovo.com 2 GPUs selected for this run.
>> Mapping of GPU IDs to the 2 GPU tasks in the 2 ranks on this node:
>> PP:0,PP:1
>> PP tasks will do (non-perturbed) short-ranged interactions on the GPU
>> PP task will update and constrain coordinates on the CPU
>> Using 2 MPI processes
>>
>> Non-default thread affinity set, disabling internal thread affinity
>>
>> Using 32 OpenMP threads per MPI process
>>
>>
>> -------------------------------------------------------
>> Program: gmxapi-mpi-test, version 2020.1-dev-20200407-9b056e2-unknown
>> Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 626)
>> MPI rank: 0 (out of 2)
>>
>> -------------------------------------------------------
>> Program: gmxapi-mpi-test, version 2020.1-dev-20200407-9b056e2-unknown
>> Source file: src/gromacs/taskassignment/resourcedivision.cpp (line 626)
>> MPI rank: 1 (out of 2)
>>
>> Fatal error:
>> Your choice of number of MPI ranks and amount of resources results in using 32
>> OpenMP threads per rank, which is most likely inefficient. The optimum is
>> usually between 2 and 6 threads per rank. If you want to run with this setup,
>> specify the -ntomp option. But we suggest to change the number of MPI ranks.
>>
>> For more information and tips for troubleshooting, please check the GROMACS
>> website at http://www.gromacs.org/Documentation/Errors
>> -------------------------------------------------------
>> [repeated for the other rank]
>>
>> Setting OMP_NUM_THREADS to a fixed value < 6 causes other tests to fail, such as
>> MdrunMpiCoordinationTestsOneRank
>> MdrunMpiCoordinationTestsTwoRanks
>>
>> Please advise! Thanks!
>> --
>> Dr. Christoph Pospiech
>> Senior HPC & AI Performance Engineer
>>
>> T +49 (351) 86269826
>> M +49 (171) 7655871
>> E cpospiech at lenovo.com
>>
>> Lenovo Global Technology (Deutschland) GmbH
>> Meitnerstr. 9
>> 70563 Stuttgart