[gmx-users] GPU warnings

Thomas Evangelidis tevang3 at gmail.com
Sat Nov 10 17:24:29 CET 2012


On 10 November 2012 03:21, Szilárd Páll <szilard.pall at cbr.su.se> wrote:

> Hi,
>
> You must have an odd sysconf version! Could you please check what the
> system variable is called in the sysconf man page (man sysconf), where
> it says something like:
>
>     _SC_NPROCESSORS_ONLN
>              The number of processors currently online.
>
> The first line should be one of the
> following: _SC_NPROCESSORS_ONLN, _SC_NPROC_ONLN,
> _SC_NPROCESSORS_CONF, _SC_NPROC_CONF, but I guess yours is something
> different.
>

The following text is taken from man sysconf:

       These values also exist, but may not be standard.

        - _SC_PHYS_PAGES
              The number of pages of physical memory.  Note that it is
              possible for the product of this value and the value of
              _SC_PAGE_SIZE to overflow.

        - _SC_AVPHYS_PAGES
              The number of currently available pages of physical memory.

        - _SC_NPROCESSORS_CONF
              The number of processors configured.

        - _SC_NPROCESSORS_ONLN
              The number of processors currently online (available).

> Can you also check what your glibc version is?
>

$ yum list installed | grep glibc
glibc.i686                            2.15-57.fc17               @updates
glibc.x86_64                          2.15-57.fc17               @updates
glibc-common.x86_64                   2.15-57.fc17               @updates
glibc-devel.i686                      2.15-57.fc17               @updates
glibc-devel.x86_64                    2.15-57.fc17               @updates
glibc-headers.x86_64                  2.15-57.fc17               @updates



>
>
> On Fri, Nov 9, 2012 at 5:51 PM, Thomas Evangelidis <tevang3 at gmail.com> wrote:
>
>>
>>
>>
>>> > I get these two warnings when I run the dhfr/GPU/dhfr-solv-PME.bench
>>> > benchmark with the following command line:
>>> >
>>> > mdrun_intel_cuda5 -v -s topol.tpr -testverlet
>>> >
>>> > "WARNING: Oversubscribing the available 0 logical CPU cores with 1
>>> > thread-MPI threads."
>>> >
>>> > 0 logical CPU cores? Isn't this bizarre? My CPU is Intel Core i7-3610QM
>>> >
>>>
>>> That is bizarre. Could you run with "-debug 1" and have a look at the
>>> mdrun.debug output which should contain a message like:
>>> "Detected N processors, will use this as the number of supported hardware
>>> threads."
>>>
>>> I'm wondering, is N=0 in your case!?
>>>
>> It says "Detected 0 processors, will use this as the number of supported
>> hardware threads."
>>
>>
>>>
>>> > (2.3 GHz). Unlike Albert, I don't see any performance loss: I get 13.4
>>> > ns/day on a single core with 1 GPU and 13.2 ns/day with GROMACS v4.5.5
>>> > on 4 cores (8 threads) without the GPU. Yet, I don't see any
>>> > performance gain with more than 4 -nt threads.
>>> >
>>> > mdrun_intel_cuda5 -v -nt 2 -s topol.tpr -testverlet : 15.4 ns/day
>>> > mdrun_intel_cuda5 -v -nt 3 -s topol.tpr -testverlet : 16.0 ns/day
>>> > mdrun_intel_cuda5 -v -nt 4 -s topol.tpr -testverlet : 16.3 ns/day
>>> > mdrun_intel_cuda5 -v -nt 6 -s topol.tpr -testverlet : 16.2 ns/day
>>> > mdrun_intel_cuda5 -v -nt 8 -s topol.tpr -testverlet : 15.4 ns/day
>>> >
>>>
>>> I guess there is not much point in not using all cores, is there? Note that
>>> the performance drops after 4 threads because Hyper-Threading with OpenMP
>>> doesn't always help.
>>>
>>>
>>> >
>>> > I have also attached my log file (from "mdrun_intel_cuda5 -v -s
>>> > topol.tpr -testverlet") in case you find it helpful.
>>> >
>>>
>>> I don't see it attached.
>>>
>>>
>>>
>> I have attached both the mdrun_intel_cuda5.debug and md.log files. They
>> may be filtered out by the mailing list, but they will still be delivered
>> to your email.
>>
>> thanks,
>> Thomas
>>
>
>


-- 

======================================================================

Thomas Evangelidis

PhD student
University of Athens
Faculty of Pharmacy
Department of Pharmaceutical Chemistry
Panepistimioupoli-Zografou
157 71 Athens
GREECE

email: tevang at pharm.uoa.gr

          tevang3 at gmail.com


website: https://sites.google.com/site/thomasevangelidishomepage/


