[gmx-users] Gromacs 5.0.2 no parallel run
Szilárd Páll
pall.szilard at gmail.com
Fri Oct 3 15:01:55 CEST 2014
PS: you can inspect the affinity of processes using e.g. hwloc-ps.
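
A rough sketch of what I mean (the PID is just a placeholder; taskset,
from util-linux, is an alternative I often use):

  hwloc-ps           # lists processes bound to less than the whole machine
  hwloc-ps -t        # also shows per-thread bindings
  taskset -cp <PID>  # prints the CPU affinity list of one process

If mdrun or several of its threads report a single core (e.g. "0"
instead of "0-11"), something external has restricted the affinity
mask.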
--
Szilárd
On Fri, Oct 3, 2014 at 2:52 PM, Szilárd Páll <pall.szilard at gmail.com> wrote:
> Hi,
>
> As the note suggests, mdrun noticed that the process affinity mask is
> non-default and, to avoid interfering, did not set thread affinities
> itself. This usually happens when the affinity of your job is set at
> startup, typically by the job scheduler, but in some cases the MPI
> runtime can mess with the thread affinities too. In your case the
> affinity is most likely set incorrectly, so two or more of mdrun's
> threads end up pinned to the same core.
>
> The best solution is to identify what sets the incorrect affinity and
> disable it, or to try passing "-pin on" to mdrun, which should
> override the externally set affinity.
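>
> For example, with the command line from your mail that would be
> something like:
>
>   mdrun -v -deffnm system1-npt-prod -maxh 0.1 -nsteps -1 -pin on
>
> mdrun should then report in the log that it is pinning threads.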
>
> Cheers,
> --
> Szilárd
>
>
> On Fri, Oct 3, 2014 at 2:05 PM, Jernej Zidar <jernej.zidar at gmail.com> wrote:
>> Hi all,
>> Lately I have noticed a rather weird problem with Gromacs on my
>> workstation, and I'm unable to pinpoint whether it's a hardware or a
>> software issue.
>>
>> The problem is that none of my jobs use all the CPUs or threads. I
>> compiled Gromacs with OpenMP support and without MPI or GPU support,
>> as instructed on the documentation page. FFTW was pulled off the
>> internet during the make stage.
>>
>> After building Gromacs I load up the environment and start a job:
>> mdrun -v -deffnm system1-npt-prod -maxh 0.1 -nsteps -1
>>
>> Until recently, the job would start with 1 MPI thread and 12 OpenMP
>> threads and use all the available CPUs (the machine is a dual Xeon
>> X5650 with hyperthreading disabled), as evidenced by the usage shown
>> in utilities like 'top'/'htop'. The end result was relatively decent
>> performance of ~20 ns/day for system1.
>>
>> What happens now is that I get this message from Gromacs:
>> Non-default thread affinity set, disabling internal thread affinity
>> Using 1 MPI thread
>> Using 12 OpenMP threads
>>
>> On another machine that is similar in terms of hardware, the job
>> starts without any complaints:
>> Overriding nsteps with value passed on the command line: -1 steps, -0.002 ps
>> Using 1 MPI thread
>> Using 12 OpenMP threads
>>
>> I can't find any error in the log file. It appears the CPUs are
>> somewhat off-limits to me. Running the job as root or rebooting
>> doesn't help.
>>
>> I'm running Debian stable (Wheezy) with nothing much installed.
>>
>> Any advice will be appreciated!
>>
>> Thanks in advance,
>> Jernej Zidar