[gmx-users] Gromacs 4.6.7 with MPI and OpenMP
Szilárd Páll
pall.szilard at gmail.com
Thu May 14 22:19:08 CEST 2015
Malcolm,
On Mon, May 11, 2015 at 4:23 PM, Malcolm Tobias <mtobias at wustl.edu> wrote:
>
> Szilárd,
>
> On Friday 08 May 2015 21:18:12 Szilárd Páll wrote:
>> >> What is your goal with using CPUSETs? Node sharing?
>> >
>> > Correct. While it might be possible to see the cores that have been assigned to the job and do the correct 'pin setting', it would probably be ugly.
>>
>> Not sure what you mean by "see the cores".
>
> Sorry, I tend to anthropomorphize computers when I try to understand them ;-)
>
> If I request X cores from the queueing system, it will create a CPUSET of X cores. If I call omp_get_num_procs(), it will report X even if there are more physical cores on the system. This way OpenMP plays nice with the queuing system.
Regarding the original statement: the answer is no. Using mdrun's
pinning options, you can only pin to a contiguous set of cores.
Secondly, relying on OpenMP not starting more threads than the
CPUSET defines is dangerous because of mdrun's automated (eager)
resource allocation. As mentioned before, this relies on system calls
to query the number of "processors online" (and on cpuid for the
hardware thread layout), which is not affected by the CPUSET. The
eagerness means that if the thread count is not set, mdrun will try to
spawn enough OpenMP threads (or thread-MPI ranks) to fill the node.
Similarly, mdrun can start thread-MPI ranks (the default in non-MPI
builds) to fill all the cores it detects if the user does not pass -ntmpi.
For details you may want to check this page:
http://www.gromacs.org/Documentation/Acceleration_and_parallelization
Hence, I suggest that you instead have the job scheduler set
OMP_NUM_THREADS based on the number of ranks and/or threads per rank
requested. Note that this will still not prevent mdrun from
automatically spawning more thread-MPI ranks than there are cores
assigned to it in the CPUSET.
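To make the distinction concrete, here is a minimal sketch (Python, Linux-only, not part of GROMACS) contrasting the "processors online" count that eager detection sees with the affinity mask the CPUSET actually grants, and deriving OMP_NUM_THREADS from the latter, as a job prologue might:

```python
import os

# os.cpu_count() reflects the processors the OS reports online --
# roughly what eager hardware detection sees; the CPUSET does not change it.
online = os.cpu_count()

# os.sched_getaffinity(0) returns the CPUs this process may actually run
# on, i.e. what the CPUSET grants the job (Linux only).
allowed = os.sched_getaffinity(0)

print(f"online: {online}, allowed: {sorted(allowed)}")

# Export OMP_NUM_THREADS from the granted set so a subsequently launched
# mdrun does not try to fill the whole node with OpenMP threads.
os.environ["OMP_NUM_THREADS"] = str(len(allowed))
```

Under a CPUSET of X cores on a larger node, `allowed` will be smaller than `online`, which is exactly the mismatch described above.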
>> Also not sure why is it
>> more ugly to construct a CPUSET than a pin offset, but hey, if you
>> want both performance and node sharing with automated resource
>> allocation, the solution won't be simple, I think.
>
> The queuing system does the work of creating the CPUSET for me. One thing I was worried about is that I'm not guaranteed a contiguous set of processors in the CPUSET. If I ask for 4 cores, I may be assigned 1, 3, 5 and 6, for example. In the end, I think I can live without the performance increase of pinning the threads.
That's up to you and your users to decide, but note that the potential
performance loss is definitely not negligible (it can easily reach
double-digit percentages), though it will depend on the CPU
architecture, the GROMACS simulation setup, and on how cache-intensive
the jobs sharing the node with GROMACS are.
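Since mdrun's pinning options can only express a contiguous block of cores, a job prologue could at least detect when the scheduler happened to grant a contiguous CPUSET and pin in that case. A hypothetical helper (not part of GROMACS) might look like:

```python
def contiguous_range(cpus):
    """Return (first, count) if the CPU set is contiguous, else None."""
    cpus = sorted(cpus)
    if cpus == list(range(cpus[0], cpus[0] + len(cpus))):
        return cpus[0], len(cpus)
    return None

# {1, 3, 5, 6} is the kind of scattered grant a shared node can yield;
# mdrun's pin offset cannot express it, so pinning would be skipped.
print(contiguous_range({1, 3, 5, 6}))   # None
# A contiguous grant maps directly onto a pin offset of 4.
print(contiguous_range({4, 5, 6, 7}))   # (4, 4)
```

On Linux, the set to pass in would come from `os.sched_getaffinity(0)`; when the result is not None, the first element can be used as the pin offset.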
> Since the threads will be confined to the CPUSET, I'm guessing the threads are less likely to migrate.
Not necessarily. OS jitter, for example, can "push" user-space threads
around, and mdrun is quite sensitive to this when not pinned.
Cheers,
--
Szilárd
> Cheers,
> Malcolm
>
> --
> Malcolm Tobias
> 314.362.1594
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.