[gmx-developers] interesting read: exascale w/o threads
Berk Hess
hess at kth.se
Thu Oct 8 16:35:55 CEST 2015
On Oct 8, 2015 15:43, Roland Schulz <roland at utk.edu> wrote:
>
>
>
> On Thu, Oct 8, 2015 at 9:08 AM, David van der Spoel <spoel at xray.bmc.uu.se> wrote:
>>
>> On 08/10/15 13:36, Berk Hess wrote:
>> > Interesting read. The essential message is:
>> > "Thus, we believe that a discussion on threads versus processes boils
>> > down to “shared everything by default” versus “shared nothing by default”.
>> > We came to the same conclusion in a discussion some time ago. So the
>> > choice doesn't affect performance, but it strongly affects the code.
>> > For Gromacs I think it's still convenient to have processes+threads,
>> > since we have many data structures with many small arrays that change
>> > at domain-decomposition time and are needed by all threads in a
>> > domain. Sharing all of these through MPI-3 is tedious.
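As a rough illustration of that tedium, here is a minimal sketch (assumed, not GROMACS code; the array size and names are invented) of sharing a single small array among on-node ranks through an MPI-3 shared-memory window. With threads, the same array would simply be reachable through a pointer:

/* Sketch (invented names, not GROMACS code): share one small array
 * among the MPI ranks on a node via an MPI-3 shared-memory window.
 * Every array that changes at domain-decomposition time would need
 * this ceremony plus explicit synchronization; with threads it is
 * just a pointer into shared memory. */
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* Group the ranks that can share memory on this node */
    MPI_Comm node_comm;
    MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                        MPI_INFO_NULL, &node_comm);

    int node_rank;
    MPI_Comm_rank(node_comm, &node_rank);

    /* Rank 0 on the node allocates the array; the others attach */
    const int n   = 128; /* one "small array" */
    MPI_Aint  sz  = (node_rank == 0) ? n * sizeof(double) : 0;
    double   *buf = NULL;
    MPI_Win   win;
    MPI_Win_allocate_shared(sz, sizeof(double), MPI_INFO_NULL,
                            node_comm, &buf, &win);

    /* Non-owners must query the owner's base pointer explicitly */
    if (node_rank != 0) {
        MPI_Aint qsize;
        int      qdisp;
        MPI_Win_shared_query(win, 0, &qsize, &qdisp, &buf);
    }

    /* Owner fills the array; fences make the update visible */
    MPI_Win_fence(0, win);
    if (node_rank == 0) {
        for (int i = 0; i < n; i++) {
            buf[i] = (double)i;
        }
    }
    MPI_Win_fence(0, win);

    /* ... all ranks on the node can now read buf[0..n-1] ... */

    MPI_Win_free(&win);
    MPI_Comm_free(&node_comm);
    MPI_Finalize();
    return 0;
}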
>>
>> Also I think it can be advantageous to have fewer DD cells in case of
>> very inhomogeneous systems (we are working with clusters in the gas phase).
>
>
> Only if MPI is used solely to implement DD. We certainly need multiple levels of parallelism, but one could use MPI to implement the task-level parallelism we would like (mainly running the different force computations in parallel rather than sequentially); see the sketch below.
>
> Roland
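For concreteness, here is a minimal sketch of the kind of task-level MPI parallelism suggested above, with an assumed task assignment and invented names (not GROMACS code): the world communicator is split into per-task groups, each group computes one force contribution, and the contributions are summed:

/* Sketch (invented names, not GROMACS code): split ranks into task
 * groups so different force contributions are computed concurrently,
 * then sum all contributions into the total force array. */
#include <mpi.h>
#include <string.h>

#define NATOMS 1000
#define NTASKS 2 /* e.g. task 0 = bonded forces, task 1 = nonbonded */

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int world_rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

    /* Assign each rank to one force task; ranks within a task get
     * their own communicator for task-internal decomposition. */
    int      task = world_rank % NTASKS;
    MPI_Comm task_comm;
    MPI_Comm_split(MPI_COMM_WORLD, task, world_rank, &task_comm);

    /* Each rank computes its disjoint share of its task's forces
     * into f_local; untouched entries stay zero. */
    double f_local[NATOMS];
    memset(f_local, 0, sizeof(f_local));
    if (task == 0) {
        /* ... compute this rank's share of the bonded forces ... */
    } else {
        /* ... compute this rank's share of the nonbonded forces ... */
    }

    /* Sum the per-rank, per-task contributions into the total forces */
    double f_total[NATOMS];
    MPI_Allreduce(f_local, f_total, NATOMS, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    MPI_Comm_free(&task_comm);
    MPI_Finalize();
    return 0;
}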
I don't see how that can work. We want DD itself to run in parallel as well, and the force tasks need access to many of the data structures we need to share.
Berk
>
>>
>> >
>> > Cheers,
>> >
>> > Berk
>> >
>> > On 10/08/2015 01:04 PM, Szilárd Páll wrote:
>> >>
>> >> "We believe that portions of the HPC community have adopted the point
>> >> of view that somehow threads are “necessary” in order to utilize such
>> >> [manycore/SMP] systems, (1) without fully understanding the
>> >> alternatives, including MPI 3 functionality, (2) underestimating the
>> >> difficulty of utilizing threads efficiently, and (3) without
>> >> appreciating the similarities of threads and processes. This short
>> >> paper, due to space constraints, focuses exclusively on issue (3)
>> >> since we feel it has gotten virtually no attention."
>> >>
>> >> http://www.orau.gov/hpcor2015/whitepapers/Exascale_Computing_without_Threads-Barry_Smith.pdf
>> >>
>> >> --
>> >> Szilárd
>> >>
>> >>
>> >
>> >
>> >
>>
>>
>> --
>> David van der Spoel, Ph.D., Professor of Biology
>> Dept. of Cell & Molec. Biol., Uppsala University.
>> Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
>> spoel at xray.bmc.uu.se http://folding.bmc.uu.se
>
> --
> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> 865-241-1537, ORNL PO BOX 2008 MS6309