[gmx-developers] Gromacs in DEEP-EST project

Erik Lindahl erik.lindahl at gmail.com
Mon Jan 14 12:21:53 CET 2019


Hi Peicho,

The general plan is that we'll be moving away from the separate PP vs. PME
nodes in the future, and rather have one MPI task per node that in turn
uses threads/cores with slightly different tasks (in particular to enable
more advanced load balancing and multiple-time-step algorithms, to avoid
communication, and to make it easier to handle the cases where multiple
GPUs do PME).
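
As a toy illustration of that direction (purely schematic, not how the
real implementation will look), one rank per node would drive
differently-tasked threads instead of dedicating whole ranks to PP or
PME work:

    #include <thread>

    static void shortRangeWork() {} // PP-style tasks on some cores
    static void pmeWork() {}        // PME-style tasks on other cores

    int main()
    {
        // The single MPI task of the node sits here; within it, threads
        // take on the different tasks, which makes load balancing
        // between them much more flexible than a fixed rank split.
        std::thread pp(shortRangeWork);
        std::thread pme(pmeWork);
        pp.join();
        pme.join();
        return 0;
    }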

So, going forward, I think it would have significantly more impact to help
with those efforts, or at least to reformulate the work so that it fits
well with that setup.

I would also strongly recommend starting with *small*
changes/improvements and gradually getting them into the main codebase
piece-by-piece while engaging in the rest of the development. There is
likely going to be a lot of other work on multiple-time-stepping in the
near future, so the likelihood that a large separate branch, developed
independently with a different type of multiple-time-stepping, can later
be merged is unfortunately close to zero - at that point the ask would be
to rewrite it to be compatible and submit it as a sequence of small
changes!

Cheers,

Erik



On Mon, Jan 14, 2019 at 12:02 PM Peicho Petkov <peicho.petkov at gmail.com>
wrote:

> Dear Gromacs developers,
>
> NCSA-Bulgaria aims to optimise Gromacs for the Modular Supercomputing
> Architecture as a participant in the DEEP-EST project. Some of you may
> remember my presentation at the Gromacs Workshop 2018 in Goettingen, but
> let me summarise the essence again. We plan to run simulations on two
> modules in a Booster-Cluster configuration, where the Booster is a
> highly scalable parallel computing system (based on MIC, GPU, or other
> accelerators) and the Cluster module is a Linux cluster based on
> relatively high-performance CPU cores. Each module's internal network is
> to be chosen so as to ensure good performance scalability, while the
> network connecting the modules might be a network federation.
> As you know, and as I have already discussed with some of you at the
> Goettingen workshop, the performance of the inter-module network is of
> crucial importance. While the hardware people in the DEEP-EST project
> are working on the inter-module network design, we decided to try
> optimising the PP-PME communication, and have started developing an
> asynchronous PME electrostatics algorithm. The rough idea is to use a
> multiple-time-step-like approach so that the PME nodes' performance, and
> the communication between PP and PME nodes, limit the overall
> application performance less. Even if the Cluster-Booster configuration
> fails in the future, such an algorithm might be of interest (primarily,
> but not only) to users who run large parallel simulations on a
> significant number of compute nodes. For instance, such simulations
> would be less sensitive to the ratio of PP to PME nodes and to the
> properties of the interconnect.
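>
> To make the intended overlap concrete, here is a minimal sketch of the
> PP-side step loop under that scheme (all names are hypothetical stubs,
> purely illustrative, not the actual Gromacs API): the PP ranks advance
> using the PME result of the previous step instead of waiting for the
> current one, so the PP-PME transfer is hidden behind the short-range
> work.
>
>     // Stubs standing in for the real work; the comments say what the
>     // real calls would do. Hypothetical names, not the Gromacs API.
>     static void sendCoordinatesToPmeRanks(int /*step*/) {} // non-blocking send
>     static void computeShortRangeForces(int /*step*/) {}   // switched cut-off kernel
>     static void receivePmeForces(int /*step*/) {}          // previous step's PME result
>     static void integrateStep(int /*step*/) {}             // positions and velocities
>
>     // Asynchronous PP-side loop: the PME forces used at step t are the
>     // ones computed for step t-1, so PME work and the PP-PME transfer
>     // overlap with the short-range computation instead of serialising it.
>     void ppStepLoop(int nsteps)
>     {
>         for (int step = 0; step < nsteps; ++step)
>         {
>             sendCoordinatesToPmeRanks(step);
>             computeShortRangeForces(step);
>             if (step > 0)
>             {
>                 receivePmeForces(step - 1); // step 0 needs a bootstrap
>             }
>             integrateStep(step);
>         }
>     }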
>
> The main point is to organise the communication in the way shown in fig.
> 1 (see the attached plot), specifically and only in parallel simulations
> with dedicated PME ranks. The price to be paid is additional memory,
> some extra computation, and a slightly lower but controllable accuracy
> in the treatment of the electrostatic interactions. We decided to
> separate the electrostatic interaction into two parts, a fast- and a
> slow-changing one, as described in [J. Chem. Phys., Vol. 115, No. 5, 1
> August 2001] and [J. Chem. Phys., Vol. 116, No. 14, 8 April 2002]. We
> are considering modifying the Verlet cutoff scheme only. To do so, we
> need to add a force- and energy-calculating kernel that computes pure
> cut-off electrostatic interactions with the switch function mentioned in
> [J. Chem. Phys., Vol. 116, No. 14, 8 April 2002], with the fast-changing
> real-space terms of the PME forces accumulated in a separate output
> buffer. We then estimate the fast-changing real-space terms of the PME
> forces for each atom and save them in a buffer, to be subtracted from
> the corresponding forces and energies received from the PME nodes in the
> next time step. The latter is necessary because the forces and energies
> calculated in reciprocal space have a small but non-negligible, rapidly
> changing component (as discussed in [J. Chem. Phys., Vol. 115, No. 5, 1
> August 2001]).
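>
> To illustrate the per-atom bookkeeping this implies, here is a minimal
> sketch under our reading of the scheme (the names and data layout are
> hypothetical, not the actual Gromacs data structures): the slow part of
> the previous step's PME result is recovered by subtracting the saved
> fast terms, and stands in for the slow part of the current step.
>
>     #include <cstddef>
>     #include <vector>
>
>     struct RVec { double x, y, z; };
>
>     // Combine forces on a PP rank at step t: the fast (switched
>     // cut-off) part is current, while the PME forces and the saved
>     // fast real-space terms are from step t-1.
>     void combineForces(std::vector<RVec>&       fTotal,
>                        const std::vector<RVec>& fFastNow,  // step t
>                        const std::vector<RVec>& fPmePrev,  // step t-1
>                        const std::vector<RVec>& fFastPrev) // step t-1
>     {
>         for (std::size_t i = 0; i < fTotal.size(); ++i)
>         {
>             fTotal[i].x = fFastNow[i].x + fPmePrev[i].x - fFastPrev[i].x;
>             fTotal[i].y = fFastNow[i].y + fPmePrev[i].y - fFastPrev[i].y;
>             fTotal[i].z = fFastNow[i].z + fPmePrev[i].z - fFastPrev[i].z;
>         }
>     }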
>
> More than a month ago we took a snapshot of the Gromacs source code and
> started developing a proof-of-concept implementation of the reference
> kernels, adding the necessary buffers and flags to test the idea,
> bearing in mind that all of the building blocks needed to implement the
> algorithm are already present in the source. As a result, we almost
> doubled the number of operations for calculating the non-bonded
> interactions. This was done with the clear understanding that it is not
> optimal, but we wanted to make as few changes to the Gromacs source code
> as possible. We have not yet implemented any switch function, so there
> is energy drift, as expected. We are now going to develop a single
> reference kernel with additional arguments that take pointers to the
> force and energy buffers needed for calculating the slow-changing force
> component.
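>
> As a rough sketch of the shape such a kernel interface might take (the
> names are hypothetical, not the actual Gromacs kernel signature), one
> pass over the pair list would fill two force buffers and two energy
> accumulators:
>
>     typedef double real; // stand-in for the Gromacs "real" type
>
>     // Single reference kernel: the switched cut-off forces go to f[],
>     // the fast-changing real-space PME terms to fFast[], so the latter
>     // can be subtracted from the PME result at the next step.
>     void nb_kernel_elec_split_ref(int         nPairs,
>                                   const int*  pairI,  // i-atom of each pair
>                                   const int*  pairJ,  // j-atom of each pair
>                                   const real* x,      // coordinates, 3*natoms
>                                   const real* q,      // charges
>                                   real*       f,      // out: cut-off forces
>                                   real*       fFast,  // out: fast PME terms
>                                   real*       vCoul,  // out: cut-off energy
>                                   real*       vFast); // out: fast-term energy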
>
> With this letter we would like to inform you of our activities. If you
> consider that the implementation described above could be of interest to
> the Gromacs user community, we will be happy to start working on it with
> your help. Either way, we would appreciate your comments.
>
> Best regards,
> Peicho, Valentin and Stoyan Markov.



-- 
Erik Lindahl <erik.lindahl at dbb.su.se>
Professor of Biophysics, Dept. Biochemistry & Biophysics, Stockholm
University
Science for Life Laboratory, Box 1031, 17121 Solna, Sweden

