[gmx-developers] Collect all coordinates/velocities/virial in one MPI rank in mdlib/constr.cpp::Impl::apply()

Erik Lindahl erik.lindahl at gmail.com
Tue Jan 11 11:30:57 CET 2022


Hi Lorién,

On Tue, Jan 11, 2022 at 11:21 AM Lorién López Villellas <lorien.lopez at bsc.es>
wrote:

> Hi all.
>
> As a first approach, we want to collect all the coordinates on the master
> rank, execute the solver, and send the updated results to the other ranks.
>

Stop right there ;-)

This approach is never going to scale, and you might as well stick with a
non-MPI implementation. Don't think of having two nodes, but imagine a
molecule with ~500,000 atoms split over 100 chains that is run on 1000
nodes. If you try to execute part of the problem on a single node, you will
first be killed by the communication, and then by the master node having
10x more work to do even if your algorithm is only 1% of the runtime (by
Amdahl's law, that means roughly a 10x performance loss).

To have any chance of getting impact for a new parallel constraints
algorithm (which is a VERY worthwhile effort per se), you need to find a
way to

1) Balance the computational load such that all nodes in the system can help
2) Find a way where you only need to communicate with close neighbors - never
ever collect all data on a single node every step.



Cheers,

Erik


-- 
Erik Lindahl <erik.lindahl at dbb.su.se>
Professor of Biophysics, Dept. Biochemistry & Biophysics, Stockholm
University
Science for Life Laboratory, Box 1031, 17121 Solna, Sweden

Note: I frequently do email outside office hours because it is a convenient
time for me to write, but please do not interpret that as an expectation
for you to respond outside your work hours.