[gmx-users] Several questions about log file.

Mark Abraham mark.j.abraham at gmail.com
Thu Aug 21 17:32:50 CEST 2014

On Thu, Aug 21, 2014 at 7:57 AM, Theodore Si <sjyzhxw at gmail.com> wrote:

> Hi,
> 1. Does "force" in the R E A L   C Y C L E   A N D   T I M E   A C C O U N
> T I N G table mean the time spent on short-range force calculation?

and bonded interactions.

> 2. Does "Comm. coord" mean the communication of atom positions when
> calculating short-range force interaction?

As I've said before, the entries in this table correspond to sections in
the DD-PME flowchart in figure 3.16 of the manual. The correspondence is
not one-to-one, but it is fairly clear. As you can see there, two kinds of
communication of position coordinates can be required. So the answer to
your question is "yes", but there can be more to it.

> 3. What forces are waited for and communicated in "Wait + Comm. F"?

Real-space forces from DD neighbours, per the flowchart.

> Each node of our cluster has two Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz,
> each has 8 cores.
> When we are using 2 nodes, that is 2 nodes * 2 CPUs * 8 cores = 32 cores,
> we will be
> Using 32 MPI processes
> Using 1 OpenMP thread per MPI process
> and
> Comm. coord.          24    1      19200       1.483 92.327     1.0
> 1.483s are spent on the coordinate communication
> When we are using 32 nodes, that is 32 nodes * 2 CPUs * 8 cores = 512 cores,
> we will be
> Using 512 MPI processes
> Using 1 OpenMP thread per MPI process
> and
> Comm. coord.         384    1      19200       2.094 2086.377     5.2
> 2.094s are spent on the coordinate communication
> 4. Why doesn't the time spent on coordinate communication scale up as
> the number of cores increases?

Someone coded it right ;-) The implementation is discussed in section
3.17.1 of the manual. The total volume of communication does increase as
there are more domains (= ranks), but only communication between different
nodes has an effect (to first order). The actual performance properties
will depend on the qualities of your network, but the total amount of
coordinate data transferred (three 4-byte floats per ~100 atoms per
domain pair) is tiny compared with what typical networks are designed to
handle, so the total cost of the communication is dominated by the latency
of just sending a message. The number of domain pairs sending messages has
gone up, of course, but your results show that the cost is still dominated
by latency. (That is not news, of course; hiding such latencies is key to
improving the strong scaling of MD.)
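The latency-dominance argument above can be sketched with a simple
latency-plus-bandwidth ("alpha-beta") message-cost model. The latency and
bandwidth constants below are illustrative assumptions, not measurements
from any particular cluster:

```python
# A minimal sketch of the alpha-beta message-cost model. Both constants
# are assumptions for illustration, not measured values for this network.
LATENCY_S = 2e-6        # assumed per-message network latency, ~2 microseconds
BANDWIDTH_BPS = 5e9     # assumed link bandwidth, ~5 GB/s

def message_cost(n_atoms):
    """Cost of sending coordinates for n_atoms: three 4-byte floats per atom."""
    size_bytes = n_atoms * 3 * 4
    return LATENCY_S + size_bytes / BANDWIDTH_BPS

# ~100 atoms per domain pair, as above: only 1200 bytes per message
cost = message_cost(100)
print(f"payload: {100 * 3 * 4} bytes")
print(f"latency share of message cost: {LATENCY_S / cost:.0%}")

# Per-step averages from the quoted log entries (wall seconds / steps):
print(f"32 ranks:  {1.483 / 19200 * 1e6:.0f} us per step")
print(f"512 ranks: {2.094 / 19200 * 1e6:.0f} us per step")
```

With these assumed numbers the fixed latency term is roughly 90% of each
message's cost, which is consistent with your observation: going from 32 to
512 ranks multiplies the number of messages but raises the per-step wall
time only modestly, because each extra message mostly adds latency rather
than transfer time.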


> BR,
> Theo
> --
> Gromacs Users mailing list
> * Please search the archive at http://www.gromacs.org/
> Support/Mailing_Lists/GMX-Users_List before posting!
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.

More information about the gromacs.org_gmx-users mailing list