[gmx-users] Why is there a NxN VdW [F] on a separate line?
mark.j.abraham at gmail.com
Wed Aug 6 17:15:38 CEST 2014
On Tue, Aug 5, 2014 at 9:24 PM, Theodore Si <sjyzhxw at gmail.com> wrote:
> Please compare the file 8.log <https://onedrive.live.com/...7ZgAg8&ithint=file%2clog>
> and 512.log <https://onedrive.live.com/...>.
These runs report the use of 8 MPI ranks with 2 OpenMP threads per rank,
and 512 MPI ranks with 1 OpenMP thread per rank. GROMACS (like a lot of
codes) uses hybrid MPI/OpenMP parallelism, and when describing a run it is
normally misleading to mention only one of the two.
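For reference, that split is chosen when the job is launched. Assuming an
MPI-enabled build installed as mdrun_mpi (the binary name varies between
installations) and your own file names in place of topol, the two runs would
typically be started along the lines of

    mpirun -np 8   mdrun_mpi -ntomp 2 -deffnm topol
    mpirun -np 512 mdrun_mpi -ntomp 1 -deffnm topol

where -np sets the number of MPI ranks and -ntomp the number of OpenMP
threads per rank, so both aspects are visible in the command line.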
> Their M E G A - F L O P S   A C C O U N T I N G parts are different, as 8.log
> has no standalone NxN VdW [F] or NxN VdW [V&F] lines. 512.log has the following:
>  NxN VdW [F]                     17.077648       563.562     0.0
>  NxN VdW [V&F]                    0.002592         0.111     0.0
> Why the difference?
They report on calls to different kernels. Only the forces are required for
MD. Energies (i.e. "V") are extra work, so they're only computed on the steps
where they're actually needed. This was a key optimization in 4.6. In your
run, they were not often needed.
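How often the energies are actually needed is mostly under the control of the
.mdp file; a hypothetical fragment (your values will differ) might look like

    nstcalcenergy  = 100     ; steps between evaluating energies (the "V" part)
    nstenergy      = 1000    ; steps between writing energies to the .edr file

On the other steps mdrun can stick to the cheaper force-only kernels, which is
why the [F] counters in your table dwarf the [V&F] ones.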
> And they both have
>  NxN Ewald Elec. + VdW [F]
>  NxN Ewald Elec. + VdW [V&F]
> Does NxN Ewald Elec. + VdW [F] mean NxN Ewald Elec. and NxN VdW [F]? If
> that is the case, why does 512.log have both NxN Ewald Elec. + VdW [F] and
> NxN VdW [F]?
They report on calls to different kernels. If you have a chunk of atoms
that have no charges, you'd be pretty happy to call a kernel that doesn't
waste time computing electrostatics for them. Likewise, if you don't need
the energy, mdrun doesn't
compute it. This is discovered at run time, so if you distribute the work
to different numbers of compute units, then one of them might end up with
some clusters that only have atoms that lack charge. The clustering is
opportunistic, so differences are expected. In your next runs, you might
observe the opposite behaviour.
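If it helps to picture it, the selection is conceptually along these lines
(an illustrative C sketch only, not the actual GROMACS source; all the names
here are invented):

    /* Illustrative only: choose the cheapest non-bonded kernel variant
     * that covers what a given chunk of the pair list actually needs. */
    typedef enum {
        VDW_F,        /* LJ forces only                       */
        VDW_VF,       /* LJ forces + energies                 */
        ELEC_VDW_F,   /* Ewald elec. + LJ, forces only        */
        ELEC_VDW_VF   /* Ewald elec. + LJ, forces + energies  */
    } kernel_variant;

    static kernel_variant pick_kernel(int any_charged_atoms, int need_energy)
    {
        if (any_charged_atoms)
        {
            return need_energy ? ELEC_VDW_VF : ELEC_VDW_F;
        }
        return need_energy ? VDW_VF : VDW_F;
    }

The real kernels are much more elaborate, of course; this is just to show why
the mix of rows in the flop table can differ between otherwise similar runs.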
That said, the flops output is scarcely meaningful (even if the reporting
is accurate). Performance is dominated by considerations of load balance,
and the parts of the log that follow the flop accounting deal with that.
> On 8/5/2014 10:11 PM, Mark Abraham wrote:
>> On Tue, Aug 5, 2014 at 4:00 AM, Theodore Si <sjyzhxw at gmail.com> wrote:
>>> This is extracted from a log file
>> There's no data. The list cannot accept attachments, so you need to
>> copy-paste a relevant chunk, or upload a log file to a file-sharing
>> service.
>>> of a mdrun of 512 openMP threads without GPU acceleration.
>> mdrun will refuse to run with 512 OpenMP threads - please report your
>> command line rather than your mental model of it.
>>> Since the first line and third line both have N*N Vdw [F], does the
>>> former include the latter?
>> No, but there is no line with "N*N Vdw [F]". Please be precise if you
>> are asking for detailed information.
>>> As we can see, in the log file of a mdrun of 8 openMP threads without GPU
>>> acceleration, there is no standalone N*N Vdw [F]. Why the difference?
>> Can't tell, don't know what is different between the two runs. My guess is
>> that the former run is actually running on 64 MPI ranks, each with 8 OpenMP
>> threads, in which case you have domain decomposition per MPI rank, and in
>> that case there are separate calls to kernels that are aimed at computing
>> the interactions associated with atoms whose home is in different domains.
>> You should see the ratio vary as the number of ranks varies.