[gmx-users] time accounting in log file with GPU

Mark Abraham mark.j.abraham at gmail.com
Fri Jul 25 19:41:06 CEST 2014


They report the time elapsed since the step at which the timers were reset.
The log file will note this event. Whether the load is balanced by then (or
ever) depends on the load.

Mark
On Jul 25, 2014 7:31 PM, "Sikandar Mashayak" <symashayak at gmail.com> wrote:

> Thanks Szilárd.
>
> I am a bit confused about the -resethway and -resetstep options. Do they
> exclude the time spent on initialization and load-balancing from the total
> time reported in the log file, i.e., is the reported time only the time
> spent in the loop over time-steps?
>
> Thanks,
> Sikandar
>
>
> On Thu, Jul 24, 2014 at 4:30 PM, Szilárd Páll <pall.szilard at gmail.com>
> wrote:
>
> > On Fri, Jul 25, 2014 at 12:48 AM, Sikandar Mashayak
> > <symashayak at gmail.com> wrote:
> > > Thanks Mark. The -noconfout option helps.
> >
> > For benchmarking purposes, in addition to -noconfout I suggest also using:
> > * -resethway or -resetstep: to exclude initialization and
> > load-balancing at the beginning of the run and get a more realistic
> > performance measurement from a short run
> > * -nsteps N or -maxh: the former is useful if you want to directly
> > compare (e.g. two-sided diff) the timings from the end of the log
> > between multiple runs
> >
> > Cheers,
> > --
> > Szilárd
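
A combined invocation along these lines might look like the following (a
sketch only; topol.tpr and the step count are placeholders, and the mdrun
binary name varies between GROMACS versions):

```shell
# Sketch of a benchmarking run; the .tpr name and step count are placeholders.
# -noconfout : skip writing the final configuration file
# -resethway : reset the cycle/time counters halfway through the run
# -nsteps    : override the .tpr step count so runs are directly comparable
cmd="mdrun -s topol.tpr -noconfout -resethway -nsteps 10000"
echo "$cmd"
```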
> >
> > >
> > > --
> > > Sikandar
> > >
> > >
> > > On Thu, Jul 24, 2014 at 3:25 PM, Mark Abraham <
> mark.j.abraham at gmail.com>
> > > wrote:
> > >
> > >> On Fri, Jul 25, 2014 at 12:12 AM, Sikandar Mashayak <
> > symashayak at gmail.com>
> > >> wrote:
> > >>
> > >> > Hi
> > >> >
> > >> > I am running a benchmark test with the GPU. The system consists of
> > >> > simple LJ atoms, and I am running only a very basic simulation in
> > >> > the NVE ensemble, not writing any trajectories or energy values. My
> > >> > grompp.mdp file is attached below.
> > >> >
> > >> > However, in the time accounting table in the md.log, I observe that
> > >> > the Write traj. and Comm. energies operations take 40% of the time
> > >> > each. So my question is: even though I have specified not to write
> > >> > trajectories and energies, why is 80% of the time being spent on
> > >> > those operations?
> > >> >
> > >>
> > >> Because you're writing the final configuration (hint: use mdrun
> > >> -noconfout), and that load is imbalanced, so the other cores wait for
> > >> it in the global communication stage in Comm. energies (fairly clear,
> > >> since the two have the same wall time). Hint: make benchmarks run for
> > >> about a minute, so you are not dominated by setup and load-balancing
> > >> time. Your compute time was about 1/20 of a second...
> > >>
> > >> Mark
> > >>
> > >>
> > >> > Thanks,
> > >> > Sikandar
> > >> >
> > >> >      R E A L   C Y C L E   A N D   T I M E   A C C O U N T I N G
> > >> >
> > >> > On 2 MPI ranks
> > >> >
> > >> >  Computing:          Num   Num      Call    Wall time  Giga-Cycles
> > >> >                      Ranks Threads  Count      (s)      total sum     %
> > >> > -----------------------------------------------------------------------
> > >> >  Domain decomp.         2    1         11       0.006        0.030   2.1
> > >> >  DD comm. load          2    1          2       0.000        0.000   0.0
> > >> >  Neighbor search        2    1         11       0.007        0.039   2.7
> > >> >  Launch GPU ops.        2    1        202       0.007        0.036   2.5
> > >> >  Comm. coord.           2    1         90       0.002        0.013   0.9
> > >> >  Force                  2    1        101       0.001        0.003   0.2
> > >> >  Wait + Comm. F         2    1        101       0.004        0.020   1.4
> > >> >  Wait GPU nonlocal      2    1        101       0.004        0.020   1.4
> > >> >  Wait GPU local         2    1        101       0.000        0.002   0.2
> > >> >  NB X/F buffer ops.     2    1        382       0.001        0.008   0.6
> > >> >  Write traj.            2    1          1       0.108        0.586  40.2
> > >> >  Update                 2    1        101       0.005        0.025   1.7
> > >> >  Comm. energies         2    1         22       0.108        0.588  40.3
> > >> >  Rest                                           0.016        0.087   5.9
> > >> > -----------------------------------------------------------------------
> > >> >  Total                                          0.269        1.459 100.0
> > >> > -----------------------------------------------------------------------
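
As a quick sanity check on the table, the % column is just each row's
Giga-Cycles share of the total (1.459); recomputing the two dominant rows
(an illustrative calculation, not GROMACS output):

```shell
# Recompute the % column from the Giga-Cycles values in the table above
awk 'BEGIN {
  total = 1.459
  printf "Write traj.    %.1f\n", 100 * 0.586 / total
  printf "Comm. energies %.1f\n", 100 * 0.588 / total
}'
```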
> > >> >
> > >> >
> > >> > grompp.mdp file:
> > >> >
> > >> > integrator               = md-vv
> > >> > dt                       = 0.001
> > >> > nsteps                   = 100
> > >> > nstlog                   = 0
> > >> > nstcalcenergy            = 0
> > >> > cutoff-scheme            = verlet
> > >> > ns_type                  = grid
> > >> > nstlist                  = 10
> > >> > pbc                      = xyz
> > >> > rlist                    = 0.7925
> > >> > vdwtype                  = Cut-off
> > >> > rvdw                     = 0.7925
> > >> > rcoulomb                 = 0.7925
> > >> > gen_vel                  = yes
> > >> > gen_temp                 = 296.0
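
For reference, trajectory and energy output are usually also suppressed
explicitly in the .mdp; the lines below are not in the original file and are
only a sketch of the typical settings:

```
nstxout   = 0    ; no coordinate output
nstvout   = 0    ; no velocity output
nstfout   = 0    ; no force output
nstenergy = 0    ; no energy-file output
```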
> > >> > --
> > >> > Gromacs Users mailing list
> > >> >
> > >> > * Please search the archive at
> > >> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > >> > posting!
> > >> >
> > >> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > >> >
> > >> > * For (un)subscribe requests visit
> > >> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > >> > send a mail to gmx-users-request at gromacs.org.
> > >> >

