[gmx-users] Wrong calculation of runtime
Berk Hess
gmx3 at hotmail.com
Tue Nov 18 16:46:02 CET 2008
Ah, sorry, I did not read your mail carefully enough.
A bug with the -maxh option was fixed in 4.0.2.
But you are referring to the ns/day performance figure.
That is indeed incorrect: it is based on nsteps (the number
of steps requested), not on the number of steps actually performed.
I will fix it.
Berk
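
The numbers in the thread line up with that explanation. A minimal sketch of the two estimates (the variable and function names are hypothetical, not GROMACS internals; dt = 2 fs follows from 1000000 steps / 2000.0 ps, and 359 s is the reported wallclock time):

```python
DT_FS = 2.0            # time step in fs: 2000.0 ps / 1000000 steps
WALL_S = 359.0         # wallclock seconds reported by mdrun
NSTEPS = 1_000_000     # steps *requested* in the input
STEPS_DONE = 2080      # steps actually completed before -maxh stopped the run

def ns_per_day(steps, dt_fs, wall_s):
    ns = steps * dt_fs * 1e-6    # fs -> ns simulated
    return ns / wall_s * 86400   # scale to one day of wallclock

print(f"buggy (nsteps):      {ns_per_day(NSTEPS, DT_FS, WALL_S):.3f} ns/day")
print(f"correct (steps run): {ns_per_day(STEPS_DONE, DT_FS, WALL_S):.3f} ns/day")
```

The buggy variant reproduces the reported 481.337 ns/day exactly, and the corrected one gives about 1.0 ns/day, matching Christian's hand calculation.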
> Date: Tue, 18 Nov 2008 16:30:47 +0100
> From: cseifert at bph.ruhr-uni-bochum.de
> To: gmx-users at gromacs.org
> Subject: RE: [gmx-users] Wrong calculation of runtime
>
> As you can see in the attachment (the full output of the test run), I am
> using GMX 4.0.2.
>
> Are there different versions of 4.0.2?
>
>
> On Tue, 2008-11-18 at 16:21 +0100, Berk Hess wrote:
> > Yes, this is a bug.
> > And it has been fixed already in 4.0.2.
> >
> > Berk
> >
> > > Date: Tue, 18 Nov 2008 16:10:51 +0100
> > > From: cseifert at bph.ruhr-uni-bochum.de
> > > To: gmx-users at gromacs.org
> > > Subject: [gmx-users] Wrong calculation of runtime
> > >
> > > Hi.
> > >
> > > I use GMX4.0.2 on a Linux cluster.
> > >
> > > When I start my system (242224 atoms on 8 CPUs), I get about 1
> > > ns/day. This is also shown at the end of the mdrun output (output is
> > > marked by "#"):
> > > # NODE (s) Real (s) (%)
> > > # Time: 344.000 344.000 100.0
> > > # 5:44
> > > # (Mnbf/s) (GFlops) (ns/day) (hour/ns)
> > > #Performance: 52.212 14.794 1.005 23.889
> > > #No previous checkpoint file present, assuming this is a new run.
> > >
> > >
> > > But if I abort the run via -maxh for the same system, the output
> > > is wrong. Here is an example from a test run with -maxh 0.1:
> > > #starting mdrun 'test'
> > > #1000000 steps, 2000.0 ps.
> > > #step 0
> > > #NOTE: Turning on dynamic load balancing
> > > #
> > > #vol 0.79! imb F 6% step 100, will finish Thu Nov 20 17:03:26 2008
> > > #vol 0.79! imb F 4% step 200, will finish Thu Nov 20 15:55:17 2008
> > > #vol 0.79! imb F 6% step 300, will finish Thu Nov 20 14:37:03 2008
> > > #vol 0.79! imb F 5% step 400, will finish Thu Nov 20 14:39:24 2008
> > > #vol 0.79! imb F 5% step 500, will finish Thu Nov 20 14:40:48 2008
> > > #vol 0.79! imb F 5% step 600, will finish Thu Nov 20 14:41:45 2008
> > > #vol 0.79! imb F 5% step 700, will finish Thu Nov 20 14:42:25 2008
> > > #vol 0.79! imb F 9% step 800, will finish Thu Nov 20 14:42:55 2008
> > > #vol 0.79! imb F 7% step 900, will finish Thu Nov 20 15:01:49 2008
> > > #vol 0.79! imb F 8% step 1000, will finish Thu Nov 20 15:00:17 2008
> > > #vol 0.79! imb F 8% step 1100, will finish Thu Nov 20 14:59:02 2008
> > > #vol 0.79! imb F 8% step 1200, will finish Thu Nov 20 15:11:51 2008
> > > #vol 0.79! imb F 7% step 1300, will finish Thu Nov 20 15:09:54 2008
> > > #vol 0.79! imb F 8% step 1400, will finish Thu Nov 20 15:20:08 2008
> > > #vol 0.79! imb F 8% step 1500, will finish Thu Nov 20 15:17:53 2008
> > > #vol 0.80! imb F 8% step 1600, will finish Thu Nov 20 15:15:55 2008
> > > #vol 0.79! imb F 8% step 1700, will finish Thu Nov 20 15:23:59 2008
> > > #vol 0.79! imb F 8% step 1800, will finish Thu Nov 20 15:21:54 2008
> > > #vol 0.80! imb F 8% step 1900, will finish Thu Nov 20 15:28:48 2008
> > > #vol 0.79! imb F 8% step 2000, will finish Thu Nov 20 15:26:41 2008
> > > #
> > > #Step 2070: Run time exceeded 0.099 hours, will terminate the run
> > > #vol 0.79! imb F 7%
> > > #Step 2080: Run time exceeded 0.099 hours, will terminate the run
> > > #step 2080, will finish Thu Nov 20 15:28:21 2008
> > > #
> > > # Average load imbalance: 7.1 %
> > > # Part of the total run time spent waiting due to load imbalance: 3.1 %
> > > # Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 9 %
> > > #
> > > #
> > > # Parallel run - timing based on wallclock.
> > > #
> > > # NODE (s) Real (s) (%)
> > > # Time: 359.000 359.000 100.0
> > > # 5:59
> > > # (Mnbf/s) (GFlops) (ns/day) (hour/ns)
> > > #Performance: 52.071 14.749 481.337 0.050
> > > #No previous checkpoint file present, assuming this is a new run.
> > >
> > > The ns/day value is completely wrong!
> > > The job stopped after 0.1 h = 6 min with about 2000 steps, which
> > > equals about 4000 fs:
> > >
> > > (4000 fs / 6 min) * 60 min/h * 24 h/day = 0.96 ns/day
> > >
> > > Is this a bug?
> > >
> > > Greetings,
> > > Christian.
> > >
> > >
> > > --
> > > M. Sc. Christian Seifert
> > > Department of Biophysics
> > > University of Bochum
> > > ND 04/67
> > > 44780 Bochum
> > > Germany
> > > Tel: +49 (0)234 32 28363
> > > Fax: +49 (0)234 32 14626
> > > E-Mail: cseifert at bph.rub.de
> > > Web: http://www.bph.rub.de
> > >
> > >
> > > _______________________________________________
> > > gmx-users mailing list gmx-users at gromacs.org
> > > http://www.gromacs.org/mailman/listinfo/gmx-users
> > > Please search the archive at http://www.gromacs.org/search before
> > posting!
> > > Please don't post (un)subscribe requests to the list. Use the
> > > www interface or send it to gmx-users-request at gromacs.org.
> > > Can't post? Read http://www.gromacs.org/mailing_lists/users.php
> >
> >