[gmx-developers] Wednesday's GROMACS teleconference

Mark Abraham mark.j.abraham at gmail.com
Mon Feb 18 18:14:43 CET 2013


Hi again devs,

We've got our fortnightly teleconference scheduled again this Wednesday.
Thinking of topics to discuss has been a bit challenging - they can neither
be so vague that we can't decide anything, nor so detailed that only a few
people can make useful input. Suggestions are most welcome!

So far I've come up with

1. replacing rvec with something more friendly to C++
2. coding strategy for whatever will replace do_md()

I've put some initial thoughts on these together, which you can find below.
If someone can identify other suitable topics, do speak up.

Details will be the same as last time
* a Google Hangout will be run by the mark.abraham at scilifelab.se account.
Please mail that account from the Google account with which you might want
to connect, so that I can have you in the Circle before the meeting is due
to start
* start 6pm, end 6:30pm Stockholm time, Wed 20 Feb (should be during
working hours for Americans)
* if there's interest in continued discussion, perhaps on implementation
details, those people can continue on after 6:30
* please use the best quality hardware and connection you reasonably can
(not on your laptop at the local cafe, or with your kids screaming at you).
Know how to mute yourself, or we might have to drop you!
* I'll issue the hangout invitation shortly after 5pm if you want to test
your connection or setup
* I'll post a summary after the meeting of what was
discussed/decided/whatever

People who haven't attended before are welcome. We had 10 connections last
time and things were pretty good, so the technology seems to scale
reasonably well. If you're new, please let me know what part(s) of the
meeting programme are of interest to you so I can help manage discussion
suitably.

If you can't attend, please feel free to contribute in this thread, or
email me, etc.

Cheers,

Mark
GROMACS development manager

Thoughts for Wed 20 Feb
========

1. planning for an internal coordinate format

   - can’t keep using rvec
      - rvec can’t be put into STL containers (need copy constructor, etc.)
      - rvec guarantees we can’t use aligned loads anywhere (important for
      leveraging SIMD possibilities)
      - makes using RAII harder
      - probably makes writing const-correct code harder
   - we want to be able to use STL containers when that makes code writing,
   review and maintenance easier
   - we need to be able to get flat C arrays of atom coordinates with no
   overhead for compute kernels
   - straightforward suggestion: switch to using an RVec class with a
   4-tuple of reals and use them for x, y, z and q
      - in many places q won’t be used
      - 16-byte alignment for free (opportunities for compiler auto-SIMD)
      - perhaps 4/3 increase in cache traffic where q is not being used
      - std::vector< std::vector<real> > doesn’t map to a flat C array -
      need to write/find a “tuple” class that lets the compiler know what is
      going on, so that std::vector< tuple<real,4> > ends up as a flat C
      array of xyzqxyzqxyzq...
   - separate vectors for x, y, z and q could be useful because that would
   help avoid the swizzling (group kernels) and coordinate copying (Verlet
   kernels) that currently occurs
      - downside is that x, y, and z are normally used together, so a naive
      approach pretty much guarantees we need 3 cache lines for each
      point... if we don’t re-use that data a few times, that could kill us
   - internally use some kind of “packed rvec” laid out xxxxyyyyzzzz(qqqq)
   and have some kind of intelligent object that we can use just like we use
   rvec now, e.g. coords[3][YY] magically returns the 8th element of
   xxxxyyyyzzzz
   - the needs of mdrun and analysis tools are different, and we can
   perhaps explore different implementations for each - but a common interface
   would be highly desirable
   - ideally we would not commit in 2013 to an internal representation that
   we might regret in the future... how can we plan to be flexible?
      - run-time polymorphism, e.g. have the coordinate representation
      classes share a common base with virtual functions - probably too
      slow, and we don’t want to store the virtual function tables
      - code versioning - ugh
      - bury our heads in the sand - we might get lucky and never want to
      change our coordinate representation
      - compile-time polymorphism, e.g. mdrun<RVec> vs mdrun<PackedRVec,4>
         - might also allow a more elegant implementation of double- vs
         mixed-precision
          - code bloat if we want binaries that can run on any x86, since
          different CPUs will want different packings
         - compile-time bloat if compiling more than one such
         representation, as a lot of routines would now be parameterized

2. planning for do_md()

   - http://redmine.gromacs.org/issues/1137 discusses some thoughts about
   how we might like to make the integrator more awesome
   - Main loop inside do_md() is currently ~1300 lines, mostly with heavily
   nested conditionality
   - Currently, the need to pass lots of arguments to and from the
   functions it calls limits our ability to change anything; otherwise we
   could probably break it into
      - ManageSpecialCases()
      - DoNeighbourSearching()
      - CalculateForces()
      - DoFirstUpdate()
      - WriteTrajectories()
      - DoSecondUpdate()
      - WriteEnergies()
      - MoreManagementOfSpecialCases()
      - PrepareForNextIteration()
   - In C++, being able to construct an MDLoop object that contains (lots
   of) objects that already have their own “constant” data will mean we only
   need to pass to methods of those objects any remaining control values for
   the current operation
      - passing of state information managed by letting the MDLoop own that
      data and have the object implementing the strategy ask for what it needs?
   - Those objects will have a lot of inter-relationships, so probably need
   a common interface for (say) thermostat algorithms so that (say) the MDLoop
   update method knows it can just call (say) the thermostat object’s method
   and the result will be correct, whether there’s a barostat involved, or not
      - easily done with an (abstract?) base class and overriding virtual
      functions
          - however, that kind of *dynamic-binding* run-time polymorphism is
          overkill - likely any simulation knows before it gets into the
          main loop that it’s only ever going to call (say)
          AndersenThermostat’s methods
          - the overhead from such function calls is probably not a big deal
          - this loop is always going to be heavily dominated by
          CalculateForces()
          - inheritance can maximise code re-use
      - can be done by having function pointers that get set up correctly
      in the MDLoop constructor (i.e. “static” run-time polymorphism, as
      dictated by the .tpr)
         - this might lead to code duplication?
         - might lead to the current kind of conditional-heavy code,
         because it is now the coder’s job to choose the right code path, but
         hopefully only in construction
      - could be done with compile-time polymorphism (i.e. templates)
         - lots of duplicated object code because of the explosion of
         templated possibilities
   - need to bear in mind that probably this pretty front end will be
   queueing up work requests that will be dynamically dispatched to available
   hardware (obviously the dispatcher will focus on hardware that has the
   right data locality). That seems OK to Mark:
      - we need an interface that makes it reasonably easy to see that the
      physics of our algorithm should be working
      - how the work gets done *should* be somewhat opaque to MDLoop
      - separating the two makes for future extensibility and
      customizability
   - perhaps a good way to start to get a handle on what kinds of objects
   and relationships we need is to make an ideal flowchart for a plausible
   subset of mdrun functionality, and see what data has to be known where.
   Perhaps Michael can sketch something for us that illustrates what the
   algorithmic requirements of a “full Trotter decomposition framework” would
   be. (But probably not in time for this week!)
