[gmx-users] Comparing various MD packages

Mark Abraham mark.j.abraham at gmail.com
Thu Aug 27 09:45:58 CEST 2015


Hi,

On Tue, Aug 25, 2015 at 8:16 PM Sabyasachi Sahoo <ssahoo.iisc at gmail.com>
wrote:

> Hello all,
> I am interested in comparing various MD packages such as LAMMPS, GROMACS,
> DESMOND, NWChem, and RedMD. I looked through the literature for relevant
> papers and found one or two that do so, but they used different sets of
> benchmark problems; no single common benchmark was used to compare all of
> them.
>
> However, I fail to understand how to draw any common conclusion about the
> molecular dynamics packages when a different benchmark problem was used
> for every package they tested. Granted, every molecular dynamics package
> has different objectives and tries to meet different goals, but shouldn't
> we compare all MD packages on the same benchmark problem rather than
> picking a different benchmark suited to each individual package? At the
> end of the day, whatever optimizations are made for specific subsets of
> MD problems, all MD packages implement the same (or at least similar)
> equations of mechanics with the goal of simulating molecules.
>

This sounds reasonable, but will generate more heat than light.
* As soon as you have a standard benchmark, people are tempted to engineer
for the benchmark rather than for actual usefulness (e.g. the use of
LINPACK to determine Top500 rankings).
* There may be a sufficiently similar subset of functionality common to all
packages, but very few users of any of the packages run in that mode, so
the comparison doesn't mean anything.
* Then you have the problem of choosing test hardware - an MD code that
runs only on CUDA GPUs within a single node is not going to be comparable
with one that runs over MPI+CUDA using both CPUs and GPUs.
* Then there are the qualities that you might actually compare - ideally
you'd want a metric of simulation rate for a given simulation quality, but
the quality required for different kinds of actual science is probably
different and isn't well understood anyway.
* If you have to buy your own hardware, then the cost analysis will be
different from the case where you can win free time on a big national
computing resource (and does it matter that such a code might be
"scalable" to lots of nodes only because it is slow?).
* Then there's the choice of the date on which you do the comparison, and
how that lines up with the simulation packages' development trajectories
(and if you published such a comparison in a journal, it would be out of
date before it went to press).
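To make the "simulation rate" half of that metric concrete: MD benchmarks are conventionally reported in ns/day, i.e. how many nanoseconds of trajectory the code produces per day of wall-clock time. A minimal sketch of that conversion (the `ns_per_day` helper and the example numbers are my own illustration, not taken from any particular package):

```python
def ns_per_day(n_steps, timestep_fs, wall_seconds):
    """Convert a benchmark run into the conventional ns/day throughput metric.

    n_steps:      number of MD integration steps performed
    timestep_fs:  integration timestep in femtoseconds (2 fs is a common choice)
    wall_seconds: wall-clock time the run took, in seconds
    """
    simulated_ns = n_steps * timestep_fs * 1e-6   # femtoseconds -> nanoseconds
    return simulated_ns * 86400.0 / wall_seconds  # scale to one day of wall time

# Example: 500,000 steps at 2 fs (1 ns of trajectory) in one hour of wall time
print(ns_per_day(500_000, 2.0, 3600.0))  # -> 24.0 ns/day
```

Note that this captures only throughput; the "for a given simulation quality" half (accuracy of the force field, constraints, cutoffs, etc.) is exactly the part that has no agreed-upon measure.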

> If we were to select one or two benchmark problems for comparing MD
> packages, could anyone please suggest some good ones? It will be hard for
> me to narrow down the set of benchmark problems myself, owing to my
> non-biological-sciences background. Any material that throws more light
> on this topic would be of great help to me, as would any further insight.
>
> P.S.: If this question, or something similar, has been asked before, could
> you please give me a link to it?
>

You can see results of previous such efforts online, e.g.
http://www.hecbiosim.ac.uk/benchmarks/BioMolBenchIII.pdf

Mark


> Thanks in advance
>
> --
> Yours sincerely,
> Saby
