[gmx-users] Announcement: Large biomolecule benchmark report

chris.neale at utoronto.ca
Fri Mar 16 14:53:01 CET 2012

You should absolutely publish this. It would be of great interest. You
can reduce the chance of running into problems with the overview
by sending a version of the manuscript to the developers of each
software package and asking them to provide a short paragraph, each of
which you could include in a final section of responses from the developers.

A manuscript such as this (and indeed the information you have
already made available) will be very useful for many reasons. One
reason is that when new PhD students start learning about simulations,
they tend to use the package that has been adopted by their research
group and to trust the (probably biased and partially uninformed)
statements of their senior colleagues.


-- original message --

Thanks a lot to you and also to Szilárd for your feedback and
encouragement.  I am very happy to see that this work is indeed useful,
especially to developers.

We have no plans to make this into a 'proper' publication.  I am not
sure how interested the simulation community would be because, to
be honest, I have no overview of what has been done in this area (besides
the few benchmark studies I have cited).

Thanks again,

On Thu, 15 Mar 2012 22:02:21 +0100
David van der Spoel <spoel at xray.bmc.uu.se> wrote:

On 2012-03-15 14:37, Hannes Loeffler wrote:
Dear all,

we proudly announce our third benchmarking report on (large)
biomolecular systems carried out on various HPC platforms.  We have
expanded our repertoire to five MD codes (AMBER, CHARMM, GROMACS,
LAMMPS and NAMD) and to five protein and protein-membrane systems
ranging from 20 thousand to 3 million atoms.

Please find the report on
where we also offer the raw runtime data.  We also plan to release
the complete joint benchmark suite at a later date (as soon as we
have access to a web server with sufficient storage space).

We are open to any questions or comments related to our reports.

It looks very interesting, and having benchmarks done by independent
researchers is the best way to avoid bias. The differences are quite
revealing, and it is also good that you point out the problems compiling
GROMACS. Is this going to be submitted for publication somewhere too?

Thanks for doing this, it must have been quite a job!

Kind regards,
Hannes Loeffler
STFC Daresbury

More information about the gromacs.org_gmx-users mailing list