[gmx-users] Testing

Roland Schulz roland at utk.edu
Fri Aug 1 22:11:52 CEST 2008

On Fri, Aug 1, 2008 at 3:48 PM, David van der Spoel <spoel at xray.bmc.uu.se> wrote:

> Roland Schulz wrote:
>> On Fri, Aug 1, 2008 at 4:47 AM, David van der Spoel <spoel at xray.bmc.uu.se> wrote:
>>    chris.neale at utoronto.ca wrote:
>>        Roll back to gcc 3.x.
>>        There is information available that says something like "don't
>>        use gcc 4.x, it is broken", but I stand by my previous comments
>>        that it is unfortunate that it is up to the end user to search
>>        the gromacs archives to find this out, notwithstanding that it
>>        is a gcc-based problem.
>>        In my opinion, you're fortunate to have found this out and there
>>        are probably *lots* of people running gcc 4.x installations of
>>        gromacs right now.
>>    We discussed including the test set with the distribution, which
>>    would simplify the procedure, but decided against it, because the
>>    distribution would become a lot bigger.
>>    Maybe we should reconsider this?
>> I think we also need some larger tests, with a few hundred thousand up to a
>> few million atoms, because in the last weeks I had several cases where mdrun
>> binaries from different compilers all worked fine on smaller systems but
>> gave totally different results (became unstable) with some compilers (I had
>> problems with gcc 4.2.0 with Barcelona patches, PGI, and IBM xlc; it worked
>> with gcc 4.2.4 and PathScale). Without common tests for these larger systems
>> it is hard to report such problems.
> From a software point of view this is just coincidence.

I agree that this is often the case, but not always. There was, e.g., a
(rounding) problem in the domain decomposition which only occurred when
using more than 20 or so cells in one dimension. You can only find this with
really large tests.

> Often it turns out that if you hit such a problem, be it due to the
> software or due to the compiler, you can reduce the size of the system and
> still reproduce it. But more tests and more diverse tests would be good. We
> don't have any coarse-grained tests, and very few non-water tests. However,
> you don't want to run hundreds of tests for each installation, so the
> problem remains...

Yes, I agree.

I guess I'm not really talking about the current test set for installation
tests; you don't want to run through a million-atom test for each installation.

I think we should have an additional test set for manual testing, mainly to
help with writing bug reports for MPI-related problems, so that you can file
the report against a known, shared test case when you aren't able to reduce
the problem to a small case yourself.

> Maybe we should endorse a few specific compilers (and versions), and bail
> out on others, unless you force configure to use those?

I think this is a great idea, and even better if we combine it with a
nightly test build on all endorsed compilers. We could use
http://buildbot.net/trac for that. I'm happy to provide nightly build and
test-run results for the systems I have access to, and I can also help set
up the automatic build server.
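The bail-out behaviour suggested above could look roughly like the following
configure-time sketch. This is only an illustration: the endorsed version
list, the `is_endorsed` helper, and the `--disable-compiler-check` flag are
all hypothetical, not actual GROMACS configure logic.

```shell
#!/bin/sh
# Hypothetical sketch: refuse to build with a compiler version that is not
# on an endorsed list, unless the user explicitly overrides the check.

# is_endorsed VERSION -> exit status 0 if this gcc version is endorsed
is_endorsed() {
    case "$1" in
        3.*|4.2.4) return 0 ;;  # example endorsed versions (assumption)
        *)         return 1 ;;
    esac
}

# In a real configure script the version would come from `gcc -dumpversion`;
# here we just demonstrate the check on a few sample version strings.
for v in 3.4.6 4.2.4 4.2.0; do
    if is_endorsed "$v"; then
        echo "gcc $v: endorsed"
    else
        echo "gcc $v: not endorsed; pass --disable-compiler-check to build anyway"
    fi
done
```

The point of keeping the override flag is that the check should steer users
toward known-good compilers without hard-blocking people who want to test a
new one.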


Center for Molecular Biophysics ORNL/UT cmb.ornl.gov
