[gmx-users] cpu utilization

Dean Johnson dtj at uberh4x0r.org
Fri Oct 17 21:24:00 CEST 2003


On Friday, October 17, 2003, at 02:42  PM, David wrote:
> Which benchmark?
>
> I'd suggest using the DPPC benchmark and run grompp -shuffle.
> Furthermore for other benchmarks you have to have a reasonable ratio of
> computation/communication, e.g. at least 3000-5000 atoms per processor
> to scale. It would be interesting to see some scaling benchmarks using
> Amber and Gromacs for the same calculation.
>
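As a reference point, the suggested preprocessing step might look like this under the GROMACS 3.x tools of the time; the input file names here are placeholders, not from the original post, and this is only a sketch of the -shuffle/-sort workflow being discussed:

```
# GROMACS 3.x: preprocess for a 16-node parallel run, letting grompp
# reorder atoms (-shuffle, -sort) to improve locality per node.
# File names are placeholders, not from the original post.
grompp -f grompp.mdp -c conf.gro -p topol.top \
       -np 16 -shuffle -sort -o topol.tpr
mpirun -np 16 mdrun_mpi -s topol.tpr
```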

I have been using the DPPC benchmark, as it gives me lots of external 
benchmarking data to work with. As soon as I become comfortable that 
Gromacs is behaving as it should, I think I can post some Amber vs. 
Gromacs benchmarks. That is predicated on whether the bio-geeks believe 
the two benchmarks are apples to apples. We are looking to Gromacs for 
better performance in our particular case. For instance, we get about the 
same performance on a single G5 cpu as we get on a 16-cpu xeon cluster 
(roughly 1 ns per 3 days). I am using shuffle and sort, and the runs seem 
to behave better.
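The numbers above can be sanity-checked with a short script. The DPPC benchmark atom count used here (~121,856 atoms for the d.dppc system) is an assumption on my part, as is reading "1 ns per 3 days" as a steady rate:

```python
# Back-of-the-envelope checks for the figures discussed above.
# ASSUMPTION (not from the original post): the GROMACS DPPC
# benchmark system (d.dppc) has roughly 121,856 atoms.

DPPC_ATOMS = 121_856

# "roughly 1 ns per 3 days" expressed as ns/day
ns_per_day = 1.0 / 3.0

# David's rule of thumb: 3000-5000 atoms per processor to scale.
# Largest CPU counts that still satisfy the rule for DPPC:
max_cpus_loose = DPPC_ATOMS // 3000   # 3000 atoms/cpu -> 40 CPUs
max_cpus_tight = DPPC_ATOMS // 5000   # 5000 atoms/cpu -> 24 CPUs

print(f"{ns_per_day:.2f} ns/day")
print(f"DPPC should scale to roughly {max_cpus_tight}-{max_cpus_loose} CPUs")
```

So by that rule of thumb, a 16-cpu run of DPPC is comfortably inside the range where scaling should still be reasonable.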

>
>>
>> Also, perhaps a bit of evidence, it has a real hard time cleaning up
>> after itself. The output indicating performance and stuff comes *way*
>> before the jobs eventually die and some never die. I had to write a
>> simple  'killgro' command to nuke everything for good. Any ideas?
>>
> MPICH problems? Most gromacs users are using LAM. Once more, since TCP
> communication in Linux is slow compared to Shared mem, you want to run
> 8x2 cpus rather than 16x1.
>
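For what it's worth, with LAM the 8x2 layout David describes would be requested through the boot schema, so that each pair of processes on a node talks over shared memory rather than TCP. A minimal sketch (hostnames are placeholders, not from the original post):

```
# lamhosts boot schema: 8 dual-cpu nodes -> 16 processes total.
# Hostnames are placeholders.
node01 cpu=2
node02 cpu=2
# ... and so on through node08 cpu=2
```

After `lamboot lamhosts`, `mpirun C mdrun_mpi ...` starts one process on each scheduled CPU.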

Our experience with Amber is that 16x1 is considerably faster than 8x2, 
which leads me to believe that it's pummeling the memory a little too much. 
What's odd is that you can run 2x16x1 decidedly faster than 1x16x2. I am 
speculating that it gets into some self-regulating behavior and the two 
runs stay out of each other's way. That's just a silly guess on my part.

	-Dean
