[gmx-developers] some benchs on cray xt4 and xt5
hessb at mpip-mainz.mpg.de
Wed Dec 10 15:15:42 CET 2008
Have you looked at the cycle counts at the end of the log files?
I expect that most of the time is consumed by the energy summation
when using that many CPUs.
Try running with the option -nosum.
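For example, a sketch of a job line (the launcher, file names, and rank count are placeholders for your setup; -nosum and -npme are mdrun options in GROMACS 4.x):

```
# aprun is the Cray XT job launcher; topol.tpr is a placeholder input.
# -nosum turns off the per-step global energy summation,
# -npme dedicates ranks to PME (as in the benchmarks below).
aprun -n 256 mdrun -s topol.tpr -npme 128 -nosum
```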
Also, if you are using PME, you need a relatively long cut-off and a
coarse PME grid for optimal performance; otherwise PME takes up a
disproportionate share of the time.
I would use something like: cut-off = 1.2, grid_spacing = 0.16.
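As an mdp fragment, those suggested values would look something like this (a sketch; a longer real-space cut-off with a coarser Fourier grid shifts work from the PME ranks to the particle-particle ranks):

```
; sketch of the suggested settings, not a tuned input
rlist           = 1.2
rcoulomb        = 1.2
rvdw            = 1.2
fourierspacing  = 0.16   ; coarser PME grid (nm)
```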
andrea spitaleri wrote:
> Dear all,
> I am using gromacs-4.0.2 on two systems: cray xt4 and xt5 (csc louhi). Here is a short table
> with some test results:
> MD simulation, 9 ns, protein + water system (ca. 200,000 atoms):
>   128 cpu,  64 pme: 15 h 30 min on hector (xt4)
>   128 cpu,  64 pme: 15 h 20 min on louhi  (xt4)
>   128 cpu,  64 pme: 20 h        on louhi  (xt5)
>   256 cpu, 128 pme: 12 h        on hector (xt4)
>   256 cpu, 128 pme: 21 h        on louhi  (xt5)
> One possible explanation (from one of the administrators):
> "One possibility for this is that Gromacs is message intensive, and is
> therefore slower on the xt5 because of the xt5 architecture. (Basically two
> nodes (8 cores) share the same HyperTransport link, whereas on the xt4 each node
> (4 cores) has one of its own, see e.g.
> http://www.csc.fi/english/pages/louhi_guide/hardware/computenodes/index_html )"
> what do you think about it?
> thanks in advance