[gmx-users] MD PME in parallel
e.akhmatskaya at fle.fujitsu.com
Tue Mar 11 20:26:45 CET 2003
Hi David,
>I have run it using LAM and SCALI networks, in both cases it crashes at
>the end when writing the coordinates (confout.gro). It writes roughly
>6000 lines out of 23000.
Sounds familiar to me! I had this scenario on the Linux cluster too.
>You implied somehow that the problem only occurs when you have no water
>on node 0.
That is what I thought at first. Now I think it is more complicated: it
depends on the distribution of water molecules, but in a more sophisticated
way.
>There is a workaround for that, the -load option of grompp
>allows you to modify the division over nodes, e.g.:
>grompp -load "1.1 1.0 1.0 1.0 1.0"
Thanks for the idea! Yes, by playing with this option I can get those
benchmarks running on all machines. However, the performance becomes very
disappointing, which is not surprising since I am changing the load blindly.
Perhaps I could go further and adjust the load on a few more processors to
improve the load balance, but I still believe there should be a proper fix
in the code. I haven't found one so far ...
Cheers,
Elena.
_____________________________________________
Elena Akhmatskaya
Research Scientist
Physical & Life Sciences
Fujitsu Laboratories of Europe Ltd (FLE)
Hayes Park Central
Hayes End Road
Hayes, Middlesex
UB4 8FE
UK
tel: +44 (0) 2086064859
e-mail: e.akhmatskaya at fle.fujitsu.com