[gmx-users] Re: Re: load imbalance

#ZHAO LINA# ZHAO0139 at ntu.edu.sg
Tue Apr 6 17:56:04 CEST 2010


> lina wrote:
> >> On 6/04/2010 5:39 PM, lina wrote:
> >>> Hi everyone,
> >>>
> >>> Here is the result of the mdrun which was performed on 16cpus. I am
> not
> >>> clear about it, was it due to using MPI reason? or some other
> reasons.
> >>>
> >>> Writing final coordinates.
> >>>
> >>>   Average load imbalance: 1500.0 %
> >>>   Part of the total run time spent waiting due to load imbalance:
> 187.5 %
> >>>   Steps where the load balancing was limited by -rdd, -rcon and/or
> -dds:
> >>> X 0 % Y 0 %
> >>>
> >>> NOTE: 187.5 % performance was lost due to load imbalance
> >>>        in the domain decomposition.
> >> You ran an inefficient but otherwise valid computation. Check out
> the 
> >> manual section on domain decomposition to learn why it was
> inefficient, 
> >> and whether you can do better.
> >>
> >> Mark
> > 
> > I searched for the "decomposition" keyword in the Gromacs manual; no match
> > found.
> > Are you positive about that? Thanks anyway, but can you make it more
> 
> Section 3.17, titled "Domain Decomposition", discusses the algorithm
> and the mdrun parameters relevant to controlling performance.
> 
> -Justin
> 
> Sorry, the acroread search really is not sensitive: searching for "domain" or
> "decomposition" in version 3.3 found nothing. I did the same search on version
> 3.2 and found it. I am not so familiar with this manual. Thanks again.
> 
> lina 

Since your performance roughly doubles when going from 8 to 16 cores,
I think there is no significant load imbalance, and that there is a bug in the
routine that calculates the total load imbalance.
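For context, the usual definition of domain-decomposition load imbalance is the maximum per-rank work time divided by the average, minus one, reported as a percentage. Under that definition, 1500 % on 16 ranks would mean one rank did sixteen times the average work, which is why the reported number looks like a bug rather than a real imbalance. The sketch below is only an illustration of that formula, not GROMACS source code:

```python
def load_imbalance_percent(per_rank_times):
    """Percent load imbalance from per-rank compute times: (t_max/t_avg - 1) * 100."""
    t_max = max(per_rank_times)
    t_avg = sum(per_rank_times) / len(per_rank_times)
    return (t_max / t_avg - 1.0) * 100.0

# 16 perfectly balanced ranks: 0 % imbalance
print(load_imbalance_percent([1.0] * 16))

# One rank twice as slow as the rest: about 88 %, not 1500 %
print(round(load_imbalance_percent([1.0] * 15 + [2.0]), 1))
```

With this definition the imbalance is bounded by (N - 1) * 100 % for N ranks, so any report above that bound points to a reporting error rather than the decomposition itself.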

In my previous mail I asked which Gromacs version you are using.
Could you please tell me this?

Berk


Hi, it's version 4.0.7. Sorry, I generally plan to use the latest (more stable) version, so last time I forgot to answer you.
I also ran it on a cluster before, and it did not show such problems there. (Through a careless mistake I overwrote that output when I tried to compare it with the multicore results; that was days ago and I can barely remember the details. Since I worried I might have overlooked the problem, I am re-running it now, and the results will take a while.) Thanks for the suggestions to read the manual.
The machine I used is a single multicore machine with a 64-bit kernel.

Thanks and regards,

lina




