[gmx-users] mdrun mpi segmentation fault in high load situation

Mark Abraham Mark.Abraham at anu.edu.au
Fri Dec 24 13:53:58 CET 2010


On 24/12/2010 9:59 PM, Wojtyczka, André wrote:
>>> I'm not sure that PD has any advantage here. From memory it has to
>>> create a 128x1x1 grid, and you can direct that with DD also.
>> See mdrun -h -hidden for -dd.
>>
>> Mark
>>
>>> The contents of your .log file will be far more helpful than stdout in
>>> diagnosing what condition led to the problem.
>>>
>>> Mark
>>>
>>>>>> So the only difference is the number of cores I am using.
>>>>>>
> I used -dd, but then my system consists of only 4 or slightly more
> domains, which gives me almost no advantage over -pd. The minimum size
> of a domain is tied to the largest bond length, which in my case is
> half of the box size or more.

If it were more than half the box size then, since that restricts the 
minimum diameter of a DD cell, DD would necessarily produce a single 
domain. Either way, it sounds like the ratio of system size to bond 
length is too small to permit efficient GROMACS-style parallelism. Not 
all systems are worth parallelising, even if you have a good algorithm 
for the case at hand... and both DD and PD are targeted at the usual 
situation in MD, where the box size is many times larger than the 
typical bond length.
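
For reference, directing DD onto the same 128x1x1 grid that PD would 
use looks something like this (a sketch only, assuming a GROMACS 4.x 
MPI build named mdrun_mpi and a run input named topol.tpr; both names 
are illustrative):

    mpirun -np 128 mdrun_mpi -s topol.tpr -dd 128 1 1

The corresponding PD run simply replaces -dd 128 1 1 with -pd.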

Mark

> I will post my .log file, but it will probably be next year.
>
> So merry Christmas and a jolly time.
> André
>
