[gmx-users] Error while scaling mdrun for more number of nodes.

Berk Hess gmx3 at hotmail.com
Fri Sep 25 11:18:04 CEST 2009


Why do you want to run on exactly 57 nodes?
That is a nasty number to factorize (57 = 3 x 19).
I guess 56 or 60 nodes would work fine.
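(A small illustration, not GROMACS code: mdrun has to factor the particle-particle node count into a 1-, 2-, or 3-D decomposition grid, so a node count with few divisors leaves few, often badly shaped, grid choices. The helper below just lists divisors to show why 56 and 60 are friendlier than 57.)

```python
def divisors(n):
    """Return all positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

# 57 = 3 x 19 admits almost no decomposition grids,
# while 56 and 60 are highly composite.
for n in (56, 57, 60):
    print(n, divisors(n))
```

Running this shows 57 has only the divisors 1, 3, 19, 57, while 56 and 60 each offer many more ways to lay out the decomposition.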


Date: Fri, 25 Sep 2009 14:39:27 +0530
From: viveksharma.iitb at gmail.com
To: gmx-users at gromacs.org
Subject: [gmx-users] Error while scaling mdrun for more number of nodes.

Hi There,
I was trying to run mdrun on a large number of nodes. When I tried the run on 57 nodes, I got the error pasted below.
Program mpi_mdrun_d, VERSION 4.0.3

Source code file: domdec_setup.c, line: 147

Fatal error:
Could not find an appropriate number of separate PME nodes. i.e. >= 0.409991*#nodes (44) and <= #nodes/2 (57) and reasonable performance wise (grid_x=63, grid_y=63).  

Use the -npme option of mdrun or change the number of processors or the PME grid dimensions, see the manual for details.
Then I tried the -npme option as "-npme 20"; this time it failed with the following error.

Program mpi_mdrun_d, VERSION 4.0.3
Source code file: domdec.c, line: 5858

Fatal error:
There is no domain decomposition for 94 nodes that is compatible with the given box and a minimum cell size of 1.025 nm

Change the number of nodes or mdrun option -rcon or -dds or your LINCS settings
Look in the log file for details on the domain decomposition
The same system ran fine when I tried it on 4 nodes.

I haven't used GROMACS 4.0 much, so I don't understand these errors.
Please suggest a way around them; it would be really helpful if anybody could explain these errors to me.

Thanks in advance.

Regards,
