[gmx-developers] EM with gmx4.0.2
Berk Hess
hessb at mpip-mainz.mpg.de
Wed Nov 19 14:00:40 CET 2008
Hi,
Something strange happened in mdrun, since it says the grid size is 0.
Could you mail me the tpr file that generated this error?
You could use -npme 0, but your relative PME load (0.73) is extremely high.
For better performance you should increase your cut-off and the PME grid spacing
by the same factor; grompp probably also gave you a note about this.
With a better relative PME load, mdrun will run fine without further
options.
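
For example (just a sketch, assuming your force field tolerates a longer
real-space cut-off; tune the factor to your system), scaling the cut-off and
the grid spacing by the same factor of about 4/3 would give:

rlist = 1.2
rcoulomb = 1.2
rvdw = 1.2 ; or keep 0.9 if your force field prescribes it
fourierspacing = 0.2

The Ewald accuracy stays the same, since grompp derives the Ewald parameter
from rcoulomb and ewald_rtol, but a large part of the work moves from the PME
mesh to the direct-space part. With that, a plain
mdrun -s box -v -deffnm boxem should find a sensible split over your 16 nodes
without any -npme option.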
Berk
andrea spitaleri wrote:
> Hi there,
> we are having trouble with EM using gmx-4.0.2. Basically the problem is that the run crashes immediately,
> complaining about the number of separate PME nodes:
>
>
>> mdrun -s box -v -deffnm boxem
>>
>
> Program mdrun, VERSION 4.0.2
> Source code file: domdec_setup.c, line: 132
>
> Fatal error:
> Could not find an appropriate number of separate PME nodes. i.e. >= 0.727885*#nodes (11) and <=
> #nodes/2 (8) and reasonable performance wise (grid_x=88, grid_y=88).
> Use the -npme option of mdrun or change the number of processors or the PME grid dimensions, see the
> manual for details.
>
> while:
>
>
>> mdrun -s box -npme 8 -v -deffnm boxem
>>
> gives:
> Program mdrun, VERSION 4.0.2
> Source code file: domdec.c, line: 5860
>
> Fatal error:
> The size of the domain decomposition grid (0) does not match the number of nodes (8). The total
> number of nodes is 16
>
> We are able to minimize the system using gmx-3.3.3 and then run the MD with gmx-4.0.2 without trouble.
> Our em.mdp is:
>
> define = -DFLEXIBLE
> integrator = steep
> dt = 0.002 ; ps
> nsteps = 2000
> nstlist = 5
> ns_type = grid
> pbc = xyz
> rlist = 0.9
> table-extension = 2
> coulombtype = PME
> rcoulomb = 0.9
> rvdw = 0.9
> fourierspacing = 0.15
> fourier_nx = 0
> fourier_ny = 0
> fourier_nz = 0
> pme_order = 4
> ewald_rtol = 1e-5
> optimize_fft = yes
> ;
> ; Energy minimizing stuff
> ;
> emtol = 1000
> emstep = 0.01
> lincs_iter = 4
>
>
> Any help is appreciated.
>
> Thanks in advance,
>
> andrea
>
>