[gmx-users] Problem with the mdrun_openmpi on cluster

James Starlight jmsstarlight at gmail.com
Mon Mar 14 18:19:56 CET 2016


I tried to increase the size of the system by providing a much bigger bilayer.

For this I obtained another error, also related to DD:

Program g_mdrun_openmpi, VERSION 4.5.7
Source code file:
/builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec_con.c, line: 693

Fatal error:
DD cell 0 2 1 could only obtain 0 of the 1 atoms that are connected
via vsites from the neighboring cells. This probably means your vsite
lengths are too long compared to the domain decomposition cell size.
Decrease the number of domain decomposition grid cells.
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

"It's So Fast It's Slow" (F. Black)

Error on node 9, will try to stop all the nodes
Halting parallel program g_mdrun_openmpi on CPU 9 out of 64
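
Following that advice, a minimal sketch (not tested here) would be to run on fewer MPI ranks or to request an explicitly coarser DD grid, assuming the run files are still named sim:

mpiexec -np 8 g_mdrun_openmpi -deffnm sim -dd 2 2 2

where -dd 2 2 2 forces a 2x2x2 decomposition instead of one cell per rank; the grid that actually works will depend on the box size and the vsite construction distances.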


BTW, I checked the bottom of the system .gro file and found the following values, which seem too small for my system consisting of several hundred lipids, don't they?

  15.00000  15.00000  15.00000   0.00000   0.00000   0.00000   0.00000
  0.00000   0.00000


In my case that .gro file was produced automatically with the MARTINI insane.py script:

./insane.py -f test.pdb -o system.gro -p system.top -pbc cubic -box 15,15,15 -l DPPC:4 -l DOPC:3 -l CHOL:3 -salt 0.15 -center -sol W
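
For reference, the last line of a .gro file holds the box vectors in nm (v1(x) v2(y) v3(z) followed by the six off-diagonal components), so the nine values quoted above describe a 15 x 15 x 15 nm rectangular box, matching the -box 15,15,15 given to insane.py. A quick check, assuming the file is the system.gro written by that command:

tail -n 1 system.gro   # the last line of a .gro file is the box line, values in nm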


I will be very thankful for any help!

J.

2016-03-14 17:31 GMT+01:00 James Starlight <jmsstarlight at gmail.com>:
> Also, below are my mdp options, which might be relevant - I am using the Martini force field here to simulate a GPCR embedded in lipids.
>
> title           = production run for GPCR
> dt               =  0.02
> nsteps           =  500000000
> nstxout          =  0
> nstvout          =  0
> nstlog           =  5000
> nstxtcout        =  5000
> xtc-precision    =  5000
> nstlist          =  1           ; Frequency to update the neighbor list and long range forces
> ns_type          =  grid        ; Method to determine neighbor list (simple, grid)
> rlist            =  1.2         ; Cut-off for making neighbor list (short range forces)
> coulombtype      =  PME         ; Treatment of long range electrostatic interactions
> rcoulomb         =  1.2         ; Short-range electrostatic cut-off
> rvdw             =  1.2         ; Short-range Van der Waals cut-off
> pbc              =  xyz         ; Periodic Boundary Conditions
>
>
>
> What should the -rcon flag of mdrun do, following the advice of the error message?
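>
> From the mdrun output quoted below, -rcon apparently overrides the estimated P-LINCS distance (2.626 nm here) that sets the minimum DD cell size, so forcing it lower should allow smaller cells at the risk of LINCS/constraint errors. A minimal sketch, with an illustrative value only:
>
> mpiexec -np 64 g_mdrun_openmpi -deffnm sim -rcon 1.9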
>
> 2016-03-14 17:27 GMT+01:00 James Starlight <jmsstarlight at gmail.com>:
>> Forgot to add that I also tried decreasing the number of CPUs to 16 and the error was the same.
>>
>> 2016-03-14 17:26 GMT+01:00 James Starlight <jmsstarlight at gmail.com>:
>>> The error appears when I try to run a not very big system on a large number of CPUs in parallel.
>>>
>>>
>>> My system is a receptor embedded in a membrane consisting of 120 lipids; the run input was produced by grompp.
>>>
>>> Initializing Domain Decomposition on 64 nodes
>>> Dynamic load balancing: no
>>> Will sort the charge groups at every domain (re)decomposition
>>> Initial maximum inter charge-group distances:
>>>     two-body bonded interactions: 1.174 nm, Bond, atoms 3610 3611
>>>   multi-body bonded interactions: 1.174 nm, Improper Dih., atoms 3604 3610
>>> Minimum cell size due to bonded interactions: 1.200 nm
>>> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 2.626 nm
>>> Estimated maximum distance required for P-LINCS: 2.626 nm
>>> This distance will limit the DD cell size, you can override this with -rcon
>>> Guess for relative PME load: 0.87
>>> Using 0 separate PME nodes, as guessed by mdrun
>>> Optimizing the DD grid for 64 cells with a minimum initial size of 2.626 nm
>>> The maximum allowed number of cells is: X 3 Y 3 Z 3
>>>
>>>
>>> Are there solutions for this kind of system, perhaps by altering some cutoffs etc.?
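>>>
>>> Some arithmetic from the output above, as a sketch only: with a minimum cell size of 2.626 nm the box allows at most 3 x 3 x 3 = 27 DD cells, so 64 ranks cannot be used; with no separate PME nodes, at most 27 MPI ranks would fit, e.g.
>>>
>>> mpiexec -np 27 g_mdrun_openmpi -deffnm sim -dd 3 3 3 -npme 0
>>>
>>> where -dd and -npme simply mirror the 3 x 3 x 3 grid and the "0 separate PME nodes" reported above.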
>>>
>>> 2016-03-14 17:22 GMT+01:00 Smith, Micholas D. <smithmd at ornl.gov>:
>>>> What is your box size (x, y, z)?
>>>>
>>>> What happens if you use half that number of nodes?
>>>>
>>>> ===================
>>>> Micholas Dean Smith, PhD.
>>>> Post-doctoral Research Associate
>>>> University of Tennessee/Oak Ridge National Laboratory
>>>> Center for Molecular Biophysics
>>>>
>>>> ________________________________________
>>>> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of James Starlight <jmsstarlight at gmail.com>
>>>> Sent: Monday, March 14, 2016 12:19 PM
>>>> To: Discussion list for GROMACS users
>>>> Subject: [gmx-users] Problem with the mdrun_openmpi on cluster
>>>>
>>>> Hello,
>>>>
>>>> I am trying to submit a job on 64 nodes of my local cluster using the combination of software below:
>>>>
>>>>
>>>> DO="mpiexec -np 64"
>>>> PROG="g_mdrun_openmpi"
>>>>
>>>>
>>>> $DO $PROG -deffnm sim
>>>>
>>>>
>>>> and I obtain this error:
>>>>
>>>>
>>>>
>>>> Program g_mdrun_openmpi, VERSION 4.5.7
>>>> Source code file:
>>>> /builddir/build/BUILD/gromacs-4.5.7/src/mdlib/domdec.c, line: 6436
>>>>
>>>> Fatal error:
>>>> There is no domain decomposition for 64 nodes that is compatible with
>>>> the given box and a minimum cell size of 2.6255 nm
>>>> Change the number of nodes or mdrun option -rcon or your LINCS settings
>>>>
>>>> Could someone provide me with a trivial solution?
>>>>
>>>> Thanks!
>>>>
>>>> J.
>>>> --
>>>> Gromacs Users mailing list
>>>>
>>>> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
>>>>
>>>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>>>
>>>> * For (un)subscribe requests visit
>>>> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.

