[gmx-users] parallel run breaks !!
Mark.Abraham at anu.edu.au
Tue Apr 19 02:43:16 CEST 2011
On 4/18/2011 4:39 PM, delara aghaie wrote:
> Dear Gromacs users
> I am running a DPPC monolayer on a TIP4P/2005 water layer system with GROMACS 3.3.
> I had no problems before, but when I run the system with the command
> grompp_mpi -c .gro -f .mdp -n .ndx -p .top -o .tpr -np 8
> and then qsub .ll
> the run starts, but it has broken down several times, each time after,
> for example, 987654 steps, sometimes more, sometimes fewer.
If you've changed nothing, and it used to work, and now doesn't, then
something about your computer has changed. We can't really help there -
and there must be better people to ask.
> In the .o.ii file I get this message:
> PBS has allocated the following nodes:
> mpiexec.intel -genv I_MPI_DEVICE ssm -genv I_MPI_DEBUG 0 -genv
> I_MPI_PIN yes -genv I_MPI_PIN_MODE lib -genv I_MPI_FALLBACK_DEVICE
> disable -genv DAPL_MAX_CM_RETRIES 500 -genv DAPL_MAX_CM_RESPONSE_TIME
> 300 -genv I_MPI_DAPL_CONNECTION_TIMEOUT 300 -machinefile
> /tmp/pbs.5455562.cx1/tmp.VRhkM11825 -n 8 -wdir /tmp/pbs.5455562.cx1
> mdrun_mpi -v -s topol.tpr -np 8
> Job output begins below
> mpdallexit: cannot connect to local mpd
> (/tmp/pbs.5455562.cx1/mpd2.console_dmohamma_110411.074928); possible causes:
> 1. no mpd is running on this host
> 2. an mpd is running but was started without a "console" (-n option)
> What can be the solution to this problem?
> D. Aghaie
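The mpdallexit errors at the end are the real clue: Intel MPI's mpd-based
launcher could not find a running mpd ring on the allocated nodes, which is a
cluster/MPI configuration problem, not a GROMACS one. A minimal sketch of how
the PBS job script might bring up the ring before launching mdrun, assuming
Intel MPI's mpd process manager and the standard $PBS_NODEFILE host list (the
node count and variable names here are illustrative, not taken from your
actual script):

  # Start one mpd daemon per allocated node before calling mpiexec.
  NNODES=$(sort -u "$PBS_NODEFILE" | wc -l)   # count unique hosts in the PBS node list
  mpdboot -n "$NNODES" -f "$PBS_NODEFILE"     # bring up the mpd ring
  mpdtrace                                    # verify every node answers

  mpiexec -n 8 mdrun_mpi -v -s topol.tpr -np 8

  mpdallexit                                  # tear the ring down when the job ends

Alternatively, if your Intel MPI installation ships mpiexec.hydra, that
launcher manages its own daemons and needs no mpd ring at all. Either way,
your cluster's system administrators are the right people to ask about how
MPI jobs are supposed to be launched on that machine.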