[gmx-users] Reg Gromacs in cluster problem

Justin Lemkul jalemkul at vt.edu
Wed Mar 26 15:59:25 CET 2014



On 3/26/14, 10:26 AM, vidhya sankar wrote:
> Dear Justin,
> Thank you for your previous reply. I am running Gromacs on a cluster with 16 processors.
> I am using the following script to submit the job:
>
> #!/bin/bash
>
> #PBS -l nodes=compute-0-5:ppn=16
> #PBS -l walltime=900:10:5
> hostname
> date
>
>
> cd $PBS_O_WORKDIR
> echo "files copied from" $PBS_O_WORKDIR
> echo "to computing directory" $TMPDIR
> cd $TMPDIR
>
>
> cp $PBS_O_WORKDIR/newplumed04RestartHILLS.dat  $TMPDIR/
>
> mpirun=/opt/openmpi/bin/mpirun
>
> export LD_LIBRARY_PATH=/share/apps/gromacsplu/lib
>
> source="/share/apps/gromacsplu//bin"
> MDRUN="/share/apps/gromacsplu/bin/mdrun_mpi_d"
> $mpirun -np 16 $MDRUN -s CNTPEPRSOLIONSfullplumed.tpr -npme 0 -cpi CNTPEPRSOLIONSfullplumed_prev.cpt -append  -nt 1  -plumed newplumed04RestartHILLS.dat -v -deffnm CNTPEPRSOLIONSfullplumed &>/dev/null
>
> cp --force $TMPDIR/* $PBS_O_WORKDIR/out14/
>
> rm -rf $TMPDIR
> date
>
>
> But after 19 hours of running, my node went down and its state shows as follows:
>
> state = down,job-exclusive
>       np = 16
>       ntype = cluster
>
>
> When I try to access my node using the following command,
>
> ssh compute-0-5
>
> it shows the following error:
>
> ssh: connect to host compute-0-5 port 22: No route to host
>
> I would be very grateful if you could give any suggestions.
>

This isn't a Gromacs problem.  Contact your sysadmin.
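For anyone hitting the same symptom, a generic first-pass triage run from the head node (a sketch only; it assumes standard Torque/PBS client tools such as pbsnodes, and uses the node name from the post):

```shell
#!/bin/sh
# Sketch: triage a compute node that the scheduler reports as "down".
# Adapt the node name and tool names to your site.
node=compute-0-5

# 1. Is the node reachable on the network at all?
#    "No route to host" from ssh usually means it is not.
ping -c 3 "$node" || echo "no network route to $node"

# 2. What state does the scheduler itself report?
if command -v pbsnodes >/dev/null 2>&1; then
    pbsnodes "$node"    # full attribute dump, including 'state ='
    pbsnodes -l         # list every node currently down or offline
else
    echo "pbsnodes not available on this host"
fi
```

If the node does not answer pings, nothing Gromacs-side can help; the machine itself has crashed or lost its network, which is the sysadmin's territory, as the reply above says.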

-Justin

-- 
==================================================

Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow

Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 601
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201

jalemkul at outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul

==================================================

