[gmx-users] Fwd: Re: Gromacs MD simulation query

charles k.davis charles_p140045bt at nitc.ac.in
Tue Jul 12 05:37:10 CEST 2016


---------- Forwarded message ----------
From: "charles k.davis" <charles_p140045bt at nitc.ac.in>
Date: 11-Jul-2016 15:46
Subject: Re: Gromacs MD simulation query
To: <erik.marklund at chem.ox.ac.uk>
Cc:

Dear Dr. Erik,

mdrun_mpi is available on all nodes. The system administrator told us that "MPI is
installed on the master node and its path has been exported to all the
nodes".

What we have done is this: up to the second-to-last command, we ran the job
on a workstation. It is only for the final mdrun step that we need the
cluster/supercomputer.
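
(For context, that second-to-last step is presumably the grompp run that
produces input.tpr; the file names in the line below are only placeholders,
not our actual ones:

*grompp -f md.mdp -c equilibrated.gro -p topol.top -o input.tpr*)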

The command we use on the cluster is,

*qsub script.sh*

and script.sh contains the following lines,

#! /bin/bash
#PBS -q small
#PBS -e errorfile.err
#PBS -o logfile.log
#PBS -l select=2:ncpus=16
tpdir=`echo $PBS_JOBID | cut -f 1 -d .`
tempdir=/scratch/job$tpdir
mkdir -p $tempdir
cd $tempdir
cp -R $PBS_O_WORKDIR/* .
mpirun -np 32 -hostfile $PBS_NODEFILE mdrun_mpi -v -s input.tpr -c output.gro
mv ../job$tpdir $PBS_O_WORKDIR/.

The last command in our normal workstation procedure is,

*mdrun -v -deffnm <.tpr file name>*

So, is it possible to replace the *mdrun_mpi -v -s input.tpr -c output.gro*
part with *mdrun -v -deffnm <.tpr file name>*?
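
That is, would script.sh still work with a line like the sketch below? (The
base name "md" is only a placeholder for our actual .tpr name, and we have
not tested this; -deffnm takes the file name without the .tpr extension.)

*mpirun -np 32 -hostfile $PBS_NODEFILE mdrun -v -deffnm md*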

The steps we follow are attached to this mail.

Many Thanks,

Charles K Davis

PhD Student
School of Biotechnology,
National Institute of Technology,
Calicut, Kerala, India-673 601
Cell Phone: +91 9495595458

On Sat, Jul 9, 2016 at 9:43 PM, charles k.davis <
charles_p140045bt at nitc.ac.in> wrote:

>

> Dear Dr. Erik,
>
> Thank you for the reply. I will have a talk with my system administrator
> and get back to you with the details. Hope I'm not bothering you.
>
> Regards,
>
> Charles K Davis
>
> PhD Student
> School of Biotechnology,
> National Institute of Technology,
> Calicut, Kerala, India-673 601
> Cell Phone: +91 9495595458

