[gmx-users] GROMACS on a parallel Linux cluster

Peter Spijker pspijker at wag.caltech.edu
Mon Mar 1 18:53:00 CET 2004


On the cluster we are required to submit jobs through PBS scripting. But
from your answer I conclude that the problem is more likely with the
cluster than with the GROMACS code?

Peter

----- Original Message -----
From: "Ilya Chorny" <ichorny at maxwell.compbio.ucsf.edu>
To: <gmx-users at gromacs.org>
Sent: Monday, March 01, 2004 6:48 PM
Subject: RE: [gmx-users] GROMACS on a parallel Linux cluster


> Did you try running the jobs without PBS (i.e. just using LAM or MPICH)?
> I run parallel jobs all the time and they work just fine.
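>
> For example (a minimal sketch, assuming a working LAM installation and a
> hand-written hostfile; the node names and the .tpr filename here are
> just placeholders):
>
>   cat > hosts << EOF
>   node01
>   node02
>   EOF
>   lamboot -v hosts
>   mpirun -c 2 mdrun -s topol.tpr -np 2
>   lamhalt
>
> If that works reliably, the randomness is coming from the batch system
> rather than from GROMACS or MPI.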
>
> Ilya
>
>
> -----Original Message-----
> From: gmx-users-admin at gromacs.org [mailto:gmx-users-admin at gromacs.org]
> On Behalf Of Peter Spijker
> Sent: Monday, March 01, 2004 9:42 AM
> To: gmx-users at gromacs.org
> Subject: [gmx-users] GROMACS on a parallel Linux cluster
>
> Hi all,
>
> With the help of David van der Spoel I was able to get GROMACS running
> on a parallel machine, but it behaves very awkwardly. Submitting jobs
> with the same PBS script (apart from differences in the number of nodes
> and processors) results in seemingly random acceptance of the job. I was
> wondering whether this has happened to anyone else before; I thought
> GROMACS might be failing on a certain request. Some technical
> information follows below. Thanks for helping me.
>
> Kind regards,
>
> Peter Spijker
> California Institute of Technology
>
> ---
>
> I am using the DPPC benchmark system downloadable from the GROMACS
> website.
>
> Below are the relevant lines from the PBS script ($MDP holds the name of
> the MDP file, and so on; note that MPIRUN takes the number of nodes as
> input, not the number of processors):
>
> #!/bin/csh
>
> #PBS -l nodes=4:ppn=1
> #PBS -N GROMACS_4_1
> #PBS -q workq
> #PBS -o std.out
> #PBS -e std.err
>
> ### Set variables
> set NOD=4
>
> [...]
>
> ### Script Commands
> cd $PBS_O_WORKDIR
>
> ### Set Environments
> setenv CONV_RSH ssh
> setenv LAMRSH "ssh -x"
> setenv LD_LIBRARY_PATH "/usr/lib"
>
> ### Write info about nodes used
> set n=`wc -l < $PBS_NODEFILE`
> echo 'PBS_NODEFILE ' $PBS_NODEFILE ' has ' $n ' lines'
> cat $PBS_NODEFILE
> echo
>
> echo $PBS_NODEFILE
>
> ### Run simulation
> lamboot $PBS_NODEFILE
> /gromacs_mpi/i686-pc-linux-gnu/bin/grompp -f $MDP -c $GRO -p $TOP \
>     -o $TPR -np $NOD -deshuf $NDX -shuffle -sort
> mpirun -c $NOD /gromacs_mpi/i686-pc-linux-gnu/bin/mdrun -s $TPR -np $NOD
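>
> (To see whether it is LAM itself that fails intermittently, a more
> defensive version of the run section could be tried; a sketch, assuming
> the standard LAM utilities lamnodes and lamhalt are installed:)
>
>   ### Boot LAM and verify it actually came up
>   lamboot -v $PBS_NODEFILE
>   if ($status != 0) then
>     echo "lamboot failed on $PBS_NODEFILE"
>     exit 1
>   endif
>   lamnodes
>
>   ### ... grompp and mdrun as above ...
>
>   ### Shut LAM down cleanly so stale daemons do not block the next job
>   lamhalt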
>
>
> _______________________________________________
> gmx-users mailing list
> gmx-users at gromacs.org
> http://www.gromacs.org/mailman/listinfo/gmx-users
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-request at gromacs.org.
>
>



