[gmx-users] GROMACS on a parallel Linux cluster
Erik Lindahl
lindahl at csb.stanford.edu
Mon Mar 1 20:38:00 CET 2004
Hi Peter,
PBS can behave rather strangely at times. I'd recommend using the
'Torque' batch scheduler from www.supercluster.org instead. It is
based on the free PBS code, but they have improved it to scale much
better for large systems and to be fault tolerant.
We are running our own 600-CPU cluster with the Torque + Maui scheduler,
and GROMACS works great for both single and parallel runs.
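Since Torque understands the same #PBS directives, an existing PBS script
should normally run unchanged under it. As a minimal sketch of how one
might check what the scheduler is actually handing out before blaming
GROMACS (the job id below is just a placeholder):

    qstat -n 1234          # show which nodes PBS/Torque assigned to job 1234
    pbsnodes -a            # list all nodes and their current state
    cat $PBS_NODEFILE      # inside the job script: the nodes it was given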
Cheers,
Erik
On Mar 1, 2004, at 6:51 PM, Peter Spijker wrote:
> On the cluster we have to use PBS scripting. But from your answer I can
> conclude that it is more likely a problem with the cluster than with the
> GROMACS code?
>
> Peter
>
> ----- Original Message -----
> From: "Ilya Chorny" <ichorny at maxwell.compbio.ucsf.edu>
> To: <gmx-users at gromacs.org>
> Sent: Monday, March 01, 2004 6:48 PM
> Subject: RE: [gmx-users] GROMACS on a parallel Linux cluster
>
>
>> Did you try running the jobs without PBS (i.e. just using LAM or
>> MPICH)?
>> I run parallel jobs all the time and they work just fine.
>>
>> Ilya
>>
>>
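>> A minimal sketch of such a PBS-free test with LAM/MPI might look like the
>> following (the hostfile contents and the topol.tpr name are placeholders,
>> not taken from this thread):
>>
>>   # hostfile: plain text file, one node name per line (e.g. node01 ... node04)
>>   lamboot hostfile
>>   mpirun -np 4 /gromacs_mpi/i686-pc-linux-gnu/bin/mdrun -s topol.tpr -np 4
>>   lamhalt                  # shut the LAM daemons down when done
>>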
>> -----Original Message-----
>> From: gmx-users-admin at gromacs.org [mailto:gmx-users-admin at gromacs.org]
>> On Behalf Of Peter Spijker
>> Sent: Monday, March 01, 2004 9:42 AM
>> To: gmx-users at gromacs.org
>> Subject: [gmx-users] GROMACS on a parallel Linux cluster
>>
>> Hi all,
>>
>> With the help of David van der Spoel I was able to get GROMACS running
>> on a parallel machine, but it behaves very awkwardly. Submitting jobs
>> with the same PBS script (except for the difference in the number of
>> nodes and processors) results in the job being accepted only at random.
>> I was wondering if this has happened to anyone else before. I thought
>> maybe GROMACS failed with a certain request. Below is some technical
>> information. Thanks for helping me.
>>
>> Kind regards,
>>
>> Peter Spijker
>> California Institute of Technology
>>
>> ---
>>
>> I am using the DPPC benchmark system downloadable from the GROMACS website.
>>
>> Lines from the PBS script ($MDP refers to the MDP file, and so on; note
>> that mpirun takes the number of nodes as input, not the number of
>> processors):
>>
>> #!/bin/csh
>>
>> #PBS -l nodes=4:ppn=1
>> #PBS -N GROMACS_4_1
>> #PBS -q workq
>> #PBS -o std.out
>> #PBS -e std.err
>>
>> ### Set variables
>> set NOD=4
>>
>> [...]
>>
>> ### Script Commands
>> cd $PBS_O_WORKDIR
>>
>> ### Set Environments
>> setenv CONV_RSH ssh
>> setenv LAMRSH "ssh -x"
>> setenv LD_LIBRARY_PATH "/usr/lib"
>>
>> ### Write info about nodes used
>> set n=`wc -l < $PBS_NODEFILE`
>> echo 'PBS_NODEFILE ' $PBS_NODEFILE ' has ' $n ' lines'
>> cat $PBS_NODEFILE
>> echo
>>
>> echo $PBS_NODEFILE
>>
>> ### Run simulation
>> lamboot $PBS_NODEFILE
>> /gromacs_mpi/i686-pc-linux-gnu/bin/grompp -f $MDP -c $GRO -p $TOP \
>>     -o $TPR -np $NOD -deshuf $NDX -shuffle -sort
>> mpirun -c $NOD /gromacs_mpi/i686-pc-linux-gnu/bin/mdrun -s $TPR -np $NOD
>>
>>
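>> One addition worth considering, offered only as a sketch rather than a
>> known fix: the -np given to grompp and mdrun must match the number of MPI
>> processes mpirun actually starts, so it may help to verify the node count
>> PBS handed out before launching, and to shut LAM down cleanly at the end
>> of the job:
>>
>>   ### Sanity check: nodes handed out by PBS must match $NOD
>>   set n=`wc -l < $PBS_NODEFILE`
>>   if ( $n != $NOD ) then
>>     echo "Got $n nodes from PBS but expected $NOD, aborting"
>>     exit 1
>>   endif
>>
>>   [... grompp and mpirun lines as above ...]
>>
>>   ### Shut the LAM universe down at the end of the job
>>   lamhalt
>>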
>
> _______________________________________________
> gmx-users mailing list
> gmx-users at gromacs.org
> http://www.gromacs.org/mailman/listinfo/gmx-users
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-request at gromacs.org.