[gmx-users] REMD run on higher nodes.

Mark Abraham mark.j.abraham at gmail.com
Mon Aug 5 11:52:41 CEST 2013


Not sure what you're asking, but if you're providing twice as much
hardware, invoke mpiexec_mpt suitably to tell it to use all of it.
If you then invoke mdrun_mpi the same way as you do now, it will work
out that it can use twice as much hardware per replica.
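Concretely, a sketch only (reusing the paths, queue name, and 12-core node
geometry from your script below; adjust for your cluster): double the node
count to 16 and double -np to match, and mdrun will split the 192 ranks
evenly across the 96 replicas, i.e. 2 ranks each.

```shell
#!/bin/tcsh
#PBS -S /bin/tcsh
#PBS -l walltime=00:15:00
#PBS -q workq
# 16 nodes x 12 cores = 192 MPI ranks in total (was 8 x 12 = 96)
#PBS -l select=16:ncpus=12:mpiprocs=12
#PBS -l place=scatter:excl
#PBS -V

cd $PBS_O_WORKDIR
setenv MPI_GROUP_MAX 1024
setenv MPI_UNBUFFERED_STDIO 1

# 192 ranks / 96 replicas (-multi 96) = 2 ranks per replica;
# mdrun divides the ranks evenly among the -multi simulations
mpiexec_mpt -np 192 /lustre/applications/GROMACS/gromacs-4.5.5/bin/mdrun_mpi \
    -s md_.tpr -multi 96 -replex 2000 -cpi state_.cpt -noappend
```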

Mark

On Mon, Aug 5, 2013 at 7:55 AM, suhani nagpal <suhani.nagpal at gmail.com> wrote:
> Greetings
>
> I'm running REMD of 96 replicas where the run.pbs is the following:
>
> #!/bin/tcsh
> #PBS -S /bin/tcsh
> #PBS -l walltime=00:15:00
> #PBS -q workq
> #PBS -l select=8:ncpus=12:mpiprocs=12
> #PBS -l place=scatter:excl
> #PBS -V
>
> # Go to the directory from which you submitted the job
> cd $PBS_O_WORKDIR
> setenv MPI_GROUP_MAX 1024
> setenv MPI_UNBUFFERED_STDIO 1
>
> #mpiexec_mpt -np 24 ./exefile
> mpiexec_mpt -np 96 /lustre/applications/GROMACS/gromacs-4.5.5/bin/mdrun_mpi
> -s md_.tpr -multi 96 -replex 2000 -cpi state_.cpt -noappend
>
>
> So each replica runs on one processor.
>
> Now, I want to run the REMD on 16 nodes (double) so that each replica
> gets 2 processors.
>
>
> Kindly assist !
>
> Thanks
> --


