[gmx-users] FEP calculations on multiple nodes

Vikas Dubey vikasdubey055 at gmail.com
Tue Aug 22 14:18:24 CEST 2017

Hi, I use the following script for my cluster. Also, I think the problem is
calculation-specific. I have run quite a few normal simulations, and they
work fine:

#SBATCH --job-name=2_1_0
#SBATCH --mail-type=ALL
#SBATCH --time=24:00:00
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --ntasks-per-core=2
#SBATCH --cpus-per-task=4
#SBATCH --constraint=gpu
#SBATCH --output out.txt
#SBATCH --error  err.txt
# load modules and run simulation
module load daint-gpu
module load GROMACS
export CRAY_CUDA_MPS=1

srun -n $SLURM_NTASKS --ntasks-per-node=$SLURM_NTASKS_PER_NODE -c $SLURM_CPUS_PER_TASK gmx_mpi mdrun -deffnm md_0
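Since the thread is about FEP calculations across nodes, one common layout worth noting is running each lambda window in its own directory under a single MPI job via mdrun's -multidir option. This is only a sketch: the lambda_* directory names and the rank count are illustrative assumptions, not taken from the original posts.

```
#!/bin/bash
# Sketch only: run 4 FEP lambda windows in one MPI job with -multidir.
# Assumes each directory lambda_00 ... lambda_03 already contains its own
# md_0.tpr (e.g. generated by grompp with init-lambda-state = 0..3).
srun -n 4 gmx_mpi mdrun -multidir lambda_00 lambda_01 lambda_02 lambda_03 -deffnm md_0
```

With -multidir the total MPI rank count must be divisible by the number of directories, so each window gets an equal share of ranks.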

On 22 August 2017 at 06:11, Nikhil Maroli <scinikhil at gmail.com> wrote:

> Okay, you might need to consider
> gmx mdrun -v -ntmpi XX -ntomp XX -deffnm XXXX  -gpu_id XXX
> http://manual.gromacs.org/documentation/5.1/user-guide/mdrun-performance.html
> http://www.gromacs.org/Documentation/Errors#There_is_no_domain_decomposition_for_n_nodes_that_is_compatible_with_the_given_box_and_a_minimum_cell_size_of_x_nm