[gmx-users] Running gromacs in parallel
Mark Abraham
mark.j.abraham at gmail.com
Tue Aug 23 18:34:06 CEST 2016
Hi,
The GROMACS log files will report whether they were built with MPI support,
and how many ranks the MPI system told the GROMACS executable were
available. Assuming you've built with MPI support (rather than thread-MPI),
you'll need to read your slurm and cluster documentation to work out how to
use hostfiles, etc. to tell slurm that all 64 processes should work
together.
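As a quick check, the version header that mdrun writes at the top of the
log (also printed by gmx_mpi --version) includes an "MPI library" field,
which should say MPI rather than thread_mpi for a multi-node build. A
minimal sketch of a slurm batch script for this layout might look like the
following (the job name is a placeholder; adapt the input files and any
partition/account options to your cluster):

#!/bin/bash
#SBATCH --job-name=md_test
#SBATCH --nodes=2
#SBATCH --ntasks=64
#SBATCH --ntasks-per-node=32

# Launch one 64-rank MPI job, one OpenMP thread per rank.
srun -n 64 gmx_mpi mdrun -ntomp 1

Depending on how your MPI library was built against slurm, you may need
srun --mpi=pmi2, or your MPI's own mpirun, to get a single communicator
spanning both nodes.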
Try another non-GROMACS MPI program to see whether you can make it work
(see the sketch below). The odds are very much that you need to correctly
configure your build of GROMACS, the slurm setup, or the slurm usage.
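For example, a minimal non-GROMACS test might look like this (a sketch;
the file name is arbitrary):

/* mpi_check.c: each rank reports its rank, the communicator size, and
 * its host. A correctly wired 64-rank job prints "of 64" from every
 * rank; 64 independent launches each print "of 1". */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
    printf("rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}

Compile it with mpicc mpi_check.c -o mpi_check and run it the same way you
run mdrun (srun -n 64 ./mpi_check). If every line reports a size of 1, the
launcher and the MPI library are not talking to each other, and the same
thing will happen to gmx_mpi.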
Mark
On Tue, Aug 23, 2016 at 2:49 AM Savio James <saviojam at gmail.com> wrote:
> Hi All,
>
> I am trying to run gromacs in parallel on 2 nodes. Each node has 16 physical
> cores (32 logical cores), and the job scheduler is slurm. I installed version
> 5.1.3 with -DGMX_MPI=on.
>
> When I run a slurm script like srun -n 64 gmx_mpi mdrun -ntomp 1,
>
> it runs 64 independent simulations. I get 64 log files, energy files and
> trajectory files. The cluster support at my university has also not been
> able to figure out why this happens. Any help would be appreciated.
>
> I am able to run the simulation on a single node: if I run gmx_mpi mdrun
> -ntomp 32, my simulations run fine. It is an issue only when I use
> multiple nodes.
>
> Thanks
> Savio
>
> --
> Savio James Poovathingal
> Graduate Student
> University of Minnesota
> 107 Akerman Hall
> 110 Union St SE
> Minneapolis, MN 55455-0153