[gmx-users] Well known domain decomposition

Alex alexanderwien2k at gmail.com
Sat Apr 21 18:45:53 CEST 2018


Thanks.

On Sat, Apr 21, 2018 at 10:27 AM, Mark Abraham <mark.j.abraham at gmail.com>
wrote:

> Hi,
>
> You should consult your cluster's docs to see how to submit, say, a single
> node job with 8 MPI ranks and two cores per rank. Having done so, e.g.
>
> mpirun gmx_mpi mdrun
>
Yes, I use "mpirun -x LD_LIBRARY_PATH -x BASH_ENV -np 32 gmx_mpi mdrun" for
more than one node and "mpirun -np 16 gmx_mpi mdrun" for using gromacs in
one node.
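
For reference, a single-node request along the lines you describe (8 MPI
ranks, two cores per rank) would look roughly like the script below. This is
only a sketch assuming a SLURM-like batch system; the directive names may
well differ on our cluster, so treat them as placeholders:

  #!/bin/bash
  #SBATCH --nodes=1             # one node
  #SBATCH --ntasks-per-node=8   # 8 MPI ranks
  #SBATCH --cpus-per-task=2     # 2 cores per rank
  export OMP_NUM_THREADS=2      # 2 OpenMP threads per rank
  mpirun gmx_mpi mdrun          # rank count taken from the allocation, if the MPI is scheduler-aware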

> will just honour that. But maybe you want to be more explicit with
>
> mpirun -np 8 gmx_mpi mdrun -ntomp 2
>
> Then later you might consider using two nodes and 4 threads per rank.
>
I have tried different combinations along the lines of "-ntomp 2 -npme 8
-ntomp_pme 1", but with no success.

>
> Note that there are several examples in the user guide, to help you with
> these things.
>
Given that each node has 16 slots, and that the job runs fine with
"mpirun -np 8" but fails with "mpirun -np 16" and with "mpirun -np 32 or 64"
(across multiple nodes), my question is: what would be the best, or at least
a workable, combination of -ntomp, -npme, -ntomp_pme, ... so that the job can
run with 16, 32, or 64 slots?
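
If it helps, my (possibly wrong) understanding is that the domain
decomposition limit depends on the number of domains, i.e. the number of PP
ranks, rather than on the total core count, so one way to fill the slots
might be to keep the rank count at 8 or below and use OpenMP threads for the
rest, roughly as below. Again only a sketch, assuming the gmx_mpi build has
OpenMP enabled; the counts are examples, not tested:

  # 16 slots on one node with only 4 domains: 4 ranks x 4 OpenMP threads (sketch)
  mpirun -np 4 gmx_mpi mdrun -ntomp 4

  # 32 slots on two nodes with 8 domains: 8 ranks x 4 OpenMP threads (sketch)
  mpirun -np 8 gmx_mpi mdrun -ntomp 4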

Thank you.
Alex

>
> Mark
>
> On Sat, Apr 21, 2018, 15:07 Alex <alexanderwien2k at gmail.com> wrote:
>
> > Dear all,
> > The cluster where my GROMACS job is submitted has several nodes, each
> > with 16 slots.
> > My calculation works fine when it is submitted to a single node using 8
> > of the 16 slots, but it crashes with the well-known domain decomposition
> > error when all 16 slots of a single node are used.
> > It also crashes with the same error when it is submitted to more than
> > one node, using 32 slots.
> > I have blindly tried different combinations of the available options
> > (-ntomp, -npme, -ntomp_pme, ...) and also "export OMP_NUM_THREADS=8",
> > but with no success. I also tried "-nt 8" on a single node, but that
> > does not work because GROMACS was not compiled with thread-MPI on our
> > machine.
> > Actually, I cannot do much about the compilation, or recompile it with
> > different options, as it was built by our IT administrators to be used
> > with the VOTCA program (in my case).
> >
> > So, on such a cluster, could you please help me figure out the problem
> > and run the simulation with more than 8 slots? At least with 16 slots
> > on a single node, or even better on two or four nodes.
> >
> > Thank you.
> > Regards,
> > Alex
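
PS, on the "-nt 8" attempt quoted above: as far as I understand, -nt and
-ntmpi apply only to thread-MPI builds (the plain gmx binary), where mdrun
starts its own ranks, for example:

  # thread-MPI build only (plain gmx, not our gmx_mpi): mdrun starts 8 ranks x 2 threads itself
  gmx mdrun -ntmpi 8 -ntomp 2

With a real-MPI build like our gmx_mpi, the rank count has to come from
mpirun, and only the OpenMP threads per rank can be set with -ntomp, as in
the commands above.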

