[gmx-users] multi threading on rocks cluster

Mark Abraham mark.j.abraham at gmail.com
Fri Mar 14 11:55:16 CET 2014


On Fri, Mar 14, 2014 at 10:46 AM, michael.b <mbx0009 at yahoo.com> wrote:

>
> Dear all,
> I have a problem with multithreading on a cluster.
> Our (vanilla) Rocks cluster has several nodes, each with 8 cores. I used
> to be able to start a job with "mdrun_d -nt 8" (NOT mdrun_mpi) and the
> directive "-pe mpi_smp 8" in the qsub job file, and the calculations
> would automatically be distributed over the eight cores of a single node.
>

So, what has changed?
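
For reference, a minimal SGE job script for the setup you describe might
look like the sketch below (the PE name mpi_smp comes from your mail; the
job name and the -deffnm file prefix are placeholders):

    #!/bin/bash
    #$ -N gmx_smp_test          # job name (placeholder)
    #$ -pe mpi_smp 8            # request 8 slots on a single node
    #$ -cwd                     # run in the submission directory

    # thread-MPI build: one mdrun process running 8 thread-MPI ranks
    mdrun_d -nt 8 -deffnm topol

If a script like that used to saturate a node and no longer does, then
something in the environment has changed.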


> Now, judging from the speed of the calculations, it seems as if only one
> core is being used, but the log file states: "Using 8 MPI threads" ...
> Is there any way to make sure that all cores are used, and is there a
> way to check how many cores are really used by a given job?
>

Using top or htop is a good start. With thread-MPI you should see a single
mdrun process keeping all eight cores busy (top's per-thread view, top -H,
shows one busy thread per core), and nothing else significant going on.
You should also inspect the log file for any mdrun warnings about thread
affinities; mdrun manages affinities correctly by itself unless it detects
external management, which it respects. The same goes for the number of
OpenMP threads per process.
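
For example, something along these lines on the compute node (assuming
the default md.log log file name):

    # per-thread view; expect eight busy mdrun threads
    top -H -p $(pgrep -d, mdrun_d)

    # what mdrun itself reports about threads and affinities
    grep -iE 'thread|affinit' md.log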


> thanks!
> michael
>
> ps: I used a .tpr file made with grompp version 4.6.5, and the mdrun on
> the cluster is actually 4.6.3 ... mdrun does not complain, but can this
> be the problem?
>

No.
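
If you want to double-check both versions, something like this should do
(topol.tpr is a placeholder; the dump tool may be gmxdump_d on a
double-precision install):

    mdrun_d -version              # version of the installed mdrun
    gmxdump -s topol.tpr | head   # the .tpr header records the grompp version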

> pps: I found that when using "cutoff-scheme = Verlet" in the .mdp file
> the jobs run faster than otherwise, but even then they are slower than
> expected ...
>

That's probably OpenMP partly saving the situation. It suggests some
external process is competing for the CPUs, which top will show.
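
For instance, on the node:

    # busiest processes first; anything non-mdrun near 100% is a competitor
    ps -eo pid,pcpu,comm --sort=-pcpu | head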


> ppps: in these runs I do not use PME, but a plain cut-off for
> electrostatics; does this perhaps affect multi-threading?
>

No, but it does affect whether your problem is worth solving ;-)
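
For reference, the relevant .mdp lines from your description would look
something like this (the cut-off radius is illustrative):

    cutoff-scheme    = Verlet     ; enables the efficient parallel kernels
    coulombtype      = Cut-off    ; plain cut-off electrostatics
    rcoulomb         = 1.0        ; illustrative radius (nm)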

Mark

