[gmx-users] (no subject)
pall.szilard at gmail.com
Tue May 23 16:23:39 CEST 2017
Please do not post questions to the list "owner" (=admin) address. Post
your questions to the users' list instead.
Are you sure you are using the MPI-enabled GROMACS installation? Look in
the mdrun log header (posting a link to the whole log uploaded to some
sharing service might also help identify the issue).
On Sat, May 20, 2017 at 1:09 AM, Li, Zhixia <zhixia2 at illinois.edu> wrote:
> Hi all,
> Recently, I have been trying hybrid parallelization on two nodes with 12
> cores per node. I want to run 1 MPI rank per node and 12 OpenMP threads
> per rank. The command I use is:
> mpirun -np 2 gmx_mpi mdrun -ntomp 12
> But I got the output below, and it seems to use only one node.
> Does anyone know how to achieve this? Or is it a compilation issue? Thank
> you.
> Number of logical cores detected (12) does not match the number reported
> by OpenMP (6).
> Consider setting the launch configuration manually!
> Running on 1 node with total 12 cores, 12 logical cores
> Hardware detected on host taub217 (the node of MPI rank 0):
> CPU info:
> Vendor: GenuineIntel
> Brand: Intel(R) Xeon(R) CPU X5650 @ 2.67GHz
> SIMD instructions most likely to fit this hardware: SSE4.1
> SIMD instructions selected at GROMACS compile time: SSE4.1
> Reading file em4.tpr, VERSION 5.1.4 (single precision)
> The number of OpenMP threads was set by environment variable
> OMP_NUM_THREADS to 12 (and the command-line setting agreed with that)
> Using 2 MPI processes
> Using 12 OpenMP threads per MPI process
> WARNING: Oversubscribing the available 12 logical CPU cores with 24 threads.
> This will cause considerable performance loss!
> NOTE: Your choice of number of MPI ranks and amount of resources results
> in using 12 OpenMP threads per rank, which is most likely inefficient. The
> optimum i[...]
> Non-default thread affinity set probably by the OpenMP library,
> disabling internal thread affinity
> Zhixia Li
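For reference, and assuming the binary really is MPI-enabled, placing one rank per node is controlled by the MPI launcher, not by mdrun. A minimal sketch, assuming Open MPI syntax and a two-node allocation (with MPICH or Intel MPI the equivalent option is -ppn 1; under a batch scheduler the host list usually comes from the scheduler itself):

  # one MPI rank per node, 12 OpenMP threads per rank
  export OMP_NUM_THREADS=12
  mpirun -np 2 --map-by ppr:1:node gmx_mpi mdrun -ntomp 12 -pin on

The -pin on option asks mdrun to manage thread pinning itself; the log above shows internal affinity being disabled because the OpenMP runtime had already set one.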