[gmx-users] Regarding Gromacs 5.0 parallel installation

Mark Abraham mark.j.abraham at gmail.com
Sat Dec 6 15:39:32 CET 2014


On Sat, Dec 6, 2014 at 9:47 AM, Bikash Ranjan Sahoo <
bikash.bioinformatics at gmail.com> wrote:

> Dear All,
>      I am facing some problem in runiing mdrun using -nt flag. In my
> cluster I have installed gromacs 4.5.5 and 5.0. For checking I made 5.0
> with DOUBLE=ON. Now I can run  mdrun using -nt= 30 in Gromacs 4.5.5 by
> allowing mdrun in 30 CPUs. But the same command is not working in Gromacs
> 5.0. The mdrun_d  -s em.tpr -nt 30 is showing error. After careful
> inspection, I came to know that Gromacs 5.0 is unable to access the threads
> by default. I tried many ways using many different flags in each trial
> installation(e.g,. -DGMX_THREAD_MPI=ON   or  -DGMX_SHARED_THREAD=ON or
> -DGMX_FLOAT=ON -DGMX_SSE=ON). But the error is telling Non-default thread
> affinity set. Even during cmake run, i got few warnings.
> The command
> -DCMAKE_INSTALL_PREFIX=/user1/tanpaku/bussei/bics at 1986
> /Bikash/cmake/gro/gro5.0
> CMake Warning:
>   Manually-specified variables were not used by the project:

The fact that CMake warned that it did not use GMX_SHARED_THREAD is a fine
clue that it is not a thing ;-) You will get thread-MPI and OpenMP working
by default if you are using a recent compiler on a properly configured
machine. Fortunately, that's what happens anyway if you use the above CMake
command.
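A minimal configure sketch that relies on those defaults (the build
directory and install prefix here are hypothetical placeholders, not your
actual paths):

```shell
# Thread-MPI and OpenMP are on by default in GROMACS 5.0, so no
# thread-related flags are needed; only double precision is requested.
cmake .. -DGMX_DOUBLE=ON -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-5.0
make -j 8
make install
```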

> This means the thread sharing is not successful. How can I modify the cmake
> command? In my cluster there are 288 CPUs (SGI Altix UV 100; CPU: Intel
> Xeon X7542). On the same cluster GROMACS 4.5.5 works fine, but GROMACS
> 5.0 mdrun does not run. Other commands like pdb2gmx, grompp,
> editconf, solvate, genion etc. run well. But mdrun is not running, as
> it is unable to share the threads/nodes.
> Can somebody suggest how to set the environment with respect to the last
> line of the error message (highlighted in red)?
> GROMACS:      gmx mdrun, VERSION 5.0.2
> Executable:   /user1/Bikash/gro5.0/bin/gmx
> Library dir:  /user1/Bikash/gro5.0/share/gromacs/top
> Command line:
>   mdrun_d -v -s em.tpr -nt 30
> Back Off! I just backed up md.log to ./#md.log.2#
> Number of hardware threads detected (288) does not match the number
> reported by OpenMP (276).
> Consider setting the launch configuration manually!
> Reading file em.tpr, VERSION 5.0.2 (single precision)
> The number of OpenMP threads was set by environment variable
> Non-default thread affinity set, disabling internal thread affinity
> Using 5 MPI threads
> Segmentation fault

Something about your cluster environment is totally crazy if gcc 4.3 is
installed and one process thinks it can see all 288 hardware threads. The
thread-MPI build of GROMACS will work only on a single shared-memory node.
From Googling, I'd guess the way your Altix UV 100 is set up tries to
pretend 24 six-core nodes are a single shared-memory node, but this is being
undermined if some other part of your environment is setting
OMP_NUM_THREADS to 6 (which is probably the number of real cores on a
single actual node) and the OpenMP runtime is reacting to that and
subtracting off 6 cores times 2 hardware hyperthreads from 288 to get 276.
So, you should find out what is managing OMP_NUM_THREADS and manage it
better. You can kind-of mimic the 4.5.5 behaviour explicitly with mdrun -nt
30 -ntomp 1, but probably that will not help. The thing causing the
segfault is probably that the "let's pretend to be a shared-memory node"
layer is not implemented the way more recent GROMACS expects a real
shared-memory node to work.
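To track down what is setting OMP_NUM_THREADS, something along these lines
may help (the list of start-up files is a guess at the usual suspects on a
typical cluster):

```shell
# Show the current value, if any.
echo "OMP_NUM_THREADS=${OMP_NUM_THREADS:-unset}"
# Search common shell start-up files for whatever exports it.
grep -n OMP_NUM_THREADS ~/.bashrc ~/.bash_profile /etc/profile.d/* 2>/dev/null
# As a quick test, clear it and pin the launch configuration explicitly.
unset OMP_NUM_THREADS
mdrun_d -v -s em.tpr -nt 30 -ntomp 1
```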

In your case, I would read the documentation for your machine carefully,
and then either
1) turn off the "pretend to be a big shared-memory node" mode, configure
GROMACS with cmake -DGMX_MPI=on to use real MPI, and run mpirun -np 30
mdrun_mpi, or
2) configure GROMACS with cmake -DGMX_OPENMP=off, which will lead to mdrun
-nt 30 working more or less the way GROMACS 4.5 did, but might still
segfault depending on what was actually causing it, or
3) turn off the clever mode and do 2).
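As a sketch of options 1) and 2) (the build directory and process count are
assumptions for illustration; adapt them to your machine):

```shell
# Option 1: real-MPI build, launched with mpirun across nodes.
cmake .. -DGMX_MPI=on -DGMX_DOUBLE=ON
make && make install
mpirun -np 30 mdrun_mpi -v -s em.tpr

# Option 2: disable OpenMP so mdrun -nt behaves more like 4.5.x did.
cmake .. -DGMX_OPENMP=off -DGMX_DOUBLE=ON
```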


> Gromacs Users mailing list
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
