[gmx-users] Gromacs 4.6.7 with MPI and OpenMP

Malcolm Tobias mtobias at wustl.edu
Fri May 8 15:00:43 CEST 2015


Hi Mark,

On Friday 08 May 2015 11:51:03 Mark Abraham wrote:
> >
> > I'm attempting to build gromacs on a new cluster and following the same
> > recipes that I've used in the past, but encountering a strange behavior:
> > It claims to be using both MPI and OpenMP, but I can see by 'top' and the
> > reported core/walltime that it's really only generating the MPI processes
> > and no threads.
> >
> 
> I wouldn't take the output from top completely at face value. Do you get
> the same performance from -ntomp 1 as -ntomp 4?

I'm not relying on top.  As I mentioned, the core/walltime reported by Gromacs also suggests it's only utilizing 2 cores, and I've been comparing the performance against an older cluster.
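
For what it's worth, the comparison you suggest would look something like this on our end (assuming the MPI binary installs as mdrun_mpi; the -deffnm name is just a placeholder):

  mpirun -np 2 mdrun_mpi -ntomp 1 -deffnm benchmark    # 1 OpenMP thread per MPI rank
  mpirun -np 2 mdrun_mpi -ntomp 4 -deffnm benchmark    # 4 OpenMP threads per MPI rank

If both give essentially the same ns/day, that would line up with the extra threads not doing any work.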
 
> 
> > We're running a heterogeneous environment, so I tend to build with
> > MPI/OpenMP/CUDA and the Intel compilers, but I'm seeing this same sort of
> > behavior with the GNU compilers.  Here's how I'm configuring things:
> >
> > [root@login01 build2]# cmake -DGMX_FFT_LIBRARY=mkl -DGMX_MPI=ON
> > -DGMX_GPU=ON -DCUDA_TOOLKIT_ROOT_DIR=/opt/cuda -DGMX_OPENMP=ON
> > -DCMAKE_INSTALL_PREFIX=/act/gromacs-4.6.7_take2 .. | tee cmake.out
> >
> 
> You need root access for "make install." Period. 

Yes Mark, I ran 'make install' as root.  
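
(For completeness, an install prefix the build user can write would avoid root for that step, e.g.

  cmake -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-4.6.7 <other options> ..
  make && make install    # no root needed when the prefix is user-writable

but here the target is under /act.)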

> Routinely using root means you've probably hosed your system some time...

In 20+ years of managing Unix systems I've managed to hose many a system.

> > Using 2 MPI processes
> > Using 4 OpenMP threads per MPI process
> >
> > although I do see this warning:
> >
> > Number of CPUs detected (16) does not match the number reported by OpenMP
> > (1).
> >
> 
> Yeah, that happens. There's not really a well-defined standard, so once the
> OS, MPI and OpenMP libraries all combine, things can get messy. 

Understood.  On top of that, we're using CPUSETs with our queuing system, which can interfere with how tasks are distributed.  I've tried running the job outside of the queuing system and seen the same behavior.
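
In case it helps, here is roughly how I've been checking the affinity side of things (same placeholder names as above; the PID is whatever top shows for an mdrun rank):

  taskset -pc <PID>    # what the kernel actually lets that rank run on

  # launch outside the queue with an explicit thread count and mdrun's own pinning
  OMP_NUM_THREADS=4 mpirun -np 2 mdrun_mpi -ntomp 4 -pin on -deffnm benchmark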

> But if people go around using root routinely... ;-)

As soon as I figure out how to manage a computing cluster without becoming root I'll let you know  ;-)

I've got dozens of Gromacs users, so I'm attempting to build the fastest, most versatile binary that I can.  Any help that people can offer is certainly appreciated.

Cheers,
Malcolm
 

-- 
Malcolm Tobias
314.362.1594



