[gmx-users] gromacs 2018 with OpenMPI + OpenMP

Szilárd Páll pall.szilard at gmail.com
Wed Dec 12 23:11:23 CET 2018


On Wed, Dec 12, 2018 at 12:14 PM Deepak Porwal <deepak.porwal at iiitb.net> wrote:
>
> Hi
> I built GROMACS with OpenMPI + OpenMP.
> When I run the adh/adh_dodec workload with the MPI ranks bound to
> cores/L3 cache, I see some warnings. The command I used:
>
>   mpirun --map-by ppr:1:l3cache:pe=2 -x OMP_NUM_THREADS=4 \
>          -x OMP_PROC_BIND=TRUE -x OMP_PLACES=cores gmx_mpi mdrun

Are you sure that's better than mdrun's internal thread binding?
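For reference, the internal binding is enabled with -pin on. A rough
single-node sketch (the rank and thread counts are placeholders, adjust
them to your node's layout):

  mpirun -np 8 --bind-to none -x OMP_NUM_THREADS=4 \
      gmx_mpi mdrun -ntomp 4 -pin on

With --bind-to none Open MPI does no binding at all, so mdrun's own
affinity code is the only layer touching the cores; that avoids any
fighting between the launcher, the OpenMP runtime, and mdrun.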

>
> ------------------------------------------------------------
> Program:     gmx grompp, version 2018.2
> Source file: src/gromacs/utility/futil.cpp (line 406)
>
> Fatal error:
> Won't make more than 99 backups of topol.tpr for you.
> The env.var. GMX_MAXBACKUP controls this maximum, -1 disables backups.

There's the error: before writing an output file, GROMACS by default
backs up any existing file of the same name (renaming it to #name.N#),
and it refuses to make more than 99 such backups. List your output
directory and the issue should be pretty obvious: repeated runs in the
same place have piled up backups of topol.tpr.
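Either clean out the accumulated backups or change the limit; a minimal
sketch (filenames follow GROMACS's #name.N# backup convention):

  # remove the piled-up backups of topol.tpr in the working directory
  rm -f \#topol.tpr.*\#

  # or disable the backup mechanism for this run, as the error suggests
  export GMX_MAXBACKUP=-1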

--
Szilárd

>
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode 1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
>
> --------------------------------------------------------------------------
> WARNING: a request was made to bind a process. While the system
> supports binding the process itself, at least one node does NOT
> support binding memory to the process location.
>
>   Node:  llvm-sp38
>
> Open MPI uses the "hwloc" library to perform process and memory
> binding. This error message means that hwloc has indicated that
> processor binding support is not available on this machine.
>
> On OS X, processor and memory binding is not available at all (i.e.,
> the OS does not expose this functionality).
>
> On Linux, lack of the functionality can mean that you are on a
> platform where processor and memory affinity is not supported in Linux
> itself, or that hwloc was built without NUMA and/or processor affinity
> support. When building hwloc (which, depending on your Open MPI
> installation, may be embedded in Open MPI itself), it is important to
> have the libnuma header and library files available. Different Linux
> distributions package these files under different names; look for
> packages with the word "numa" in them. You may also need a developer
> version of the package (e.g., with "dev" or "devel" in the name) to
> obtain the relevant header files.
>
> If you are getting this message on a non-OS X, non-Linux platform,
> then hwloc does not support processor / memory affinity on this
> platform. If the OS/platform does actually support processor / memory
> affinity, then you should contact the hwloc maintainers:
> https://github.com/open-mpi/hwloc.
>
> This is a warning only; your job will continue, though performance may
> be degraded.
> --------------------------------------------------------------------------
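The warning itself points at missing memory-binding support in hwloc on
that node, so it is worth checking what hwloc actually reports there.
A sketch (hwloc-info ships with hwloc; the Debian/Ubuntu package name
below is an assumption, it differs per distro):

  # list the binding capabilities hwloc detects on this node
  hwloc-info --support | grep -i bind

  # Debian/Ubuntu name for the libnuma headers the warning mentions
  sudo apt-get install libnuma-dev

If the membind entries show up as 0, rebuilding hwloc (or Open MPI with
its embedded hwloc) with libnuma present should make the warning go away.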
> Though the benchmark runs and gives a score, is there a way to avoid this
> warning and get proper binding? I also tried thread-MPI and did not see
> these warnings.
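Thread-MPI avoiding the warning makes sense: there is no mpirun and no
hwloc-based launcher binding involved, mdrun handles placement itself.
For single-node runs that build is the easy path; a sketch with
placeholder rank/thread counts:

  # thread-MPI build (plain gmx, not gmx_mpi), no launcher needed
  gmx mdrun -ntmpi 8 -ntomp 4 -pin on

For multi-node runs with the MPI build, comparing against --bind-to none
plus -pin on (see the sketch above) should tell you whether the external
binding is actually buying you anything.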
>
> --
> Thanks,
> Deepak

