[gmx-users] gromacs 2018 with OpenMPI + OpenMP
Szilárd Páll
pall.szilard at gmail.com
Mon Dec 17 19:44:02 CET 2018
On Sun, Dec 16, 2018 at 12:16 PM Deepak Porwal <deepak.porwal at iiitb.net>
wrote:
> > Are you sure that's better than mdrun's internal thread binding?
> Yes, even with the warnings I see better performance, so surely if we can
> remove the warnings
Which warnings are you referring to?
> and bind properly
I'm curious, can you elaborate on what you mean? mdrun binds its threads
quite "properly" with "-pin on".
> we will see much better performance than thread-MPI.
>
Thread pinning is unrelated to thread-MPI; mdrun can pin its threads with
regular MPI too.
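For example, with an MPI build something along these lines (counts again just
placeholders) lets mpirun start the ranks but leaves the pinning to mdrun:

  mpirun -np 16 --bind-to none -x OMP_NUM_THREADS=4 gmx_mpi mdrun -ntomp 4 -pin on

The important part is disabling the launcher's own binding (here with
--bind-to none) so that it does not fight with mdrun's affinity settings.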
--
Szilárd
>
> On Thu, Dec 13, 2018 at 3:41 AM Szilárd Páll <pall.szilard at gmail.com>
> wrote:
>
> > On Wed, Dec 12, 2018 at 12:14 PM Deepak Porwal <deepak.porwal at iiitb.net>
> > wrote:
> > >
> > > Hi
> > > I built GROMACS with OpenMPI + OpenMP.
> > > When I try to run the adh/adh_dodec workload while binding the MPI
> > > ranks to cores/L3 cache, I see some warnings.
> > > Command I used to run:
> > > mpirun --map-by ppr:1:l3cache:pe=2 -x OMP_NUM_THREADS=4 \
> > >   -x OMP_PROC_BIND=TRUE -x OMP_PLACES=cores gmx_mpi mdrun
> >
> > Are you sure that's better than mdrun's internal thread binding?
> >
> > >
> > > ------------------------------------------------------------
> > >
> > > Program: gmx grompp, version 2018.2
> > >
> > > Source file: src/gromacs/utility/futil.cpp (line 406)
> > >
> > >
> > >
> > > Fatal error:
> > >
> > > Won't make more than 99 backups of topol.tpr for you.
> > >
> > > The env.var. GMX_MAXBACKUP controls this maximum, -1 disables backups.
> >
> > There's the error: at startup mdrun will by default back up every file
> > that already exists in the same location where a file would be
> > written. List your output directory and it should be pretty obvious
> > what the issue is.
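> > For instance (assuming the leftover files are the usual #topol.tpr.N#
> > backups in the working directory), you could either disable backups or
> > clean them up before rerunning:
> >
> >   export GMX_MAXBACKUP=-1   # disable backups, as the error message says
> >   rm ./#topol.tpr.*#        # or delete the accumulated backup files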
> >
> > --
> > Szilárd
> >
> > >
> > >
> > > For more information and tips for troubleshooting, please check the
> > GROMACS
> > >
> > > website at http://www.gromacs.org/Documentation/Errors
> > >
> > > -------------------------------------------------------
> > >
> > >
> >
> > > --------------------------------------------------------------------------
> > >
> > > MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> > >
> > > with errorcode 1.
> > >
> > >
> > >
> > > NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> > >
> > > You may or may not see output from other processes, depending on
> > >
> > > exactly when Open MPI kills them.
> > >
> > >
> >
> > > --------------------------------------------------------------------------
> > >
> > >
> >
> > > --------------------------------------------------------------------------
> > >
> > > WARNING: a request was made to bind a process. While the system
> > >
> > > supports binding the process itself, at least one node does NOT
> > >
> > > support binding memory to the process location.
> > >
> > >
> > >
> > > Node: llvm-sp38
> > >
> > >
> > >
> > > Open MPI uses the "hwloc" library to perform process and memory
> > >
> > > binding. This error message means that hwloc has indicated that
> > >
> > > processor binding support is not available on this machine.
> > >
> > >
> > >
> > > On OS X, processor and memory binding is not available at all (i.e.,
> > >
> > > the OS does not expose this functionality).
> > >
> > >
> > >
> > > On Linux, lack of the functionality can mean that you are on a
> > >
> > > platform where processor and memory affinity is not supported in Linux
> > >
> > > itself, or that hwloc was built without NUMA and/or processor affinity
> > >
> > > support. When building hwloc (which, depending on your Open MPI
> > >
> > > installation, may be embedded in Open MPI itself), it is important to
> > >
> > > have the libnuma header and library files available. Different linux
> > >
> > > distributions package these files under different names; look for
> > >
> > > packages with the word "numa" in them. You may also need a developer
> > >
> > > version of the package (e.g., with "dev" or "devel" in the name) to
> > >
> > > obtain the relevant header files.
> > >
> > >
> > >
> > > If you are getting this message on a non-OS X, non-Linux platform,
> > >
> > > then hwloc does not support processor / memory affinity on this
> > >
> > > platform. If the OS/platform does actually support processor / memory
> > >
> > > affinity, then you should contact the hwloc maintainers:
> > >
> > > https://github.com/open-mpi/hwloc.
> > >
> > >
> > > This is a warning only; your job will continue, though performance may
> > >
> > > be degraded.
> > > -----------------------------------------------------------------------
> > > Though the benchmark runs and gives a score, is there a solution to
> > > avoid this warning and do proper binding? I also tried with thread-MPI
> > > and didn't see these warnings.
> > >
> > > --
> > > Thanks,
> > > Deepak
>
>
>
> --
> Thanks,
> Deepak