[gmx-users] Regarding parallel run in Gromacs 5.0
Bikash Ranjan Sahoo
bikash.bioinformatic at protein.osaka-u.ac.jp
Fri Dec 5 12:36:34 CET 2014
Dear Dr. Abraham,
I have attached the md.log file. The command that I executed was
"-s em.tpr -nt 20".
Kindly suggest how I can run using 20 CPUs. I would be highly obliged if
you could send me the exact command for parallel minimization, or the exact
command for re-installing Gromacs 5.0.2 in parallel. I am a little
confused at the
"cmake .. rest flags ~ ~ ~" step.
The full installation command I ran was
cmake .. -DGMX_BUILD_OWN_FFTW=ON
-DCMAKE_INSTALL_PREFIX:PATH=/user1/tanpaku/bussei/bics at 1986/Bikash/cmake/gro/gro5.0
Is that right? I have all the _mpi binaries in my Gromacs installation, and pdb2gmx_mpi,
editconf_mpi, and grompp_mpi run fine. Kindly suggest.
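[For later readers of the archive: a minimal sketch of a configure line for a real-MPI build of Gromacs 5.0.2, since the cmake invocation above omits the MPI flag. -DGMX_MPI=ON and -DGMX_BUILD_OWN_FFTW=ON are real Gromacs cmake options; the source directory and install prefix below are illustrative, not the paths from this thread, and an MPI compiler wrapper (mpicc/mpicxx) is assumed to be on PATH.]

```shell
# Configure and build Gromacs 5.0.2 with real MPI support.
# Only mdrun is MPI-enabled; the binary is installed as mdrun_mpi.
cd gromacs-5.0.2
mkdir -p build && cd build
cmake .. -DGMX_MPI=ON \
         -DGMX_BUILD_OWN_FFTW=ON \
         -DCMAKE_INSTALL_PREFIX=$HOME/gromacs-5.0.2-mpi
make -j 4 && make install
```

[Note that for a single-node run the default (thread-MPI) build already uses all cores via mdrun -ntmpi; a real-MPI build is only needed to run across multiple nodes with mpirun.]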
In anticipation of your reply
Bikash, Osaka Japan
On Fri, Dec 5, 2014 at 8:25 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
> On Fri, Dec 5, 2014 at 8:21 AM, Bikash Ranjan Sahoo <
> bikash.bioinformatics at gmail.com> wrote:
> > Dear All
> > I have upgraded my Gromacs v4.5.5 to 5.0.
> Please get the latest 5.0.2 release, in that case ;-)
> > I am unable to run parallel
> > minimization and simulation in cluster. Previously (v4.5.5) I was
> > simulating using the command “mdrun –v –s em.tpr –nt 30 &” and
> > thus assigning 30 CPU’s for the system. However, in the new version it is
> > not working, and giving the below mentioned errors. The command as per
> > Dr. Justin tutorial “gmx mdrun -v -deffnm em” is also not working for my
> > cluster installation, but running fine in my local computer. Kindly help
> > how to run minimization using mdrun in parallel.
> > GROMACS: gmx mdrun, VERSION 5.0
> > Executable: Bikash/cmake/gro/gro5.0/bin/gmx
> > Library dir: Bikash/cmake/gro/gro5.0/share/gromacs/top
> > Command line:
> > gmx mdrun -nt 30 -deffnm em
> > Back Off! I just backed up em.log to ./#em.log.1#
> > Number of hardware threads detected (288) does not match the number
> > reported by OpenMP (276).
> That's pretty bizarre. What kind of computer is this? Can you share your
> whole log file (e.g. on a file-sharing service), please?
> > Consider setting the launch configuration manually!
> > Reading file em.tpr, VERSION 5.0 (single precision)
> > The number of OpenMP threads was set by environment variable
> > OMP_NUM_THREADS to 6
> > Non-default thread affinity set, disabling internal thread affinity
> > Using 5 MPI threads
> Gromacs 4.6 and up are able to use OpenMP with the Verlet cut-off scheme,
> so the interpretation of -nt 30 changes if the standard environment
> variable OMP_NUM_THREADS is set. In your case the cluster / job script is
> probably setting it for you for some reason. You can force the old
> interpretation with -ntmpi 30, or you can set the variable more
> appropriately with
> export OMP_NUM_THREADS=1
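> [For the archive: the two launch styles described above can be sketched as
> follows; this is an illustrative single-node example for a thread-MPI build,
> and the rank/thread split should be adjusted to the actual hardware.]
>
> ```shell
> # (a) Force one OpenMP thread per rank, then ask for 30 thread-MPI ranks:
> export OMP_NUM_THREADS=1
> gmx mdrun -v -deffnm em -ntmpi 30
>
> # (b) Or set the split explicitly, e.g. 5 thread-MPI ranks x 6 OpenMP
> # threads each (30 cores total), without touching the environment:
> gmx mdrun -v -deffnm em -ntmpi 5 -ntomp 6
> ```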
> > Segmentation fault
> However, this should never happen, so learning more from the log file would
> be valuable.
> > Thanking you
> > In anticipation of your reply
> > Bikash
> > Osaka, Japan
> > --
> > Gromacs Users mailing list
> > * Please search the archive at
> > http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> > posting!
> > * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> > * For (un)subscribe requests visit
> > https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> > send a mail to gmx-users-request at gromacs.org.