[gmx-users] Regarding parallel run in Gromacs 5.0

Mark Abraham mark.j.abraham at gmail.com
Fri Dec 5 12:25:39 CET 2014


Hi,

On Fri, Dec 5, 2014 at 8:21 AM, Bikash Ranjan Sahoo <
bikash.bioinformatics at gmail.com> wrote:

> Dear All
>
> I have upgraded my Gromacs installation from v4.5.5 to 5.0.


Please get the latest 5.0.2 release, in that case ;-)

> I am unable to run parallel
> minimization and simulation on the cluster. Previously (v4.5.5) I was
> simulating with the command "mdrun -v -s em.tpr -nt 30 &" and thus
> assigning 30 CPUs to the system. However, in the new version it does
> not work and gives the errors below. The command from Dr. Justin's
> tutorial, "gmx mdrun -v -deffnm em", is also not working on my cluster
> installation, but runs fine on my local computer. Kindly help me run
> minimization with mdrun in parallel.
>
>
>
> GROMACS: gmx mdrun, VERSION 5.0
> Executable: Bikash/cmake/gro/gro5.0/bin/gmx
> Library dir: Bikash/cmake/gro/gro5.0/share/gromacs/top
> Command line:
> gmx mdrun -nt 30 -deffnm em
>
>
> Back Off! I just backed up em.log to ./#em.log.1#
>
> Number of hardware threads detected (288) does not match the number
> reported by OpenMP (276).
>

That's pretty bizarre. What kind of computer is this? Can you share your
whole log file (e.g. on a file-sharing service), please?


> Consider setting the launch configuration manually!
> Reading file em.tpr, VERSION 5.0 (single precision)
> The number of OpenMP threads was set by environment variable
> OMP_NUM_THREADS to 6
>
> Non-default thread affinity set, disabling internal thread affinity
> Using 5 MPI threads
>

Gromacs 4.6 and later can use OpenMP with the Verlet cut-off scheme, so
the interpretation of -nt 30 changes when the standard environment
variable OMP_NUM_THREADS is set. In your case the cluster/job script is
probably setting it for you. You can force the old interpretation with
-ntmpi 30, or set the variable more appropriately with

export OMP_NUM_THREADS=1
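
For instance, just sketching the two alternatives with the same -v and
-deffnm em flags you already use (the exact layout depends on your job
script):

# Option 1: force 30 thread-MPI ranks explicitly
# (OpenMP threads per rank will still follow OMP_NUM_THREADS)
gmx mdrun -ntmpi 30 -v -deffnm em

# Option 2: override the cluster's OpenMP setting so that -nt 30
# again means 30 thread-MPI ranks with one OpenMP thread each
export OMP_NUM_THREADS=1
gmx mdrun -nt 30 -v -deffnm em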

> Segmentation fault
>

However, a segmentation fault should never happen, so learning more from
the log file would be valuable.

Mark


>
>
> Thanking you
> In anticipation of your reply
> Bikash
> Osaka, Japan