[gmx-users] problem with MPI,thread and number of CPU
Mark Abraham
mark.j.abraham at gmail.com
Mon May 16 21:35:15 CEST 2016
Hi,
On Mon, May 16, 2016 at 9:13 PM <khourshaeishargh at mech.sharif.ir> wrote:
>
>
>
>
> Dear Gromacs users
>
>
> During a simulation of a DPPC membrane I ran into several problems. I
> would like to describe them here, and I would really appreciate your help.
> Briefly, I started simulating a system containing 2000 W and 128 DPPC
> molecules. The trend (i.e. whether it increases or decreases) of the
> pressure component Pxx is important to me. As one can expect, given the
> size of my system (number of atoms), the result is very noisy
> (fluctuations of about 200 bar). So I enlarged the system in the
> x-direction, expecting a consistent trend for both systems: I replicated
> the system in the x-direction, minimized it, and ran NPT on it.
>
A common mistake here is to concatenate boxes without leaving suitable
spacing between them: atoms whose centers lie close to the old box edges
can end up too close to their new neighbours in the replicated system.
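If you haven't already, a replication tool that respects the box vectors avoids this problem. A minimal sketch (assuming the genconf tool and its -nbox flag; file names here are placeholders, and you should check `gmx genconf -h` for your version):

```shell
# Replicate the equilibrated box twice along x; copies are stacked at
# box-vector spacing, so atoms near the old edges keep their periodic
# distances instead of clashing.
gmx genconf -f membrane.gro -o doubled.gro -nbox 2 1 1
```

After replicating, re-minimizing before NPT (as you did) is still a good idea, since small inconsistencies at the join can remain.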
> during NPT, my working job blew up with this error :
>
>
> An atom moved too far between two domain decomposition steps
>
>
> Does this error relate to the number of CPU cores? When I decrease the
> number of cores from 16 to 8 using -nb 8, I no longer see this error,
> but the simulation makes no progress in the number of steps!
>
Different parallelism leads to different rounding in the forces, and (in
this case) different ways for the simulation to blow up, particularly if
my guess above is correct.
> I also have a question about MPI. On my laptop, when I simulate a box
> (128 DPPC with 2000 W), the system runs with:
>
>
> Using 1 MPI thread
>
> Using 4 OpenMP threads
>
>
> When I do the same on the university's high-performance computer, the
> output is totally different; it automatically uses:
>
>
> Using 24 MPI threads
>
> Using 1 OpenMP thread per tMPI thread
>
>
> But when I used -ntmpi 1 to set the number of MPI threads to 1, it gave
> the same answer as my laptop. So what are the optimum values for -ntmpi
> and -nt? I should note that I know what MPI and threads mean, but I
> don't know what the optimum numbers are!
>
It varies. mdrun has heuristics that consider the kind of processor, the
number of cores, the number of atoms, the presence of GPUs, the values of
environment variables like OMP_NUM_THREADS...
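As a rule of thumb, the product of MPI ranks and OpenMP threads per rank should match the number of cores you want to use; mdrun's -ntmpi and -ntomp flags let you set this explicitly. A sketch for a hypothetical 24-core node (the 4x6 split is just an example, not a recommendation):

```shell
# Total cores used = MPI (tMPI) ranks x OpenMP threads per rank.
ntmpi=4
ntomp=6
total=$((ntmpi * ntomp))
echo "$total"   # 24 on this hypothetical split

# The corresponding mdrun invocation would look like:
#   gmx mdrun -ntmpi $ntmpi -ntomp $ntomp -deffnm npt
```

Benchmarking a few splits on a short run is usually the only reliable way to find the best one for a given machine and system size.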
Mark
>
> best regards
>
>
> Ali
>
>
> ==================
>
>
> Ali khourshaei shargh (khourshaeishargh at mech.sharif.ir)
>
>
> Department of Mechanical Engineering
>
>
> Sharif University of Technology, Tehran, Iran
>
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.