[gmx-users] Help w.r.t enhancing the node performance for simulation

Prasanth G, Research Scholar prasanthghanta at sssihl.edu.in
Sat Dec 29 09:58:49 CET 2018


Dear all,
I was able to overcome the issue by prefixing the command with "mpirun -np x".
Here is the exact command:

mpirun -np 32 gmx_mpi mdrun -v -deffnm md_0_10 -cpi md_0_10.cpt -append -ntomp 4
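
In case it helps others: since gmx_mpi is a real-MPI build, the rank count is
set through mpirun -np rather than mdrun -ntmpi, and the product of MPI ranks
and -ntomp threads per rank should match the cores available on the node. A
rough sketch (the 64-core node below is only an assumed example, not our
hardware):

mpirun -np 16 gmx_mpi mdrun -v -deffnm md_0_10 -ntomp 4
(16 MPI ranks x 4 OpenMP threads per rank = 64 cores)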

Thank you.


On Fri, Dec 28, 2018 at 12:12 PM Prasanth G, Research Scholar <
prasanthghanta at sssihl.edu.in> wrote:

> Dear all,
>
> Though GROMACS was configured with MPI support during installation,
>
>  installation cmake.txt
> <https://drive.google.com/a/sssihl.edu.in/file/d/1Io1QhMJg7x88LhRj6_iTsXdUxZkAmIbV/view?usp=drive_web>
>
> I am able to use only one MPI process on the node for the simulation.
> This happens when I try to use -ntmpi:
>
>  ntmpi 4 ntomp 8.txt
> <https://drive.google.com/a/sssihl.edu.in/file/d/152ea2HmpEL4_gSSn2_A_L0MoIwoTt8iy/view?usp=drive_web>
>
> I am attaching the md log file and md.mdp of a previous simulation here.
>
>  md.mdp
> <https://drive.google.com/a/sssihl.edu.in/file/d/1h6Lsb0MzJ8b3U4jPIMUn3T9DcnIxNvDi/view?usp=drive_web>
>
>  md_0_10.log
> <https://drive.google.com/a/sssihl.edu.in/file/d/141LtTSoishQG3Q6mqbHibHCbpXgxA5OS/view?usp=drive_web>
>
> I am also attaching the nvsmi log
>
>  nvsmi log.txt
> <https://drive.google.com/a/sssihl.edu.in/file/d/1Agh_0BsPKw5x5_sTsVShCfAZaOnm7Bud/view?usp=drive_web>
>
> I tried decreasing the number of threads for the current
> simulation; here are the results:
>
>  ntomp 8
> <https://drive.google.com/a/sssihl.edu.in/file/d/1KdlqxWs7peqwftvW1bhsYVAYNZLF6s-H/view?usp=drive_web>
>
>  ntomp 16
> <https://drive.google.com/a/sssihl.edu.in/file/d/1Md3rwKdl8h1WYVMpbON0avQN7ZF46Vl7/view?usp=drive_web>
>
>  ntomp 32
> <https://drive.google.com/a/sssihl.edu.in/file/d/1v5vIu2BbU7zs9HnZw9xMXM29hLkJa2LL/view?usp=drive_web>
>
> Can you please suggest a solution, as I am currently getting a performance of
> about 2.5 ns/day?
> Thanks in advance.
>
> --
> Regards,
> Prasanth.
>


-- 
Regards,
Prasanth.

