[gmx-users] Help w.r.t enhancing the node performance for simulation
Prasanth G, Research Scholar
prasanthghanta at sssihl.edu.in
Sat Jan 12 08:04:14 CET 2019
Dear all,
Could you please tell me whether this is normal during a simulation? I am
running mdrun with the -v flag and I see the following output:
.
.
vol 0.23! imb F 2264% pme/F 0.57 step 1496300, will finish Mon Jan 21
00:13:42 2019
vol 0.23 imb F 3249% pme/F 0.75 step 1496400, will finish Mon Jan 21
00:13:49 2019
vol 0.23 imb F 0% pme/F 1.95 step 1496500, will finish Mon Jan 21
00:13:47 2019
vol 0.23 imb F 526% pme/F 1.33 step 1496600, will finish Mon Jan 21
00:13:50 2019
vol 0.21! imb F 130% pme/F 1.65 step 1496700, will finish Mon Jan 21
00:13:49 2019
.
.
The command I used for mdrun is as follows:
mpirun -np 64 gmx_mpi mdrun -v -deffnm md_0_10 -cpi md_0_10.cpt -append
-ntomp 1 -npme 32
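In case it helps, this is what I was planning to try next (fewer PME ranks and
dynamic load balancing forced on; the exact split is only my guess):

mpirun -np 64 gmx_mpi mdrun -v -deffnm md_0_10 -cpi md_0_10.cpt -append
-ntomp 1 -npme 16 -dlb yes

Would that be reasonable, or is it better to let mdrun choose the number of
PME ranks itself (-npme -1)?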
The gmx_mpi version information is as follows:
GROMACS: gmx_mpi, version 2018
Executable: /usr/local/gromacs2018/bin/gmx_mpi
Data prefix: /usr/local/gromacs2018
Working dir: /home/bio1
Command line:
gmx_mpi --version
GROMACS version: 2018
Precision: single
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support: CUDA
SIMD instructions: AVX_256
FFT library: fftw-3.3.5-fma-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: hwloc-1.11.0
Tracing support: disabled
Built on: 2018-12-29 04:38:41
Built by: root at node04 [CMAKE]
Build OS/arch: Linux 4.4.0-21-generic x86_64
Build CPU vendor: Intel
Build CPU brand: Intel(R) Xeon(R) CPU E5-4610 v2 @ 2.30GHz
Build CPU family: 6 Model: 62 Stepping: 4
Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c htt intel lahf
mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp sse2
sse3 sse4.1 sse4.2 ssse3 tdt x2apic
C compiler: /usr/bin/cc GNU 5.4.0
C compiler flags: -mavx -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
C++ compiler: /usr/bin/c++ GNU 5.4.0
C++ compiler flags: -mavx -std=c++11 -O3 -DNDEBUG -funroll-all-loops
-fexcess-precision=fast
CUDA compiler: /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler
driver;Copyright (c) 2005-2016 NVIDIA Corporation;Built on
Tue_Jan_10_13:22:03_CST_2017;Cuda compilation tools, release 8.0, V8.0.61
CUDA compiler
flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_60,code=compute_60;-gencode;arch=compute_61,code=compute_61;-use_fast_math;-Wno-deprecated-gpu-targets;-D_FORCE_INLINES;;
;-mavx;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver: 8.0
CUDA runtime: 8.0
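Since the build has GPU (CUDA) support and the node has a GPU (see the nvsmi
log in the earlier mail below), I was also wondering whether I should offload
the non-bonded work instead of running CPU-only ranks, for example something
like this (the rank/thread split is only a guess, and I have not tested it):

mpirun -np 8 gmx_mpi mdrun -v -deffnm md_0_10 -cpi md_0_10.cpt -append
-ntomp 4 -nb gpu -gpu_id 0

Would that make better use of the hardware?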
Thank you in advance.
On Sat, Dec 29, 2018 at 2:33 PM Prasanth G, Research Scholar <
prasanthghanta at sssihl.edu.in> wrote:
> Dear all,
> I was able to overcome the issue by prefixing the command with "mpirun -np
> x".
> Here is the exact command:
>
> mpirun -np 32 gmx_mpi mdrun -v -deffnm md_0_10 -cpi md_0_10.cpt -append
> -ntomp 4
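>
> In case it is relevant, the rank/thread layout that mdrun actually used is
> reported near the top of the log; it can be checked, for example, with:
>
> grep -e "MPI process" -e "OpenMP thread" md_0_10.log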
>
> Thank you.
>
>
> On Fri, Dec 28, 2018 at 12:12 PM Prasanth G, Research Scholar <
> prasanthghanta at sssihl.edu.in> wrote:
>
>> Dear all,
>>
>> GROMACS was configured with MPI support during installation, as shown in
>> the attached cmake log:
>>
>> installation cmake.txt
>> <https://drive.google.com/a/sssihl.edu.in/file/d/1Io1QhMJg7x88LhRj6_iTsXdUxZkAmIbV/view?usp=drive_web>
>>
>> However, I am able to use only one MPI process on the node for the
>> simulation. This happens when I try to use -ntmpi:
>>
>> ntmpi 4 ntomp 8.txt
>> <https://drive.google.com/a/sssihl.edu.in/file/d/152ea2HmpEL4_gSSn2_A_L0MoIwoTt8iy/view?usp=drive_web>
>>
>> I am attaching the md log file and md.mdp of a previous simulation here.
>>
>> md.mdp
>> <https://drive.google.com/a/sssihl.edu.in/file/d/1h6Lsb0MzJ8b3U4jPIMUn3T9DcnIxNvDi/view?usp=drive_web>
>>
>> md_0_10.log
>> <https://drive.google.com/a/sssihl.edu.in/file/d/141LtTSoishQG3Q6mqbHibHCbpXgxA5OS/view?usp=drive_web>
>>
>> I am also attaching the nvsmi log:
>>
>> nvsmi log.txt
>> <https://drive.google.com/a/sssihl.edu.in/file/d/1Agh_0BsPKw5x5_sTsVShCfAZaOnm7Bud/view?usp=drive_web>
>>
>> I tried decreasing the number of threads for the current simulation;
>> here are the results:
>>
>> ntomp 8
>> <https://drive.google.com/a/sssihl.edu.in/file/d/1KdlqxWs7peqwftvW1bhsYVAYNZLF6s-H/view?usp=drive_web>
>>
>> ntomp 16
>> <https://drive.google.com/a/sssihl.edu.in/file/d/1Md3rwKdl8h1WYVMpbON0avQN7ZF46Vl7/view?usp=drive_web>
>>
>> ntomp 32
>> <https://drive.google.com/a/sssihl.edu.in/file/d/1v5vIu2BbU7zs9HnZw9xMXM29hLkJa2LL/view?usp=drive_web>
>>
>> Can you please suggest a solution, as I am currently getting a
>> performance of only about 2.5 ns/day?
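>>
>> If it is useful, this is the kind of short comparison run I can do for
>> each setting (the performance counters are reset halfway so start-up costs
>> do not distort the numbers; file names are placeholders):
>>
>> for nt in 8 16 32; do
>>   gmx_mpi mdrun -s md_0_10.tpr -ntomp $nt -nsteps 5000 -resethway \
>>     -noconfout -g bench_ntomp${nt}.log
>> done
>>
>> Please let me know if there is a better way to benchmark this.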
>> Thanks in advance.
>>
>> --
>> Regards,
>> Prasanth.
>>
>
>
> --
> Regards,
> Prasanth.
>
--
Regards,
Prasanth.