[gmx-developers] g_hbond OpenMP parallelization only for ACF

Erik Marklund erik.marklund at chem.ox.ac.uk
Mon Aug 18 10:21:19 CEST 2014


Hi,

Hadn't seen that bug with -don before. Could you please file a redmine issue, including input files that trigger the segfault, so that this gets properly documented?

How big is the grid? I've had near-linear scaling before.

Kind regards,
Erik


On 18 Aug 2014, at 02:26, Bin Liu <fdusuperstring at gmail.com> wrote:

> Hi Everyone,
> 
> Thanks to Erik and David for your replies. After some experimentation with g_hbond, I found some phenomena which are both interesting and annoying.
> 
> First, it seems the -don flag breaks g_hbond. When I turned on this flag, g_hbond would either show "Segmentation fault" immediately, or run single-threaded (judging from the system monitor) and very slowly for quite a while, and then show "Segmentation fault". With this flag, g_hbond never finished its job gracefully. I tested it with both GROMACS 4.6.5 (compiled by gcc 4.6 and 4.7) and GROMACS 5.0.
> 
> Second, if -don is turned off, the grid-search stage apparently runs with OpenMP enabled, and g_hbond functions properly. However, the system monitor showed that the load was quite evenly distributed across all 8 logical cores (i7-3770K), while the CPU utilization on each logical core stayed around 50%. At first I thought this might be related to Hyper-Threading on the 3770K, with the physical cores actually kept busy the whole time. However, my experiment on another machine equipped with an older CPU (Core 2 Quad Q6600, no HT) showed exactly the same behavior. The speed of g_hbond in both cases was still satisfactory. I just want to report some information that might indicate a performance issue in the OpenMP implementation of g_hbond, and draw the developers' attention to investigating it.
> 
> I attached the compilation information for GROMACS 4.6.5 on my 3770K machine.
> 
> Regards,
> 
> Bin
> 
> Gromacs version:    VERSION 4.6.5
> Precision:          single
> Memory model:       64 bit
> MPI library:        thread_mpi
> OpenMP support:     enabled
> GPU support:        enabled
> invsqrt routine:    gmx_software_invsqrt(x)
> CPU acceleration:   AVX_256
> FFT library:        fftw-3.3.3-sse2
> Large file support: enabled
> RDTSCP usage:       enabled
> Built on:           Sat Dec 21 12:37:38 EST 2013
> Built by:           main at MainPC [CMAKE]
> Build OS/arch:      Linux 3.5.0-41-generic x86_64
> Build CPU vendor:   GenuineIntel
> Build CPU brand:    Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
> Build CPU family:   6   Model: 58   Stepping: 9
> Build CPU features: aes apic avx clfsh cmov cx8 cx16 f16c htt lahf_lm mmx msr nonstop_tsc pcid pclmuldq pdcm popcnt pse rdrnd rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt
> C compiler:         /usr/bin/gcc GNU gcc (Ubuntu/Linaro 4.6.4-1ubuntu1~12.04) 4.6.4
> C compiler flags:   -mavx    -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value   -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3 -DNDEBUG
> C++ compiler:       /usr/bin/c++ GNU c++ (Ubuntu/Linaro 4.6.4-1ubuntu1~12.04) 4.6.4
> C++ compiler flags: -mavx   -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value   -fomit-frame-pointer -funroll-all-loops -fexcess-precision=fast  -O3 -DNDEBUG
> CUDA compiler:      /usr/local/cuda-5.0/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2012 NVIDIA Corporation;Built on Fri_Sep_21_17:28:58_PDT_2012;Cuda compilation tools, release 5.0, V0.2.1221
> CUDA compiler flags:-gencode;arch=compute_20,code=sm_20;-gencode;arch=compute_20,code=sm_21;-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_35,code=compute_35;-use_fast_math;-ccbin=/usr/bin/gcc;-Xcompiler;-fPIC ; -mavx;-Wextra;-Wno-missing-field-initializers;-Wno-sign-compare;-Wall;-Wno-unused;-Wunused-value;-fomit-frame-pointer;-funroll-all-loops;-fexcess-precision=fast;-O3;-DNDEBUG
> CUDA driver:        5.50
> CUDA runtime:       5.0


