[gmx-users] grompp not possible with annealing - Gromacs 2019.1
Tafelmeier, Stefanie
Stefanie.Tafelmeier at zae-bayern.de
Fri Sep 13 11:06:30 CEST 2019
Dear Mark,
many thanks for your answer.
Unfortunately, I face another problem when installing Gromacs 2019.2 or 2019.3:
the regression tests fail (nos. 42 & 46).
This problem already occurred when I installed Gromacs 2019.1. At that time the issue was solved by using the newest versions of GCC and CUDA: https://mailman-1.sys.kth.se/pipermail/gromacs.org_gmx-users/2019-January/124028.html
If I try the same now, the problem is that the newest CUDA version does not support the newest GCC:
-------
CMake Error at cmake/gmxManageNvccConfig.cmake:192 (message):
NVCC/C++ compiler combination does not seem to be supported. CUDA
frequently does not support the latest versions of the host compiler, so
you might want to try an earlier C++ compiler version and make sure your
CUDA compiler and driver are as recent as possible.
-------
Hence, I have to stay with GCC 8.2, which in turn only allows me to use Gromacs 2019.1, and so no annealing is possible.
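For reference, pinning a specific GCC both for the C++ build and (via the C++ compiler) for nvcc happens at configure time, roughly like this (only a sketch - gcc-8/g++-8 stand in for whichever version turns out to be compatible, and the paths are illustrative):
-------
cd gromacs-2019.3/build
cmake .. -DGMX_GPU=ON \
         -DCMAKE_C_COMPILER=gcc-8 \
         -DCMAKE_CXX_COMPILER=g++-8 \
         -DREGRESSIONTEST_DOWNLOAD=ON
make -j 8
make check      # re-runs the regression tests before installing
-------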
This is a bit tricky.
If you have any suggestions on how to use annealing anyway, I would appreciate it a lot.
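For completeness: the annealing block in the .mdp quoted below is only a minimal constant-temperature test (280 K held over the first 60 ps). A ramped schedule keeps the same structure and simply uses more time/temperature points per temperature-coupling group, for example (a sketch, not the settings quoted below):
-------
annealing         = single
annealing-npoints = 3
annealing-time    = 0 500 1000    ; ps
annealing-temp    = 280 320 320   ; K
-------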
Many thanks,
Steffi
-----Original Message-----
From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se [mailto:gromacs.org_gmx-users-bounces at maillist.sys.kth.se] On behalf of Mark Abraham
Sent: Tuesday, 10 September 2019 17:56
To: Discussion list for GROMACS users
Subject: Re: [gmx-users] grompp not possible with annealing - Gromacs 2019.1
Hi,
Thanks for the report - but it's probably fixed already (
http://manual.gromacs.org/documentation/2019.2/release-notes/2019/2019.2.html#fix-segmentation-fault-when-preparing-simulated-annealing-inputs)
so I suggest you get the latest 2019.x release?
Mark
On Tue, 10 Sep 2019 at 17:15, Tafelmeier, Stefanie <
Stefanie.Tafelmeier at zae-bayern.de> wrote:
> Dear all,
>
> I am trying to use simulated annealing, but unfortunately the grompp
> command leads to an error.
> It is not a known Gromacs error: grompp does not finish the job and only
> prints "Speicherzugriffsfehler (Speicherabzug geschrieben)" (German for
> "segmentation fault (core dumped)").
>
> The only output file produced is mdout.mdp, whose content seems
> correct.
>
> The screen text is given below, as well as the grompp.mdp content and
> the details of the system used.
>
> There were also some issues getting Gromacs installed on the
> workstation; I am not sure whether this could be connected.
> Many thanks already for your help.
>
> Greetings,
> Steffi
>
>
>
> ------------------------------------------------------------------------------------------------
> Screen text:
>
> GROMACS: gmx grompp, version 2019.1
> Executable: /usr/local/gromacs/bin/gmx
> Data prefix: /usr/local/gromacs
> Working dir:
> /home/pcm-mess/Schreibtisch/StTa/Vergleich_FF/OPLS/fixed_layer/freeze_grp/temp_grps/annealing
> Command line:
> gmx grompp -f grompp_OPLS_anneal.mdp -v
>
> checking input for internal consistency...
> Setting the LD random seed to -28800458
> processing topology...
> Generated 330891 of the 330891 non-bonded parameter combinations
> Generating 1-4 interactions: fudge = 0.5
> Generated 330891 of the 330891 1-4 parameter combinations
> Excluding 3 bonded neighbours molecule type 'Other'
> turning all bonds into constraints...
>
> NOTE 1 [file unknown]:
> You are using constraints on all bonds, whereas the forcefield has been
> parametrized only with constraints involving hydrogen atoms. We suggest
> using constraints = h-bonds instead, this will also improve performance.
>
> processing coordinates...
> double-checking input for internal consistency...
> Setting gen_seed to -1070611718
> Velocities were taken from a Maxwell distribution at 280 K
> Removing all charge groups because cutoff-scheme=Verlet
> renumbering atomtypes...
> converting bonded parameters...
> initialising group options...
> processing index file...
> Analysing residue names:
> There are: 3600 Other residues
> Analysing residues not classified as Protein/DNA/RNA/Water and splitting
> into groups...
> Speicherzugriffsfehler (Speicherabzug geschrieben)
>
> ------------------------------------------------------------------------------------------------
>
>
> ------------------------------------------------------------------------------------------------
> Content grompp.mdp (as an example):
>
> include = -I../top
> define =
> cutoff-scheme = Verlet
> integrator = md
> dt = 0.001
> nsteps = 800000
> nstxout = 2000
> nstvout = 2000
> nstlog = 2000
> nstenergy = 2000
> nstlist = 10
> ns-type = grid
> pbc = xyz
> rlist = 1
> coulombtype = PME
> rcoulomb = 1
> rvdw = 1
> tcoupl = v-rescale
> tc-grps = other
> tau-t = 0.1
> ref-t = 280
> Pcoupl = Berendsen
> pcoupltype = anisotropic
> tau-p = 10
> compressibility = 8.7e-5 8.7e-5 8.7e-5 0 0 0
> ref-p = 1.0 1.0 1.0 0 0 0
> constraints = all-bonds
> gen-vel = yes
> gen-temp = 280
> gen-seed = -1
> annealing = single
> annealing-npoints = 2
> annealing-time = 0 60
> annealing-temp = 280 280
>
> ------------------------------------------------------------------------------------------------
>
>
> ------------------------------------------------------------------------------------------------
> System details:
>
> GROMACS version: 2019.1
> Precision: single
> Memory model: 64 bit
> MPI library: thread_mpi
> OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
> GPU support: CUDA
> SIMD instructions: AVX_512
> FFT library: fftw-3.3.8-sse2-avx-avx2-avx2_128-avx512
> RDTSCP usage: enabled
> TNG support: enabled
> Hwloc support: disabled
> Tracing support: disabled
> C compiler: /usr/bin/cc GNU 8.2.0
> C compiler flags: -mavx512f -mfma -O3 -DNDEBUG -funroll-all-loops
> -fexcess-precision=fast
> C++ compiler: /usr/bin/c++ GNU 8.2.0
> C++ compiler flags: -mavx512f -mfma -std=c++11 -O3 -DNDEBUG
> -funroll-all-loops -fexcess-precision=fast
> CUDA compiler: /usr/local/cuda-10.1/bin/nvcc nvcc: NVIDIA (R) Cuda
> compiler driver;Copyright (c) 2005-2019 NVIDIA Corporation;Built on
> Fri_Feb__8_19:08:17_PST_2019;Cuda compilation tools, release 10.1, V10.1.105
> CUDA compiler
> flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=compute_75;-use_fast_math;-D_FORCE_INLINES;;
> ;-mavx512f;-mfma;-std=c++11;-O3;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
> CUDA driver: 10.10
> CUDA runtime: 10.10
>
> Running on 1 node with total 44 cores, 88 logical cores, 1 compatible GPU
> Hardware detected:
> CPU info:
> Vendor: Intel
> Brand: Intel(R) Xeon(R) Gold 6152 CPU @ 2.10GHz
> Family: 6 Model: 85 Stepping: 4
> Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh
> cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq
> pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt
> x2apic
> Number of AVX-512 FMA units: 2
> Hardware topology: Basic
> Sockets, cores, and logical processors:
> Socket 0: [ 0 44] [ 1 45] [ 2 46] [ 3 47] [ 4 48] [
> 5 49] [ 6 50] [ 7 51] [ 8 52] [ 9 53] [ 10 54] [ 11 55]
> [ 12 56] [ 13 57] [ 14 58] [ 15 59] [ 16 60] [ 17 61] [ 18
> 62] [ 19 63] [ 20 64] [ 21 65]
> Socket 1: [ 22 66] [ 23 67] [ 24 68] [ 25 69] [ 26 70] [
> 27 71] [ 28 72] [ 29 73] [ 30 74] [ 31 75] [ 32 76] [ 33 77]
> [ 34 78] [ 35 79] [ 36 80] [ 37 81] [ 38 82] [ 39 83] [ 40
> 84] [ 41 85] [ 42 86] [ 43 87]
> GPU info:
> Number of GPUs detected: 1
> #0: NVIDIA Quadro P6000, compute cap.: 6.1, ECC: no, stat: compatible
>
> ------------------------------------------------------------------------------------------------
>
>
>