[gmx-users] random initial failure

Harry Mark Greenblatt harry.greenblatt at weizmann.ac.il
Tue Feb 12 13:20:27 CET 2019


BS”D

OK, thanks for the reply; I will try to run that test, hopefully tomorrow.


Harry



On 11 Feb 2019, at 4:12 PM, Szilárd Páll <pall.szilard at gmail.com> wrote:

Harry,

That does not seem normal. Have you tried running CPU-only to see
if that reproduces the issue (e.g. run mdrun -nsteps 0 -nb cpu -pme
cpu a few times)?
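The suggested check could be scripted along these lines (a minimal sketch; it assumes a prepared topol.tpr in the current directory, and the cpu_test_* output prefixes are illustrative names, not from the thread):

```shell
# Repeat a zero-step, CPU-only mdrun to see whether the spurious
# high-energy failure also occurs with nonbonded and PME work kept
# off the GPU. -nsteps 0 evaluates only the step-0 energies.
if command -v gmx >/dev/null 2>&1; then
    for i in 1 2 3 4 5; do
        gmx mdrun -s topol.tpr -deffnm cpu_test_"$i" -nsteps 0 -nb cpu -pme cpu
    done
else
    echo "gmx not found in PATH; run this on the simulation host" >&2
fi
```

If all five CPU-only runs report sane step-0 energies, that points toward the GPU code path (or hardware) rather than the starting configuration.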

--
Szilárd

On Mon, Feb 11, 2019 at 11:25 AM Harry Mark Greenblatt
<harry.greenblatt at weizmann.ac.il> wrote:

BS”D

Dear All,

I am trying to run a system of about 70,000 atoms, including waters, of a trimeric protein. It went through minimization, PT, and NPT equilibration.

Most of the time it starts and runs fine. But about once in every five tries, I get:


125000000 steps, 250000.0 ps.

-------------------------------------------------------
Program:     gmx mdrun, version 2019
Source file: src/gromacs/mdlib/sim_util.cpp (line 752)
MPI rank:    4 (out of 6)

Fatal error:
Step 0: The total potential energy is 3.21792e+36, which is extremely high.
The LJ and electrostatic contributions to the energy are 28531.2 and -228106,
respectively. A very high potential energy can be caused by overlapping
interactions in bonded interactions or very large coordinate values. Usually
this is caused by a badly- or non-equilibrated initial configuration,
incorrect interactions or parameters in the topology.

For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors


If the system were truly non-equilibrated, I would have expected it to fail every time. Yet most runs are fine.


Below are the hardware and build details.


Please let me know what other details you would like to know.

Thanks

Harry


GROMACS version:    2019
Precision:          single
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     enabled (GMX_OPENMP_MAX_THREADS = 64)
GPU support:        CUDA
SIMD instructions:  AVX_512
FFT library:        fftw-3.3.7-sse2-avx-avx2-avx2_128-avx512
RDTSCP usage:       enabled
TNG support:        enabled
Hwloc support:      disabled
Tracing support:    disabled
C compiler:         /usr/local/gcc/gcc-6.4.0/bin/gcc GNU 6.4.0
C compiler flags:    -mavx512f -mfma     -O2 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
C++ compiler:       /usr/local/gcc/gcc-6.4.0/bin/c++ GNU 6.4.0
C++ compiler flags:  -mavx512f -mfma    -std=c++11   -O2 -DNDEBUG -funroll-all-loops -fexcess-precision=fast
CUDA compiler:      /usr/local/cuda/bin/nvcc nvcc: NVIDIA (R) Cuda compiler driver;Copyright (c) 2005-2018 NVIDIA Corporation;Built on Tue_Jun_12_23:07:04_CDT_2018;Cuda compilation tools, release 9.2, V9.2.148
CUDA compiler flags:-gencode;arch=compute_30,code=sm_30;-gencode;arch=compute_35,code=sm_35;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_52,code=sm_52;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_70,code=compute_70;-use_fast_math;;; ;-mavx512f;-mfma;-std=c++11;-O2;-DNDEBUG;-funroll-all-loops;-fexcess-precision=fast;
CUDA driver:        9.20
CUDA runtime:       9.20


Running on 1 node with total 36 cores, 36 logical cores, 2 compatible GPUs
Hardware detected:
 CPU info:
   Vendor: Intel
   Brand:  Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz
   Family: 6   Model: 85   Stepping: 4
   Features: aes apic avx avx2 avx512f avx512cd avx512bw avx512vl clfsh cmov cx8 cx16 f16c fma hle htt intel lahf mmx msr nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdrnd rdtscp rtm sse2 sse3 sse4.1 sse4.2 ssse3 tdt x2apic
   Number of AVX-512 FMA units: 2
 Hardware topology: Basic
   Sockets, cores, and logical processors:
     Socket  0: [   0] [   1] [   2] [   3] [   4] [   5] [   6] [   7] [   8] [   9] [  10] [  11] [  12] [  13] [  14] [  15] [  16] [  17]
     Socket  1: [  18] [  19] [  20] [  21] [  22] [  23] [  24] [  25] [  26] [  27] [  28] [  29] [  30] [  31] [  32] [  33] [  34] [  35]
 GPU info:
   Number of GPUs detected: 2
   #0: NVIDIA Tesla V100-PCIE-16GB, compute cap.: 7.0, ECC: yes, stat: compatible
   #1: NVIDIA Tesla V100-PCIE-16GB, compute cap.: 7.0, ECC: yes, stat: compatible


--------------------------------------------------------------------
Harry M. Greenblatt
Associate Staff Scientist
Dept of Structural Biology           harry.greenblatt at weizmann.ac.il
Weizmann Institute of Science        Phone:  972-8-934-6340
234 Herzl St.                        Facsimile:   972-8-934-3361
Rehovot, 7610001
Israel

--
Gromacs Users mailing list

* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.




