[gmx-users] mdrun: no error, but hangs and produces no results

Shi, Yu (shiy4) shiy4 at mail.uc.edu
Wed Jul 17 18:24:29 CEST 2013


Dear gmx-users,

My problem is strange.
My mdrun worked well with the old serial version 4.5.5 (about two years ago), and I still have the top, ndx, mdp, and gro files from that run.
Starting from those old files, grompp for the serial 4.6.2 runs through and produces the .tpr file successfully.
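For reference, the grompp call is essentially the following (the file names other than em-nv are just stand-ins for my actual files):
 grompp -f em-nv.mdp -c conf.gro -p topol.top -n index.ndx -o em-nv.tpr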
After that, when I run mdrun,
 mdrun -v -s em-nv.tpr -deffnm ss
it only shows:
Reading file em-nv.tpr, VERSION 4.6.2 (double precision)
Using 8 MPI threads
Killed
There is no further output, and the process is eventually killed.
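If it helps narrow things down, I can rerun with an explicit thread count to see whether the 8 thread-MPI ranks are the problem, e.g. (assuming I have the 4.6 option right):
 mdrun -nt 1 -v -s em-nv.tpr -deffnm ss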

Also, the cmake installation itself went fine. Has anyone run into this problem before?
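For completeness, the configure step was roughly like this (the install prefix is a placeholder, and I may be forgetting some options):
 cmake .. -DGMX_DOUBLE=ON -DCMAKE_INSTALL_PREFIX=/path/to/install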


Part of the log file is:
Log file opened on Wed Jul 17 12:17:30 2013
Host: opt-login03.osc.edu  pid: 32177  nodeid: 0  nnodes:  1
Gromacs version:    VERSION 4.6.2
Precision:          double
Memory model:       64 bit
MPI library:        thread_mpi
OpenMP support:     disabled
GPU support:        disabled
invsqrt routine:    gmx_software_invsqrt(x)
CPU acceleration:   SSE2
FFT library:        fftw-3.3-sse2
Large file support: enabled
RDTSCP usage:       enabled
Built on:           Wed Jul 17 10:51:22 EDT 2013
Built by:           ucn1118 at opt-login03.osc.edu [CMAKE]
Build OS/arch:      Linux 2.6.18-308.11.1.el5 x86_64
Build CPU vendor:   AuthenticAMD
Build CPU brand:    Dual-Core AMD Opteron(tm) Processor 8218
Build CPU family:   15   Model: 65   Stepping: 2
Build CPU features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr pse rdtscp sse2 sse3
C compiler:         /usr/bin/cc GNU cc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-48)
C compiler flags:   -msse2    -Wextra -Wno-missing-field-initializers -Wno-sign-compare -Wall -Wno-unused -Wunused-value -Wno-unknown-pragmas   -fomit-frame-pointer -funroll-all-loops  -O3 -DNDEBUG
[...]
Initializing Domain Decomposition on 8 nodes
Dynamic load balancing: no
Will sort the charge groups at every domain (re)decomposition
Using 0 separate PME nodes, as there are too few total nodes for efficient splitting
Optimizing the DD grid for 8 cells with a minimum initial size of 0.000 nm
Domain decomposition grid 8 x 1 x 1, separate PME nodes 0
PME domain decomposition: 8 x 1 x 1
Domain decomposition nodeid 0, coordinates 0 0 0

Using 8 MPI threads

Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: AuthenticAMD
Brand:  Dual-Core AMD Opteron(tm) Processor 8218
Family: 15  Model: 65  Stepping:  2
Features: apic clfsh cmov cx8 cx16 htt lahf_lm mmx msr pse rdtscp sse2 sse3
Acceleration most likely to fit this hardware: SSE2
Acceleration selected at GROMACS compile time: SSE2

Table routines are used for coulomb: FALSE
Table routines are used for vdw:     FALSE
Will do PME sum in reciprocal space.

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------




