[gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

Mohammed I Sorour Mohammed.Sorour at temple.edu
Fri Aug 2 17:09:24 CEST 2019


This is the output:


Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
Job ID:           341185
Username:         tuf73544
Group:            chem
Job Name:         NVT
Session:          45173
Limits:           walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
Resources:
 cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
Queue:            normal
Account:
Deleting /dev/shm/*...
----------------------------------------
End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019
----------------------------------------
Command line:
  gmx mdrun -deffnm nvt


Running on 1 node with total 28 cores, 28 logical cores
Hardware detected:
  CPU info:
    Vendor: Intel
    Brand:  Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
    SIMD instructions most likely to fit this hardware: AVX2_256
    SIMD instructions selected at GROMACS compile time: SSE4.1

  Hardware topology: Full, with devices

Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
machine, which is better.
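
(A side note on the SIMD message above: it only affects performance, not
correctness. If a locally built GROMACS is an option, the SIMD level is chosen
at compile time via the GMX_SIMD CMake option; a minimal sketch, with
placeholder source and install paths:

    # Configure a GROMACS build for the SIMD level this CPU supports.
    # The source and install paths below are placeholders.
    cmake /path/to/gromacs-2016.3 -DGMX_SIMD=AVX2_256 \
          -DCMAKE_INSTALL_PREFIX=$HOME/software/gromacs-avx2
    make -j 8 && make install
)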

*Reading file nvt.tpr, VERSION 2016.3 (single precision)*
Changing nstlist from 10 to 20, rlist from 1 to 1.029

The number of OpenMP threads was set by environment variable
OMP_NUM_THREADS to 1

Will use 24 particle-particle and 4 PME only ranks
This is a guess, check the performance at the end of the log file
Using 28 MPI threads
Using 1 OpenMP thread per tMPI thread

starting mdrun 'DNA in water'
500000 steps,   1000.0 ps.

step 40 Turning on dynamic load balancing, because the performance loss due
to load imbalance is 4.1 %.


Writing final coordinates.

 Average load imbalance: 1.5 %
 Part of the total run time spent waiting due to load imbalance: 1.2 %
 Steps where the load balancing was limited by -rdd, -rcon and/or -dds: X 0 % Y 0 % Z 0 %
 Average PME mesh/force load: 0.752
 Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %


               Core t (s)   Wall t (s)        (%)
       Time:    71996.002     2571.286     2800.0
                         42:51
                 (ns/day)    (hour/ns)
Performance:       33.602        0.714
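
(For context on the error in the subject line: "In option s, required option
was not provided and the default file 'topol' does not exist" is what mdrun
typically prints when it cannot find the run input file it was pointed at,
e.g. because the job did not start in the directory that contains nvt.tpr. A
sketch of a submission script that names the input explicitly and fails loudly
if the file is missing; the resource requests are placeholders and the
echo/ls lines are only for debugging:

    #!/bin/bash
    #PBS -l nodes=1:ppn=28
    #PBS -l walltime=01:00:00
    #PBS -N NVT
    #PBS -e out.err
    #PBS -o out

    module load gromacs

    # Run from the submission directory and stop with a clear message if the
    # run input is not there -- the usual cause of the 'topol' error above.
    cd "${PBS_O_WORKDIR}" || exit 1
    echo "Working directory: $(pwd)"
    ls -l nvt.tpr || { echo "nvt.tpr not found in $(pwd)"; exit 1; }

    # Name the .tpr explicitly rather than relying only on -deffnm defaults.
    gmx mdrun -s nvt.tpr -deffnm nvt
)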

On Fri, Aug 2, 2019 at 11:00 AM John Whittaker <
johnwhittake at zedat.fu-berlin.de> wrote:

> > Hi Justin,
> >
> > Yes, I'm using a queuing system with a submission script.
> >
> > #-l nodes=1:ppn=16
> >
> > #PBS -l walltime=10:00:00
> >
> > #PBS -qmedium
> >
> > #PBS -N NVT
> >
> > #PBS -e out.err
> >
> > #PBS -o out
> >
> >
> >
> > module load gromacs
> >
> >
> > cd ${PBS_O_WORKDIR}/;
> >
> >
> > gmx mdrun -deffnm nvt
> >
> >
> > Well, based on your hint, I ran a trial mdrun job without using the
> > queue. It seemed to work well and read the .tpr file; I had to kill the
> > job once I was sure it was reading the .tpr file, because of the rules
> > about using the cluster outside the queue.
> > So I suspect there is something wrong with how the script is being
> > executed. It is worth noting that I have used the exact same script,
> > without any change, multiple times and it worked well, which makes this
> > even more confusing. Any hints?
>
> What is the output from the cluster? I'm guessing the output is in the
> file called "out" that's created each time the simulation fails.
>
>
> >
> > Thanks for the ionization/neutralization advice; I did indeed do that.
> >
> > Thanks,
> > Mohammed
> >
> > On Thu, Aug 1, 2019 at 9:22 PM Justin Lemkul <jalemkul at vt.edu> wrote:
> >
> >>
> >>
> >> On 8/1/19 7:04 PM, Mohammed I Sorour wrote:
> >> >> Hi,
> >> >
> >> > that's what the ls -l prints,
> >> >
> >> >
> >> >
> >> >
> >> >> ls -l
> >> >> total 193120
> >> >> drwxr-xr-x 2 tuf73544 chem     4096 Jul 26 17:20 amber99sb_dyes.ff
> >> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:56 em.gro
> >> >> -rw-r--r-- 1 tuf73544 chem      191 Aug  1 15:04 equilibration_NVT_script
> >> >> -rw-r--r-- 1 tuf73544 chem 61421681 Jul 31 19:11 full_solv_ions.gro
> >> >> -rw-r--r-- 1 tuf73544 chem     2164 Jul  8  2016 ions.itp
> >> >> -rw-r--r-- 1 tuf73544 chem 33866956 Jul 31 19:08 ions.tpr
> >> >> -rw-r--r-- 1 tuf73544 chem    11962 Aug  1 14:57 mdout.mdp
> >> >> -rw-r--r-- 1 tuf73544 chem    11962 Aug  1 14:56 #mdout.mdp.1#
> >> >> -rw-r--r-- 1 tuf73544 chem     1875 Jul 27 08:22 nvt.mdp
> >> >> -rw-r--r-- 1 tuf73544 chem 39455312 Aug  1 14:57 nvt.tpr
> >> >> -rw------- 1 tuf73544 chem     1032 Aug  1 15:04 out
> >> >> -rw------- 1 tuf73544 chem     2395 Aug  1 15:04 out.err
> >> >> -rw-r--r-- 1 tuf73544 chem    38899 Jul 31 18:58 posre_DNA_chain_A.itp
> >> >> -rw-r--r-- 1 tuf73544 chem    39953 Jul 31 18:58 posre_DNA_chain_B.itp
> >> >> -rw-r--r-- 1 tuf73544 chem     3215 Aug  1 14:57 residuetypes.dat
> >> >> -rw-r--r-- 1 tuf73544 chem     4873 Jul 18 13:03 specbond.dat
> >> >> -rw-r--r-- 1 tuf73544 chem    69176 Jul 16 11:40 tip3p.gro
> >> >> -rw-r--r-- 1 tuf73544 chem   588482 Jul 31 18:58 topol_DNA_chain_A.itp
> >> >> -rw-r--r-- 1 tuf73544 chem   589283 Jul 31 18:58 topol_DNA_chain_B.itp
> >> >> -rw------- 1 tuf73544 chem     1264 Jul 31 19:10 topol.top
> >> >> drwxr-xr-x 3 tuf73544 chem     4096 Aug  1 14:18 trial
> >>
> >> Are you executing mdrun interactively, or via some kind of queuing
> >> system with a submission script?
> >>
> >> Also you should *not* be running dynamics on a system with such a net
> >> charge. Add salt and neutralize! It's not the source of your problem,
> >> though.
> >>
> >> -Justin
> >>
> >> --
> >> ==================================================
> >>
> >> Justin A. Lemkul, Ph.D.
> >> Assistant Professor
> >> Office: 301 Fralin Hall
> >> Lab: 303 Engel Hall
> >>
> >> Virginia Tech Department of Biochemistry
> >> 340 West Campus Dr.
> >> Blacksburg, VA 24061
> >>
> >> jalemkul at vt.edu | (540) 231-3129
> >> http://www.thelemkullab.com
> >>
> >> ==================================================
> >>
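
(On the neutralization advice quoted above, a minimal sketch of the usual
genion route, assuming the .gro/.top/.tpr names from the directory listing;
the .mdp name, the pre-ion coordinate file, the ion names, and the
concentration are placeholders:

    # Build a .tpr for genion from the solvated (pre-ion) coordinates; the
    # ions.mdp and full_solv.gro names here are placeholders.
    gmx grompp -f ions.mdp -c full_solv.gro -p topol.top -o ions.tpr

    # Replace solvent with ions until the net charge is zero; -conc adds
    # additional salt beyond bare neutralization. Choose the SOL group when
    # genion asks which group to embed the ions in.
    gmx genion -s ions.tpr -o full_solv_ions.gro -p topol.top \
               -pname NA -nname CL -neutral -conc 0.15
)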