[gmx-users] mdrun: In option s, required option was not provided and the default file 'topol' does not exist or not accessible and non-integer charges

Carlos Navarro carlos.navarro87 at gmail.com
Fri Aug 2 17:34:26 CEST 2019


Did you try replacing the line
cd ${PBS_O_WORKDIR}/
with
cd 'YOUR_CURRENT_PATH'/
?
As people have already pointed out, the variable may not be getting set
properly inside the job, so hard-coding the path could work; for example,
something like the sketch below.
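A minimal sketch of a submission script with the path hard-coded. The run
directory is only a placeholder, the #PBS lines are copied from the limits in
the epilogue you posted, and any module loads your cluster needs are omitted,
so adapt all of that to your setup:

#!/bin/bash
#PBS -N NVT
#PBS -q normal
#PBS -l walltime=01:00:00,nodes=1:ppn=28

# Hard-coded run directory instead of ${PBS_O_WORKDIR}; put the directory
# that actually contains nvt.tpr here, and abort if the cd fails.
cd /full/path/to/your/nvt_run_directory || exit 1

# One OpenMP thread per thread-MPI rank, as in your log output
export OMP_NUM_THREADS=1

gmx mdrun -deffnm nvt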
Best,

——————
Carlos Navarro Retamal
Bioinformatic Engineering. PhD.
Postdoctoral Researcher in Center of Bioinformatics and Molecular
Simulations
Universidad de Talca
Av. Lircay S/N, Talca, Chile
E: carlos.navarro87 at gmail.com or cnavarro at utalca.cl

On August 2, 2019 at 5:20:33 PM, Mohammed I Sorour (
mohammed.sorour at temple.edu) wrote:

Yes, I deeply appreciate your help. But do you have any idea or recommendation
as to why the same command doesn't work when run through the script? I can't
run jobs outside the queue system, especially a calculation this big.
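One quick way to see what is happening is to add a few sanity checks at the
top of the submission script; nvt.tpr is assumed here, following the
-deffnm nvt shown in the log:

echo "PBS_O_WORKDIR = '${PBS_O_WORKDIR}'"     # is the variable set at all?
cd "${PBS_O_WORKDIR:?PBS_O_WORKDIR is empty}" # abort here if it is not
pwd                                           # where did the job actually land?
ls -l nvt.tpr                                 # can the compute node see the input?

If the variable prints empty, the original unquoted cd ${PBS_O_WORKDIR}/
expands to cd /, the job runs in a directory that does not contain nvt.tpr,
and mdrun fails with the 'file does not exist' error from the subject line.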

On Fri, Aug 2, 2019 at 11:13 AM Justin Lemkul <jalemkul at vt.edu> wrote:

>
>
> On 8/2/19 11:09 AM, Mohammed I Sorour wrote:
> > This is the output
> >
> >
> > Begin Batch Job Epilogue Sat Aug 2 09:07:19 EDT 2019
> > Job ID: 341185
> > Username: tuf73544
> > Group: chem
> > Job Name: NVT
> > Session: 45173
> > Limits: walltime=01:00:00,neednodes=1:ppn=28,nodes=1:ppn=28
> > Resources:
> > cput=19:59:39,vmem=1986008kb,walltime=00:42:53,mem=198008kb,energy_used=0
> > Queue: normal
> > Account:
> > Deleting /dev/shm/*...
> > ----------------------------------------
> > End Batch Job Epilogue Sat Aug 2 09:08:39 EDT 2019
> > ----------------------------------------
> > Command line:
> > gmx mdrun -deffnm nvt
> >
> >
> > Running on 1 node with total 28 cores, 28 logical cores
> > Hardware detected:
> > CPU info:
> > Vendor: Intel
> > Brand: Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
> > SIMD instructions most likely to fit this hardware: AVX2_256
> > SIMD instructions selected at GROMACS compile time: SSE4.1
> >
> > Hardware topology: Full, with devices
> >
> > Compiled SIMD instructions: SSE4.1, GROMACS could use AVX2_256 on this
> > machine, which is better.
> >
> > *Reading file nvt.tpr, VERSION 2016.3 (single precision)*
> > Changing nstlist from 10 to 20, rlist from 1 to 1.029
> >
> > The number of OpenMP threads was set by environment variable
> > OMP_NUM_THREADS to 1
> >
> > Will use 24 particle-particle and 4 PME only ranks
> > This is a guess, check the performance at the end of the log file
> > Using 28 MPI threads
> > Using 1 OpenMP thread per tMPI thread
> >
> > starting mdrun 'DNA in water'
> > 500000 steps, 1000.0 ps.
> >
> > step 40 Turning on dynamic load balancing, because the performance loss
> > due to load imbalance is 4.1 %.
> >
> >
> > Writing final coordinates.
> >
> > Average load imbalance: 1.5 %
> > Part of the total run time spent waiting due to load imbalance: 1.2 %
> > Steps where the load balancing was limited by -rdd, -rcon and/or -dds:
> > X 0 % Y 0 % Z 0 %
> > Average PME mesh/force load: 0.752
> > Part of the total run time spent waiting due to PP/PME imbalance: 3.1 %
> >
> >
> >                Core t (s)   Wall t (s)        (%)
> >        Time:    71996.002     2571.286     2800.0
> >                                42:51
> >                  (ns/day)    (hour/ns)
> > Performance:       33.602        0.714
>
> This output indicates that the job finished successfully and did not
> produce the original error you posted.
>
> -Justin
>
> --
> ==================================================
>
> Justin A. Lemkul, Ph.D.
> Assistant Professor
> Office: 301 Fralin Hall
> Lab: 303 Engel Hall
>
> Virginia Tech Department of Biochemistry
> 340 West Campus Dr.
> Blacksburg, VA 24061
>
> jalemkul at vt.edu | (540) 231-3129
> http://www.thelemkullab.com
>
> ==================================================
>
-- 
Gromacs Users mailing list

* Please search the archive at
http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!

* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists

* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send
a mail to gmx-users-request at gromacs.org.

