[gmx-users] Regarding submitting a job in the cluster
Justin Lemkul
jalemkul at vt.edu
Tue Jan 30 17:54:33 CET 2018
On 1/30/18 11:52 AM, Dilip H N wrote:
> Hello,
>
> We have a new cluster now, and gromacs has been installed in it.
> Now if I want to call GROMACS and run a command, I have to type:
>
> /apps/gromacs-2016.2/bin/gmx_mpi grompp -f nptmd.mdp -c npt.gro -t npt.cpt
> -p topol.top -o nptmd.tpr
> /apps/gromacs-2016.2/bin/gmx_mpi mdrun -v -s nptmd.tpr -deffnm nptmd
>
> and it runs as follows:
> --------------------------------------------------------------------------
> WARNING: There is at least non-excluded one OpenFabrics device found,
> but there are no active ports detected (or Open MPI was unable to use
> them). This is most certainly not what you wanted. Check your
> cables, subnet manager configuration, etc. The openib BTL will be
> ignored for this job.
>
> Local host: master
> --------------------------------------------------------------------------
> [1517328530.673617] [master:10000:0] sys.c:744 MXM WARN
> Conflicting CPU frequencies detected, using: 1664.99
> [1517328530.676824] [master:10000:0] ib_dev.c:695 MXM ERROR There
> are no Mellanox cards detected.
> [1517328530.690993] [master:10000:0] ib_dev.c:695 MXM ERROR There
> are no Mellanox cards detected.
> [1517328530.700884] [master:10000:0] sys.c:744 MXM WARN
> Conflicting CPU frequencies detected, using: 1664.99
> :-) GROMACS - gmx mdrun, 2016.2 (-:
> .
> .
> Command line:
> gmx_mpi mdrun -v -s nptmd.tpr -deffnm nptmd
> NOTE: Error occurred during GPU detection:
> CUDA driver version is insufficient for CUDA runtime version
> Can not use GPU acceleration, will fall back to CPU kernels.
> Running on 1 node with total 16 cores, 16 logical cores, 0 compatible GPUs
> Hardware detected on host master (the node of MPI rank 0):
> CPU info:
> Vendor: Intel
> Brand: Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
> SIMD instructions most likely to fit this hardware: AVX2_256
> SIMD instructions selected at GROMACS compile time: AVX2_256
>
> Hardware topology: Basic
> Reading file nptmd.tpr, VERSION 2016.2 (single precision)
> Changing nstlist from 10 to 40, rlist from 1.2 to 1.217
>
> Using 1 MPI process
> Using 16 OpenMP threads
>
> My questions are:
> 1] Do I have to call GROMACS by its full path (i.e., /apps/gromacs/bin...)
> every time before giving the commands?
> 2] By default it is using 16 cores; why is that? (Our cluster has 32
> cores on the master node and 160 cores across the compute nodes.)
> 3] The time taken to finish this production run is almost 3 hrs, but if I
> run the same job on my desktop it takes about the same time. On my
> desktop the output showed:
> Running on 1 node with total 4 cores, 8 logical cores
> Hardware detected:
> CPU info:
> Vendor: Intel
> Brand: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz
> SIMD instructions most likely to fit this hardware: AVX2_256
> SIMD instructions selected at GROMACS compile time: AVX2_256
> Hardware topology: Basic
> Reading file nvt.tpr, VERSION 2016.2 (single precision)
> Changing nstlist from 10 to 25, rlist from 1.2 to 1.224
> Using 1 MPI thread
> Using 8 OpenMP threads
>
> So the cluster should be taking less time to finish the job, right? But
> this is not happening...
>
> Do I need to write a script in order to specify how many nodes, cores,
> etc. should be used?
>
>
> Any suggestions are highly appreciated.
>
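On the first question: a GROMACS installation normally ships a GMXRC file next
to the gmx_mpi binary; sourcing it (or loading an environment module, if the
cluster provides one) puts gmx_mpi on the PATH so the full /apps/... prefix is
not needed every time. A minimal sketch, assuming the install prefix shown in
the commands above and a purely hypothetical module name:

    # Assumed path, based on the install prefix used in the commands above
    source /apps/gromacs-2016.2/bin/GMXRC
    # or, if the site uses environment modules (module name is a guess):
    # module load gromacs/2016.2

    gmx_mpi grompp -f nptmd.mdp -c npt.gro -t npt.cpt -p topol.top -o nptmd.tpr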
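On the second and third questions: the log shows mdrun ran as a single MPI
process on host "master", so it only used OpenMP threads on the cores GROMACS
detected on that one node and never reached the compute nodes at all, which
would explain why the cluster run was no faster than the desktop. More cores
are used by launching several MPI ranks through mpirun and, if desired, fixing
the OpenMP threads per rank with -ntomp. A sketch, assuming Open MPI; the rank
and thread counts are placeholders to adjust to the actual hardware:

    # 32 MPI ranks with 1 OpenMP thread each (placeholder numbers)
    mpirun -np 32 gmx_mpi mdrun -v -s nptmd.tpr -deffnm nptmd -ntomp 1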
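On the script question: clusters usually run production jobs through a batch
scheduler rather than interactively on the master/login node, and the job
script is where node and core counts are declared. Only the administrators can
confirm which scheduler and partitions this cluster uses, so the SLURM-style
sketch below is illustrative only; every directive value and the partition
name are placeholders:

    #!/bin/bash
    #SBATCH --job-name=nptmd
    #SBATCH --nodes=1                 # number of compute nodes (placeholder)
    #SBATCH --ntasks-per-node=32      # MPI ranks per node (placeholder)
    #SBATCH --time=03:00:00           # walltime limit (placeholder)
    #SBATCH --partition=compute       # hypothetical partition name

    source /apps/gromacs-2016.2/bin/GMXRC   # assumed install prefix, as above

    mpirun -np $SLURM_NTASKS gmx_mpi mdrun -v -s nptmd.tpr -deffnm nptmd -ntomp 1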
Questions about how to properly use your hardware should be directed to
your system administrator, not this mailing list. GROMACS is doing what
it's supposed to; it's up to you to understand what your hardware is and
how to use it.
-Justin
--
==================================================
Justin A. Lemkul, Ph.D.
Assistant Professor
Virginia Tech Department of Biochemistry
303 Engel Hall
340 West Campus Dr.
Blacksburg, VA 24061
jalemkul at vt.edu | (540) 231-3129
http://www.biochem.vt.edu/people/faculty/JustinLemkul.html
==================================================