[gmx-users] gromacs.org_gmx-users Digest, Vol 150, Issue 127
Groenhof, Gerrit
ggroenh at gwdg.de
Tue Nov 1 12:49:51 CET 2016
Changing the cut-off scheme to group should fix the first issue; in fact, that is also what the output suggests.
The second issue suggests that your grompp does not have access to the correct force field files. Providing information on where the *.itp files are located should solve that issue too.
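For example, a minimal sketch of both fixes (the path below is a placeholder for wherever you unpacked the tutorial files):

In qmmm1.mdp:

  cutoff-scheme = group
  include       = -I/path/to/tutorial/files

or, alternatively, let grompp find the *.itp files via the GMXLIB environment variable before running:

  export GMXLIB=/path/to/tutorial/files
  gmx grompp -f qmmm1.mdp -p qmmm.top -n qmmm.ndx -c qmmm1.gro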
Gerrit
Message: 3
Date: Mon, 31 Oct 2016 14:06:18 +0100
From: Sylwia Kacprzak <sylwia.kacprzak at physchem.uni-freiburg.de>
To: gromacs.org_gmx-users at maillist.sys.kth.se
Subject: [gmx-users] QM/MM with gromacs 2016
Message-ID: <581741CA.9020601 at physchem.uni-freiburg.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Dear all,
I am new to QM/MM calculations. Before attempting my own calculations, I
tried to run the tutorial on thymine dimer repair
(http://wwwuser.gwdg.de/~ggroenh/SaoCarlos2008/html/tutorial.html).
I downloaded all the files from the tutorial (mdout.mdp, qmmm1.gro,
qmmm1.mdp, qmmm.ndx, qmmm.top).
However, after running the command
gmx grompp -f qmmm1.mdp -p qmmm.top -n qmmm.ndx -c qmmm1.gro
I got a number of errors (please see below).
Can anyone help me further with these problems?
Best regards,
Sylwia
NOTE 1 [file qmmm1.mdp, line 89]:
qmmm1.mdp did not specify a value for the .mdp option "cutoff-scheme".
Probably it was first intended for use with GROMACS before 4.6. In 4.6,
the Verlet scheme was introduced, but the group scheme was still the
default. The default is now the Verlet scheme, so you will observe
different behaviour.
Ignoring obsolete mdp entry 'title'
Ignoring obsolete mdp entry 'cpp'
Ignoring obsolete mdp entry 'domain-decomposition'
Ignoring obsolete mdp entry 'nstcheckpoint'
Ignoring obsolete mdp entry 'optimize_fft'
Replacing old mdp entry 'unconstrained_start' by 'continuation'
Replacing old mdp entry 'nstxtcout' by 'nstxout-compressed'
Replacing old mdp entry 'xtc_grps' by 'compressed-x-grps'
Replacing old mdp entry 'xtc-precision' by 'compressed-x-precision'
Back Off! I just backed up mdout.mdp to ./#mdout.mdp.1#
NOTE 2 [file qmmm1.mdp]:
With Verlet lists the optimal nstlist is >= 10, with GPUs >= 20. Note
that with the Verlet scheme, nstlist has no effect on the accuracy of
your simulation.
NOTE 3 [file qmmm1.mdp]:
Setting nstcalcenergy (100) equal to nstenergy (10)
NOTE 4 [file qmmm1.mdp]:
nstcomm < nstcalcenergy defeats the purpose of nstcalcenergy, setting
nstcomm to nstcalcenergy
NOTE 5 [file qmmm1.mdp]:
The Berendsen thermostat does not generate the correct kinetic energy
distribution. You might want to consider using the V-rescale thermostat.
ERROR 1 [file qmmm1.mdp]:
QMMM is currently only supported with cutoff-scheme=group
Setting the LD random seed to -1378559983
Generated 2211 of the 2211 non-bonded parameter combinations
Generating 1-4 interactions: fudge = 0.5
Generated 2211 of the 2211 1-4 parameter combinations
-------------------------------------------------------
Program: gmx grompp, version 2016
Source file: src/gromacs/gmxpreprocess/toppush.cpp (line 1343)
Fatal error:
Atomtype amber99_25 not found
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
------------------------------
Message: 4
Date: Mon, 31 Oct 2016 19:53:42 +0500
From: maria khan <mariabiochemist1 at gmail.com>
To: gromacs.org_gmx-users at maillist.sys.kth.se
Subject: [gmx-users] Md simulation error..
Message-ID:
<CAEnLide=EmThPJqdE_sEhpTnENV4RcetxSYAitTWGVxQmMixrQ at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Dear Justin A. Lemkul, thank you so much. I will follow the link you have
provided, and I will contact you about further progress.
Regards.
------------------------------
Message: 5
Date: Tue, 1 Nov 2016 00:34:18 +0900
From: Mijiddorj Batsaikhan <b.mijiddorj at gmail.com>
To: gromacs.org_gmx-users at maillist.sys.kth.se
Subject: [gmx-users] Bonds between P and O in bilayer head groups
Message-ID:
<CABgRApv9_-zpuozho7afpDkzYiCSnnmb8GntZ0UCwMVWJ4OgFw at mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Dear gmx users,
I want to simulate a membrane system using GROMACS v5. The initial
structure was built with CHARMM-GUI. In the CHARMM topology file, the
bonds between the phosphorus and the two O- atoms of the head group are
single bonds, as follows:
ATOM H11B HAL2    0.09 !                alpha4
ATOM P    PL      1.50 !  (-) O13  O12
ATOM O13  O2L    -0.78 !       \  /     alpha3
ATOM O14  O2L    -0.78 !        P (+)
ATOM O12  OSLP   -0.57 !       /  \     alpha2
ATOM O11  OSLP   -0.57 !  (-) O14  O11
ATOM C1   CTL2   -0.08 !            |   alpha1
ATOM HA   HAL2    0.09 !      HA---C1---HB
ATOM HB   HAL2    0.09 !            |   theta1
In some nomenclatures, one of these P-O bonds is drawn as a double bond.
(1) Which one is preferable for the simulation?
(2) Are some metal ions needed near the head groups for the simulation?
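(For question (2), in case counter-ions turn out to be necessary, my tentative plan is the standard GROMACS neutralization step, roughly as below; the file names are placeholders from my own setup:

  gmx grompp -f ions.mdp -c solvated.gro -p topol.top -o ions.tpr
  gmx genion -s ions.tpr -o solv_ions.gro -p topol.top -pname NA -nname CL -neutral

Please tell me if this is the wrong approach.)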
Best regards,
Mijiddorj
------------------------------
Message: 6
Date: Mon, 31 Oct 2016 16:59:13 +0000
From: Irem Altan <irem.altan at duke.edu>
To: "gmx-users at gromacs.org" <gmx-users at gromacs.org>
Subject: Re: [gmx-users] nvidia tesla p100
Message-ID: <663B9BF1-105E-4703-99CB-65218FC312D5 at duke.edu>
Content-Type: text/plain; charset="utf-8"
Hi,
I should add that the problem described below still persists, even after I set OMP_NUM_THREADS=16.
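For reference, my current understanding of the mapping for these nodes (2 GPUs and 2 x 16-core CPUs, so one MPI rank per GPU with 16 OpenMP threads each) is roughly:

  export OMP_NUM_THREADS=16
  mpirun -np 2 gmx_mpi mdrun -ntomp 16 -gpu_id 01 -v -deffnm npt

Please correct me if this mapping is wrong.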
Best,
Irem
> On Oct 30, 2016, at 4:54 PM, Irem Altan <irem.altan at duke.edu> wrote:
>
> Hi,
>
> Thank you. It turns out that I hadn't requested the correct number of GPUs in the submission script, so it now sees the GPUs. There are more problems, however. I'm using 5.1.2, because 2016 doesn't seem to have been properly set up on the cluster that I'm using (Bridges-Pittsburgh). I'm having trouble figuring out the optimum number of threads and such for the nodes in this cluster. The nodes have 2 nVidia Tesla P100 GPUs, and 2 Intel Xeon CPUs with 16 cores each. Therefore I request 2 tasks per node, and use the following command to run mdrun:
>
> mpirun -np $SLURM_NPROCS gmx_mpi mdrun -ntomp 2 -v -deffnm npt
>
> where $SLURM_NPROCS gets set to 32 automatically (this is what fails with version 2016, apparently).
>
> This results in the following messages in the output:
>
> Number of logical cores detected (32) does not match the number reported by OpenMP (1).
> Consider setting the launch configuration manually!
>
> Running on 1 node with total 32 logical cores, 2 compatible GPUs
> Hardware detected on host gpu047.pvt.bridges.psc.edu (the node of MPI rank 0):
> CPU info:
> Vendor: GenuineIntel
> Brand: Intel(R) Xeon(R) CPU E5-2683 v4 @ 2.10GHz
> SIMD instructions most likely to fit this hardware: AVX2_256
> SIMD instructions selected at GROMACS compile time: AVX2_256
> GPU info:
> Number of GPUs detected: 2
> #0: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
> #1: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat: compatible
>
> Reading file npt.tpr, VERSION 5.1.2 (single precision)
> Changing nstlist from 20 to 40, rlist from 1.017 to 1.073
>
> Using 2 MPI processes
> Using 2 OpenMP threads per MPI process
>
> On host gpu047.pvt.bridges.psc.edu 2 compatible GPUs are present, with IDs 0,1
> On host gpu047.pvt.bridges.psc.edu 2 GPUs auto-selected for this run.
> Mapping of GPU IDs to the 2 PP ranks in this node: 0,1
>
> I'm concerned by the first message. Does this mean that I cannot fully utilize the 32 cores? The resulting simulation speed is comparable to my previous system with a single K80 GPU and 6 cores. Am I doing something wrong, or have the system administrators compiled/set up Gromacs incorrectly?
>
> Best,
> Irem
>
>> On Oct 29, 2016, at 7:20 PM, Mark Abraham <mark.j.abraham at gmail.com> wrote:
>>
>> Hi,
>>
>> Sure, any CUDA build of GROMACS will run on such a card, but you want
>> 2016.1 for best performance. Your problem is likely that you haven't got a
>> suitably new driver installed. What does nvidia-smi report?
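>>
>> For example, a quick way to check the driver version (a plain nvidia-smi
>> query, nothing GROMACS-specific):
>>
>>   nvidia-smi --query-gpu=name,driver_version --format=csv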
>>
>> Mark
>>
>> On Sun, Oct 30, 2016 at 1:13 AM Irem Altan <irem.altan at duke.edu> wrote:
>>
>>> Hi,
>>>
>>> I was wondering, does Gromacs support nVidia Tesla P100 cards? I'm trying
>>> to run a simulation on a node with this GPU, but whatever I tried, I can't
>>> get Gromacs to detect a cuda-capable card:
>>>
>>> NOTE: Error occurred during GPU detection:
>>> no CUDA-capable device is detected
>>> Can not use GPU acceleration, will fall back to CPU kernels.
>>>
>>> Is it even supported?
>>>
>>> Best,
>>> Irem
------------------------------
--
Gromacs Users mailing list
* Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
* Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
* For (un)subscribe requests visit
https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.
End of gromacs.org_gmx-users Digest, Vol 150, Issue 127
*******************************************************