[gmx-users] QMMM with GROMACS and DFTB3

S M Bargeen Turzo smbargeen.turzo.2016 at owu.edu
Tue Mar 27 16:49:37 CEST 2018


Thanks. I found in the ORCA manual (8.14.1) that ORCA can only be interfaced
with GROMACS versions 4.0.4 through 4.5.5, and nothing is mentioned about
2016 and/or 2018.
When you ask about the speed of calculation, I am not exactly sure what you
mean. If you are asking how big my system is, it contains about 13,870
atoms: a protein and a small molecule. I am interested in finding a reaction
pathway through QM/MM optimization (geometry optimization and
transition-state optimization).


On Tue, Mar 27, 2018 at 5:20 AM, <gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:

>
> Today's Topics:
>
>    1. Re: Confusions about rlist>=rcoulomb and rlist>=rvdw in the
>       mdp options in Gromacs 2016 (or 2018) user guide?? (Szilárd Páll)
>    2. Re: cudaMallocHost failed: unknown error (Szilárd Páll)
>    3. QMMM with GROMACS and DFTB3 (S M Bargeen Turzo)
>    4. Re: QMMM with GROMACS and DFTB3 (dgfd dgdfg)
>    5. Umbrella sampling: window distance - harmonic force constant
>       (Hermann, Johannes)
>    6. Box shape changing from rectangle to parallelogram at NVT
>       (Marlon Sidore)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Mon, 26 Mar 2018 15:27:03 +0200
> From: Szilárd Páll <pall.szilard at gmail.com>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Cc: Discussion list for GROMACS users
>         <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: Re: [gmx-users] Confusions about rlist>=rcoulomb and
>         rlist>=rvdw in the mdp options in Gromacs 2016 (or 2018) user
> guide??
> Message-ID:
>         <CANnYEw5v2-2XmFzSkAssEgZxohFb0UoJBhc2F5pJ5ojbMZxFTw at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> rlist >= rcoulomb / rvdw is the correct one: the list cut-off has to be
> at least as long as the longest of the interaction cut-offs.
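>
> For instance, a consistent set of cut-offs looks like this (a minimal
> sketch with placeholder values; note that with the Verlet scheme and a
> verlet-buffer-tolerance, mdrun sets rlist itself):
>
>     cutoff-scheme = Verlet
>     rcoulomb      = 1.2
>     rvdw          = 1.0
>     rlist         = 1.2    ; must be >= max(rcoulomb, rvdw)
>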
> --
> Szilárd
>
>
> On Mon, Mar 26, 2018 at 1:05 PM, huolei peng <horaldraman at gmail.com>
> wrote:
> > In the user guide of Gromacs 2016 (or 2018), it shows "rlist>=rcoulomb"
> > or "rlist>=rvdw" in several places (see the links below), which is in
> > contrast to version 5.1.5. Are those changes due to typos?
> > Thanks.
> >
> >
> > http://manual.gromacs.org/documentation/2016-current/user-guide/mdp-options.html
> >
> > http://manual.gromacs.org/documentation/current/user-guide/mdp-options.html
> >
> > http://manual.gromacs.org/documentation/5.1-current/user-guide/mdp-options.html
> >
> >
> >
> > Best,
> >
> > Peng
>
>
> ------------------------------
>
> Message: 2
> Date: Mon, 26 Mar 2018 15:29:35 +0200
> From: Szilárd Páll <pall.szilard at gmail.com>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Subject: Re: [gmx-users] cudaMallocHost failed: unknown error
> Message-ID:
>         <CANnYEw6ZLTedb1uCRJ=A39T5NxYYnXQo_fOTZvzTw0zgDvaEgw at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> As a side note, your mdrun invocation does not seem suitable for
> GPU-accelerated runs; you'd most likely be better off running fewer ranks.
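>
> For example, on a node with 32 cores and 4 GPUs, something along these
> lines (a sketch only; the best rank/thread split needs benchmarking) is
> usually a better starting point:
>
>     mpirun -np 8 gmx_mpi mdrun -deffnm MD -npme 0 -gpu_id 00112233 -ntomp 4
>
> That maps two PP ranks to each GPU and gives every rank 4 OpenMP threads,
> in line with the 2-6 threads per rank that mdrun's own note suggests.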
> --
> Szilárd
>
>
> On Fri, Mar 23, 2018 at 9:26 PM, Christopher Neale
> <chris.neale at alum.utoronto.ca> wrote:
> > Hello,
> >
> > I am running gromacs 5.1.2 on single nodes where the run is set to use
> 32 cores and 4 GPUs. The run command is:
> >
> > mpirun -np 32 gmx_mpi mdrun -deffnm MD -maxh $maxh -dd 4 4 2 -npme 0
> -gpu_id 00000000111111112222222233333333 -ntomp 1 -notunepme
> >
> > Some of my runs die with this error:
> > cudaMallocHost of size 1024128 bytes failed: unknown error
> >
> > Below is the relevant part of the .log file. Searching the internet
> > didn't turn up any solutions. I'll contact the sysadmins if you think this
> > is likely some problem with the hardware or rogue jobs. In my testing, 6
> > out of a collection of 24 jobs died with this same error message (including
> > the "1024128 bytes" and "pmalloc_cuda.cu, line: 70"), all on different
> > nodes, and all those nodes next took repeat jobs that ran fine. When the
> > error occurred, it was always right at the start of the run.
> >
> >
> > Thank you for your help,
> > Chris.
> >
> >
> >
> > Command line:
> >   gmx_mpi mdrun -deffnm MD -maxh 0.9 -dd 4 4 2 -npme 0 -gpu_id
> 00000000111111112222222233333333 -ntomp 1 -notunepme
> >
> >
> > Number of logical cores detected (72) does not match the number reported
> by OpenMP (2).
> > Consider setting the launch configuration manually!
> >
> > Running on 1 node with total 36 cores, 72 logical cores, 4 compatible
> GPUs
> > Hardware detected on host ko026.localdomain (the node of MPI rank 0):
> >   CPU info:
> >     Vendor: GenuineIntel
> >     Brand:  Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
> >     SIMD instructions most likely to fit this hardware: AVX2_256
> >     SIMD instructions selected at GROMACS compile time: AVX2_256
> >   GPU info:
> >     Number of GPUs detected: 4
> >     #0: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat:
> compatible
> >     #1: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat:
> compatible
> >     #2: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat:
> compatible
> >     #3: NVIDIA Tesla P100-PCIE-16GB, compute cap.: 6.0, ECC: yes, stat:
> compatible
> >
> > Reading file MD.tpr, VERSION 5.1.2 (single precision)
> > Can not increase nstlist because verlet-buffer-tolerance is not set or
> used
> > Using 32 MPI processes
> > Using 1 OpenMP thread per MPI process
> >
> > On host ko026.localdomain 4 GPUs user-selected for this run.
> > Mapping of GPU IDs to the 32 PP ranks in this node:
> 0,0,0,0,0,0,0,0,1,1,1,1,1,1,1,1,2,2,2,2,2,2,2,2,3,3,3,3,3,3,3,3
> >
> > NOTE: You assigned GPUs to multiple MPI processes.
> >
> > NOTE: Your choice of number of MPI ranks and amount of resources results
> in using 1 OpenMP threads per rank, which is most likely inefficient. The
> optimum is usually between 2 and 6 threads per rank.
> >
> >
> > NOTE: GROMACS was configured without NVML support hence it can not
> exploit
> >       application clocks of the detected Tesla P100-PCIE-16GB GPU to
> improve performance.
> >       Recompile with the NVML library (compatible with the driver used)
> or set application clocks manually.
> >
> >
> > -------------------------------------------------------
> > Program gmx mdrun, VERSION 5.1.2
> > Source code file: /net/scratch3/cneale/exe/KODIAK/GROMACS/source/gromacs-5.1.2/src/gromacs/gmxlib/cuda_tools/pmalloc_cuda.cu, line: 70
> >
> > Fatal error:
> > cudaMallocHost of size 1024128 bytes failed: unknown error
> >
> > For more information and tips for troubleshooting, please check the
> GROMACS
> > website at http://www.gromacs.org/Documentation/Errors
> > -------------------------------------------------------
> >
> > Halting parallel program gmx mdrun on rank 31 out of 32
> > application called MPI_Abort(MPI_COMM_WORLD, 1) - process 31
> >
> >
>
>
> ------------------------------
>
> Message: 3
> Date: Mon, 26 Mar 2018 15:38:33 -0400
> From: S M Bargeen Turzo <smbargeen.turzo.2016 at owu.edu>
> To: gromacs.org_gmx-users at maillist.sys.kth.se
> Subject: [gmx-users] QMMM with GROMACS and DFTB3
> Message-ID:
>         <CAGTVu_9mVbOLzHAwn7xyO_9A4G9-P4P0uwCJkK3zb_N33j-2AA at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello everyone,
>
> I was wondering if GROMACS 2016 (or 2018) can be interfaced with any QM
> program like GAMESS-US or ORCA.
> So far, my searches suggest it can be done with GROMACS version 5 and
> DFTB3, according to this website:
> http://cbp.cfn.kit.edu/joomla/index.php/downloads/18-gromacs-with-qm-mm-using-dftb3
>
> However, the official GROMACS website does not mention anything about DFTB3
> or ORCA, so I need some guidance here regarding which version of GROMACS I
> should be compiling with which QM program.
>
>
> Thanks
> -Bargeen
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 27 Mar 2018 10:30:34 +0300
> From: dgfd dgdfg <roinato at mail.ru>
> To: gmx-users at gromacs.org
> Subject: Re: [gmx-users] QMMM with GROMACS and DFTB3
> Message-ID: <1522135834.968787612 at f429.i.mail.ru>
> Content-Type: text/plain; charset=utf-8
>
> See http://wwwuser.gwdg.de/~ggroenh/qmmm.html, or the "ORCA and Gromacs"
> chapter (8.13.1) in the ORCA 4 manual.
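>
> If it helps: in GROMACS versions with the built-in QM/MM interface, the
> ORCA coupling is, as far as I know, chosen at build time and wired up
> through environment variables, roughly like this (a sketch; the path and
> basename are placeholders):
>
>     cmake .. -DGMX_QMMM_PROGRAM=ORCA
>     export GMX_ORCA_PATH=/path/to/orca
>     export GMX_QM_ORCA_BASENAME=mysystem
>
> plus QMMM = yes, QMMM-grps and the QMmethod/QMbasis options in the mdp
> file.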
>
> What will the speed of the calculation be?
>
> ------------------------------
>
> Message: 5
> Date: Tue, 27 Mar 2018 10:44:21 +0200
> From: "Hermann, Johannes" <J.Hermann at lrz.tu-muenchen.de>
> To: Gromacs <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: [gmx-users] Umbrella sampling: window distance - harmonic
>         force constant
> Message-ID: <a2f32ea1-1b66-142c-4c62-0eed7294a05d at lrz.tum.de>
> Content-Type: text/plain; charset=utf-8; format=flowed
>
> Dear All, dear Justin,
>
> I am playing around with my umbrella sampling setup, and I was looking at
> your paper, which you linked in your umbrella sampling tutorial
> ("Assessing the Stability of Alzheimer's Amyloid Protofibrils Using
> Molecular Dynamics").
> Up to a distance of 2 nm you use a 0.1 nm spacing, and beyond that a
> 0.2 nm spacing. Which harmonic force constant pull_coord1_k do you use for
> the 0.1 nm spacing, compared to the 0.2 nm spacing, where
> pull_coord1_k = 1000? Is there a general rule of thumb relating window
> distance and force constant? Or is it always trial and error while
> checking the histograms?
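>
> (My back-of-the-envelope reasoning so far: a harmonic restraint with force
> constant k lets the pull coordinate fluctuate with standard deviation
> sigma = sqrt(kB*T/k). At 300 K, kB*T is about 2.5 kJ/mol, so
> pull_coord1_k = 1000 kJ mol^-1 nm^-2 gives sigma of roughly 0.05 nm, and I
> would expect neighbouring windows to overlap when their spacing is at most
> a few sigma. Is that the right way to think about it?)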
> Thank you very much.
>
> All the best
>
> Johannes
>
> --
> ______________________________________
> *Technische Universität München*
> *Johannes Hermann, M.Sc.*
> Lehrstuhl für Bioverfahrenstechnik
> Boltzmannstr. 15
> D-85748 Garching
> Tel: +49 8928915730
> Fax: +49 8928915714
>
> Email: j.hermann at lrz.tum.de
> http://www.biovt.mw.tum.de/
>
>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 27 Mar 2018 11:20:21 +0200
> From: Marlon Sidore <marlon.sidore at gmail.com>
> To: "gromacs.org_gmx-users"
>         <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: [gmx-users] Box shape changing from rectangle to
>         parallelogram at NVT
> Message-ID:
>         <CAAASxxpbCJisjLh001jtKWdZ2Uthq3oPy2hydcu5s89nRUxWxg at mail.gmail.com>
> Content-Type: text/plain; charset="UTF-8"
>
> Hello,
>
> I am trying to set up a DAFT approach for membrane segments using MARTINI
> (http://www.cgmartini.nl/index.php/329-the-daft-approach-to-membrane-protein-assembly)
> and I've made simulation boxes as small as in the original paper.
> However, I'm not sure whether everything is fine: the shape of the box
> changes from a rectangle to a parallelogram (see pictures).
> Since I've never seen that before, I'm wondering if it's a problem. The
> boundary conditions don't seem problematic (no holes when visualizing with
> VMD), but I still have doubts.
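>
> (If it is only the triclinic representation of the periodic cell, I
> understand the frames can be remapped to a compact, rectangular-looking
> cell for visualization with something like the following, where the file
> names are placeholders:
>
>     gmx trjconv -f traj.xtc -s topol.tpr -pbc mol -ur compact -o compact.xtc
>
> but I would still like to be sure nothing is wrong physically.)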
>
> I am linking to my drive since eduroam won't let me use another image
> hosting site:
> https://drive.google.com/open?id=1PKtivk4lSygMgWey-N5PGhwGZ3JQJJw_
>
> Best regards,
>
> Marlon Sidore
>
>
> PhD Student
> Laboratoire d'Ingénierie des Systèmes Macromoléculaires (LISM)
> CNRS - UMR7255
> 31, Chemin Joseph Aiguier
> 13402 cedex 20 Marseille
> France
>
>
> ------------------------------
>
>
> End of gromacs.org_gmx-users Digest, Vol 167, Issue 132
> *******************************************************
>

