From dommert at icp.uni-stuttgart.de Sun Jul 3 10:34:02 2011
From: dommert at icp.uni-stuttgart.de (Dommert Florian)
Date: Sun, 03 Jul 2011 10:34:02 +0200
Subject: [gmx-developers] Extended and improved version of g_pme_error
Message-ID: <1309682042.2545.21.camel@fermi>

Hello,

for the next release I have extended the tool g_pme_error. Now it is capable of tuning the error introduced by Ewald or SPME to a certain absolute error or, if a norm is provided, to a relative error. Furthermore, the calculations were drastically sped up by using tables. In contrast to the earlier version, where just the tuning of beta took minutes on several processors, the complete tuning can now be done within seconds. I have appended a patch against the latest release-4-5-branch (commit 3260c261ebd668b07beb044bfa1e8c140a743cd5) that you can try out.

/Flo

--
Florian Dommert
Dipl. - Phys.

Institute for Computational Physics
University Stuttgart

Pfaffenwaldring 27
70569 Stuttgart

EMail: dommert at icp.uni-stuttgart.de
Homepage: http://www.icp.uni-stuttgart.de/~icp/Florian_Dommert
Tel.: +49 - (0)711 - 68563613
Fax.: +49 - (0)711 - 68563658

-------------- next part --------------
A non-text attachment was scrubbed...
Name: g_pme_error.patch.tar.bz2
Type: application/x-bzip-compressed-tar
Size: 9194 bytes
Desc: not available
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 198 bytes
Desc: This is a digitally signed message part
URL:

From gonnet at maths.ox.ac.uk Tue Jul 5 17:07:38 2011
From: gonnet at maths.ox.ac.uk (Pedro Gonnet)
Date: Tue, 05 Jul 2011 16:07:38 +0100
Subject: [gmx-developers] Fairly detailed question regarding cell lists in Gromacs in general and nsgrid_core specifically
Message-ID: <1309878458.1932.80.camel@laika>

Hi,

I'm trying to understand how Gromacs builds its neighbor lists and have been looking, more specifically, at the function nsgrid_core in ns.c.

If I understand the underlying data organization correctly, the grid (t_grid) contains an array of cells in which the indices of charge groups are stored. Pairs of such charge groups are identified and stored in the neighbor list (put_in_list).

What I don't really understand is how these pairs are identified. Usually one would loop over all cells, loop over each charge group therein, loop over all neighboring cells and store the charge groups therein which are within the cutoff distance.

I assume that the first loop, over all cells, is somehow computed with the for-loops starting at lines 2135, 2151 and 2173 of ns.c. However, I don't really understand how this is done: What do these loops loop over exactly?

In any case, the coordinates of the particle in the outer loop seem to land in the variables XI, YI and ZI. The inner loop (for-loops starting in lines 2213, 2216 and 2221 of ns.c) then runs through the neighboring cells. If I understand correctly, cj is the id of the neighboring cell, nrj the number of charge groups in that cell and cgj0 the offset of the charge groups in the data.

What I don't really understand here are the lines 2232--2241:

    /* Check if all j's are out of range so we
     * can skip the whole cell.
     * Should save some time, especially with DD.
     */
    if (nrj == 0 ||
        (grida[cgj0] >= max_jcg &&
         (grida[cgj0] >= jcg1 || grida[cgj0+nrj-1] < jcg0)))
    {
        continue;
    }

Apparently, some cells can be excluded, but what are the exact criteria? The test on nrj is somewhat obvious, but what is stored in grid->a?
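For readers following the thread, the textbook scheme described above (loop over cells, loop over the charge groups stored in each cell, loop over the neighboring cells, keep every pair within the cutoff) looks roughly like the C sketch below. This is only an editorial illustration with invented names: grid_t, search_cell, neigh and the jcg0/jcg1 range handling are not the actual ns.c code. Periodic images, the max_jcg guard and all of nsgrid_core's optimizations are omitted; the whole-cell skip merely imitates the quoted test under the assumption that each cell's slice of a[] is sorted.

    #include <stdio.h>

    #define NCELLS 27   /* illustrative 3x3x3 grid              */
    #define NCG    64   /* illustrative number of charge groups */

    /* Simplified stand-in for t_grid: for every cell, index[] gives the
     * offset of that cell's slice in a[] and nra[] its length; a[] holds
     * the charge-group indices grouped per cell (the role of grid->a). */
    typedef struct {
        int index[NCELLS];
        int nra[NCELLS];
        int a[NCG];
    } grid_t;

    /* Search the neighbor cells of one i charge group and emit all pairs
     * within the cutoff whose j index falls in [jcg0, jcg1), the range
     * owned by this process. x[] holds one position per charge group. */
    static void search_cell(const grid_t *g, const double x[][3],
                            double rcut, int icg,
                            const int *neigh, int nneigh,
                            int jcg0, int jcg1)
    {
        const double rcut2 = rcut*rcut;
        int          n, k;

        for (n = 0; n < nneigh; n++)
        {
            int cj   = neigh[n];      /* id of the neighboring cell          */
            int cgj0 = g->index[cj];  /* offset of its charge groups in a[]  */
            int nrj  = g->nra[cj];    /* number of charge groups it contains */

            /* Whole-cell skip: with a sorted slice, if the first entry is
             * already >= jcg1 or the last one is < jcg0, no j in this cell
             * can belong to our range (the analogue of the quoted test,
             * minus the max_jcg guard). */
            if (nrj == 0 ||
                g->a[cgj0] >= jcg1 || g->a[cgj0+nrj-1] < jcg0)
            {
                continue;
            }
            for (k = 0; k < nrj; k++)
            {
                int    jcg = g->a[cgj0+k];
                double dx, dy, dz;

                if (jcg < jcg0 || jcg >= jcg1)
                {
                    continue;         /* pair belongs to another process     */
                }
                dx = x[icg][0] - x[jcg][0];
                dy = x[icg][1] - x[jcg][1];
                dz = x[icg][2] - x[jcg][2];
                if (dx*dx + dy*dy + dz*dz < rcut2)
                {
                    printf("pair %d %d\n", icg, jcg); /* put_in_list() here  */
                }
            }
        }
    }

The point of the sketch is the data layout: each cell owns a contiguous slice of a[], which is what makes a first-entry/last-entry range test sufficient to reject a whole cell at once.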
There is probably no short answer to my questions, but if anybody could at least point me to any documentation or description of how the neighbors are collected in this routine, I would be extremely thankful! Cheers, Pedro From gonnet at maths.ox.ac.uk Wed Jul 6 10:52:40 2011 From: gonnet at maths.ox.ac.uk (Pedro Gonnet) Date: Wed, 06 Jul 2011 09:52:40 +0100 Subject: [gmx-developers] Re: Fairly detailed question regarding cell lists in Gromacs in general and nsgrid_core specifically In-Reply-To: <1309878458.1932.80.camel@laika> References: <1309878458.1932.80.camel@laika> Message-ID: <1309942360.1978.12.camel@laika> Hello again, I had another long look at the code and at the older Gromacs papers and realized that the main loop over charge groups starts on line 2058 of ns.c and that the loops in lines 2135, 2151 and 2173 are for the periodic images. I still, however, have no idea what the second condition in lines 2232--2241 of ns.c mean: /* Check if all j's are out of range so we * can skip the whole cell. * Should save some time, especially with DD. */ if (nrj == 0 || (grida[cgj0] >= max_jcg && (grida[cgj0] >= jcg1 || grida[cgj0+nrj-1] < jcg0))) { continue; } Does anybody know what max_jcg, jcg1 and jcg0 are? Or does anybody know where this is documented in detail? Cheers, Pedro On Tue, 2011-07-05 at 16:07 +0100, Pedro Gonnet wrote: > Hi, > > I'm trying to understand how Gromacs builds its neighbor lists and have > been looking, more specifically, at the function nsgrid_core in ns.c. > > If I understand the underlying data organization correctly, the grid > (t_grid) contains an array of cells in which the indices of charge > groups are stored. Pairs of such charge groups are identified and stored > in the neighbor list (put_in_list). > > What I don't really understand is how these pairs are identified. > Usually one would loop over all cells, loop over each charge group > therein, loop over all neighboring cells and store the charge groups > therein which are within the cutoff distance. > > I assume that the first loop, over all cells, is somehow computed with > the for-loops starting at lines 2135, 2151 and 2173 of ns.c. However, I > don't really understand how this is done: What do these loops loop over > exactly? > > In any case, the coordinates of the particle in the outer loop seem to > land in the variables XI, YI and ZI. The inner loop (for-loops starting > in lines 2213, 2216 and 2221 of ns.c) then runs through the neighboring > cells. If I understand correctly, cj is the id of the neighboring cell, > nrj the number of charge groups in that cell and cgj0 the offset of the > charge groups in the data. > > What I don't really understand here are the lines 2232--2241: > > /* Check if all j's are out of range so we > * can skip the whole cell. > * Should save some time, especially with DD. > */ > if (nrj == 0 || > (grida[cgj0] >= max_jcg && > (grida[cgj0] >= jcg1 || grida[cgj0+nrj-1] < jcg0))) > { > continue; > } > > Apparently, some cells can be excluded, but what are the exact criteria? > The test on nrj is somewhat obvious, but what is stored in grid->a? > > There is probably no short answer to my questions, but if anybody could > at least point me to any documentation or description of how the > neighbors are collected in this routine, I would be extremely thankful! 
>
> Cheers, Pedro
>

From hess at cbr.su.se Mon Jul 11 11:26:00 2011
From: hess at cbr.su.se (Berk Hess)
Date: Mon, 11 Jul 2011 11:26:00 +0200
Subject: [gmx-developers] Re: Fairly detailed question regarding cell lists in Gromacs in general and nsgrid_core specifically
In-Reply-To: <1309942360.1978.12.camel@laika>
References: <1309878458.1932.80.camel@laika> <1309942360.1978.12.camel@laika>
Message-ID: <4E1AC1A8.1050402@cbr.su.se>

Hi,

This code is for parallel neighbor searching. We have to ensure that pairs are not assigned to multiple processes. In addition, with particle decomposition we want to ensure load balancing. With particle decomposition jcg0=icg and jcg1=icg+0.5*#icg, which ensures the two conditions above. For domain decomposition we use the eighth shell method, which uses up to 8 zones. Only half of the 8x8 zone pairs should interact. For domain decomposition jcg0 and jcg1 are set such that only the wanted zone pairs interact (zones are ordered such that only consecutive j-zones interact, so a simple check suffices).

Berk

On 07/06/2011 10:52 AM, Pedro Gonnet wrote:
> Hello again,
>
> I had another long look at the code and at the older Gromacs papers and realized that the main loop over charge groups starts on line 2058 of ns.c and that the loops in lines 2135, 2151 and 2173 are for the periodic images.
>
> I still, however, have no idea what the second condition in lines 2232--2241 of ns.c mean:
>
>     /* Check if all j's are out of range so we
>      * can skip the whole cell.
>      * Should save some time, especially with DD.
>      */
>     if (nrj == 0 ||
>         (grida[cgj0] >= max_jcg &&
>          (grida[cgj0] >= jcg1 || grida[cgj0+nrj-1] < jcg0)))
>     {
>         continue;
>     }
>
> Does anybody know what max_jcg, jcg1 and jcg0 are? Or does anybody know where this is documented in detail?
>
> Cheers, Pedro
>
>
> On Tue, 2011-07-05 at 16:07 +0100, Pedro Gonnet wrote:
>> Hi,
>>
>> I'm trying to understand how Gromacs builds its neighbor lists and have been looking, more specifically, at the function nsgrid_core in ns.c.
>>
>> If I understand the underlying data organization correctly, the grid (t_grid) contains an array of cells in which the indices of charge groups are stored. Pairs of such charge groups are identified and stored in the neighbor list (put_in_list).
>>
>> What I don't really understand is how these pairs are identified. Usually one would loop over all cells, loop over each charge group therein, loop over all neighboring cells and store the charge groups therein which are within the cutoff distance.
>>
>> I assume that the first loop, over all cells, is somehow computed with the for-loops starting at lines 2135, 2151 and 2173 of ns.c. However, I don't really understand how this is done: What do these loops loop over exactly?
>>
>> In any case, the coordinates of the particle in the outer loop seem to land in the variables XI, YI and ZI. The inner loop (for-loops starting in lines 2213, 2216 and 2221 of ns.c) then runs through the neighboring cells. If I understand correctly, cj is the id of the neighboring cell, nrj the number of charge groups in that cell and cgj0 the offset of the charge groups in the data.
>>
>> What I don't really understand here are the lines 2232--2241:
>>
>>     /* Check if all j's are out of range so we
>>      * can skip the whole cell.
>>      * Should save some time, especially with DD.
>> */ >> if (nrj == 0 || >> (grida[cgj0]>= max_jcg&& >> (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) >> { >> continue; >> } >> >> Apparently, some cells can be excluded, but what are the exact criteria? >> The test on nrj is somewhat obvious, but what is stored in grid->a? >> >> There is probably no short answer to my questions, but if anybody could >> at least point me to any documentation or description of how the >> neighbors are collected in this routine, I would be extremely thankful! >> >> Cheers, Pedro >> >> > From chicago.ecnu at gmail.com Mon Jul 11 15:39:00 2011 From: chicago.ecnu at gmail.com (chicago.ecnu) Date: Mon, 11 Jul 2011 21:39:00 +0800 Subject: [gmx-developers] Which function used for postion restraint? Message-ID: <201107112138572653287@gmail.com> Dear Gromacs Developers, Harmonic potentials are used in Gromacs for imposing restraints on the motion of the system. V = 1/2 * k * (x-x0)^2 And the forces are : F = -k * (x-x0) . ... I want to use constant force for the restraint. Just scale the force : F = -k * (x-x0) / sqrt ( (x-x0)**2+(y-y0)**2 + (z-z0)**2 ) ... This was very simple. But I don't know where the position restraint code is . Could you please tell me where I can start with ? Which function or keyword I should search ? Many thanks for your help ! Best Wishes, Yours Sincrely, Chiango Ji From gonnet at maths.ox.ac.uk Mon Jul 11 22:34:07 2011 From: gonnet at maths.ox.ac.uk (Pedro Gonnet) Date: Mon, 11 Jul 2011 21:34:07 +0100 Subject: [gmx-developers] Re: gmx-developers Digest, Vol 87, Issue 3 In-Reply-To: <20110711101353.9F4D425C84@struktbio205.bmc.uu.se> References: <20110711101353.9F4D425C84@struktbio205.bmc.uu.se> Message-ID: <1310416447.17699.84.camel@laika> Hi Berk, Thanks for the reply! I still don't really understand what's going on though... My problem is the following: on a single CPU, the nsgrid_core function requires roughly 40% more time than on two CPUs. Using a profiler, I tracked down this difference to the condition /* Check if all j's are out of range so we * can skip the whole cell. * Should save some time, especially with DD. */ if (nrj == 0 || (grida[cgj0]>= max_jcg&& (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) { continue; } being triggered substantially more often in the two-CPU case than in the single-CPU case. In my understanding, in both cases (two or single CPU), the same number of cell pairs need to be inspected and hence roughly the same computational costs should incurred. How, in this case, do the single-CPU and two-CPU cases differ? In the single-cell case are particles in cells i and j traversed twice, e.g. (i,j) and (j,i)? Many thanks, Pedro On Mon, 2011-07-11 at 12:13 +0200, gmx-developers-request at gromacs.org wrote: > Date: Mon, 11 Jul 2011 11:26:00 +0200 > From: Berk Hess > Subject: Re: [gmx-developers] Re: Fairly detailed question regarding > cell lists in Gromacs in general and nsgrid_core specifically > To: Discussion list for GROMACS development > > Message-ID: <4E1AC1A8.1050402 at cbr.su.se> > Content-Type: text/plain; charset=UTF-8; format=flowed > > Hi, > > This code is for parallel neighbor searching. > We have to ensure that pairs are not assigned to multiple processes. > In addition with particle decomposition we want to ensure load balancing. > With particle decomposition jcg0=icg and jcg1=icg+0.5*#icg, this ensures > the two above conditions. > For domain decomposition we use the eighth shell method, which use > up till 8 zones. Only half of the 8x8 zone pairs should interact. 
> For domain decomposition jcg0 and jcg1 are set such that only the wanted > zone pairs interact (zones are ordered such that only consecutive j-zones > interact, so a simply check suffices). > > Berk > > On 07/06/2011 10:52 AM, Pedro Gonnet wrote: > > Hello again, > > > > I had another long look at the code and at the older Gromacs papers and > > realized that the main loop over charge groups starts on line 2058 of > > ns.c and that the loops in lines 2135, 2151 and 2173 are for the > > periodic images. > > > > I still, however, have no idea what the second condition in lines > > 2232--2241 of ns.c mean: > > > > /* Check if all j's are out of range so we > > * can skip the whole cell. > > * Should save some time, especially with DD. > > */ > > if (nrj == 0 || > > (grida[cgj0]>= max_jcg&& > > (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) > > { > > continue; > > } > > > > Does anybody know what max_jcg, jcg1 and jcg0 are? Or does anybody know > > where this is documented in detail? > > > > Cheers, Pedro > > > > > > On Tue, 2011-07-05 at 16:07 +0100, Pedro Gonnet wrote: > >> Hi, > >> > >> I'm trying to understand how Gromacs builds its neighbor lists and have > >> been looking, more specifically, at the function nsgrid_core in ns.c. > >> > >> If I understand the underlying data organization correctly, the grid > >> (t_grid) contains an array of cells in which the indices of charge > >> groups are stored. Pairs of such charge groups are identified and stored > >> in the neighbor list (put_in_list). > >> > >> What I don't really understand is how these pairs are identified. > >> Usually one would loop over all cells, loop over each charge group > >> therein, loop over all neighboring cells and store the charge groups > >> therein which are within the cutoff distance. > >> > >> I assume that the first loop, over all cells, is somehow computed with > >> the for-loops starting at lines 2135, 2151 and 2173 of ns.c. However, I > >> don't really understand how this is done: What do these loops loop over > >> exactly? > >> > >> In any case, the coordinates of the particle in the outer loop seem to > >> land in the variables XI, YI and ZI. The inner loop (for-loops starting > >> in lines 2213, 2216 and 2221 of ns.c) then runs through the neighboring > >> cells. If I understand correctly, cj is the id of the neighboring cell, > >> nrj the number of charge groups in that cell and cgj0 the offset of the > >> charge groups in the data. > >> > >> What I don't really understand here are the lines 2232--2241: > >> > >> /* Check if all j's are out of range so we > >> * can skip the whole cell. > >> * Should save some time, especially with DD. > >> */ > >> if (nrj == 0 || > >> (grida[cgj0]>= max_jcg&& > >> (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) > >> { > >> continue; > >> } > >> > >> Apparently, some cells can be excluded, but what are the exact criteria? > >> The test on nrj is somewhat obvious, but what is stored in grid->a? > >> > >> There is probably no short answer to my questions, but if anybody could > >> at least point me to any documentation or description of how the > >> neighbors are collected in this routine, I would be extremely thankful! 
> >> > >> Cheers, Pedro > >> > >> > > > > > > ------------------------------ > From waleed_zalloum at yahoo.com Tue Jul 12 17:45:37 2011 From: waleed_zalloum at yahoo.com (waleed zalloum) Date: Tue, 12 Jul 2011 16:45:37 +0100 (BST) Subject: [gmx-developers] GPU Message-ID: <1310485537.49834.YahooMailNeo@web26503.mail.ukl.yahoo.com> Dear All, I am a final year PhD student at the university of Manchester, UK. I want to use GPU GROMACS to simulate a?system?consisted of DNA and protein. I was wondering, I have a computer with a GeForce 310M with CUDA, on the GROMACS web page ?this GPU is not listed in the?compatible ones.?Can I use this GPU by any means to run the MD simulation using GROMACS? Thank you? Waleed ? ============================================== Waleed A. Zalloum, MSc of pharmacy and Pharmaceutical sciences, Molecular Modeling School of Pharmacy and Pharmaceutical Sciences, Faculty of Medical and Human Sciences, The University of Manchester Manchester, UK Third year PhD student. E-mail: waleed_zalloum at yahoo.com Mobile: +44(0)7863763084 =============================================== -------------- next part -------------- An HTML attachment was scrubbed... URL: From akohlmey at cmm.chem.upenn.edu Tue Jul 12 17:54:31 2011 From: akohlmey at cmm.chem.upenn.edu (Axel Kohlmeyer) Date: Tue, 12 Jul 2011 11:54:31 -0400 Subject: [gmx-developers] GPU In-Reply-To: <1310485537.49834.YahooMailNeo@web26503.mail.ukl.yahoo.com> References: <1310485537.49834.YahooMailNeo@web26503.mail.ukl.yahoo.com> Message-ID: On Tue, Jul 12, 2011 at 11:45 AM, waleed zalloum wrote: > Dear All, > I am a final year PhD student at the university of Manchester, UK. I want to > use GPU GROMACS to simulate a?system?consisted of DNA and protein. I was > wondering, I have a computer with a GeForce 310M with CUDA, on the GROMACS > web page ?this GPU is not listed in the?compatible ones.?Can I use this GPU > by any means to run the MD simulation using GROMACS? CUDA compatible GPUs are listed on this page. http://developer.nvidia.com/cuda-gpus keep in mind however, that CUDA compatible doesn't necessarily translate into a huge acceleration. how much speedup you get depends on the GPU architecture, clock rates and memory bandwidth and number of multiprocessors in the GPU. cheers, axel. axel. > Thank you > Waleed > > ============================================== > Waleed A. Zalloum, > MSc of pharmacy and Pharmaceutical sciences, > Molecular Modeling > School of Pharmacy and Pharmaceutical Sciences, > Faculty of Medical and Human Sciences, > The University of Manchester > Manchester, UK > Third year PhD student. > E-mail: waleed_zalloum at yahoo.com > Mobile: +44(0)7863763084 > =============================================== > -- > gmx-developers mailing list > gmx-developers at gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-developers > Please don't post (un)subscribe requests to the list. Use the > www interface or send it to gmx-developers-request at gromacs.org. > -- Dr. Axel Kohlmeyer? ? akohlmey at gmail.com http://sites.google.com/site/akohlmey/ Institute for Computational Molecular Science Temple University, Philadelphia PA, USA. 
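To make the advice above concrete, the properties Axel lists can be read directly from the CUDA runtime API. The small standalone C program below is not part of GROMACS and is only an illustrative sketch (the file name and output wording are the editor's): it prints compute capability, multiprocessor count, clock rate and memory size for each detected device, which is usually enough to judge whether a given card, such as a mobile GeForce 310M, is worth trying. Compile it with nvcc, e.g. "nvcc query_gpu.cu -o query_gpu".

    #include <stdio.h>
    #include <cuda_runtime.h>

    int main(void)
    {
        int ndev = 0;

        /* Count CUDA-capable devices; bail out if there are none. */
        if (cudaGetDeviceCount(&ndev) != cudaSuccess || ndev == 0)
        {
            printf("No CUDA-capable device found\n");
            return 1;
        }
        for (int i = 0; i < ndev; i++)
        {
            struct cudaDeviceProp prop;

            cudaGetDeviceProperties(&prop, i);
            printf("Device %d: %s\n", i, prop.name);
            printf("  compute capability : %d.%d\n", prop.major, prop.minor);
            printf("  multiprocessors    : %d\n", prop.multiProcessorCount);
            printf("  core clock         : %.0f MHz\n", prop.clockRate / 1000.0);
            printf("  global memory      : %.0f MiB\n",
                   prop.totalGlobalMem / (1024.0 * 1024.0));
        }
        return 0;
    }
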
From szilard.pall at cbr.su.se Tue Jul 12 18:30:42 2011 From: szilard.pall at cbr.su.se (=?ISO-8859-1?B?U3ppbOFyZCBQ4Wxs?=) Date: Tue, 12 Jul 2011 18:30:42 +0200 Subject: [gmx-developers] GPU In-Reply-To: References: <1310485537.49834.YahooMailNeo@web26503.mail.ukl.yahoo.com> Message-ID: Hi, As this is not a development-related question I'm moving the discussion to the user's list. Future replies should be sent *only* to gmx-users at gromacs.org. As Axel pointed out, the list of CUDA-compatible devices is much broader than the list of cards we label compatible. The compatibility check is just a safety measure as GPUs not considered compatible will be slower than most recent CPUs (see performance comparison on the website http://goo.gl/eo0jS) especially in explicit water simulations. However, you can always try by using the "force-device=yes" option and see what performance you get, but you shouldn't expect much from a mobile GPU. Cheers, -- Szil?rd >> Thank you >> Waleed >> >> ============================================== >> Waleed A. Zalloum, >> MSc of pharmacy and Pharmaceutical sciences, >> Molecular Modeling >> School of Pharmacy and Pharmaceutical Sciences, >> Faculty of Medical and Human Sciences, >> The University of Manchester >> Manchester, UK >> Third year PhD student. >> E-mail: waleed_zalloum at yahoo.com >> Mobile: +44(0)7863763084 >> =============================================== >> -- >> gmx-developers mailing list >> gmx-developers at gromacs.org >> http://lists.gromacs.org/mailman/listinfo/gmx-developers >> Please don't post (un)subscribe requests to the list. Use the >> www interface or send it to gmx-developers-request at gromacs.org. >> > > > > -- > Dr. Axel Kohlmeyer? ? akohlmey at gmail.com > http://sites.google.com/site/akohlmey/ > > Institute for Computational Molecular Science > Temple University, Philadelphia PA, USA. > -- > gmx-developers mailing list > gmx-developers at gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-developers > Please don't post (un)subscribe requests to the list. Use the > www interface or send it to gmx-developers-request at gromacs.org. > From uhlig.frank at googlemail.com Wed Jul 13 10:08:57 2011 From: uhlig.frank at googlemail.com (Frank Uhlig) Date: Wed, 13 Jul 2011 10:08:57 +0200 Subject: [gmx-developers] GMX + ORCA QM/MM In-Reply-To: References: Message-ID: Dear gmx-developers, I have a few comments concerning QM/MM in Gromacs in conjunction with Orca. I am using the latest Gromacs version 4.5.4 and the latest Orca version 2.8.0 to perform QM/MM calculations. 1) it is a bit misleading that in the help of the configure script it is written: --without-qmmm-orca ? ? Use ORCA for QM-MM and the respective for the other three possible programs for QM/MM calculations... 2) I followed the instructions on this webpage: http://wwwuser.gwdg.de/~ggroenh/qmmm.html --> this means ./configure --with-qmmm-orca --without-qmmm-gaussian to build a QM/MM version of GMX together with Orca. The build goes fine and seems to work... I also tried to build the GMX/ORCA-QM/MM version via CMake (i.e., ccmake). Although I activated "orca" as GMX_QMMM_PROGRAM in the gui and (re-)configured, the variable GMX_QMMM_ORCA does not get set in the src/config.h file. Thus, the obtained build will not work for QM/MM calculations... 3) If I configure gromacs as described in the first part of 2) above I obtain a version that seems to work at first. After some experimenting with the general setup I encountered some problems though. 
I attached all files necessary files to illustrate and reproduce those problems. When putting the QM residues first in the [ molecules ] section in the topology file, grompp fails with a segmentation fault. When putting the QM residues last in the [ molecules ] section in the topology file, mdrun fails with a segmentation fault (mdrun -nt 1) before calling Orca. When putting the QM residues (and all the other residues) in a disordered fashion in the topology file (and not the QM residues first or last) the calculations runs just fine. The included examples all contain the same configuration. They only differ in the order of the residues in the conf.gro, topol.top and index.ndx files. I also included the debug information for the two failing tests. I am not too familiar with C, so I would appreciate your help. If you have any suggestion on how to fix these issues or at least further information on where they are stemming from, please let me know. Best regards and thanks in advance, Frank -------------- next part -------------- A non-text attachment was scrubbed... Name: qmmm_problem.tar.bz2 Type: application/x-bzip2 Size: 7895 bytes Desc: not available URL: From ggroenh at gwdg.de Wed Jul 13 11:16:31 2011 From: ggroenh at gwdg.de (Gerrit Groenhof) Date: Wed, 13 Jul 2011 11:16:31 +0200 Subject: [gmx-developers] GMX + ORCA QM/MM In-Reply-To: References: Message-ID: <4E1D626F.4000300@gwdg.de> Hi, I am on vacation actually. Although it looks like a gmx problem, I forward this email to Cristoph Riplinger, who did the orca interface. Gerrit On 07/13/2011 10:08 AM, Frank Uhlig wrote: > Dear gmx-developers, > > I have a few comments concerning QM/MM in Gromacs in conjunction with > Orca. I am using the latest Gromacs version 4.5.4 and the latest Orca > version 2.8.0 to perform QM/MM calculations. > > 1) it is a bit misleading that in the help of the configure script it > is written: > > --without-qmmm-orca Use ORCA for QM-MM > > and the respective for the other three possible programs for QM/MM > calculations... > > 2) I followed the instructions on this webpage: > > http://wwwuser.gwdg.de/~ggroenh/qmmm.html > > --> this means ./configure --with-qmmm-orca --without-qmmm-gaussian > > to build a QM/MM version of GMX together with Orca. The build goes > fine and seems to work... > > I also tried to build the GMX/ORCA-QM/MM version via CMake (i.e., > ccmake). Although I activated "orca" as GMX_QMMM_PROGRAM in the gui > and (re-)configured, the variable GMX_QMMM_ORCA does not get set in > the src/config.h file. Thus, the obtained build will not work for > QM/MM calculations... > > 3) If I configure gromacs as described in the first part of 2) above I > obtain a version that seems to work at first. After some experimenting > with the general setup I encountered some problems though. I attached > all files necessary files to illustrate and reproduce those problems. > > When putting the QM residues first in the [ molecules ] section in the > topology file, grompp fails with a segmentation fault. > When putting the QM residues last in the [ molecules ] section in the > topology file, mdrun fails with a segmentation fault (mdrun -nt 1) > before calling Orca. > When putting the QM residues (and all the other residues) in a > disordered fashion in the topology file (and not the QM > residues first or last) the calculations runs just fine. > > The included examples all contain the same configuration. They only > differ in the order of the residues in the conf.gro, topol.top and > index.ndx files. 
>
> I also included the debug information for the two failing tests. I am not too familiar with C, so I would appreciate your help. If you have any suggestion on how to fix these issues or at least further information on where they are stemming from, please let me know.
>
> Best regards and thanks in advance,
>
> Frank

From ggroenh at gwdg.de Wed Jul 13 12:03:10 2011
From: ggroenh at gwdg.de (Gerrit Groenhof)
Date: Wed, 13 Jul 2011 12:03:10 +0200
Subject: [gmx-developers] GMX + ORCA QM/MM
In-Reply-To:
References:
Message-ID: <4E1D6D5E.2040702@gwdg.de>

I had a look anyway.

On grompp: since 4.0, the QM atoms need to be in one topology file. Thus if you have 6 QM waters, you need an atoms section with 6 waters. Dividing the QM atoms over multiple topologies does not work; see the example below.

On mdrun: the setup above works with gmx/gaussian. I have never worked with gmx/orca before, but from the gromacs side there seems to be no problem anymore.

Hope this helps.

Best wishes,

Gerrit

;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
; defaults and all atom types ;
;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
[ defaults ]
; nbfunc  comb-rule  gen-pairs  fudgeLJ  fudgeQQ
  1       2          yes        1.0      1.0

[ atomtypes ]
; name  at.num  mass   charge   ptype  sigma      epsilon
  OW    8       16.0   -0.8476  A      0.3165492  0.650299
  HW    1        1.0    0.4238  A      0.0        0.0

;;;;;;;;;;;;;;;
; SPC/E water ;
;;;;;;;;;;;;;;;
[ moleculetype ]
; molname  nrexcl
  SOL      1

[ atoms ]
; nr  type  resnr  residue  atom  cgnr  charge   mass
  1   OW    1      SOL      OW    1     -0.8476
  2   HW    1      SOL      HW1   1      0.4238
  3   HW    1      SOL      HW2   1      0.4238

[ settles ]
; OW  funct  doh  dhh
  1   1      0.1  0.1633

[ exclusions ]
  1  2  3
  2  1  3
  3  1  2

[ moleculetype ]
; molname  nrexcl
  QM       1

[ atoms ]
; nr  type  resnr  residue  atom  cgnr  charge   mass
  1   OW    1      QM       OW    1     -0.8476
  2   HW    1      QM       HW1   1      0.4238
  3   HW    1      QM       HW2   1      0.4238
  4   OW    1      QM       OW    1     -0.8476
  5   HW    1      QM       HW1   1      0.4238
  6   HW    1      QM       HW2   1      0.4238
  7   OW    1      QM       OW    1     -0.8476
  8   HW    1      QM       HW1   1      0.4238
  9   HW    1      QM       HW2   1      0.4238
  10  OW    1      QM       OW    1     -0.8476
  11  HW    1      QM       HW1   1      0.4238
  12  HW    1      QM       HW2   1      0.4238
  13  OW    1      QM       OW    1     -0.8476
  14  HW    1      QM       HW1   1      0.4238
  15  HW    1      QM       HW2   1      0.4238
  16  OW    1      QM       OW    1     -0.8476
  17  HW    1      QM       HW1   1      0.4238
  18  HW    1      QM       HW2   1      0.4238

[ system ]
something weird

[ molecules ]
QM   1
SOL  58

On 07/13/2011 10:08 AM, Frank Uhlig wrote:
> Dear gmx-developers,
>
> I have a few comments concerning QM/MM in Gromacs in conjunction with Orca. I am using the latest Gromacs version 4.5.4 and the latest Orca version 2.8.0 to perform QM/MM calculations.
>
> 1) it is a bit misleading that in the help of the configure script it is written:
>
>     --without-qmmm-orca     Use ORCA for QM-MM
>
> and the respective for the other three possible programs for QM/MM calculations...
>
> 2) I followed the instructions on this webpage:
>
> http://wwwuser.gwdg.de/~ggroenh/qmmm.html
>
> --> this means ./configure --with-qmmm-orca --without-qmmm-gaussian
>
> to build a QM/MM version of GMX together with Orca. The build goes fine and seems to work...
>
> I also tried to build the GMX/ORCA-QM/MM version via CMake (i.e., ccmake). Although I activated "orca" as GMX_QMMM_PROGRAM in the gui and (re-)configured, the variable GMX_QMMM_ORCA does not get set in the src/config.h file. Thus, the obtained build will not work for QM/MM calculations...
>
> 3) If I configure gromacs as described in the first part of 2) above I obtain a version that seems to work at first. After some experimenting with the general setup I encountered some problems though. I attached all files necessary files to illustrate and reproduce those problems.
> > When putting the QM residues first in the [ molecules ] section in the > topology file, grompp fails with a segmentation fault. > When putting the QM residues last in the [ molecules ] section in the > topology file, mdrun fails with a segmentation fault (mdrun -nt 1) > before calling Orca. > When putting the QM residues (and all the other residues) in a > disordered fashion in the topology file (and not the QM > residues first or last) the calculations runs just fine. > > The included examples all contain the same configuration. They only > differ in the order of the residues in the conf.gro, topol.top and > index.ndx files. > > I also included the debug information for the two failing tests. I am > not too familiar with C, so I would appreciate your help. If you have > any suggestion on how to fix these issues or at least further > information on where they are stemming from, please let me know. > > Best regards and thanks in advance, > > Frank From Ansgar.Esztermann at mpi-bpc.mpg.de Wed Jul 13 12:19:21 2011 From: Ansgar.Esztermann at mpi-bpc.mpg.de (Esztermann, Ansgar) Date: Wed, 13 Jul 2011 12:19:21 +0200 Subject: [gmx-developers] Unit tests and CTest Message-ID: <5AC4DD3B-007C-4A79-91A6-6E6E8BEBC87C@mpi-bpc.mpg.de> Hello List, I've set up a branch that supports unit tests via CTest. You can find it at https://github.com/aeszter/gromacs. Test results (including coverage) are at http://my.cdash.org/index.php?project=Gromacs Regards, A. -- Ansgar Esztermann DV-Systemadministration Max-Planck-Institut f?r biophysikalische Chemie, Abteilung 105 From xiaoyingw11 at 163.com Thu Jul 14 05:48:16 2011 From: xiaoyingw11 at 163.com (=?GBK?B?z/7Tog==?=) Date: Thu, 14 Jul 2011 11:48:16 +0800 (CST) Subject: [gmx-developers] implicit solvent Message-ID: <1abb800f.12463.13126c25977.Coremail.xiaoyingw11@163.com> Dear developers, I'm doing implicit solvent in gromacs 4.5.2 with amber03 force field but it appears some problem. I have done energy minimization .Then mdrun in NVT,but there is always LINCS error .When I make impolicit_solvent=no,it can run successfully. I have send a email to gmx-users but there is not a good solution. Is there a problem in the parameter settings? Can you give me some advice of my mdp file ? mdp file is in the attachment. Thank you very much! Best wishes! -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: md.mdp Type: application/octet-stream Size: 10418 bytes Desc: not available URL: From jalemkul at vt.edu Thu Jul 14 06:07:04 2011 From: jalemkul at vt.edu (Justin A. Lemkul) Date: Thu, 14 Jul 2011 00:07:04 -0400 Subject: [gmx-developers] implicit solvent In-Reply-To: <1abb800f.12463.13126c25977.Coremail.xiaoyingw11@163.com> References: <1abb800f.12463.13126c25977.Coremail.xiaoyingw11@163.com> Message-ID: <4E1E6B68.4080200@vt.edu> ?? wrote: > Dear developers, > I'm doing implicit solvent in gromacs 4.5.2 with amber03 force > field but it appears some problem. I have done energy minimization .Then > mdrun in NVT,but there is always LINCS error .When I make > impolicit_solvent=no,it can run successfully. I have send a email to > gmx-users but there is not a good solution. Is there a problem in the > parameter settings? Can you give me some advice of my mdp file ? mdp > file is in the attachment. Thank you very much! I am CC'ing this message back to gmx-users where it belongs. 
Your question is not related to development and thus does not belong on gmx-developers. Are you running in parallel? If so, what you're seeing is probably related to a bug report I just filed: http://redmine.gromacs.org/issues/777 -Justin -- ======================================== Justin A. Lemkul Ph.D. Candidate ICTAS Doctoral Scholar MILES-IGERT Trainee Department of Biochemistry Virginia Tech Blacksburg, VA jalemkul[at]vt.edu | (540) 231-9080 http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin ======================================== From hess at cbr.su.se Thu Jul 14 11:59:08 2011 From: hess at cbr.su.se (Berk Hess) Date: Thu, 14 Jul 2011 11:59:08 +0200 Subject: [gmx-developers] Re: gmx-developers Digest, Vol 87, Issue 3 In-Reply-To: <1310416447.17699.84.camel@laika> References: <20110711101353.9F4D425C84@struktbio205.bmc.uu.se> <1310416447.17699.84.camel@laika> Message-ID: <4E1EBDEC.1010207@cbr.su.se> Hi, On a single core the particles (or more accurately: charge groups) are not ordered according to the ns grid. The ordering is only done with domain decomposition. This results in a lot of cache misses during single core neighbor search (note that Gromacs now runs multi-core by default, so this is not really an issue that is worth improving). I think the condition is almost never triggered single core, as we make sure we only check cell pairs where cg pairs are nearly always in range. With domain decomposition this is no longer the case, since DD zones will not necessarily overlap with grid cells, especially with dynamic load balancing or with a triclinic unit cell. This might be improved in future versions. Berk On 07/11/2011 10:34 PM, Pedro Gonnet wrote: > Hi Berk, > > Thanks for the reply! > > I still don't really understand what's going on though... My problem is > the following: on a single CPU, the nsgrid_core function requires > roughly 40% more time than on two CPUs. > > Using a profiler, I tracked down this difference to the condition > > /* Check if all j's are out of range so we > * can skip the whole cell. > * Should save some time, especially with DD. > */ > if (nrj == 0 || > (grida[cgj0]>= max_jcg&& > (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) > { > continue; > } > > being triggered substantially more often in the two-CPU case than in the > single-CPU case. In my understanding, in both cases (two or single CPU), > the same number of cell pairs need to be inspected and hence roughly the > same computational costs should incurred. > > How, in this case, do the single-CPU and two-CPU cases differ? In the > single-cell case are particles in cells i and j traversed twice, e.g. > (i,j) and (j,i)? > > Many thanks, > Pedro > > > On Mon, 2011-07-11 at 12:13 +0200, gmx-developers-request at gromacs.org > wrote: >> Date: Mon, 11 Jul 2011 11:26:00 +0200 >> From: Berk Hess >> Subject: Re: [gmx-developers] Re: Fairly detailed question regarding >> cell lists in Gromacs in general and nsgrid_core specifically >> To: Discussion list for GROMACS development >> >> Message-ID:<4E1AC1A8.1050402 at cbr.su.se> >> Content-Type: text/plain; charset=UTF-8; format=flowed >> >> Hi, >> >> This code is for parallel neighbor searching. >> We have to ensure that pairs are not assigned to multiple processes. >> In addition with particle decomposition we want to ensure load balancing. >> With particle decomposition jcg0=icg and jcg1=icg+0.5*#icg, this ensures >> the two above conditions. >> For domain decomposition we use the eighth shell method, which use >> up till 8 zones. 
Only half of the 8x8 zone pairs should interact. >> For domain decomposition jcg0 and jcg1 are set such that only the wanted >> zone pairs interact (zones are ordered such that only consecutive j-zones >> interact, so a simply check suffices). >> >> Berk >> >> On 07/06/2011 10:52 AM, Pedro Gonnet wrote: >>> Hello again, >>> >>> I had another long look at the code and at the older Gromacs papers and >>> realized that the main loop over charge groups starts on line 2058 of >>> ns.c and that the loops in lines 2135, 2151 and 2173 are for the >>> periodic images. >>> >>> I still, however, have no idea what the second condition in lines >>> 2232--2241 of ns.c mean: >>> >>> /* Check if all j's are out of range so we >>> * can skip the whole cell. >>> * Should save some time, especially with DD. >>> */ >>> if (nrj == 0 || >>> (grida[cgj0]>= max_jcg&& >>> (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) >>> { >>> continue; >>> } >>> >>> Does anybody know what max_jcg, jcg1 and jcg0 are? Or does anybody know >>> where this is documented in detail? >>> >>> Cheers, Pedro >>> >>> >>> On Tue, 2011-07-05 at 16:07 +0100, Pedro Gonnet wrote: >>>> Hi, >>>> >>>> I'm trying to understand how Gromacs builds its neighbor lists and have >>>> been looking, more specifically, at the function nsgrid_core in ns.c. >>>> >>>> If I understand the underlying data organization correctly, the grid >>>> (t_grid) contains an array of cells in which the indices of charge >>>> groups are stored. Pairs of such charge groups are identified and stored >>>> in the neighbor list (put_in_list). >>>> >>>> What I don't really understand is how these pairs are identified. >>>> Usually one would loop over all cells, loop over each charge group >>>> therein, loop over all neighboring cells and store the charge groups >>>> therein which are within the cutoff distance. >>>> >>>> I assume that the first loop, over all cells, is somehow computed with >>>> the for-loops starting at lines 2135, 2151 and 2173 of ns.c. However, I >>>> don't really understand how this is done: What do these loops loop over >>>> exactly? >>>> >>>> In any case, the coordinates of the particle in the outer loop seem to >>>> land in the variables XI, YI and ZI. The inner loop (for-loops starting >>>> in lines 2213, 2216 and 2221 of ns.c) then runs through the neighboring >>>> cells. If I understand correctly, cj is the id of the neighboring cell, >>>> nrj the number of charge groups in that cell and cgj0 the offset of the >>>> charge groups in the data. >>>> >>>> What I don't really understand here are the lines 2232--2241: >>>> >>>> /* Check if all j's are out of range so we >>>> * can skip the whole cell. >>>> * Should save some time, especially with DD. >>>> */ >>>> if (nrj == 0 || >>>> (grida[cgj0]>= max_jcg&& >>>> (grida[cgj0]>= jcg1 || grida[cgj0+nrj-1]< jcg0))) >>>> { >>>> continue; >>>> } >>>> >>>> Apparently, some cells can be excluded, but what are the exact criteria? >>>> The test on nrj is somewhat obvious, but what is stored in grid->a? >>>> >>>> There is probably no short answer to my questions, but if anybody could >>>> at least point me to any documentation or description of how the >>>> neighbors are collected in this routine, I would be extremely thankful! 
>>>> >>>> Cheers, Pedro >>>> >>>> >> >> >> ------------------------------ >> > From koschke at mpip-mainz.mpg.de Wed Jul 20 11:44:48 2011 From: koschke at mpip-mainz.mpg.de (Konstantin Koschke) Date: Wed, 20 Jul 2011 11:44:48 +0200 Subject: [gmx-developers] Dynamic Temperature Coupling Groups and Parallel Implementation Message-ID: <4E26A390.4030700@mpip-mainz.mpg.de> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Dear developers, I'm a bit lost in the following issue and I hope that my questions are qualified for the developers list: My aim is to restrict the 'stochastic velocity rescaling' thermostat to a certain geometric region, e.g. a slab. What I did so far: I used the concept of temperature coupling groups and put my (single) atoms into certain groups, based on their instantaneous (e.g.) z-coordinate. This results in slabs, where each slab has its own ref-t and tau-t. (http://www.mpip-mainz.mpg.de/~koschke/dynamicTCexample.png visualizes the different tc-groups using different colors). By updating cTC[atom index] in do_update_md() (e.g. at each step) and recounting the number of degrees of freedom nrdf for each coupling group, I was able to implement "dynamic TC groups" for the serial version of mdrun. Problems arise as soon as one needs a parallel implementation of the above idea (I won't use MPI communication; shared memory only, threaded runs are all I need). When the data gets split/copied to the different domains and distributed among threads, opts->nrdf becomes a local variable and recounting nrdf for each tc group stops being trivial. That is because do_update_md() can not access a global nrdf[] variable (it does not exist). One workaround would be, to pass the reference of the "original" opts variable to each thread, allowing them to increase a global nrdf[] as needed and thus, recount and update nrdf correctly (this workaround would only work for shared memory architectures - again, that would be ok; I am aware of the need for mutexes). I was wondering if you see a smarter way of updating cTC[] and nrdf[] in the parallel version of mdrun. My suggested workaround gets increasingly dirty with each modified line of code. Also, I am not sure where I can find the "original" inputrec->opts->nrdf variable - I thought mdrunner() in runner.c is the place to look, but debugging runner.c makes my brain melt. Any help is appreciated! Cheers konstantin - -- Konstantin Koschke Max Planck Institute for Polymer Research Theory Group PO Box 3148 D 55021 Mainz, Germany phone: +49 6131 379 481 email: koschke at mpip-mainz.mpg.de -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.12 (GNU/Linux) Comment: Using GnuPG with SUSE - http://enigmail.mozdev.org/ iQEcBAEBAgAGBQJOJqOQAAoJEBfd2kJEvB25N/IH/RtpRAWTjXUdXEG3N90zmy09 0WwaqE53/oEAn3r3sluxivA+/eshBqc2fIyhbJb1WO7jkX5ikRirYaUJ1spmtN9x 77hvDnCwWw1/CdWXxg09FqEIy2SmC0ouXU6KLF58MlmKxfIwSwgFalHnrxfkvBo5 pT2nnMHOQMd/aHx9ZxjMnrpM6LWpdSimpes/AoUCb/3VvEj2aSCWeF8n6OijSG7r c9krYr9KIvS30Klshha1K7RNA2PrmjiegbixomnTFCFdMuTdrBpeebqL60Rke6a5 67lL26k+nB0++Ar59doK/n+EQ/l7D1u7UBNpEx6aFusnCWv7W1Z/V7emHJsA+Xg= =oTLm -----END PGP SIGNATURE----- From bcostescu at gmail.com Thu Jul 21 16:15:43 2011 From: bcostescu at gmail.com (Bogdan Costescu) Date: Thu, 21 Jul 2011 16:15:43 +0200 Subject: [gmx-developers] Reproducible runs with DLB Message-ID: Dear GROMACS developers, I need to be able to restart from an earlier point in a simulation and exactly reproduce the original simulation while running in parallel with DD. 
Although I save the state of the simulation in a checkpoint file (using mdrun -cpnum), upon restart with the same number of ranks, there are differences, small at the beginning but which become larger later, which seem to appear due to the different DD cell sizes as they are modified by the dynamic load balancing (DLB). Turning DLB off (mdrun -dlb no) or running in reproducible mode (mdrun -reprod) makes the restart exactly reproduce the original (at least based on the criteria I'm interested in), however the run is significantly slower - the molecular system is not homogeneous, so DLB helps a lot in redistributing the calculations. If my understanding of the issue is correct, saving the state of the DD together with the checkpoint data and loading it upon restart would allow me to keep DLB enabled and exactly reproduce the original run. Is this so ? What are the difficulties in doing it ? If this is doable, is someone with a good understanding of DD willing to guide me in implementing it ? Of course, if someone with a good understanding of DD would be willing to implement it, I'd be more than glad to test it :-) Thanks in advance! Bogdan From Mark.Abraham at anu.edu.au Thu Jul 21 16:30:08 2011 From: Mark.Abraham at anu.edu.au (Mark Abraham) Date: Fri, 22 Jul 2011 00:30:08 +1000 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: References: Message-ID: <4E2837F0.8010605@anu.edu.au> On 22/07/2011 12:15 AM, Bogdan Costescu wrote: > Dear GROMACS developers, > > I need to be able to restart from an earlier point in a simulation and > exactly reproduce the original simulation while running in parallel > with DD. Although I save the state of the simulation in a checkpoint > file (using mdrun -cpnum), upon restart with the same number of ranks, > there are differences, small at the beginning but which become larger > later, which seem to appear due to the different DD cell sizes as they > are modified by the dynamic load balancing (DLB). Turning DLB off > (mdrun -dlb no) or running in reproducible mode (mdrun -reprod) makes > the restart exactly reproduce the original (at least based on the > criteria I'm interested in), however the run is significantly slower - > the molecular system is not homogeneous, so DLB helps a lot in > redistributing the calculations. > > If my understanding of the issue is correct, saving the state of the > DD together with the checkpoint data and loading it upon restart would > allow me to keep DLB enabled and exactly reproduce the original run. > Is this so ? Sounds right. > What are the difficulties in doing it ? Extending the checkpoint file format is not programmer-friendly, never mind writing save-and-restore code for DD. I suggest you look at the hidden options to mdrun that allow you to impose a particular DD grid that gives satisfactory performance. See "mdrun -h -hidden". You might have to reverse engineer how to use these from the code. Mark > If this is > doable, is someone with a good understanding of DD willing to guide me > in implementing it ? Of course, if someone with a good understanding > of DD would be willing to implement it, I'd be more than glad to test > it :-) > > Thanks in advance! 
> Bogdan From bcostescu at gmail.com Thu Jul 21 17:26:06 2011 From: bcostescu at gmail.com (Bogdan Costescu) Date: Thu, 21 Jul 2011 17:26:06 +0200 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: <4E2837F0.8010605@anu.edu.au> References: <4E2837F0.8010605@anu.edu.au> Message-ID: On Thu, Jul 21, 2011 at 16:30, Mark Abraham wrote: > Extending the checkpoint file format is not programmer-friendly, never mind > writing save-and-restore code for DD. If it would have been programmer-friendly, wouldn't it have been done already ? :-) Saving DD state was meant to be done at the same time as the checkpoint to have a restart point for both the molecular system state and the distribution of the atoms on nodes. But it doesn't have to be in the same file - the checkpoint file can remain as it is and an additional one can contain the DD state, as long as they are named similarly (f.e. state_stepX.dd) to know which ones to be used together. > I suggest you look at the hidden options to mdrun that allow you to impose a > particular DD grid that gives satisfactory performance. See "mdrun -h > -hidden". You might have to reverse engineer how to use these from the code. I'm already using '-dd x y z' for both the tests with and without DLB. PME is not used in some of the simulations (so playing with -npme has no meaning) and -dlb and -reprod I've already mentioned in my previous message. Are there other options that you refer to ? I understand that saving of DD state is not an easy feat. Do you consider this to be a waste of time ? Even if the answer is positive I would still be interested in it, as it would allow significantly faster while also reproducible for my simulations. Cheers, Bogdan From x.periole at rug.nl Thu Jul 21 18:02:45 2011 From: x.periole at rug.nl (XAvier Periole) Date: Thu, 21 Jul 2011 10:02:45 -0600 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: References: <4E2837F0.8010605@anu.edu.au> Message-ID: Hi, nothing I can help with here but having the reprod mode running with the dlb would be really useful! And an even more useful option would be to be able to write out conformations more often than in the original run. That would allow one run long simulations and go back and zoom in a particular time period of the simulation where some interesting event occurred. XAvier. On Jul 21, 2011, at 9:26 AM, Bogdan Costescu wrote: > On Thu, Jul 21, 2011 at 16:30, Mark Abraham > wrote: >> Extending the checkpoint file format is not programmer-friendly, >> never mind >> writing save-and-restore code for DD. > > If it would have been programmer-friendly, wouldn't it have been done > already ? :-) > > Saving DD state was meant to be done at the same time as the > checkpoint to have a restart point for both the molecular system state > and the distribution of the atoms on nodes. But it doesn't have to be > in the same file - the checkpoint file can remain as it is and an > additional one can contain the DD state, as long as they are named > similarly (f.e. state_stepX.dd) to know which ones to be used > together. > >> I suggest you look at the hidden options to mdrun that allow you to >> impose a >> particular DD grid that gives satisfactory performance. See "mdrun -h >> -hidden". You might have to reverse engineer how to use these from >> the code. > > I'm already using '-dd x y z' for both the tests with and without DLB. 
> PME is not used in some of the simulations (so playing with -npme has > no meaning) and -dlb and -reprod I've already mentioned in my previous > message. Are there other options that you refer to ? > > I understand that saving of DD state is not an easy feat. Do you > consider this to be a waste of time ? Even if the answer is positive I > would still be interested in it, as it would allow significantly > faster while also reproducible for my simulations. > > Cheers, > Bogdan > -- > gmx-developers mailing list > gmx-developers at gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-developers > Please don't post (un)subscribe requests to the list. Use the > www interface or send it to gmx-developers-request at gromacs.org. From roland at utk.edu Thu Jul 21 18:18:35 2011 From: roland at utk.edu (Roland Schulz) Date: Thu, 21 Jul 2011 09:18:35 -0700 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: <4319c04119354b3da7cd3044c570f76f@CH1PRD0202HT009.namprd02.prod.outlook.com> References: <4319c04119354b3da7cd3044c570f76f@CH1PRD0202HT009.namprd02.prod.outlook.com> Message-ID: Hi, take a look at GMX_DLB_FLOP and GMX_DD_LOAD environment variables defined in domdec.c. They might help with what you are trying to do. Roland ---------- Forwarded message ---------- From: Bogdan Costescu Date: Thu, Jul 21, 2011 at 7:15 AM Subject: [gmx-developers] Reproducible runs with DLB To: "gmx-developers at gromacs.org" Dear GROMACS developers, I need to be able to restart from an earlier point in a simulation and exactly reproduce the original simulation while running in parallel with DD. Although I save the state of the simulation in a checkpoint file (using mdrun -cpnum), upon restart with the same number of ranks, there are differences, small at the beginning but which become larger later, which seem to appear due to the different DD cell sizes as they are modified by the dynamic load balancing (DLB). Turning DLB off (mdrun -dlb no) or running in reproducible mode (mdrun -reprod) makes the restart exactly reproduce the original (at least based on the criteria I'm interested in), however the run is significantly slower - the molecular system is not homogeneous, so DLB helps a lot in redistributing the calculations. If my understanding of the issue is correct, saving the state of the DD together with the checkpoint data and loading it upon restart would allow me to keep DLB enabled and exactly reproduce the original run. Is this so ? What are the difficulties in doing it ? If this is doable, is someone with a good understanding of DD willing to guide me in implementing it ? Of course, if someone with a good understanding of DD would be willing to implement it, I'd be more than glad to test it :-) Thanks in advance! Bogdan -- gmx-developers mailing list gmx-developers at gromacs.org http://lists.gromacs.org/mailman/listinfo/gmx-developers Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-developers-request at gromacs.org. -- ORNL/UT Center for Molecular Biophysics cmb.ornl.gov 865-241-1537, ORNL PO BOX 2008 MS6309 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Mark.Abraham at anu.edu.au Thu Jul 21 23:35:14 2011 From: Mark.Abraham at anu.edu.au (Mark Abraham) Date: Fri, 22 Jul 2011 07:35:14 +1000 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: References: <4E2837F0.8010605@anu.edu.au> Message-ID: <4E289B92.8050100@anu.edu.au> On 22/07/2011 1:26 AM, Bogdan Costescu wrote: > On Thu, Jul 21, 2011 at 16:30, Mark Abraham wrote: >> Extending the checkpoint file format is not programmer-friendly, never mind >> writing save-and-restore code for DD. > If it would have been programmer-friendly, wouldn't it have been done > already ? :-) > > Saving DD state was meant to be done at the same time as the > checkpoint to have a restart point for both the molecular system state > and the distribution of the atoms on nodes. But it doesn't have to be > in the same file - the checkpoint file can remain as it is and an > additional one can contain the DD state, as long as they are named > similarly (f.e. state_stepX.dd) to know which ones to be used > together. > >> I suggest you look at the hidden options to mdrun that allow you to impose a >> particular DD grid that gives satisfactory performance. See "mdrun -h >> -hidden". You might have to reverse engineer how to use these from the code. > I'm already using '-dd x y z' for both the tests with and without DLB. > PME is not used in some of the simulations (so playing with -npme has > no meaning) and -dlb and -reprod I've already mentioned in my previous > message. Are there other options that you refer to ? Yes. Check out the instruction I suggest. > > I understand that saving of DD state is not an easy feat. Do you > consider this to be a waste of time ? Even if the answer is positive I > would still be interested in it, as it would allow significantly > faster while also reproducible for my simulations. Could be done. Not all that easy. Mark From Mark.Abraham at anu.edu.au Thu Jul 21 23:35:58 2011 From: Mark.Abraham at anu.edu.au (Mark Abraham) Date: Fri, 22 Jul 2011 07:35:58 +1000 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: References: <4E2837F0.8010605@anu.edu.au> Message-ID: <4E289BBE.2040205@anu.edu.au> On 22/07/2011 2:02 AM, XAvier Periole wrote: > > Hi, > > nothing I can help with here but having the reprod mode running with the > dlb would be really useful! It relies on observing timings... how can that be reproducible? > And an even more useful option would be to be able to write out > conformations more often than in the original run. That would allow one > run long simulations and go back and zoom in a particular time > period of the simulation where some interesting event occurred. Hacking some environment variable to do this seems feasible. Mark > XAvier. > > On Jul 21, 2011, at 9:26 AM, Bogdan Costescu wrote: > >> On Thu, Jul 21, 2011 at 16:30, Mark Abraham >> wrote: >>> Extending the checkpoint file format is not programmer-friendly, >>> never mind >>> writing save-and-restore code for DD. >> >> If it would have been programmer-friendly, wouldn't it have been done >> already ? :-) >> >> Saving DD state was meant to be done at the same time as the >> checkpoint to have a restart point for both the molecular system state >> and the distribution of the atoms on nodes. But it doesn't have to be >> in the same file - the checkpoint file can remain as it is and an >> additional one can contain the DD state, as long as they are named >> similarly (f.e. state_stepX.dd) to know which ones to be used >> together. 
>> >>> I suggest you look at the hidden options to mdrun that allow you to >>> impose a >>> particular DD grid that gives satisfactory performance. See "mdrun -h >>> -hidden". You might have to reverse engineer how to use these from >>> the code. >> >> I'm already using '-dd x y z' for both the tests with and without DLB. >> PME is not used in some of the simulations (so playing with -npme has >> no meaning) and -dlb and -reprod I've already mentioned in my previous >> message. Are there other options that you refer to ? >> >> I understand that saving of DD state is not an easy feat. Do you >> consider this to be a waste of time ? Even if the answer is positive I >> would still be interested in it, as it would allow significantly >> faster while also reproducible for my simulations. >> >> Cheers, >> Bogdan >> -- >> gmx-developers mailing list >> gmx-developers at gromacs.org >> http://lists.gromacs.org/mailman/listinfo/gmx-developers >> Please don't post (un)subscribe requests to the list. Use the >> www interface or send it to gmx-developers-request at gromacs.org. > From mrs5pt at eservices.virginia.edu Thu Jul 21 23:43:34 2011 From: mrs5pt at eservices.virginia.edu (Shirts, Michael (mrs5pt)) Date: Thu, 21 Jul 2011 21:43:34 +0000 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: <4E289BBE.2040205@anu.edu.au> Message-ID: >> And an even more useful option would be to be able to write out >> conformations more often than in the original run. That would allow one >> run long simulations and go back and zoom in a particular time >> period of the simulation where some interesting event occurred. I'll add the plug that having this sort of functionality would be great, if possible. Could only really be done on the same machine, and may be impossible since on restart, the order of operations might be different, and chaos would get you very quickly, but it would be great! Best, ~~~~~~~~~~~~ Michael Shirts Assistant Professor Department of Chemical Engineering University of Virginia michael.shirts at virginia.edu (434)-243-1821 >> XAvier. >> >> On Jul 21, 2011, at 9:26 AM, Bogdan Costescu wrote: >> >>> On Thu, Jul 21, 2011 at 16:30, Mark Abraham >>> wrote: >>>> Extending the checkpoint file format is not programmer-friendly, >>>> never mind >>>> writing save-and-restore code for DD. >>> >>> If it would have been programmer-friendly, wouldn't it have been done >>> already ? :-) >>> >>> Saving DD state was meant to be done at the same time as the >>> checkpoint to have a restart point for both the molecular system state >>> and the distribution of the atoms on nodes. But it doesn't have to be >>> in the same file - the checkpoint file can remain as it is and an >>> additional one can contain the DD state, as long as they are named >>> similarly (f.e. state_stepX.dd) to know which ones to be used >>> together. >>> >>>> I suggest you look at the hidden options to mdrun that allow you to >>>> impose a >>>> particular DD grid that gives satisfactory performance. See "mdrun -h >>>> -hidden". You might have to reverse engineer how to use these from >>>> the code. >>> >>> I'm already using '-dd x y z' for both the tests with and without DLB. >>> PME is not used in some of the simulations (so playing with -npme has >>> no meaning) and -dlb and -reprod I've already mentioned in my previous >>> message. Are there other options that you refer to ? >>> >>> I understand that saving of DD state is not an easy feat. Do you >>> consider this to be a waste of time ? 
Even if the answer is positive I >>> would still be interested in it, as it would allow significantly >>> faster while also reproducible for my simulations. >>> >>> Cheers, >>> Bogdan >>> -- >>> gmx-developers mailing list >>> gmx-developers at gromacs.org >>> http://lists.gromacs.org/mailman/listinfo/gmx-developers >>> Please don't post (un)subscribe requests to the list. Use the >>> www interface or send it to gmx-developers-request at gromacs.org. >> > > -- > gmx-developers mailing list > gmx-developers at gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-developers > Please don't post (un)subscribe requests to the list. Use the > www interface or send it to gmx-developers-request at gromacs.org. From x.periole at rug.nl Thu Jul 21 23:46:49 2011 From: x.periole at rug.nl (XAvier Periole) Date: Thu, 21 Jul 2011 15:46:49 -0600 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: <4E289BBE.2040205@anu.edu.au> References: <4E2837F0.8010605@anu.edu.au> <4E289BBE.2040205@anu.edu.au> Message-ID: On Jul 21, 2011, at 3:35 PM, Mark Abraham wrote: > On 22/07/2011 2:02 AM, XAvier Periole wrote: >> >> Hi, >> >> nothing I can help with here but having the reprod mode running >> with the >> dlb would be really useful! > > It relies on observing timings... how can that be reproducible? > >> And an even more useful option would be to be able to write out >> conformations more often than in the original run. That would allow >> one >> run long simulations and go back and zoom in a particular time >> period of the simulation where some interesting event occurred. > > Hacking some environment variable to do this seems feasible. So I run a simulation on 128 CPUs using the dlb, keep my cpt let's say every hour and then I just decide I want rerun the simulation writing down every 10* more often the xtc file ... this is possible by hacking some environment variables? > > Mark > >> XAvier. >> >> On Jul 21, 2011, at 9:26 AM, Bogdan Costescu wrote: >> >>> On Thu, Jul 21, 2011 at 16:30, Mark Abraham >>> wrote: >>>> Extending the checkpoint file format is not programmer-friendly, >>>> never mind >>>> writing save-and-restore code for DD. >>> >>> If it would have been programmer-friendly, wouldn't it have been >>> done >>> already ? :-) >>> >>> Saving DD state was meant to be done at the same time as the >>> checkpoint to have a restart point for both the molecular system >>> state >>> and the distribution of the atoms on nodes. But it doesn't have to >>> be >>> in the same file - the checkpoint file can remain as it is and an >>> additional one can contain the DD state, as long as they are named >>> similarly (f.e. state_stepX.dd) to know which ones to be used >>> together. >>> >>>> I suggest you look at the hidden options to mdrun that allow you >>>> to impose a >>>> particular DD grid that gives satisfactory performance. See >>>> "mdrun -h >>>> -hidden". You might have to reverse engineer how to use these >>>> from the code. >>> >>> I'm already using '-dd x y z' for both the tests with and without >>> DLB. >>> PME is not used in some of the simulations (so playing with -npme >>> has >>> no meaning) and -dlb and -reprod I've already mentioned in my >>> previous >>> message. Are there other options that you refer to ? >>> >>> I understand that saving of DD state is not an easy feat. Do you >>> consider this to be a waste of time ? Even if the answer is >>> positive I >>> would still be interested in it, as it would allow significantly >>> faster while also reproducible for my simulations. 
>>> >>> Cheers, >>> Bogdan >>> -- >>> gmx-developers mailing list >>> gmx-developers at gromacs.org >>> http://lists.gromacs.org/mailman/listinfo/gmx-developers >>> Please don't post (un)subscribe requests to the list. Use the >>> www interface or send it to gmx-developers-request at gromacs.org. >> > > -- > gmx-developers mailing list > gmx-developers at gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-developers > Please don't post (un)subscribe requests to the list. Use the www > interface or send it to gmx-developers-request at gromacs.org. From x.periole at rug.nl Thu Jul 21 23:47:56 2011 From: x.periole at rug.nl (XAvier Periole) Date: Thu, 21 Jul 2011 15:47:56 -0600 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: References: Message-ID: On Jul 21, 2011, at 3:43 PM, Shirts, Michael (mrs5pt) wrote: >>> And an even more useful option would be to be able to write out >>> conformations more often than in the original run. That would >>> allow one >>> run long simulations and go back and zoom in a particular time >>> period of the simulation where some interesting event occurred. > > I'll add the plug that having this sort of functionality would be > great, if > possible. Could only really be done on the same machine, and may be > impossible since on restart, the order of operations might be > different, and > chaos would get you very quickly, but it would be great! That is what I thought! But mark seem to suggest it is possible. > > Best, > ~~~~~~~~~~~~ > Michael Shirts > Assistant Professor > Department of Chemical Engineering > University of Virginia > michael.shirts at virginia.edu > (434)-243-1821 > > >>> XAvier. >>> >>> On Jul 21, 2011, at 9:26 AM, Bogdan Costescu wrote: >>> >>>> On Thu, Jul 21, 2011 at 16:30, Mark Abraham >>> > >>>> wrote: >>>>> Extending the checkpoint file format is not programmer-friendly, >>>>> never mind >>>>> writing save-and-restore code for DD. >>>> >>>> If it would have been programmer-friendly, wouldn't it have been >>>> done >>>> already ? :-) >>>> >>>> Saving DD state was meant to be done at the same time as the >>>> checkpoint to have a restart point for both the molecular system >>>> state >>>> and the distribution of the atoms on nodes. But it doesn't have >>>> to be >>>> in the same file - the checkpoint file can remain as it is and an >>>> additional one can contain the DD state, as long as they are named >>>> similarly (f.e. state_stepX.dd) to know which ones to be used >>>> together. >>>> >>>>> I suggest you look at the hidden options to mdrun that allow you >>>>> to >>>>> impose a >>>>> particular DD grid that gives satisfactory performance. See >>>>> "mdrun -h >>>>> -hidden". You might have to reverse engineer how to use these from >>>>> the code. >>>> >>>> I'm already using '-dd x y z' for both the tests with and without >>>> DLB. >>>> PME is not used in some of the simulations (so playing with -npme >>>> has >>>> no meaning) and -dlb and -reprod I've already mentioned in my >>>> previous >>>> message. Are there other options that you refer to ? >>>> >>>> I understand that saving of DD state is not an easy feat. Do you >>>> consider this to be a waste of time ? Even if the answer is >>>> positive I >>>> would still be interested in it, as it would allow significantly >>>> faster while also reproducible for my simulations. 
>>>> >>>> Cheers, >>>> Bogdan >>>> -- >>>> gmx-developers mailing list >>>> gmx-developers at gromacs.org >>>> http://lists.gromacs.org/mailman/listinfo/gmx-developers >>>> Please don't post (un)subscribe requests to the list. Use the >>>> www interface or send it to gmx-developers-request at gromacs.org. >>> >> >> -- >> gmx-developers mailing list >> gmx-developers at gromacs.org >> http://lists.gromacs.org/mailman/listinfo/gmx-developers >> Please don't post (un)subscribe requests to the list. Use the >> www interface or send it to gmx-developers-request at gromacs.org. > > -- > gmx-developers mailing list > gmx-developers at gromacs.org > http://lists.gromacs.org/mailman/listinfo/gmx-developers > Please don't post (un)subscribe requests to the list. Use the > www interface or send it to gmx-developers-request at gromacs.org. From hess at cbr.su.se Fri Jul 22 13:56:21 2011 From: hess at cbr.su.se (Berk Hess) Date: Fri, 22 Jul 2011 13:56:21 +0200 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: References: Message-ID: <4E296565.9020006@cbr.su.se> On 07/21/2011 11:47 PM, XAvier Periole wrote: > > On Jul 21, 2011, at 3:43 PM, Shirts, Michael (mrs5pt) wrote: > >>>> And an even more useful option would be to be able to write out >>>> conformations more often than in the original run. That would allow >>>> one >>>> run long simulations and go back and zoom in a particular time >>>> period of the simulation where some interesting event occurred. >> >> I'll add the plug that having this sort of functionality would be >> great, if >> possible. Could only really be done on the same machine, and may be >> impossible since on restart, the order of operations might be >> different, and >> chaos would get you very quickly, but it would be great! > That is what I thought! But mark seem to suggest it is possible. Any dynamic load balancing based on actual timings is never reproducible, unless you would store all the timings, which is very impractical. One could load balance based on flops, as the GMX_DLB_FLOP env var does, which is only intended for debugging purposes. But that will not give good load balancing. Therefore it's not worth storing the complete dlb state. You could use the -dd option and the hidden options -ddcsx, -ddcsy and -ddcsz (see mdrun -h -hidden) to do static load balancing. A string is required with the relative sizes of the domains along each dimension, for example -ddcsx "1.2 0.9 0.9 1.2" for 4 domains along x. But the load balancing efficiency will depend very much on your system. As only a few steps are required for accurate timings, you can quickly try a few -dd and size settings to see if you can get reasonable performance. Berk From bcostescu at gmail.com Fri Jul 22 17:51:07 2011 From: bcostescu at gmail.com (Bogdan Costescu) Date: Fri, 22 Jul 2011 17:51:07 +0200 Subject: [gmx-developers] Reproducible runs with DLB In-Reply-To: <4E296565.9020006@cbr.su.se> References: <4E296565.9020006@cbr.su.se> Message-ID: [ I'll try to put all answers in one message ] On Fri, Jul 22, 2011 at 13:56, Berk Hess wrote: > On 07/21/2011 11:47 PM, XAvier Periole wrote: >> On Jul 21, 2011, at 3:43 PM, Shirts, Michael (mrs5pt) wrote: >>>>> And an even more useful option would be to be able to write out >>>>> conformations more often than in the original run. That would allow one >>>>> run long simulations and go back and zoom in a particular time >>>>> period of the simulation where some interesting event occurred. 
>>> I'll add the plug that having this sort of functionality would be great, >>> if >>> possible. Could only really be done on the same machine, and may be >>> impossible since on restart, the order of operations might be different, >>> and >>> chaos would get you very quickly, but it would be great!

Going back and getting more detailed data is also what I try to do.

> Any dynamic load balancing based on actual timings is never reproducible, unless you would store all the timings, which is very impractical.

Indeed, I was under the wrong impression that FLOPS and not timing was the basis for the default DLB calculations.

> One could load balance based on flops, as the GMX_DLB_FLOP env var does, which is only intended for debugging purposes. But that will not give good load balancing. Therefore it's not worth storing the complete dlb state.

I have made a short test with GMX_DLB_FLOP=1 and the balancing was indeed worse than the default, but not by much; it's much closer to '-dlb yes' than to '-dlb no'. I'm willing to trade a bit of speed for reproducibility.

Please correct me if I'm wrong: when using GMX_DLB_FLOP=1 (no randomness), DLB uses the load in dd_force_load() based on comm->load, which is set in dd_force_flop_start/stop() from values returned by force_flop_count(), which calculates them based on nrnb, which contains the iteration counts returned from the nonbonded kernels. This explains why the load balance is not precise: the operations done in other parts of the code (f.e. bonded interactions) are not accounted for. This also means that the variation of the FLOP-based load is deterministic, so if the DD state keeps being saved during the run, one can go back and restart from one such state and be able to exactly reproduce the DD evolution from that point on. This would also be reproducible when running on a machine different from the one of the original run - but of course with the same nr. of ranks.

> You could use the -dd option and the hidden options -ddcsx, -ddcsy and -ddcsz (see mdrun -h -hidden) to do static load balancing.

After I realized that it's -hidden and not --hidden (too much GNU naming convention in my brain ;-)), I have seen them too. Apologies to Mark for needing to point me twice to that...

> A string is required with the relative sizes of the domains along each dimension, for example -ddcsx "1.2 0.9 0.9 1.2" for 4 domains along x. But the load balancing efficiency will depend very much on your system.

From what I see in the code, these values are only read with '-dlb no', which means that they would work for a system which is mostly static, but if there are some large structural changes - f.e. during protein (un)folding - once atoms move significantly the distribution becomes sub-optimal again. Why are these -ddcs* options hidden ?

> As only a few steps are required for accurate timings, you can quickly try a few -dd and size settings to see if you can get reasonable performance.

Well, I can also try printing out cell sizes from a run with DLB enabled, no ?

Roland Schulz wrote:
> take a look at GMX_DLB_FLOP and GMX_DD_LOAD environment variables defined in domdec.c. They might help with what you are trying to do.

I don't quite understand how GMX_DD_LOAD would help; this only participates in setting comm->bRecordLoad, with a default setting of 1 anyway. Did you mean GMX_DD_DUMP or GMX_DD_DUMP_GRID by any chance ? Anyway, thanks for pointing me in that direction.
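To make the count-based idea concrete, here is a purely illustrative sketch - invented names, costs and counts, not the actual force_flop_count()/dd_force_load() code - of a load estimate that depends only on accumulated kernel counts:

#include <stdio.h>

/* Illustration only: a load estimate built from per-kernel iteration
 * counts and fixed per-iteration costs instead of wall-clock timings.
 * All names, costs and counts below are invented. */
enum { ekVdwCoul, ekVdw, ekCoul, ekNR };

typedef struct {
    double n[ekNR];   /* iteration counts accumulated since the last DD step */
} t_counts;

static const double cost_per_iter[ekNR] = { 38.0, 19.0, 27.0 };

/* Same counts in -> same load out, independent of machine or timers. */
static double count_based_load(const t_counts *c)
{
    double load = 0;
    int    k;

    for (k = 0; k < ekNR; k++)
    {
        load += cost_per_iter[k]*c->n[k];
    }

    return load;
}

int main(void)
{
    t_counts rank0 = { { 1.2e6, 3.0e5, 8.0e4 } };
    t_counts rank1 = { { 0.9e6, 4.1e5, 8.0e4 } };

    printf("load on rank 0: %g\n", count_based_load(&rank0));
    printf("load on rank 1: %g\n", count_based_load(&rank1));

    return 0;
}

Because nothing in such an estimate depends on timers or on the machine, two runs that execute the same kernel calls get the same loads and hence the same cell resizing, which is the property needed for a reproducible restart.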
Cheers,
Bogdan

From sheeba.jem at googlemail.com Sat Jul 23 02:10:28 2011
From: sheeba.jem at googlemail.com (Sheeba Jem)
Date: Fri, 22 Jul 2011 20:10:28 -0400
Subject: [gmx-developers] User defined reaction coordinate for umbrella sampling
Message-ID:

Hi all,

I am trying to evaluate the free energy of peptide folding using umbrella sampling and I need some help in implementing the reaction coordinate in gromacs. The reaction coordinate I want to use is the average C-alpha distance between the i-th and (i+4)-th residues. The umbrella potential would look like:

psi = 1/2 * k * (avg - avg0)^2
avg = (r1 + r2 + ... + rN)/N
r = CA_i - CA_i+4
avg0 = umbrella position

So if I run five umbrella simulations fixing the average calpha distance at say 0.6, 0.7, 0.8, 0.9 and 1.0 nm, then I should be able to use g_wham to get the PMF as a function of the average calpha distance of the peptide. I am studying a short alpha helical peptide, therefore when the peptide is helical the average calpha distance should be close to 0.6 nm.

As far as I understand it is not possible to define 'average values' as pull groups, and what I want to be able to do is: say, when I fix the window at 0.7 nm, at each step during the simulation I need to calculate the average Calpha distance of the peptide, calculate the force (and hence the new set of coordinates) due to the deviation from 0.7, and split that force over the Calpha atoms, either equally or depending on the Calpha distance each atom is involved in. Is it possible to modify the pull.c code to apply such a reaction coordinate?

I understand there are other free energy methods like replica exchange etc. to study the thermodynamics of folding and this is not an ideal reaction coordinate for folding, but once I get a hold of what I can do with the pull code I can try better, more complicated reaction coordinates. The gromacs version I am using is 4.0.5. I appreciate any help.

Thanks
Sheeba
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From uhlig.frank at googlemail.com Sat Jul 23 10:48:56 2011
From: uhlig.frank at googlemail.com (Frank Uhlig)
Date: Sat, 23 Jul 2011 10:48:56 +0200
Subject: [gmx-developers] GMX + ORCA QM/MM
Message-ID:

Hey,

sorry for the late reply. I ran a few tests and it works perfectly fine. Thanks again,

Frank

On Wed, Jul 13, 2011 at 12:03 PM, Gerrit Groenhof wrote:
> I had a look anyway.
>
> On grompp: Since 4.0, the QM atoms need to be in one topology file. Thus if you have 6 waters, you need an atoms section with 6 waters. Dividing the QM atoms over multiple topologies does not work; see below.
>
> On mdrun, the above problem works with gmx/gaussian. I never worked with gmx/orca before, but from the gromacs side there seems no problem anymore.
>
> Hope this helps.
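Coming back to Sheeba's question above: before touching pull.c itself, the collective variable and the chain-rule force splitting can be prototyped in plain C. The sketch below uses invented names, assumes the Calpha positions are already gathered into one array in residue order, and ignores periodic images; it is meant only to show how the restraint force F_a = -k*(avg - avg0)*d(avg)/d(x_a) ends up distributed over the atoms:

#include <math.h>
#include <stdio.h>

#define DIM 3

/* Hypothetical helper, not the pull.c API: computes the average
 * CA(i)-CA(i+4) distance over all i and adds the forces from the
 * umbrella potential V = 0.5*k*(avg - avg0)^2 to f[]. x[] holds only
 * the Calpha positions; periodic images are ignored for simplicity. */
static double umbrella_avg_dist(int ncalpha, double x[][DIM],
                                double k, double avg0, double f[][DIM])
{
    int    npairs = ncalpha - 4;
    double avg    = 0;
    double dvdavg;
    int    p, d;

    /* first pass: the collective variable avg */
    for (p = 0; p < npairs; p++)
    {
        double r2 = 0;

        for (d = 0; d < DIM; d++)
        {
            double dx = x[p][d] - x[p+4][d];
            r2 += dx*dx;
        }
        avg += sqrt(r2);
    }
    avg /= npairs;

    /* second pass: F_a = -k*(avg - avg0)*d(avg)/d(x_a), via the chain rule */
    dvdavg = k*(avg - avg0);
    for (p = 0; p < npairs; p++)
    {
        double dx[DIM], r = 0;

        for (d = 0; d < DIM; d++)
        {
            dx[d] = x[p][d] - x[p+4][d];
            r    += dx[d]*dx[d];
        }
        r = sqrt(r);

        for (d = 0; d < DIM; d++)
        {
            double df = -dvdavg*dx[d]/(r*npairs);

            f[p][d]   += df;   /* force on CA_i   */
            f[p+4][d] -= df;   /* force on CA_i+4 */
        }
    }

    return avg;
}

int main(void)
{
    enum { NCA = 9 };
    double x[NCA][DIM] = { { 0 } };
    double f[NCA][DIM] = { { 0 } };
    double k = 1000.0, avg0 = 0.7, avg;
    int    i;

    /* toy input: a fully extended chain along x, 0.38 nm between CAs */
    for (i = 0; i < NCA; i++)
    {
        x[i][0] = 0.38*i;
    }

    avg = umbrella_avg_dist(NCA, x, k, avg0, f);
    printf("avg CA(i)-CA(i+4) distance: %g nm\n", avg);
    printf("umbrella force on CA 1 along x: %g kJ/mol/nm\n", f[1][0]);

    return 0;
}

The 1/N in d(avg)/d(x_a) is what determines how the force is split; distributing it in any other way (for example strictly equally) would correspond to biasing a different reaction coordinate.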
>
> Best wishes,
>
> Gerrit
>
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
> ; defaults and all atom types ;
> ;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;;
>
> [ defaults ]
> ; nbfunc comb-rule gen-pairs fudgeLJ fudgeQQ
> 1 2 yes 1.0 1.0
>
> [ atomtypes ]
> ; name mass charge ptype sigma epsilon
> OW 8 16.0 -0.8476 A 0.3165492 0.650299
> HW 1 1.0 0.4238 A 0.0 0.0
>
> ;;;;;;;;;;;;;;;
> ; SPC/E water ;
> ;;;;;;;;;;;;;;;
>
> [ moleculetype ]
> ; molname nrexcl
> SOL 1
>
> [ atoms ]
> ; nr type resnr residue atom cgnr charge mass
> 1 OW 1 SOL OW 1 -0.8476
> 2 HW 1 SOL HW1 1 0.4238
> 3 HW 1 SOL HW2 1 0.4238
>
> [ settles ]
> ; OW funct doh dhh
> 1 1 0.1 0.1633
>
> [ exclusions ]
> 1 2 3
> 2 1 3
> 3 1 2
>
> [ moleculetype ]
> ; molname nrexcl
> QM 1
>
> [ atoms ]
> ; nr type resnr residue atom cgnr charge mass
> 1 OW 1 QM OW 1 -0.8476
> 2 HW 1 QM HW1 1 0.4238
> 3 HW 1 QM HW2 1 0.4238
> 4 OW 1 QM OW 1 -0.8476
> 5 HW 1 QM HW1 1 0.4238
> 6 HW 1 QM HW2 1 0.4238
> 7 OW 1 QM OW 1 -0.8476
> 8 HW 1 QM HW1 1 0.4238
> 9 HW 1 QM HW2 1 0.4238
> 10 OW 1 QM OW 1 -0.8476
> 11 HW 1 QM HW1 1 0.4238
> 12 HW 1 QM HW2 1 0.4238
> 13 OW 1 QM OW 1 -0.8476
> 14 HW 1 QM HW1 1 0.4238
> 15 HW 1 QM HW2 1 0.4238
> 16 OW 1 QM OW 1 -0.8476
> 17 HW 1 QM HW1 1 0.4238
> 18 HW 1 QM HW2 1 0.4238
>
> [ system ]
> something weird
>
> [ molecules ]
> QM 1
> SOL 58
>
> On 07/13/2011 10:08 AM, Frank Uhlig wrote:
>> Dear gmx-developers,
>>
>> I have a few comments concerning QM/MM in Gromacs in conjunction with Orca. I am using the latest Gromacs version 4.5.4 and the latest Orca version 2.8.0 to perform QM/MM calculations.
>>
>> 1) it is a bit misleading that in the help of the configure script it is written:
>>
>> --without-qmmm-orca Use ORCA for QM-MM
>>
>> and the same holds for the other three possible programs for QM/MM calculations...
>>
>> 2) I followed the instructions on this webpage:
>>
>> http://wwwuser.gwdg.de/~ggroenh/qmmm.html
>>
>> --> this means ./configure --with-qmmm-orca --without-qmmm-gaussian
>>
>> to build a QM/MM version of GMX together with Orca. The build goes fine and seems to work...
>>
>> I also tried to build the GMX/ORCA-QM/MM version via CMake (i.e., ccmake). Although I activated "orca" as GMX_QMMM_PROGRAM in the gui and (re-)configured, the variable GMX_QMMM_ORCA does not get set in the src/config.h file. Thus, the obtained build will not work for QM/MM calculations...
>>
>> 3) If I configure gromacs as described in the first part of 2) above I obtain a version that seems to work at first. After some experimenting with the general setup I encountered some problems though. I attached all the files necessary to illustrate and reproduce those problems.
>>
>> When putting the QM residues first in the [ molecules ] section in the topology file, grompp fails with a segmentation fault.
>> When putting the QM residues last in the [ molecules ] section in the topology file, mdrun fails with a segmentation fault (mdrun -nt 1) before calling Orca.
>> When putting the QM residues (and all the other residues) in a disordered fashion in the topology file (and not the QM residues first or last) the calculation runs just fine.
>>
>> The included examples all contain the same configuration. They only differ in the order of the residues in the conf.gro, topol.top and index.ndx files.
>>
>> I also included the debug information for the two failing tests. I am not too familiar with C, so I would appreciate your help.
>> If you have any suggestion on how to fix these issues or at least further information on where they are stemming from, please let me know.
>>
>> Best regards and thanks in advance,
>>
>> Frank
>
> --
> gmx-developers mailing list
> gmx-developers at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-developers
> Please don't post (un)subscribe requests to the list. Use the www interface or send it to gmx-developers-request at gromacs.org.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aastr at yahoo.com.br Mon Jul 25 14:46:05 2011
From: aastr at yahoo.com.br (=?iso-8859-1?Q?Andr=E9_Assun=E7=E3o_S=2E_T=2E_Ribeiro?=)
Date: Mon, 25 Jul 2011 05:46:05 -0700 (PDT)
Subject: [gmx-developers] Wiki
Message-ID: <1311597965.63818.YahooMailNeo@web126020.mail.ne1.yahoo.com>

Hi,

Could someone change the link at

http://www.gromacs.org/Downloads/Related_Software/MKTOP

from labmm.iq.ufrj.br/mktop to aribeiro.net.br/mktop ?

Thank you,
Andre.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jalemkul at vt.edu Mon Jul 25 15:09:05 2011
From: jalemkul at vt.edu (Justin A. Lemkul)
Date: Mon, 25 Jul 2011 09:09:05 -0400
Subject: [gmx-developers] Wiki
In-Reply-To: <1311597965.63818.YahooMailNeo@web126020.mail.ne1.yahoo.com>
References: <1311597965.63818.YahooMailNeo@web126020.mail.ne1.yahoo.com>
Message-ID: <4E2D6AF1.4080804@vt.edu>

André Assunção S. T. Ribeiro wrote:
> Hi,
>
> Could someone change the link at
>
> http://www.gromacs.org/Downloads/Related_Software/MKTOP
>
> from labmm.iq.ufrj.br/mktop to aribeiro.net.br/mktop ?
>

Done. I also updated the description, per your previous message to gmx-users.

-Justin

--
========================================
Justin A. Lemkul
Ph.D. Candidate
ICTAS Doctoral Scholar
MILES-IGERT Trainee
Department of Biochemistry
Virginia Tech
Blacksburg, VA
jalemkul[at]vt.edu | (540) 231-9080
http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
========================================

From rossen at kth.se Mon Jul 25 14:54:45 2011
From: rossen at kth.se (Rossen Apostolov)
Date: Mon, 25 Jul 2011 14:54:45 +0200
Subject: [gmx-developers] Wiki
In-Reply-To: <1311597965.63818.YahooMailNeo@web126020.mail.ne1.yahoo.com>
References: <1311597965.63818.YahooMailNeo@web126020.mail.ne1.yahoo.com>
Message-ID: <4E2D6795.3050308@kth.se>

Done.

Cheers,
Rossen

On 7/25/11 2:46 PM, André Assunção S. T. Ribeiro wrote:
> Hi,
>
> Could someone change the link at
>
> http://www.gromacs.org/Downloads/Related_Software/MKTOP
>
> from labmm.iq.ufrj.br/mktop to aribeiro.net.br/mktop ?
>
> Thank you,
> Andre.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xianshine at gmail.com Tue Jul 26 06:12:04 2011
From: xianshine at gmail.com (KONG Xian)
Date: Tue, 26 Jul 2011 12:12:04 +0800
Subject: [gmx-developers] How to study the influence of Lateral Pressure profile on a protein's function?
Message-ID: <001a01cc4b4a$30b25b40$921711c0$@com>

Dear all:

I am working on a research project to study whether the lateral pressure profile influences the protein function. To get different lateral pressure profiles, I used the Parrinello-Rahman P coupling method and anisotropic pressure coupling with different p_ref values (such as 0.9 bar, 1 bar, 1.1 bar, ..., 2 bar) in xy. I wonder whether this method is feasible. If it is not feasible, could anyone please give me a hint on how to do it?

Thank you

KONG Xian
Tsinghua University, Beijing, China
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
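As a point of reference for the setup described above, the anisotropic Parrinello-Rahman coupling could be written roughly like this in the .mdp file; the numbers are placeholders chosen only to illustrate the six-component syntax, not recommended values, and whether such a run really gives the intended lateral pressure profile is exactly the question being asked:

; anisotropic Parrinello-Rahman coupling with a modified lateral reference pressure
pcoupl           = Parrinello-Rahman
pcoupltype       = anisotropic
tau_p            = 5.0
; ref_p and compressibility take six components: xx yy zz xy xz yz
ref_p            = 1.2     1.2     1.0     0.0  0.0  0.0
compressibility  = 4.5e-5  4.5e-5  4.5e-5  0.0  0.0  0.0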
From ileontyev at ucdavis.edu Thu Jul 28 09:21:33 2011
From: ileontyev at ucdavis.edu (Igor Leontyev)
Date: Thu, 28 Jul 2011 00:21:33 -0700
Subject: [gmx-developers] Re: In preparation for 4.5.5 and 4.6 releases
Message-ID: <5222827E4A0E41C4A389D80DA2073F69@homecomp>

> Hi,
>
> We are preparing for a new maintenance release 4.5.5. It will fix critical open issues with previous releases, so please file your reports in redmine.gromacs.org by the end of June.

If it's not too late I would like to mention a couple of cosmetic issues:

1) The printout of node loading for Particle Decomposition is malfunctioning. The output of the bonded interaction distribution is suppressed if the master node is not loaded with that type of interaction, even though other nodes can be loaded. To fix it, replace the lines in the "pr_idef_division" subroutine as follows:

/* BUG: the division is not printed if nr = 0 on master
   if (idef->il[ftype].nr > 0)
   {
       nr = idef->il[ftype].nr;
*/
   if (multinr[ftype][nnodes-1] > 0)
   {

2) For single precision simulations mdrun (4.5.4) prints out "Testing x86_64 SSE2 support" instead of "x86_64 SSE" as it used to be in gmx 4.0.7.

> After the 4.5.5 release, the stable branch will be frozen for bugfixes only, and new functionality will be added to a new release-4-6-patches branch, a fork of release-4-5-patches right after 4.5.5.
>
> Currently the plan is to have in 4.6:
>
> * faster native GPU implementation supporting most of current Gromacs features
> * collective I/O
> * lambda dynamics and other free energy extensions
> * AdResS (http://www.mpip-mainz.mpg.de/~poma/multiscale/adress.php)
> * advanced rotational pulling
> * file history
> * several new tools
> * autoconf removed - support for building only with CMake
>
> Code from contributors will be considered for inclusion also but it's necessary that
>
> * comes with support for the code in future releases, e.g. port it to the completely new C++ structure in the 5.0 release and maintain it after
> * builds against 4.5.5
> * produces scientifically reliable results
> * works in parallel and doesn't affect the performance
> * comes with regression test sets for the new features
> * has the necessary documentation for usage
>
> After 4.5.5 bug fixes need to be applied as:
>
> * bugs in 4.5.5:
>   o fix in 4.5.5 -> fix in 4.6 -> fix in master
> * bugs in the new features introduced in 4.6:
>   o fix in 4.6 -> fix in master
>
> The plan is to have 4.5.5 around end of July, and 4.6-gamma a month later.
>
> Rossen