[gmx-users] confusion about implicit solvent
mark.j.abraham at gmail.com
Mon Sep 23 22:22:38 CEST 2013
On Mon, Sep 23, 2013 at 8:08 PM, Szilárd Páll <szilard.pall at cbr.su.se> wrote:
> Admittedly, both the documentation on these features and the
> communication on the known issues with these aspects of GROMACS have
> been lacking.
> Here's a brief summary/explanation:
> - GROMACS 4.5: implicit solvent simulations are possible using mdrun-gpu,
> which is essentially mdrun + OpenMM; hence it has some limitations,
> most notably that it can only run on a single GPU. The performance,
> depending on settings, can be up to 10x higher than on the CPU.
> - GROMACS 4.6: the native GPU acceleration supports only explicit
> solvent; mdrun + OpenMM is still available (precisely for implicit
> solvent runs), but has been moved to the "contrib" section, which means
> it is not fully supported. Moreover, OpenMM support - unless
> somebody volunteers to maintain the mdrun-OpenMM interface -
> will be dropped in the next release.
> I can't comment much on the implicit solvent code on the CPU side,
> other than that there have been issues which, AFAIK, limit the
> parallelization to a rather small number of cores, so the
> achievable performance is also limited. I hope others can clarify this.
IIRC the best 4.5 performance for CPU-only implicit solvent used
infinite cut-offs and SIMD acceleration. The SIMD is certainly broken
in 4.6 (and IIRC was explicitly disabled at some point after 4.6.3).
There is limited enthusiasm for fixing things (e.g. see parts of
http://redmine.gromacs.org/issues/1292) but nobody with the skills has
so far applied the time to do so. As always with an open-source
project, if you want something, be prepared to roll up your sleeves
and work, or hit your knees and pray! :-)
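To make the "infinite cut-offs" point above concrete, here is a minimal .mdp sketch of the settings involved. This is an illustration only, not a validated input file; the parameter names follow the GROMACS 4.x manual, and setting the radii to 0 is the documented way to request infinite-range (all-vs-all) interactions:

```
; Sketch: infinite cut-offs for implicit (GB) solvent, GROMACS 4.x
; A value of 0 for the radii means "infinite" here.
ns_type          = simple   ; required when nstlist = 0
nstlist          = 0        ; no neighbour-list updates
rlist            = 0        ; infinite neighbour-list radius
rcoulomb         = 0        ; infinite Coulomb cut-off
rvdw             = 0        ; infinite van der Waals cut-off
rgbradii         = 0        ; GB radii cut-off; must match rlist
pbc              = no       ; no periodic box with infinite cut-offs
implicit_solvent = GBSA
gb_algorithm     = OBC
```

Note that Francesco's input below already uses essentially this setup, so in his case the poor performance comes from the broken/disabled SIMD kernels rather than from the cut-off choices.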
> On Mon, Sep 23, 2013 at 7:34 PM, Francesco <fracarb at myopera.com> wrote:
>> Good afternoon everybody,
>> I'm a bit confused about GROMACS performance with implicit solvent.
>> I'm simulating a 1000-residue protein with explicit solvent, using both
>> a CPU and a GPU cluster.
>> With a GPU node (12 cores and 3 M2090 GPUs) I reach 10 ns/day, while
>> with 144 cores and no GPU I get 34 ns/day.
>> Because I have several mutants (more than 50) I have to reduce the
>> average simulation time, and I was considering different options, such
>> as the use of implicit solvent.
>> I tried on both clusters, using GROMACS 4.6 and 4.5, but the
>> performance is terrible (1 day per 100 ps) compared to the explicit runs.
>> I read all the other messages on the mailing list and the documentation,
>> but the mix of old and new "features"/posts really confuses me.
>> It is said that with the GPU, 4.5 and implicit solvent I should expect
>> a "substantial speedup".
>> Here ( ) I found this sentence: "It is ultimately up to you as a user
>> to decide what simulations setups to use, but we would like to
>> emphasize the simply amazing implicit solvent performance provided by
>> GPUs."
>> I followed the advice found on the mailing list and read both the
>> documentation (site and manual), but I can't figure out what I
>> should do.
>> How do you guys get such amazing performance?
>> I also found this answer, from a post last March,
>> that confuses me even more.
>> "Performance issues are known. There are plans to implement the implicit
>> solvent code for GPU and perhaps allow for better parallelization, but I
>> don't know what the status of all that is. As it stands (and as I have
>> said before on this list and to the developers privately), the implicit
>> code is largely unproductive because the performance is terrible. "
>> Should I drop the idea of using implicit solvent and try something else?
>> This is the set of parameters I used (together with the -pd flag):
>> ; Run parameters
>> integrator = sd
>> tinit = 0
>> nsteps = 50000
>> dt = 0.002
>> ; Output control
>> nstxout = 5000
>> nstvout = 5000
>> nstlog = 5000
>> nstenergy = 5000
>> nstxtcout = 5000
>> xtc_precision = 1000
>> energygrps = system
>> ; Bond parameters
>> continuation = no
>> constraints = all-bonds
>> constraint_algorithm = lincs
>> lincs_iter = 1
>> lincs_order = 4
>> lincs_warnangle = 30
>> ; Neighborsearching
>> ns_type = simple
>> nstlist = 0
>> rlist = 0
>> rcoulomb = 0
>> rvdw = 0
>> ; Electrostatics
>> coulombtype = cut-off
>> pbc = no
>> comm_mode = Angular
>> implicit_solvent = GBSA
>> gb_algorithm = OBC
>> nstgbradii = 1
>> rgbradii = 0
>> gb_epsilon_solvent = 80
>> gb_dielectric_offset = 0.009
>> sa_algorithm = Ace-approximation
>> sa_surface_tension = 0.0054
>> ; Temperature coupling
>> tcoupl = v-rescale
>> tc_grps = System
>> tau_t = 0.1
>> ref_t = 310
>> ; Velocity generation
>> gen_vel = yes
>> ld_seed = -1
>> Thank you for the help.
>> Francesco Carbone
>> PhD student
>> Institute of Structural and Molecular Biology
>> UCL, London
>> fra.carbone.12 at ucl.ac.uk
>> gmx-users mailing list gmx-users at gromacs.org
>> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> * Please don't post (un)subscribe requests to the list. Use the
>> www interface or send it to gmx-users-request at gromacs.org.
>> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists