[gmx-developers] MC integrator

David Mobley dmobley at gmail.com
Thu Aug 6 15:30:56 CEST 2009


I agree that an API would be extremely helpful.

On Aug 6, 2009, at 7:15 AM, Berk Hess wrote:

> David Mobley wrote:
>> There are some other times one might want MC even if MD is truly more
>> efficient for most things we're interested in. The examples that come
>> to mind for me are particular kinds of moves for which MC might be
>> dramatically more efficient, though these probably require extra work
>> beyond "vanilla" MC:
>> - Dihedral space moves of torsions -- for example to help sidechains
>> with large barriers swap rotamers more quickly
>> - GCMC for water insertion/deletion within specified regions (for
>> example, water motions can be extremely slow in ligand binding sites;
>> some recent work from the Roux lab used a GCMC approach in combination
>> with normal MD to improve sampling of water).
>>
>> David
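
A minimal sketch of the standard grand-canonical acceptance test behind the
GCMC move David mentions (textbook formulas only; the energy differences
delta_u would have to come from a local energy evaluation like the one
discussed further down, and the function names are purely illustrative):

    import math

    def gcmc_insertion_accept(beta, mu, volume, n_particles, delta_u, lambda3):
        # Acceptance probability for inserting one particle into 'volume'.
        # delta_u is U(N+1) - U(N); lambda3 is the thermal de Broglie
        # wavelength cubed; mu is the chemical potential.
        return min(1.0, volume / (lambda3 * (n_particles + 1))
                        * math.exp(beta * (mu - delta_u)))

    def gcmc_deletion_accept(beta, mu, volume, n_particles, delta_u, lambda3):
        # Acceptance probability for deleting one particle.
        # delta_u is U(N-1) - U(N).
        return min(1.0, lambda3 * n_particles / volume
                        * math.exp(-beta * (mu + delta_u)))

A trial insertion or deletion is then accepted if a uniform random number in
[0, 1) falls below the returned probability.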
> I agree (your second example is what I meant by different ensembles).
>
> But for such specialized cases, which can indeed provide an enormous
> increase in simulation efficiency, we do not so much need an MC
> integrator as an API that can be called through scripts.
> For your second example the TPI code provides the basic machinery for
> efficient calculation.
> For your first example some work would be required to only locally
> reevaluate bonded and non-bonded energies if you wanted to do this
> often for many sidechains.
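
As an illustration of what the first kind of move involves (plain numpy,
nothing Gromacs-specific; the atom indices and the bookkeeping of which atoms
move are assumed to come from elsewhere), a sidechain torsion move just
rotates the atoms beyond the chosen bond about the bond axis by a random
angle, after which only the energy terms touching the moved atoms need
re-evaluation:

    import numpy as np

    def propose_torsion_move(coords, axis_i, axis_j, moving_atoms, max_angle):
        # Rotate 'moving_atoms' about the bond axis_i -> axis_j by a random
        # angle in [-max_angle, max_angle] radians (Rodrigues formula).
        # Returns trial coordinates; the caller evaluates the local energy
        # change and accepts or rejects the move.
        new = coords.copy()
        origin = coords[axis_i]
        axis = coords[axis_j] - origin
        axis = axis / np.linalg.norm(axis)
        angle = np.random.uniform(-max_angle, max_angle)
        c, s = np.cos(angle), np.sin(angle)
        for a in moving_atoms:
            v = coords[a] - origin
            new[a] = (origin + v * c + np.cross(axis, v) * s
                      + axis * np.dot(axis, v) * (1.0 - c))
        return new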
>
> But I guess it might be better to focus our energy on setting up a
> proper API.
> Then users can write scripts to perform the MC parts of more complex
> sampling schemes.
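
The sort of user script Berk is describing might look roughly like the
following; the api object and its methods (coordinates, evaluate_local_energy,
commit_move) are entirely hypothetical placeholders for whatever such an API
would eventually expose:

    import math
    import random

    def mc_driver(api, propose_move, n_steps, beta):
        # Generic Metropolis loop written against a hypothetical scripting API.
        # propose_move(coords) returns (trial_coords, moved_atom_indices).
        for _ in range(n_steps):
            coords = api.coordinates()
            trial, moved = propose_move(coords)
            # only the interactions touching the moved atoms are recomputed
            delta_u = (api.evaluate_local_energy(trial, moved)
                       - api.evaluate_local_energy(coords, moved))
            if delta_u <= 0.0 or random.random() < math.exp(-beta * delta_u):
                api.commit_move(trial)  # accepted: make the trial state current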
>
> That is not to say that we should not write an MC integrator,
> although I think the most efficient type of MC moves will depend a lot
> on the type of system.
>
> Berk
>
>>
>>
>> On Aug 6, 2009, at 3:12 AM, Berk Hess wrote:
>>
>>> Mark Abraham wrote:
>>>> David van der Spoel wrote:
>>>>> Mark Abraham wrote:
>>>>>> David van der Spoel wrote:
>>>>>>> Erik Marklund wrote:
>>>>>>>> Swell! It's been missing from gromacs for a long time in my
>>>>>>>> opinion.
>>>>>>>>
>>>>>>>> Regarding parallelism, MC is highly parallelizable in
>>>>>>>> "step-space", and computing different MC steps on different
>>>>>>>> processors will most likely be faster than distributing  
>>>>>>>> particles
>>>>>>>> or domains, since virtually nothing needs to be communicated.
>>>>>>>> Therefore, a step-decomposition option is a good idea. Plus, it
>>>>>>>> would be very easy to implement.
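
In its simplest form the step-decomposition Erik describes is just a set of
independent Markov chains started from the same configuration with different
random seeds, whose samples are pooled afterwards. A toy illustration (nothing
Gromacs-specific; the harmonic "energy" stands in for a real force-field call):

    import math
    import random
    from multiprocessing import Pool

    def run_chain(seed, n_steps=10000, beta=1.0):
        # One independent Metropolis chain sampling exp(-beta * x**2);
        # a real chain would evaluate the force field instead.
        rng = random.Random(seed)
        x, samples = 0.0, []
        for _ in range(n_steps):
            trial = x + rng.uniform(-0.5, 0.5)
            if rng.random() < min(1.0, math.exp(-beta * (trial**2 - x**2))):
                x = trial
            samples.append(x)
        return samples

    if __name__ == "__main__":
        with Pool(4) as pool:                        # one chain per worker
            chains = pool.map(run_chain, range(4))   # distinct seeds, no communication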
>>>>>>>
>>>>>>> This is not necessarily true if we move to many processors (tens
>>>>>>> of thousands), which will soon be feasible with gmx 4.1 once it
>>>>>>> is finished. Hence I would prefer if the code did not "interfere"
>>>>>>> with the parallelisation, but rather just used the existing logic.
>>>>>>> Maybe I misunderstand it, but you do a trial move and then
>>>>>>> recompute the energy, right?
>>>>>>
>>>>>> Reworking the multi-simulation feature that already exists  
>>>>>> would be
>>>>>> the best of all worlds. In REMD you have a bunch of the same  
>>>>>> system
>>>>>> in different states, which sometimes move, but in straight MC you
>>>>>> can have many copies of the same system computing different trial
>>>>>> moves. The catch is that when you accept a move, you probably  
>>>>>> have
>>>>>> to throw away any work on the old state and communicate the
>>>>>> successful move to all systems. Does that suggest constructing
>>>>>> low-probability moves to minimise wastage? Dunno.
>>>>>
>>>>> Multisim sits on top of the normal parallelisation, so you still
>>>>> need to adapt to DD. But what could be simpler than handling the
>>>>> trial moves sequentially? Nevertheless, once a DD implementation
>>>>> works that does the trial moves sequentially, the Multisim solution
>>>>> that you are suggesting would be simple to implement, and it would
>>>>> probably lead to slightly higher efficiency.
>>>>
>>>> :-) By the same token, what could be simpler than *lots* of
>>>> single-processor simulations that don't need any decomposition? For
>>>> a number of processors sufficiently high relative to the scaling
>>>> limits of the domain decomposition, the computation lost when an
>>>> accepted move renders the existing work worthless might be lower
>>>> than the loss incurred by the decomposition communication.
>>>>
>>>> Or, if you don't need detailed balance, whenever a processor  
>>>> tries a
>>>> move, it tells all other processors that they can't try any move  
>>>> that
>>>> might clash. Then when one accepts a move, it communicates that  
>>>> to all
>>>> the other processors who apply it when next convenient. Sampling  
>>>> comes
>>>> from all replicas.
>>>>
>>>> Meh, time to quit with the random ideas, already :-)
>>>>
>>>> Mark
>>> I think there is a much more serious issue with MC performance in
>>> Gromacs.
>>> Currently Gromacs is optimized to very efficiently determine the
>>> energy of the whole system, which moves slowly with time.
>>> For MC you want to be able to quickly evaluate changes in the energy
>>> due to local moves.
>>> In most cases this would mean that you would only want to go through
>>> the neighborlist of the atoms that move and compute the energy before
>>> and after the move.
>>> You would never need the energy of the whole system.
>>> How the efficiency turns out depends very much on the system size.
>>> But MC might anyhow be inefficient for large systems.
>>> Currently there is code for doing something similar, in the TPI
>>> "integrator".
>>> This only determines local energies; parallelism is handled by giving
>>> different "steps" to different processors, which in this case
>>> completely decouple.
>>> But performance-wise TPI is much nicer than general MC, since it can
>>> perform many insertions with a minor translation or rotation at the
>>> same location with the same local neighborlist.
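
To make the "local energies" point concrete, a sketch of evaluating the energy
change of a single-atom move by summing only over that atom's neighbour list
(plain Python, purely illustrative; pair_energy stands in for whatever
non-bonded kernel would actually be used):

    def local_delta_energy(old_coords, new_coords, moved_atom,
                           neighbour_list, pair_energy):
        # Energy change from moving one atom, summing only over its
        # neighbour list instead of recomputing the whole system.
        delta = 0.0
        for j in neighbour_list[moved_atom]:
            delta += (pair_energy(new_coords[moved_atom], new_coords[j])
                      - pair_energy(old_coords[moved_atom], old_coords[j]))
        return delta

The same difference then feeds a Metropolis test like the one sketched earlier.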
>>>
>>> I think the solution depends a lot on what you would want to use MC
>>> for.
>>> In most cases MD will be far more efficient than MC.
>>> The only case I can think of where MC would do better is potentials
>>> with hard walls, in which case you cannot use MD, but in that case
>>> Gromacs would not be suitable anyhow.
>>> Or maybe there are some special ensembles for which MD is
>>> inconvenient?
>>>
>>> Berk
>>>



