[gmx-developers] MC integrator

Berk Hess hess at cbr.su.se
Thu Aug 6 15:36:04 CEST 2009


You can have a look at the TPI code.
Most of the optimization is hidden in the ns and force code
through checking the fr->n_tpi variable.
TPI is a lot simpler than general MC, because the molecule that changes
is fixed (it is always the last molecule in the system).
Currently TPI also only works if that molecule is a whole charge group.
The same concept can be used for MC (or any code that wants only
local forces and energies), but that is some work.
Also, neighbor searching is a lot more expensive than only calculating
forces or energies.
We might want to add an option to the ns call that builds full, two-way
neighbor lists instead of a half matrix. Then you can simply call the
force/energy routine for the molecule/charge group(s)/atom(s) that you want.
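
To make that concrete, below is a minimal, self-contained sketch of the idea:
evaluate only the energy of the moved atoms against a full, two-way neighbor
list, before and after the trial move, and accept or reject on the local
energy difference. None of the names below are the real Gromacs data
structures or API (t_nblist_full, local_energy, accept_move are made up for
illustration); it only shows the shape of the loops.

/* Schematic only, NOT the Gromacs API: local energy over a two-way
 * neighbor list plus a Metropolis test on the local energy difference. */
#include <math.h>
#include <stdlib.h>

typedef struct {
    int  nri;     /* number of i-atoms (the moved atoms)          */
    int *iinr;    /* i-atom indices                               */
    int *jindex;  /* start of each i-atom's j-list, length nri+1  */
    int *jjnr;    /* j-atom indices, stored in both directions    */
} t_nblist_full;

/* Lennard-Jones energy of the moved atoms against their neighbors,
 * using coordinates x[3*natoms] and a single c6/c12 pair for brevity. */
static double local_energy(const t_nblist_full *nl, const double *x,
                           double c6, double c12)
{
    double e = 0;
    int    i, j;

    for (i = 0; i < nl->nri; i++)
    {
        int ai = nl->iinr[i];
        for (j = nl->jindex[i]; j < nl->jindex[i+1]; j++)
        {
            int    aj    = nl->jjnr[j];
            double dx    = x[3*ai]   - x[3*aj];
            double dy    = x[3*ai+1] - x[3*aj+1];
            double dz    = x[3*ai+2] - x[3*aj+2];
            double rinv2 = 1.0/(dx*dx + dy*dy + dz*dz);
            double rinv6 = rinv2*rinv2*rinv2;

            e += c12*rinv6*rinv6 - c6*rinv6;
        }
    }
    return e;
}

/* Metropolis test on the local energy difference only:
 * e_old/e_new are local_energy() before/after the trial move. */
static int accept_move(double e_old, double e_new, double beta)
{
    double de = e_new - e_old;

    return de <= 0 || (double)rand()/RAND_MAX < exp(-beta*de);
}

In the real code the pair loop would of course go through the optimized
nonbonded kernels and would have to deal with exclusions, cut-offs, charge
groups and long-range electrostatics, but the point stands: with a two-way
list the cost of a trial move scales with the number of neighbors of the
moved atoms, not with the system size.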

I would not worry too much about parallelization at this moment.
Optimizing the force calculation will give you much more improvement.
Also, I don't see you wanting to use MC for really large systems,
so the single-processor case is probably the most useful one.

Berk


André Assuncao da Silva Teixeira Ribeiro wrote:
> The current code calculates the energy contributions from all atoms,
> just as in MD, which is obviously very inefficient. Right now I am trying
> to understand the neighbour search algorithm in order to be able to
> calculate the energy for only those atoms that moved. Once changes in
> internal coordinates are implemented, it is not clear to me how I would
> efficiently handle bonded interactions, i.e., is there any "easy" way to
> calculate only those interactions involving atoms that moved?
>
> With regard to the parallelism question, I had considered step-space
> decomposition, as Erik pointed out, but I did not want to make such a
> radical change from the current code, at least not at first. I will try
> to make MC work with domain decomposition, and after that we can see how
> it goes.
>
> Cheers.
> Andre.
>
>  
> On Thursday, 06.08.2009 at 10:12 +0200, Berk Hess wrote:
>   
>> Mark Abraham wrote:
>>     
>>> David van der Spoel wrote:
>>>       
>>>> Mark Abraham wrote:
>>>>         
>>>>> David van der Spoel wrote:
>>>>>           
>>>>>> Erik Marklund wrote:
>>>>>>             
>>>>>>> Swell! It's been missing from Gromacs for a long time, in my opinion.
>>>>>>>
>>>>>>> Regarding parallelism, MC is highly parallelizable in
>>>>>>> "step-space", and computing different MC steps on different
>>>>>>> processors will most likely be faster than distributing particles
>>>>>>> or domains, since virtually nothing needs to be communicated.
>>>>>>> Therefore, a step-decomposition option is a good idea. Plus, it
>>>>>>> would be very easy to implement.
>>>>>>>               
>>>>>> This is not necessarily true if we move to many (tens of thousands
>>>>>> of) processors, which will soon be feasible with gmx 4.1 once it is
>>>>>> finished. Hence I would prefer it if the code did not "interfere" with
>>>>>> the parallelisation, but rather just used the existing logic.
>>>>>> Maybe I misunderstand it, but you do a trial move and then
>>>>>> recompute the energy, right?
>>>>>>             
>>>>> Reworking the multi-simulation feature that already exists would be
>>>>> the best of all worlds. In REMD you have a bunch of the same system
>>>>> in different states, which sometimes move, but in straight MC you
>>>>> can have many copies of the same system computing different trial
>>>>> moves. The catch is that when you accept a move, you probably have
>>>>> to throw away any work on the old state and communicate the
>>>>> successful move to all systems. Does that suggest constructing
>>>>> low-probability moves to minimise wastage? Dunno.
>>>>>           
>>>> Multisim sits on top of the normal parallelisation, so you still
>>>> need to adapt to DD. But what could be simpler than handling the
>>>> trial moves sequentially? Nevertheless, once a DD implementation
>>>> works that does the trial moves sequentially, the Multisim solution
>>>> you are suggesting would be simple to implement, and it would
>>>> probably lead to slightly higher efficiency.
>>>>         
>>> :-) By the same token, what could be simpler than *lots* of
>>> single-processor simulations that don't need any decomposition? For a
>>> processor count high enough relative to the scaling limits of the
>>> domain decomposition, the computation thrown away when an accepted
>>> move invalidates work in progress might be less than the loss
>>> incurred by the decomposition communication.
>>>
>>> Or, if you don't need detailed balance, whenever a processor tries a
>>> move, it tells all other processors that they can't try any move that
>>> might clash. Then, when one accepts a move, it communicates that to all
>>> the other processors, which apply it when next convenient. Sampling comes
>>> from all replicas.
>>>
>>> Meh, time to quit with the random ideas, already :-)
>>>
>>> Mark
>>>       
>> I think there is a much more serious issue with MC performance in Gromacs.
>> Currently Gromacs is optimized to very efficiently determine the energy of
>> the whole system, which moves slowly with time.
>> For MC you want to be able to quickly evaluate changes in the energy due
>> to local moves. In most cases this means that you would only want to go
>> through the neighborlist of the atoms that move and compute the energy
>> before and after the move. You would never need the energy of the whole
>> system.
>> How the efficiency turns out depends very much on the system size.
>> But MC might be inefficient for large systems anyhow.
>> Currently there is code that does something similar: the TPI "integrator".
>> It only determines local energies; parallelism is handled by giving
>> different "steps" to different processors, which in this case are
>> completely decoupled.
>> But performance-wise TPI is much nicer than general MC, since it can
>> perform many insertions with a minor translation or rotation at the same
>> location, using the same local neighborlist.
>>
>> I think the solution depends a lot on what you want to use MC for.
>> In most cases MD will be far more efficient than MC.
>> The only case I can think of where MC would do better is a potential with
>> hard walls, where you cannot use MD, but then Gromacs would not be well
>> suited anyhow.
>> Or maybe there are some special ensembles for which MD is inconvenient?
>>
>> Berk
>>
>>     
>
>
>
>   



