[gmx-developers] What is the function 'dd_collect_state' used for?
Mark Abraham
Mark.Abraham at anu.edu.au
Tue Mar 22 14:06:31 CET 2011
On 22/03/2011 8:15 PM, Yukun Wang wrote:
> I don't know whether there is a function in Gromacs that can get and
> change the local information of a specific group of atoms (for
> example, data in the t_state data structure), given a group defined
> in an index file supplied with grompp's -n option.
> Then I could call this function on the master node and couple several
> simulations simultaneously.
Data structures exist that contain this information, but it's not in one
usable place. Each processor knows which atoms are local to it, but I
don't think they all know which processor has each atom. Use the
collect_state functionality to prove your concept, and if you need extra
performance, think about doing the weeks of work to implement something
much harder but a little faster.
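For instance, here is a minimal sketch of the collect-on-master idea,
assuming the current 4.x internals (dd_collect_state is defined in
src/mdlib/domdec.c); group_n and group_ind are hypothetical
placeholders for an index group you would read in yourself:

    /* Sketch only: gather the distributed state onto the master rank,
     * then operate on an index group's coordinates there.  group_n and
     * group_ind stand in for an index group read from an .ndx file
     * (the one you would give grompp -n). */
    int  group_n   = 0;    /* number of atoms in the coupled group */
    int *group_ind = NULL; /* their global atom indices            */
    int  i;

    if (DOMAINDECOMP(cr))
    {
        dd_collect_state(cr->dd, state, state_global);
    }
    if (MASTER(cr))
    {
        rvec *x = state_global->x;

        for (i = 0; i < group_n; i++)
        {
            /* Exchange or modify x[group_ind[i]] with the partner
             * simulation here, e.g. over cr->ms, much as the REMD
             * code in src/kernel/repl_ex.c communicates. */
        }
    }

After changing the state on the master, you would of course have to
redistribute it to the other nodes before the simulation continues.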
Mark
> 2011/3/22 David van der Spoel <spoel at xray.bmc.uu.se>
>
> On 2011-03-22 04.42, Mark Abraham wrote:
>
> On 22/03/2011 2:38 PM, Yukun Wang wrote:
>
> Hi,
> What is the function 'dd_collect_state' used for?
>
>
> It collects the state of the simulation system, which was
> distributed
> across the parallel nodes, into a single structure.
>
> In md.c there is a call:
> dd_collect_state(cr->dd, state, state_global)
> I don't know what it means, or where it is defined.
>
> I want to realize this with Gromacs: several simulations that are
> coupled weakly, exchanging only the position information of a
> group of atoms every n steps, with each simulation itself running
> in parallel. The trouble is that, for each simulation parallelized
> with domain decomposition, this group of atoms will be distributed
> over different nodes.
> How can I get those data from the different nodes at run time? If
> I put my self-written coupling code on the master node of each
> simulation, how do I do this data-gathering job?
>
>
> Either you need to collect on the master node of each simulation
> and communicate between the masters before redistributing, or write
> complex (but ultimately more scalable) code to communicate between
> all the processors. The REMD implementation should be a good model
> for the former.
>
> Indeed, or the more general mdrun -multi option.
>
> Mark
>
>
>
> --
> David van der Spoel, Ph.D., Professor of Biology
> Dept. of Cell & Molec. Biol., Uppsala University.
> Box 596, 75124 Uppsala, Sweden. Phone: +46184714205.
> spoel at xray.bmc.uu.se
> http://folding.bmc.uu.se