[gmx-developers] Parallel g_hbond
Erik Marklund
erikm at xray.bmc.uu.se
Sat Mar 28 13:24:14 CET 2009
Hi,
Fair enough. That sounds like a good idea. In that case I can let my
code go in due time. But until then, if anyone wants my parallel version,
just contact me off-list.
/Erik
hessb at mpip-mainz.mpg.de wrote:
> Hi,
>
> I would suggest a far simpler and far more general parallelization
> of all the tools.
> You could simply parallelize every tool by assigning different frames
> to individual threads or MPI processes. This could be done at the level
> of Teemu's new analysis library, which uses a callback routine to
> analyze each frame. We will already have to slightly modify each tool
> to use such a callback. Different threads or MPI processes
> could simply call this function for different frames.
> In that way all tools would be 100% parallelized.
> I guess that with threads this would really require no extra
> work for each tool, since the data would be accumulated
> automatically due to the shared memory.
>
> Berk
>
>
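For concreteness, a minimal sketch of what such per-frame parallelization
could look like with OpenMP threads. The names analyze_frame, t_frame and
t_result are placeholders for illustration only, not the actual interface
of Teemu's library, and a real tool would reduce per-thread partial
results instead of serializing on a critical section:

#include <stdio.h>

typedef struct { double x; } t_frame;            /* placeholder frame data      */
typedef struct { double sum; int n; } t_result;  /* accumulated analysis result */

/* the per-frame callback a tool would provide */
static void analyze_frame(const t_frame *fr, t_result *res)
{
    res->sum += fr->x;
    res->n   += 1;
}

int main(void)
{
    enum { NFRAMES = 1000 };
    static t_frame frames[NFRAMES];   /* pretend these came from the trajectory */
    t_result       res = { 0.0, 0 };
    int            i;

    for (i = 0; i < NFRAMES; i++)
    {
        frames[i].x = 0.1 * i;
    }

    /* each thread calls the callback for a different subset of frames;
     * the shared result struct must be protected, here with a critical
     * section for brevity */
#pragma omp parallel for
    for (i = 0; i < NFRAMES; i++)
    {
#pragma omp critical
        analyze_frame(&frames[i], &res);
    }

    printf("average = %g over %d frames\n", res.sum / res.n, res.n);
    return 0;
}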
>> Hi fellow developers,
>>
>> I wrote a parallel version of g_hbond using OpenMP a few months ago,
>> prompted by the large systems simulated in our group (plus I needed a
>> programming project for a course I took). For my small test system,
>> composed of MeOH and water, the calculation of the ACF and related
>> quantities shows a speedup that is linear with slope ~0.95. For my
>> larger test system, a virus capsid, the grid loop scales very well,
>> with a relative speedup that is almost a straight line with slope
>> ~0.95 when plotted against the number of cores used. In both cases
>> I/O is a nasty bottleneck, so the overall speedup ends up scaling
>> more like ~0.6 x n_cores.
>>
>> Despite the I/O bottleneck, the parallel code should be useful
>> for anyone analyzing large systems. The problem is the use of OpenMP,
>> which is not supported by, e.g., fairly recent versions of gcc (from
>> around 2006?). I discussed this matter some time ago with Erik Lindahl
>> and David van der Spoel. I feel that it would be wasteful to let the
>> parallel code rot away, but I will not have the time to continually
>> merge new revisions into it. All OpenMP pragmas are encapsulated in
>> cpp conditionals, which makes the code slightly bloated and a bit
>> harder to read, but should let it compile on most systems. Furthermore,
>> the parallel-friendly variables would also go well with pthreads and
>> the like. My question is what to do with the code. Should I commit it
>> to CVS as gmx_hbond.c, or should it go somewhere else?
>>
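As a small illustration of that kind of encapsulation (not the actual
g_hbond code): the compiler-defined _OPENMP macro can guard both the
omp.h include and the runtime calls, with trivial serial fallbacks, so
the same file builds with or without OpenMP support:

#include <stdio.h>

#ifdef _OPENMP
#include <omp.h>
#else
/* serial fallbacks so the rest of the code can call these unconditionally */
static int omp_get_thread_num(void)  { return 0; }
static int omp_get_num_threads(void) { return 1; }
#endif

int main(void)
{
#ifdef _OPENMP
#pragma omp parallel
#endif
    {
        printf("thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}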
>> Cheers,
>>
>> --
>> -----------------------------------------------
>> Erik Marklund, PhD student
>> Laboratory of Molecular Biophysics,
>> Dept. of Cell and Molecular Biology, Uppsala University.
>> Husargatan 3, Box 596, 75124 Uppsala, Sweden
>> phone: +46 18 471 4537 fax: +46 18 511 755
>> erikm at xray.bmc.uu.se http://xray.bmc.uu.se/molbiophys
>>
--
-----------------------------------------------
Erik Marklund, PhD student
Laboratory of Molecular Biophysics,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596, 75124 Uppsala, Sweden
phone: +46 18 471 4537 fax: +46 18 511 755
erikm at xray.bmc.uu.se http://xray.bmc.uu.se/molbiophys