[gmx-developers] slightly odd-looking code
jan
rtm443x at googlemail.com
Tue Mar 24 20:38:33 CET 2020
Hiya,
On 24/03/2020, Mark Abraham <mark.j.abraham at gmail.com> wrote:
> Hi gmx developers!
>
>
> On Tue, 24 Mar 2020 at 15:29, jan <rtm443x at googlemail.com> wrote:
>
[snip]
> Great - you're coming at the questions from a totally different
> perspective, which is healthy for everyone, but that's going to give you a
> steep learning curve.
Noooo kidding. We're going to see how much more I've bitten off than I can chew.
> There are some useful recorded webinars from BioExcel, given particularly
> by Paul, Szilard, Carsten, and me over recent years, that are a good
> starting point for understanding how the code operates at run time, but
> you should look for something else for "intro to molecular dynamics for
> non-scientists." There's a bunch of material online - has anybody got
> suggestions?
Suggestions welcome, as are any links to the talks - I might as well try to
follow them. The most value will come at the highest level, so I'll aim for that.
>
[snip]
> been a priority. So go native one way or the other :-)
OK. CentOS it is unless I hear another suggestion.
>
[snip]
>>
>
> Sigh, that's a broken implementation of a new feature that I never thought
> was worth its cost. Don't know how to fix it.
Will go via git. No probs if the same build instructions work.
>
[megasnip]
>>
>> Yes, I thought this might be the case. Definitely worth it for newer chips.
>> However, please note that SIMD code on later chips does not always mix
>> well with non-SIMD code and can *cost* performance overall:
>> <https://blog.cloudflare.com/on-the-dangers-of-intels-frequency-scaling/>
>>
>
> Yes thanks, most of us know ;-) Just updating to add AVX2 would give a
> clear win.
Interesting that you know this already. It strongly implies that any
low-hanging fruit I thought I saw is illusory.
>
>
[snip]
>>
>
> Memory? What's that? :-D GROMACS memory usage is typically measured in
> megabytes, with sophisticated data-parallelism to keep the working set for
> each core down around cache sizes. Obviously you can scale up the problem
> to get out of cache, but the problem sizes that suit interesting science
> are comparable with the amount of L3 cache you get on a socket these days.
Oh, I was expecting gigabyte data sets, which I was afraid would choke any
possible gains from AVX due to having to pull from RAM. Going SIMD looks
like a winner, then.
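As a rough sense of scale, here is a back-of-envelope sketch with purely
illustrative numbers - the atom count and choice of arrays below are my
assumptions, not GROMACS internals:

    #include <cstdio>

    int main()
    {
        // illustrative only: a mid-sized MD system and its core per-step arrays
        const long atoms        = 100000;            // assumed atom count
        const long bytesPerRvec = 3 * sizeof(float); // x, y, z in single precision
        const long arrays       = 3;                 // coordinates, velocities, forces
        const double workingSetMiB =
                static_cast<double>(atoms * bytesPerRvec * arrays) / (1024.0 * 1024.0);
        std::printf("rough per-step working set: %.1f MiB\n", workingSetMiB);
        return 0;
    }

A few megabytes for the core arrays is comparable to the L3 cache on a
socket, which fits the point above about keeping the working set per core
near cache sizes.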
>
> There's a big pile of code in the repo that warrants exhaustive
> optimization, and a lot that is used by only a handful of people, which
> generally doesn't. It's hard to make a valuable impact in either kind of
> place, for different reasons.
Let's worry if/when I get that far.
cheers
jan
>
> Mark
>
> Happy to take this offline and reduce mailing list clutter.
>>
>> cheers
>>
>> jan
>>
>> >
>> > Mark
>> >
>> > On Mon, 23 Mar 2020 at 14:59, jan <rtm443x at googlemail.com> wrote:
>> >
>> >> Hi,
>> >> I'm a general back-end dev. Given the situation, and folding at home
>> >> using GROMACS, I thought I'd poke through the code. I noticed
>> >> something unexpected and was advised to email it here. In edsam.cpp,
>> >> this:
>> >>
>> >>
>> >> void do_linacc(rvec* xcoll, t_edpar* edi)
>> >> {
>> >>     /* loop over linacc vectors */
>> >>     for (int i = 0; i < edi->vecs.linacc.neig; i++)
>> >>     {
>> >>         /* calculate the projection */
>> >>         real proj = projectx(*edi, xcoll, edi->vecs.linacc.vec[i]);
>> >>
>> >>         /* calculate the correction */
>> >>         real preFactor = 0.0;
>> >>         if (edi->vecs.linacc.stpsz[i] > 0.0)
>> >>         {
>> >>             if ((proj - edi->vecs.linacc.refproj[i]) < 0.0)
>> >>             {
>> >>                 preFactor = edi->vecs.linacc.refproj[i] - proj;
>> >>             }
>> >>         }
>> >>         if (edi->vecs.linacc.stpsz[i] < 0.0)
>> >>         {
>> >>             if ((proj - edi->vecs.linacc.refproj[i]) > 0.0)
>> >>             {
>> >>                 preFactor = edi->vecs.linacc.refproj[i] - proj;
>> >>             }
>> >>         }
>> >> [...]
>> >>
>> >>
>> >> In both cases it reaches the same code
>> >>
>> >> preFactor = edi->vecs.linacc.refproj[i] - proj
>> >>
>> >> That surprised me a bit - is it deliberate? If so, it may be that the
>> >> code can be simplified anyway.
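Either way, a minimal sketch (illustrative only, reusing the field names from
the excerpt above) of how the two branches could be folded into one without
changing behaviour:

    real preFactor = 0.0;
    const real diff = proj - edi->vecs.linacc.refproj[i];
    /* both original branches assign refproj[i] - proj; the correction is
       applied only when the projection has moved against the sign of the
       step size */
    if ((edi->vecs.linacc.stpsz[i] > 0.0 && diff < 0.0)
        || (edi->vecs.linacc.stpsz[i] < 0.0 && diff > 0.0))
    {
        preFactor = -diff; /* identical to refproj[i] - proj */
    }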
>> >>
>> >> That aside, if you're looking for performance I might be able to help.
>> >> I don't know the high level stuff *at this point* and my C++ is so
>> >> rusty it creaks, but I can brush that up, do profiling and whatnot.
>> >> I'm pretty experienced, just not in this area. Speeding things up is
>> >> something I've got a track record of (though I usually have a good
>> >> feel for the problem domain first, which I don't here)
>> >>
>> >> Would it be of some value for me to try getting more speed? If so,
>> >> the first thing I'd need is to get this running under Cygwin, which I'm
>> >> struggling with.
>> >>
>> >> regards
>> >>
>> >> jan
>> >
>