[gmx-users] Intel composer vs. Intel Studio
Mark Abraham
mark.abraham at anu.edu.au
Thu Dec 29 03:02:19 CET 2011
On 29/12/11, "Peter C. Lai" <pcl at uab.edu> wrote:
> What performance are you getting that you want to improve more?
> Here's a datapoint from the last simulation I ran:
>
> Currently running GROMACS 4.5.4 built with icc+fftw+openmpi on InfiniBand
> QDR, and I get about 9.7 ns/day on 64 PP nodes with 4 PME nodes (68 total,
> 2.66 GHz X5650) on my 99113-atom system in single precision....
It is very likely you can do better by following grompp's advice and dedicating one third to one quarter of your nodes to PME; see manual section 3.17.5.
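For illustration only (the 68/17 split and the binary name mdrun_mpi are placeholders; use whatever split grompp or g_tune_pme suggests for your system):

  # 68 ranks in total, a quarter of them dedicated to PME
  mpirun -np 68 mdrun_mpi -npme 17 -deffnm topol

Setting -npme explicitly avoids leaving mdrun to guess the split at run time.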
>
>
> I find that it is more important to optimize your PP/PME allocation than
> to micro-optimize the code...
Yes, hence the existence of g_tune_pme and other tools.
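A minimal sketch of using it (the .tpr name is a placeholder; check g_tune_pme -h on your installation for the exact options):

  # benchmark several PP/PME splits for a 68-rank run and report the fastest
  g_tune_pme -np 68 -s topol.tpr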
> I also find that at some point above 232 nodes (I don't remember the exact
> number), mdrun will complain about the overhead it takes to communicate
> energies if I am having it communicate energies every 5 steps, which is more
> a reflection of a limitation of the infrastructure than of the code.
I'd say this is more a reflection of the limitations of the model you've asked it to use. Per manual section 7.3.8, you can control this cost with suitable choices for the nst* variables. You can judge best whether you want faster performance or higher accuracy in the implementation of your approximate model...
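As a sketch only (illustrative values, not a recommendation for your system):

  ; .mdp fragment -- larger intervals here mean fewer of the global
  ; communication steps that mdrun is warning about
  nstcalcenergy = 100
  nstenergy     = 1000
  nstlog        = 1000

How far you can push these depends on how often your analysis actually needs the energies.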
Mark
>
>
>
> On 2011-12-27 06:48:23AM -0600, Mark Abraham wrote:
> > On 12/27/2011 11:18 PM, Sudip Roy wrote:
> > > Gromacs users,
> > >
> > > Please let me know which is the best option for GROMACS compilation
> > > (looking for better performance on InfiniBand QDR systems)
> > >
> > > 1. Intel Composer XE, i.e. Intel compilers, MKL, but the Open MPI library
> > >
> > > 2. Intel Studio, i.e. Intel compilers, MKL, and the Intel MPI library
> >
> > GROMACS is strongly CPU-bound in a way that is rather insensitive to
> > compilers and libraries. I would expect no strong difference between the
> > above two - and icc+MKL+OpenMPI was only a few percent faster than
> > gcc+FFTW+OpenMPI when I tested them on such a machine about two years ago.
> >
> > Mark
> >
> >
>
> --
> ==================================================================
> Peter C. Lai | University of Alabama-Birmingham
> Programmer/Analyst | KAUL 752A
> Genetics, Div. of Research | 705 South 20th Street
> pcl at uab.edu | Birmingham AL 35294-4461
> (205) 690-0808 |
> ==================================================================
>
>
>