[gmx-users] GMX 2018 regression tests: cufftPlanMany R2C plan failure (error code 5)

Szilárd Páll pall.szilard at gmail.com
Fri Feb 9 16:05:44 CET 2018


Great to hear!

(Also note that one thing we have explicitly focused on is not just peak
performance, but getting as close to peak as possible with only a few CPU
cores! You should be able to get >75% of peak performance with just 3-5
Xeon cores or 2-3 desktop cores rather than needing a full, fast CPU.)
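
(For reference, a minimal way to try this on a single GPU, where the file
name and the thread count are just placeholders to adjust to your own setup
and hardware, is something like:

    gmx mdrun -deffnm topol -nb gpu -pme gpu -ntmpi 1 -ntomp 4 -pin on

i.e. offload both the short-range nonbondeds and PME to the GPU and give
mdrun only a handful of pinned CPU threads.)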

--
Szilárd

On Thu, Feb 8, 2018 at 8:44 PM, Alex <nedomacho at gmail.com> wrote:

> With -pme gpu, I am reporting 383.032 ns/day vs 270 ns/day with the 2016.4
> version. I _did not_ mistype. The system is close to a cubic box of water
> with some ions.
>
> Incredible.
>
> Alex
>
> On Thu, Feb 8, 2018 at 12:27 PM, Szilárd Páll <pall.szilard at gmail.com>
> wrote:
>
> > Note that the actual mdrun performance need not be affected, whether
> > it's a driver persistence issue (you'll just see a few seconds of lag
> > at mdrun startup) or some other CUDA application startup-related lag
> > (an mdrun run does mostly very different kinds of things than this
> > particular set of unit tests).
> >
> > --
> > Szilárd
> >
> >
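
(And in case the lag you saw in those unit tests is indeed the driver not
being kept in persistence mode, enabling it should make the lag go away;
for example, as root:

    nvidia-smi -pm 1

or by running the nvidia-persistenced daemon.)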

