[gmx-users] Problem with gromacs-4.6 compilation on Debian

Mark Abraham mark.j.abraham at gmail.com
Thu Jan 24 19:44:55 CET 2013


On Thu, Jan 24, 2013 at 6:48 PM, James Starlight <jmsstarlight at gmail.com> wrote:

> Justin, Szilárd, thanks for the suggestions!
>
> It will be easy for me to find a better card :)
>
> By the way, in other topics some developers told me that the PLUMED
> plugin for metadynamics would be released in GROMACS 4.6. I've checked
> the manual but could not find any mention of it. Has it been included
> in the newest GROMACS?
>

I said in that thread there would not be such a feature. Something being a
plug-in means it is not standard. There's never been any mention of PLUMED
(or any other plug-in) in the GROMACS manual, which documents the standard
features. That's because the authors of GROMACS don't write PLUMED. Other
people write plug-ins and document them :-) One of the PLUMED authors wrote
that it would be implemented *for* GROMACS 4.6, not *in* GROMACS 4.6.

Mark


>
> James
>
> 2013/1/24 Szilárd Páll <szilard.pall at cbr.su.se>:
> > Let me add two more things.
> >
> > Note that we *always* compare performance and acceleration to our
> > super-tuned, state-of-the-art CPU code, which I can confidently say is
> > among the fastest, if not the fastest, to date, and never to some slow
> > (CPU) implementation. Therefore, while other codes might be able to get
> > 10-20x with GPUs, and many x-es even with pre-Fermi (CC <2.0) cards,
> > for obvious reasons we simply can't.
> >
> > Still, you'll get high *absolute performance* regardless of whether you
> > use CPU only or CPU+GPU, so this should be a decent deal, right?
> >
> > If anybody volunteers to add software-based floating-point atomic
> > operations for CC <2.0 and to try running the current kernels on earlier
> > hardware, I'm willing to give pointers, but I simply haven't had time to
> > try it myself. This would enable using earlier cards, with a considerable
> > performance penalty from the emulated atomic ops (plus these cards are
> > slower anyway), but it *might* be worth a try!
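> >
> > For the record, the usual way to emulate a single-precision atomicAdd()
> > on such hardware is a compare-and-swap loop. A minimal, untested sketch
> > (the function name is made up, and hooking it into the current kernels
> > would of course need more work than this):
> >
> >   // Emulated float atomicAdd() for CC < 2.0 devices, built on the
> >   // 32-bit integer atomicCAS(), which is available in global memory
> >   // from CC 1.1 (shared memory needs CC 1.2).
> >   __device__ float atomicAddEmulated(float *address, float val)
> >   {
> >       int *address_as_int = (int *)address;
> >       int  old            = *address_as_int;
> >       int  assumed;
> >
> >       do
> >       {
> >           assumed = old;
> >           // Add to the last value we saw and write back only if nobody
> >           // changed it in the meantime; otherwise retry with the new value.
> >           old = atomicCAS(address_as_int, assumed,
> >                           __float_as_int(val + __int_as_float(assumed)));
> >       }
> >       while (assumed != old);
> >
> >       return __int_as_float(old);
> >   }
> >
> > Every retry of that loop costs an extra global memory transaction, which
> > is a large part of the performance penalty I mentioned above.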
> >
> > Feel free to drop me a mail if you are interested.
> >
> > Cheers,
> >
> > --
> > Szilárd
> >
> >
> > On Thu, Jan 24, 2013 at 3:48 PM, Szilárd Páll <szilard.pall at cbr.su.se> wrote:
> >
> >
> >> On Thu, Jan 24, 2013 at 3:28 PM, Justin Lemkul <jalemkul at vt.edu> wrote:
> >>
> >>>
> >>>
> >>> On 1/24/13 9:23 AM, James Starlight wrote:
> >>>
> >>>> Oh, that was simply solved by upgrading g++ :)
> >>>>
> >>>> The only problem which remains is the missing support for my GPU :(
> >>>> This time I tried to start a simulation on just a 2-core CPU +
> >>>> a GeForce 9500.
> >>>>
> >>>> From the md run I obtained:
> >>>>
> >>>> NOTE: Error occurred during GPU detection:
> >>>>        CUDA driver version is insufficient for CUDA runtime version
> >>>>        Can not use GPU acceleration, will fall back to CPU kernels.
> >>>>
> >>>> ED sampling will be performed!
> >>>> ED: Reading edi file new_a2a_pca_biased.edi
> >>>> ED: Note: Reference and average structure are composed of the same
> >>>> atom indices.
> >>>> ED: Found 1 ED group.
> >>>> Using 1 MPI thread
> >>>> Using 2 OpenMP threads
> >>>>
> >>>> No GPUs detected
> >>>>
> >>>>
> >>>> I've installed CUDA 5.0 with the driver
> >>>> NVRM version: NVIDIA UNIX x86_64 Kernel Module  304.54  Sat Sep 29
> >>>> 00:05:49 PDT 2012
> >>>>
> >>>> That setup works fine with recent NVIDIA GPUs but has no support
> >>>> for the GeForce 9500.
> >>>>
> >>>>
> >>> GeForce 9500 cards have a compute capability of 1.1.  The minimum
> >>> required for Gromacs is 2.0 (noted in the installation instructions).
> >>
> >>
> >> Exactly! Anything below CC 2.0 lacks certain features that make those
> >> devices rather slow for our algorithms and therefore pretty much
> >> impractical.
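> >>
> >> To double-check what the driver, runtime, and card actually report, a
> >> tiny stand-alone program along these lines can help (an untested
> >> sketch, not part of GROMACS):
> >>
> >>   // check_gpu.cu -- build with: nvcc check_gpu.cu -o check_gpu
> >>   #include <cstdio>
> >>   #include <cuda_runtime.h>
> >>
> >>   int main()
> >>   {
> >>       int driverVersion = 0, runtimeVersion = 0, deviceCount = 0;
> >>
> >>       // A driver older than the runtime triggers the "driver version
> >>       // is insufficient for CUDA runtime version" note seen above.
> >>       cudaDriverGetVersion(&driverVersion);
> >>       cudaRuntimeGetVersion(&runtimeVersion);
> >>       printf("driver: %d, runtime: %d\n", driverVersion, runtimeVersion);
> >>
> >>       if (cudaGetDeviceCount(&deviceCount) != cudaSuccess || deviceCount == 0)
> >>       {
> >>           printf("No usable CUDA devices detected.\n");
> >>           return 1;
> >>       }
> >>
> >>       for (int i = 0; i < deviceCount; i++)
> >>       {
> >>           cudaDeviceProp prop;
> >>           cudaGetDeviceProperties(&prop, i);
> >>           // GROMACS 4.6 requires compute capability >= 2.0.
> >>           printf("device %d: %s, CC %d.%d\n",
> >>                  i, prop.name, prop.major, prop.minor);
> >>       }
> >>       return 0;
> >>   }
> >>
> >> A GeForce 9500 will report CC 1.1 here, so mdrun falling back to the
> >> CPU kernels is the expected behaviour.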
> >>
> >> Cheers,
> >> --
> >> Szilárd
> >>
> >>
> >>
> >>
> >>
> >>>
> >>>
> >>> -Justin
> >>>
> >>> --
> >>> ========================================
> >>>
> >>> Justin A. Lemkul, Ph.D.
> >>> Research Scientist
> >>> Department of Biochemistry
> >>> Virginia Tech
> >>> Blacksburg, VA
> >>> jalemkul[at]vt.edu | (540) 231-9080
> >>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
> >>>
> >>> ========================================
> >>>
> >>
> >>
>


