[gmx-users] using dual CPU's
Mark Abraham
mark.j.abraham at gmail.com
Wed Dec 12 01:40:50 CET 2018
Hi,
In your case the slowdown was in part because with a single GPU the PME
work by default went to that GPU. But with two GPUs the default is to leave
the PME work on the CPU (which for your test was very weak), because the
alternative is often not a good idea. You can try it out with the command
Szilard suggested. You won't learn much that will apply to your real case,
because the system size and GPU/CPU balance is critical.
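
For example (just a sketch, not a tuned setup: the -ntmpi value of 8 and the
GPU IDs in the task string are illustrative choices, and the right values
depend on your core count and hardware), a two-GPU run along the lines
Szilárd suggested might look like

  gmx mdrun -ntmpi 8 -npme 1 -nb gpu -pme gpu -gputasks 00000001

which puts the nonbonded work of the seven PP ranks on GPU 0 and the
separate PME rank on GPU 1. Do compare the resulting ns/day against a
single-GPU run before settling on it.
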
Mark
On Wed., 12 Dec. 2018, 10:56 paul buscemi, <pbuscemi at q.com> wrote:
> Szilard,
>
> Thank you very much for the information, and I apologize for how the text
> appeared - internet demons at work.
>
> The computer described in the log files is a basic test rig which we use
> to iron out models. The workhorse is a many-core AMD with one 2080 Ti now
> and hopefully a second one soon. It will have to handle several hundred
> thousand particles, and at the moment I do not think the simulation could
> be divided. These are essentially simulations of multi-component ligand
> adsorption from solution onto a substrate, including evaporation of the
> solvent.
>
> I saw from a 2015 paper from your group, “Best bang for your buck: GPU
> nodes for GROMACS biomolecular simulations”, that I should expect maybe a
> 50% improvement for 90k atoms (with 2x GTX 970). What bothered me in my
> initial attempts was that my simulations became slower on adding the second
> GPU - it was frustrating, to say the least.
>
> I’ll give your suggestions a good workout and report on the results when
> I hack it out.
>
> Best,
> Paul
>
> > On Dec 11, 2018, at 12:14 PM, Szilárd Páll <pall.szilard at gmail.com>
> wrote:
> >
> > Without having read all details (partly due to the hard to read log
> > files), what I can certainly recommend is: unless you really need to,
> > avoid running single simulations with only a few 10s of thousands of
> > atoms across multiple GPUs. You'll be _much_ better off using your
> > limited resources by running a few independent runs concurrently. If
> > you really need to get maximum single-run throughput, please check
> > previous discussions on the list for my recommendations.
> >
> > Briefly, what you can try for 2 GPUs is (do compare against the
> > single-GPU runs to see if it's worth it):
> > mdrun -ntmpi N -npme 1 -nb gpu -pme gpu -gputasks TASKSTRING
> > where typically N = 4, 6, 8 are worth a try (but N <= #cores) and the
> > TASKSTRING should have N digits, either N-1 zeros followed by a single 1
> > or N-2 zeros followed by two 1s, e.g. 0001 or 0011 for N = 4.
> >
> > I suggest sharing files using a cloud storage service like Google
> > Drive, Dropbox, etc. or a dedicated text sharing service like
> > paste.ee, pastebin.com, or termbin.com -- the latter is especially
> > handy for those who don't want to leave the command line just to
> > upload one or several files for sharing (e.g. try
> > echo "foobar" | nc termbin.com 9999).
> >
> > --
> > Szilárd
> > On Tue, Dec 11, 2018 at 2:44 AM paul buscemi <pbuscemi at q.com> wrote:
> >>
> >>
> >>
> >>> On Dec 10, 2018, at 7:33 PM, paul buscemi <pbuscemi at q.com> wrote:
> >>>
> >>>
> >>> Mark, attached are the tail ends of three log files for
> >>> the same system, but run on an AMD 8 core/16 thread 2700X with 16 GB RAM.
> >>> In summary:
> >>> ntmpi:ntomp settings of 1:16, 2:8, and auto selection (4:4) give 12.0,
> >>> 8.8, and 6.0 ns/day respectively.
> >>> Clearly, I do not have a handle on using 2 GPUs.
> >>>
> >>> Thank you again, and I'll keep probing the web for more understanding.
> >>> I’ve probably sent too much of the log; let me know if this is the
> >>> case.
> >> A better way to share files - where is that friend?
> >>>
> >>> Paul
>