[gmx-users] MD workstation

Hadházi Ádám hadadam at gmail.com
Mon Oct 20 16:12:31 CEST 2014


2014-10-19 6:35 GMT+10:00 Szilárd Páll <pall.szilard at gmail.com>:

> On Fri, Oct 17, 2014 at 5:51 AM, Hadházi Ádám <hadadam at gmail.com> wrote:
> > 2014-10-17 11:47 GMT+10:00 lloyd riggs <lloyd.riggs at gmx.ch>:
> >
> >>
> >> Is there any progress on OpenCL versions of GROMACS, as it is listed on
> >> the developer site? Just asking. One thing I ran across is that one can
> >> get integrated GPU arrays on a board, if you find, say, Russian board
> >> designs from China, for about the same price with 10x the computational
> >> speed, but those boards would be largely OpenCL dependent.
> >>
> >> Stephan Watkins
> >> Sent: Thursday, 16 October 2014 at 20:21
> >> From: "Szilárd Páll" <pall.szilard at gmail.com>
> >> To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
> >> Subject: Re: [gmx-users] MD workstation
> >> On Thu, Oct 16, 2014 at 3:35 PM, Hadházi Ádám <hadadam at gmail.com> wrote:
> >> > May I ask why your config is better than e.g.
> >> >
> >> > 2x Intel Xeon E5-2620 CPUs (2x$405)
> >> > 4x GTX 970 (4x $330)
> >> > 1x Z9PE-D8 WS ($449)
> >> > 64 GB DDR3 ($600)
> >> > PSU 1600W, ($250)
> >> > standard 2TB 5400rpm drive, ($85)
> >> > total: (~$3500)
> >>
> >> Mirco's suggested setup will give much higher *aggregate* simulation
> >> throughput. GROMACS uses both CPUs and GPUs and requires a balanced
> >> resource mix to run efficiently (less so if you don't use PME). The
> >> E5-2620 is rather slow and it will be a good match for a single GTX
> >> 970, perhaps even a 980, but it will be the limiting factor with two
> >> GPUs per socket.
> >>
> >
> > Maybe this is not the best forum for this question, but I also plan to
> > use AMBER and Desmond for MD/FEP purposes.
> > Question: Is the recommended 4x1 node config the best setup for these
> > two other packages too?
>
> What's best for those codes you should indeed ask on their forums.
> However, AMBER-GPU (and AFAIK Desmond too) doesn't rely on the CPU
> except for coordinating a run, so 1 core per GPU is likely enough.
> While the hardware needs are quite different, I think you would get
> similar performance/$ if you spread 4-6 GPUs across multiple
> workstations with fast consumer CPUs rather than putting them in one
> low-end server. Moreover, the former will still work well with
> GROMACS, while the latter will not.
>
> >>
> >> > As for your setup... can I use those 4 nodes in parallel for one
> >> > long simulation or one FEP job?
> >>
> >> Not without a fast network.
> >>
> > What would be a fast network? How should I connect them?
>
> Infiniband is recommended; some have had decent results with Ethernet
> too, but cheap hardware without tweaking the drivers/OS will likely
> not give good results for simulations that span the network,
> especially not with PME. You can still run multi-simulations across
> machines even with cheap Ethernet ("native" ones like replica
> exchange, or "artificial" ones like non-communicating independent
> runs).
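>
> As a rough illustration (a sketch, assuming an MPI-enabled mdrun built
> as mdrun_mpi and GROMACS 4.6/5.0 option names), a replica-exchange
> multi-run over four per-replica directories, each containing its own
> topol.tpr, could be launched like this:
>
>   mpirun -np 4 mdrun_mpi -multidir rep0 rep1 rep2 rep3 -replex 1000
>
> Fully independent runs need no coupling at all: just start a separate
> mdrun on each machine.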
>
> > Is 8 GB of memory per node really enough?
>
> GROMACS needs quite little memory, so 8 GB should be enough for
> anything except exotic cases.
>
> >>
> >> > What are the weak points of my workstation?
> >>
> >> The CPU. A desktop IVB-E or HSW-E (Ivy Bridge-E/Haswell-E, e.g. i7
> >> 49XX, 59XX) will give much better performance per dollar.
> >>
> >> Also note:
> >> * your smaller 25k MD setup will not scale across multiple GPUs;
> >> * for FEP runs, by sharing a GPU between multiple runs you can
> >> increase the aggregate throughput quite a lot!
> >>
> > Do you mean, by sharing 1 GPU between multiple FEP runs, or by sharing
> > more GPUs between multiple FEP runs?
>
> Both can work, but sharing one GPU between 2-3 FEP runs is more
> straightforward and more efficient too (no domain decomposition
> needed). The GPU only does force computation (concurrently with the
> CPU), so it sits idle during constraints, integration, neighbor
> search, etc. If multiple independent simulations share a GPU, the
> hardware is utilized better and therefore the aggregate simulation
> performance increases.
>
> The gain can be especially impressive on large multi-core CPUs with
> small systems; e.g. see the second plot here: http://goo.gl/2xH52y
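>
> As a minimal sketch of what such GPU sharing could look like on a
> 6-core desktop (option names as in GROMACS 4.6/5.0; the -deffnm names
> are just placeholders for two FEP windows):
>
>   mdrun -deffnm lambda00 -ntmpi 1 -ntomp 3 -gpu_id 0 -pin on -pinoffset 0 &
>   mdrun -deffnm lambda01 -ntmpi 1 -ntomp 3 -gpu_id 0 -pin on -pinoffset 3 &
>
> Each run gets half the cores (pinned so they don't overlap) and both
> use GPU 0 concurrently.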
>
> > We have access to a server with multiple nodes... both of them have
> > 4x Intel X5675 (3.07 GHz) CPUs (6 cores each) and 6x NVIDIA Tesla
> > M2070 GPUs. Total: 2 x (24 cores and 6 GPUs).
> > Question: Is this a good configuration for MD/FEP?
>
> Do you mean six M2070's per node or across two nodes?
>

Yes, six M2070's per node.

>
> Depending on the setup, your larger systems will likely scale OK to
> 3-4 GPUs, but you'll have to try. The smaller one should run quite
> well with FEP + sharing GPUs between runs. Comparing the log files of
> a few short runs with different setups is the best way to tell.
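>
> As a sketch of such a comparison on one 24-core/6-GPU node (assuming
> an MPI-enabled mdrun built as mdrun_mpi and 4.6/5.0 option names;
> adjust binary names and core counts to your installation):
>
>   # 2 GPUs, 2 ranks x 12 OpenMP threads
>   mpirun -np 2 mdrun_mpi -deffnm bench2 -gpu_id 01 -ntomp 12 -nsteps 10000 -resethway
>   # 4 GPUs, 4 ranks x 6 OpenMP threads
>   mpirun -np 4 mdrun_mpi -deffnm bench4 -gpu_id 0123 -ntomp 6 -nsteps 10000 -resethway
>
> and then compare the ns/day reported at the end of each md.log.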
>
> Cheers,
> --
> Szilárd
>

Thank you, Szilárd.
Adam

>
>
> > Regards,
> > Ádám
> >
> >> Cheers,
> >> --
> >> Szilárd
> >>
> >> > Best,
> >> > Adam
> >> >
> >> >
> >> > 2014-10-16 23:00 GMT+10:00 Mirco Wahab <
> >> mirco.wahab at chemie.tu-freiberg.de>:
> >> >
> >> >> On 16.10.2014 14:38, Hadházi Ádám wrote:
> >> >>
> >> >>> Dear GMX Staff and Users,
> >> >>>>> I am planning to buy a new MD workstation with 4 GPUs (GTX 780
> >> >>>>> or 970) or 3 GPUs (GTX 980) for $4000.
> >> >>>>> Could you recommend a setup for this machine?
> >> >>>>> Are 1 or 2 CPUs necessary? 32/64 GB of memory? Cooling? Power?
> >> >>>>>
> >> >>>>
> >> >>>> - What system (size, type, natoms) do you plan to simulate?
> >> >>>>
> >> >>>> - Do you have to run *only one single simulation* over a long time,
> >> >>>> or *several similar simulations* with similar parameters?
> >> >>>>
> >> >>>
> >> >>> The systems are kind of a mix:
> >> >>> MD:
> >> >>> smallest system: 25k atoms, spc/tip3p, 2fs/4fs, NPT,
> >> >>> simulation time: 500-1000ns
> >> >>> biggest system: 150k atoms, spc/tip3p, 2fs/4fs, NPT,
> >> >>> simulation time: 100-1000ns
> >> >>> FEP (free energy perturbation): ligand functional group mutation,
> >> >>> 25k-150k atoms, in complex and in water simulations,
> >> >>> production simulation: 5ns for each lambda window (number of
> >> >>> windows: 12)
> >> >>>
> >> >>
> >> >> In this situation, I'd probably use 4 machines for $1000 each,
> >> >> putting in each:
> >> >> - consumer i7/4790(K), $300
> >> >> - any 8GB DDR3, $75-$80
> >> >> - standard Z97 board, $100
> >> >> - standard PSU 450W, $40
> >> >> - standard 2TB 5400rpm drive, $85
> >> >>
> >> >> The rest of the money (4 x $395) I'd use for 4 graphics
> >> >> cards, probably 3 GTX-970s ($330 each) and one GTX-980 ($550),
> >> >> depending on availability, the actual prices, and
> >> >> your detailed budget.
> >> >>
> >> >> YMMV,
> >> >>
> >> >>
> >> >> Regards
> >> >>
> >> >> M.
> >> >>