[gmx-users] Performance of beowulf cluster

Abhi Acharya abhi117acharya at gmail.com
Tue Aug 5 17:21:48 CEST 2014


Thank you, Mirco and Szilárd,
With regard to the GPU system, I have decided on a Xeon E5-1650 v2 system
with a GeForce GTX 780 Ti GPU for equilibration and production runs with
small systems. But for large systems or REMD simulations, I am a bit
skeptical about banking on GPU systems. Any pointers as to what would be
the minimum configuration required for REMD simulations on, say, a 50 K
atom protein sampled at 100 different temperatures? I am open to all
possible options in this regard (obviously, a little cost-effectiveness
does not hurt).
Also, would investing in a *good* 40 Gigabit Ethernet network ensure good
performance if we later plan to add more nodes to the cluster?
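
To make the scale of the REMD question concrete, the way I picture the run
is one directory and tpr per temperature, with at least one MPI rank per
replica, so 100 temperatures already means 100+ ranks before any gain from
domain decomposition. A rough sketch of what I mean (the geometric
temperature ladder, the 300-450 K range, the "mdrun_mpi" binary name and
the replica_NNN directory layout are only my assumptions for illustration,
not a tested setup):

#!/usr/bin/env python
# Sketch only: build a geometric temperature ladder for 100 replicas and
# print the kind of command line an REMD run of this size would need.

n_replicas = 100              # one replica per temperature
t_min, t_max = 300.0, 450.0   # illustrative range in K (assumption)

# Geometric spacing keeps neighbouring exchange probabilities roughly even.
ratio = (t_max / t_min) ** (1.0 / (n_replicas - 1))
temperatures = [t_min * ratio ** i for i in range(n_replicas)]

for i, t in enumerate(temperatures):
    print("replica_%03d: ref_t = %.2f" % (i, t))

# Each replica needs at least one MPI rank, so 100 temperatures means at
# least 100 ranks spread over the cluster; -multidir takes one directory
# (with its own tpr) per replica and -replex sets the exchange interval.
dirs = " ".join("replica_%03d" % i for i in range(n_replicas))
print("mpirun -np %d mdrun_mpi -multidir %s -replex 500" % (n_replicas, dirs))

So the real question is how many cores/nodes, and what kind of network, it
takes to keep that many replicas of a 50 K atom system running at a useful
rate.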

Regards,
Abhishek


On Tue, Aug 5, 2014 at 5:46 PM, Szilárd Páll <pall.szilard at gmail.com> wrote:

> Hi,
>
> You need a fast network to parallelize across multiple nodes. 1 Gb
> Ethernet won't work well, and even 10/40 Gb Ethernet needs to be of
> good quality; you'd likely need to buy separate adapters, as the
> on-board ones won't perform well. I posted some links related to this
> to the list a few days ago.
>
> The AMD FX desktop hardware you mention is OK, but I'm not sure that
> it gives the best performance/price. A (very) discounted Sandy
> Bridge-E (i7 3930K), if you can find one, or the cheaper Haswells
> like the i5 4670 may actually provide better performance for the
> money. Ivy Bridge-E or Haswell-E, as Mirco suggests, are the best
> single-socket workstation options, but those are/will be pretty
> expensive.
>
> Finally, unless you have a good reason not to, you should not
> consider GPUs in isolation, but rather which CPU/platform works best
> with them.
>
> Cheers,
> --
> Szilárd
>
>
> On Tue, Aug 5, 2014 at 7:01 AM, Abhishek Acharya
> <abhi117acharya at gmail.com> wrote:
> > Hello gromacs users,
> > I am planning on investing in a Beowulf cluster with 6 nodes (48
> > cores in total), each with an AMD FX 8350 processor and 8 GB of
> > memory, connected by a 1 Gigabit Ethernet switch. Although I plan to
> > add more cores to this cluster later on, what is the maximum
> > performance expected from the current specs for a 100,000 atom
> > simulation box? Also, is it better to invest in a single 48-core
> > server? The cluster system can be set up at almost half the price of
> > a 48-core server, but do we lose out on performance in the process?
> >
> > Regards,
> >
> > Abhishek Acharya



-- 
Abhishek Acharya
Senior Research Fellow
Gene Regulation Laboratory
National Institute of Immunology

