[gmx-users] Dell PowerEdge M710 with Intel Xeon 5667 processor

Szilárd Páll szilard.pall at cbr.su.se
Wed Jan 19 23:01:47 CET 2011


At the same time, I would emphasize that from a scaling point of view fewer
cores per node is better than many, because all processors on the same
node share the same network link. Also, the CPU frequencies available for
2-socket nodes tend to be higher than for 4+-socket configurations.
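
As a rough illustration of the shared-link point (the link speeds below are
assumed ballpark figures, not measurements of any particular cluster), a few
lines of Python show how the per-core share of a node's network link shrinks
as more cores sit behind it:

    # Rough illustration: per-core share of one node's network link.
    # Link speeds are assumed ballpark figures, not measured values.
    links_gbit_per_s = {"Gigabit Ethernet": 1.0, "Infiniband QDR (4x)": 32.0}

    for link, bandwidth in links_gbit_per_s.items():
        for cores_per_node in (4, 8, 16, 32):
            share = bandwidth / cores_per_node  # all cores compete for the same link
            print(f"{link:20s} {cores_per_node:2d} cores/node -> "
                  f"{share:5.2f} Gbit/s per core")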

--
Szilárd


On Wed, Jan 19, 2011 at 10:30 PM, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:

>  On 20/01/2011 3:33 AM, Maryam Hamzehee wrote:
>
>   Dear Szilárd,
>
>  Many thanks for your reply. I've got the following reply to my question
> from the Linux-PowerEdge mailing list. I was wondering which of these applies to
> GROMACS parallel computation (I mean CPU-bound, disk-bound, etc.).
>
>
> In serial, GROMACS is very much CPU-bound, and a lot of work has gone into
> making the most of the CPU. In parallel, that CPU-optimization work is so
> effective that smallish packets of information have to be transferred
> regularly without much possibility of effectively overlapping communication
> and computation, and so a low-latency communication network is essential in
> order to continue making effective use of all the CPUs. Something like
> Infiniband or NUMAlink is definitely required.
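
A back-of-envelope sketch of why latency dominates for those smallish packets;
the latency and bandwidth figures below are typical ballpark values, not
benchmarks of this particular hardware:

    # Simple transfer-time model: t = latency + message_size / bandwidth.
    # Latency/bandwidth numbers are rough ballpark values for illustration.
    interconnects = {
        # name: (one-way latency [s], bandwidth [bytes/s])
        "Gigabit Ethernet": (50e-6, 1e9 / 8),
        "Infiniband QDR":   (1.5e-6, 32e9 / 8),
    }

    message_bytes = 2048  # a smallish MD communication packet (assumed size)

    for name, (latency, bandwidth) in interconnects.items():
        t = latency + message_bytes / bandwidth
        print(f"{name:17s}: {t * 1e6:6.1f} us per message, "
              f"latency accounts for {100 * latency / t:5.1f}% of it")

With packets this small, the fixed per-message latency dominates the cost,
which is why a low-latency network matters so much here.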
>
> Mark
>
>
> >There shouldn't be any Linux compatibility issues with any PowerEdge
> >system.  At Duke we have a large compute cluster using a variety of
> >PowerEdge blades (including M710s), all running Linux.
>
> >What interconnect are you using?  And are your jobs memory-bound, CPU-bound,
> >disk-bound, or network-bound?
>
> >If your computation depends heavily on communication between the nodes,
> >it's more important to worry about your interconnect.
>
> >If inter-node communication is highly important, you may also want to
> >consider something like the M910.  The M910 can be configured with 4
> >8-core CPUs, thus giving you 32 NUMA-connected cores, or 64 logical
> >processors if your job is one that can benefit from HT.  Note that when
> >going with more cores per chip, your max clock rate tends to be lower.
> >As such, it's really important to know how your jobs are bound so that
> >you can order a cluster configuration that'll be best for that job.
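
To put rough numbers on that clock-rate trade-off (the clock speeds below are
approximate list values, used only for illustration):

    # Illustrative comparison of per-node aggregate clock throughput.
    # Clock speeds are approximate and only for illustration.
    configs = {
        # name: (sockets, cores per CPU, approx. clock in GHz)
        "M710, 2 x quad-core (X5667-class)": (2, 4, 3.06),
        "M910, 4 x 8-core (lower-clocked)":  (4, 8, 2.26),
    }

    for name, (sockets, cores, ghz) in configs.items():
        total_cores = sockets * cores
        print(f"{name}: {total_cores} cores x {ghz} GHz = "
              f"{total_cores * ghz:.1f} GHz aggregate, {ghz} GHz per core")

The many-core node wins on aggregate throughput only if the job actually
scales across all those cores; a latency- or clock-bound job may run faster
on the fewer, faster cores.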
>
>
>  Cheers, Maryam
>
> --- On Tue, 18/1/11, Szilárd Páll <szilard.pall at cbr.su.se> wrote:
>
>
> From: Szilárd Páll <szilard.pall at cbr.su.se>
> Subject: Re: [gmx-users] Dell PowerEdge M710 with Intel Xeon 5667 processor
> To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
> Received: Tuesday, 18 January, 2011, 10:31 PM
>
> Hi,
>
> Although the question is a bit fuzzy, I might be able to give you a
> useful answer.
>
> From what I see in the whitepaper of the PowerEdge M710 blades, among
> other (not so interesting :) OSes, Dell provides the option of Red
> Hat or SUSE Linux as a factory-installed OS. If you have either of
> these, you can rest assured that Gromacs will run just fine -- on a
> single node.
>
> Parallel runs are a bit of a different story and depend on the
> interconnect. If you have Infiniband, then you'll get very good
> scaling over multiple nodes. This is especially true if the I/O
> cards are Mellanox QDR ones.
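
As a toy model of that scaling (all numbers invented for illustration, not
measured on any system):

    # Toy scaling model: time per MD step = compute_time / nodes + comm_time,
    # where comm_time is dominated by interconnect latency.  All numbers
    # below are invented for illustration only.
    compute_ms = 20.0  # assumed per-step compute time on a single node
    comm_ms = {"Gigabit Ethernet": 2.0, "Infiniband QDR": 0.1}  # assumed per-step cost

    for link, comm in comm_ms.items():
        for nodes in (1, 2, 4, 8, 16):
            step_ms = compute_ms / nodes + (comm if nodes > 1 else 0.0)
            speedup = compute_ms / step_ms
            print(f"{link:17s} {nodes:2d} nodes: speedup {speedup:5.2f} (ideal {nodes})")

Even a fixed couple of milliseconds of communication per step caps the speedup
on the slow interconnect, while the low-latency one stays much closer to ideal.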
>
> Cheers,
> --
> Szilárd
>
>
> On Tue, Jan 18, 2011 at 4:48 PM, Maryam Hamzehee
> <maryam_h_7860 at yahoo.com>
> wrote:
> >
> > Dear list,
> >
> > I will appreciate it if I can get your expert opinion on doing
> > parallel computation (I will use the GROMACS and AMBER molecular mechanics
> > packages and some other programs like CYANA, ARIA and CNS to do
> > structure calculations based on NMR experimental data) using a cluster based
> > on Dell PowerEdge M710 blades with the Intel Xeon 5667 processor, where
> > each blade apparently has two quad-core CPUs. I was wondering if I
> > can get some information about Linux compatibility and parallel
> > computation on this system.
> > Cheers,
> > Maryam
> >
> --
> gmx-users mailing list    gmx-users at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-request at gromacs.org.
> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>