[gmx-users] MPICH or LAM/MPI
Carsten Kutzner
ckutzne at gwdg.de
Mon Jun 19 13:15:30 CEST 2006
hseara at netscape.net wrote:
>
> Thank you very much for the advice, I will install first LAM and then
> MPICH. By the way, do you think it could be worthwhile to invest in some
> low-latency network hardware for this little cluster? Or would the
> gigabit network be enough?
>
> Thanks, Hector
Of course scaling will be better with a low-latency interconnect. If
you only want to run one large simulation at a time on the whole
cluster, investing in a good interconnect is a good idea. But if you
will have several simulations running in parallel anyway, it could be
better to spend the money on additional compute nodes.
Carsten
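
As a rough check of what the gigabit network actually delivers, a
minimal MPI ping-pong program such as the sketch below can be compiled
against each MPI implementation and run between two nodes. It is a
generic illustration (the file name pingpong.c and the repetition count
are arbitrary choices), not GROMACS code and not a definitive benchmark.

/* pingpong.c - minimal MPI round-trip timing sketch for two processes */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, i, reps = 1000;
    char byte = 0;
    double t0, t1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size != 2) {
        if (rank == 0)
            fprintf(stderr, "run this with exactly 2 processes\n");
        MPI_Finalize();
        return 1;
    }

    MPI_Barrier(MPI_COMM_WORLD);       /* start both processes together */
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++) {
        if (rank == 0) {               /* send one byte, wait for the echo */
            MPI_Send(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&byte, 1, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &status);
        } else {                       /* echo the byte back */
            MPI_Recv(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &status);
            MPI_Send(&byte, 1, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    t1 = MPI_Wtime();

    if (rank == 0)
        printf("average round-trip time: %g microseconds\n",
               (t1 - t0) / reps * 1.0e6);

    MPI_Finalize();
    return 0;
}

Built with each implementation's compiler wrapper (e.g. mpicc
pingpong.c -o pingpong) and started with mpirun -np 2 across two
different nodes, the same program gives a direct round-trip latency
comparison between LAM and MPICH on this hardware.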
>
> -----Original Message-----
> From: Carsten Kutzner <ckutzne at gwdg.de>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Sent: Mon, 19 Jun 2006 10:28:51 +0200
> Subject: Re: [gmx-users] MPICH or LAM/MPI
>
> Hello Hector,
>
> Since it does not take long to install LAM and MPICH, I would install
> both MPI implementations and then benchmark with a typical MD system
> to see which one performs better.
>
> I would suggest LAM 7.1.2, which is the newest version, and MPICH-2
> 1.0.3, which works well, at least on our cluster. Many users (including
> myself) have found that GROMACS on top of MPICH-1.2.x hangs when executed
> on more than 4 CPUs.
>
> My guess is that LAM-GROMACS outperforms MPICH-GROMACS on a small
> number of CPUs, but on 6 or more CPUs MPICH-GROMACS might be faster,
> since it provides optimized collective communication routines.
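
To illustrate what such a collective routine looks like, the toy
program below sums one value per process with MPI_Allreduce, which is
the kind of operation an MD code performs every step to combine partial
results over all processes; the variable names are purely illustrative
and this is not GROMACS source.

/* allreduce.c - toy MPI_Allreduce example */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double local_sum, global_sum;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    local_sum = 1.0 + rank;   /* each process' partial result */

    /* Every process contributes its value and receives the total;
     * how efficiently the MPI library implements this on many CPUs
     * is where optimized collectives pay off. */
    MPI_Allreduce(&local_sum, &global_sum, 1, MPI_DOUBLE,
                  MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over all processes: %g\n", global_sum);

    MPI_Finalize();
    return 0;
}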
>
> If you only want to install a single MPI implementation, I would choose
> LAM for a start.
>
> Hope that helps,
> Carsten
>
> hseara at netscape.net wrote:
> > Dear Gmx-Users,
> >
> > I'm configuring a small cluster of 5 Dell PowerEdge 1800 machines with
> > dual Xeons, connected by a Dell gigabit switch. I was wondering which
> > MPI distribution, LAM/MPI or MPICH (and which version), gives better
> > performance for GROMACS 3.3.1 on this platform under a Fedora 5 Linux
> > distribution. I will try to combine that with SGE (Sun Grid Engine) as
> > a queue manager. I would also like to know whether I can expect good
> > parallelization across the 5 nodes (10 processors) for big systems of
> > about 200,000 atoms.
> >
> > Thank you
> > Hector Martínez-Seara Monné
> > University of Barcelona
>
--
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne