[gmx-users] Not achieving any speedup using Open MPI

Szilárd Páll pall.szilard at gmail.com
Fri Oct 17 19:15:55 CEST 2014


It's not just the number of atoms.

Cheap Ethernet hardware, especially with default QoS, flow control, and
other settings, won't work well. I'm not sure about the details of their
setup, but here's a good example of what you'll see even at 5000
atoms/core (AFAICT):
http://biowulf.nih.gov/apps/gromacs/bench-4.6.5.html
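
If you want to check whether flow control is actually enabled on a Linux
node, ethtool can tell you. A minimal sketch (eth0 is just a placeholder
for your actual interface name):

  # Show the current pause-frame (flow control) settings
  ethtool -a eth0
  # Enable RX/TX flow control if the NIC and driver support it (needs root)
  sudo ethtool -A eth0 rx on tx on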

It is possible to get OK scaling on RDMA-enabled Ethernet, but I don't
know much about it. Google queries, e.g. gromacs+ethernet or
gromacs+ethernet+rdma, will tell you more; I have even posted links to
this list before.
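
As a rough sanity check: with the ~20K atoms mentioned below and the
~5000 atoms/core figure above, going much beyond ~4 cores over gigabit
Ethernet is unlikely to pay off. If you want to measure scaling yourself,
here is an untested sketch ("hosts" is the same hostfile as in the
original command, the machine names are placeholders, and the mdrun flags
are from 5.0):

  # hosts file: one line per machine, slots = MPI ranks to start there
  laptop  slots=2
  desktop slots=4

  # Run the same input at different rank counts; -resethway resets the
  # timers halfway through so startup cost doesn't skew the numbers, and
  # -noconfout skips writing the final configuration.
  mpirun -np 2 --hostfile hosts mdrun_mpi -deffnm d.lzm -ntomp 1 -resethway -noconfout
  mpirun -np 6 --hostfile hosts mdrun_mpi -deffnm d.lzm -ntomp 1 -resethway -noconfout

  # After each run, compare the ns/day number at the end of the log
  grep Performance d.lzm.log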

--
Szilárd


On Fri, Oct 17, 2014 at 5:36 PM, Rytis Slatkevičius <rytis.s at gmail.com> wrote:
> Thanks; what do you think would be a good number of atoms at which to
> expect an actual speedup?
>
> --
> Pagarbiai / Sincerely
> Rytis Slatkevičius
> +370 670 77777
>
> On Fri, Oct 17, 2014 at 5:05 PM, Da-Wei Li <lidawei at gmail.com> wrote:
>
>> For a system with only ~20K atoms running over a normal LAN, I feel your
>> result is not unexpected.
>>
>> dawei
>>
>> On Fri, Oct 17, 2014 at 9:20 AM, Rytis Slatkevičius <rytis.s at gmail.com>
>> wrote:
>>
>> > Hello,
>> >
>> > first of all: I am not a scientist, but rather an IT person trying to set
>> > up Gromacs for fellow scientists. My goal is to set up Gromacs on several
>> > local computers (connected over a 1 Gbps LAN) and achieve some speedup
>> > of our runs using MPI.
>> >
>> > I have installed Gromacs 5.0.2 from the Debian repository (package
>> > gromacs-openmpi). I can run Gromacs across multiple computers and see the
>> > processes being spooled up, so MPI itself is working. However, I am not
>> > seeing any speedup at all - in fact, I get a significant slowdown.
>> >
>> > When running on a rather old dual-core laptop, I can finish a d.lzm run
>> > from http://www.gromacs.org/About_Gromacs/Benchmarks in about 109 seconds.
>> > If I add a quad-core machine into the mix, I would expect the computation
>> > to be much faster - but instead everything finishes in 139 seconds! The
>> > quad-core can finish the job in 45 seconds when running on its own,
>> > without Open MPI.
>> >
>> > I can't wrap my head around what I am doing wrong. My command line is:
>> >
>> > mpirun --display-map --hostfile hosts mdrun_mpi -v -deffnm d.lzm -dlb yes
>> >
>> > Any help would be appreciated.
>> >
>> > --
>> > Pagarbiai / Sincerely
>> > Rytis Slatkevičius
>> > +370 670 77777