[gmx-users] Gromacs 4 Scaling Benchmarks...
viveksharma.iitb at gmail.com
Tue Nov 11 14:30:30 CET 2008
One thing I forgot to mention: I am getting around 6 ns/day here for a
protein of about 2600 atoms.
2008/11/11 vivek sharma <viveksharma.iitb at gmail.com>
> Hi Martin,
> I am using Infiniband here, with a speed of more than 10 Gbit/s. Can you
> suggest some options to scale better in this case?
> With thanks,
> With Thanks,
> 2008/11/11 Martin Höfling <martin.hoefling at gmx.de>
> On Tuesday, 11 November 2008 12:06:06, vivek sharma wrote:
>> > I have also tried scaling GROMACS across a number of nodes, but was not
>> > able to scale beyond 20 processors, i.e. 20 nodes with 1 processor each.
>> As mentioned before, performance strongly depends on the type of interconnect
>> you're using between your processes: shared memory, Ethernet, Infiniband,
>> NumaLink, whatever...
>> I assume you're using Ethernet (100/1000 MBit?); you can tune here to some
>> extent as described in:
>> Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt, U. W.;
>> de Groot, B. L. & Grubmüller, H. Speeding up parallel GROMACS on high-latency
>> networks. Journal of Computational Chemistry, 2007.
>> ...but be aware that the principal limitations of Ethernet remain. To
>> overcome this, you might consider investing in the interconnect. If you can
>> get by with <16 cores, shared-memory nodes will give you the biggest bang for
>> the buck.