[gmx-users] Gromacs 4 Scaling Benchmarks...

vivek sharma viveksharma.iitb at gmail.com
Wed Nov 12 06:18:14 CET 2008


2008/11/11 Justin A. Lemkul <jalemkul at vt.edu>

>
>
> vivek sharma wrote:
>
>> Hi Martin,
>> I am using InfiniBand here, with a speed of more than 10 Gbit/s. Can you
>> suggest some options to scale better in this case?
>>
>>
> What % imbalance is being reported in the log file?  What fraction of the
> load is being assigned to PME, from grompp?  How many processors are you
> assigning to the PME calculation?  Are you using dynamic load balancing?


Thanks, everybody, for your useful suggestions.
What do you mean by the % imbalance reported in the log file? I don't know
how to assign a specific load to PME, but I can see that around 37% of the
computation is being used by PME.
I am not assigning PME nodes separately, and I have no idea what dynamic
load balancing is or how to use it.

Looking forward to your answers.

With Thanks,
Vivek
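
For reference, the quantities Justin asks about can all be read or set
directly with the GROMACS 4 tools. A minimal sketch, assuming an MPI-enabled
binary named mdrun_mpi and illustrative file names; -npme 4 is an example
value, not a tuned recommendation:

    # grompp prints a PME load estimate when it writes the .tpr, e.g.
    # "Estimate for the relative computational load of the PME mesh part: 0.37"
    grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr

    # dedicate 4 of the 20 MPI ranks to PME and enable dynamic load balancing
    mpirun -np 20 mdrun_mpi -s topol.tpr -npme 4 -dlb yes

    # the domain decomposition statistics near the end of md.log include
    # a line of the form "Average load imbalance: ... %"
    grep "load imbalance" md.log

By default mdrun guesses the number of separate PME nodes (-npme -1); see
mdrun -h for the full option list.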

>
>
> All of these factors affect performance.
>
> -Justin
>
>> With Thanks,
>> Vivek
>>
>> 2008/11/11 Martin Höfling <martin.hoefling at gmx.de>
>>
>>
>>    On Tuesday, 11 November 2008 at 12:06:06, vivek sharma wrote:
>>
>>     > I have also tried scaling GROMACS across a number of nodes, but was
>>     > not able to scale it beyond 20 processors, i.e. 20 nodes with 1
>>     > processor per node.
>>
>>    As mentioned before, performance strongly depends on the type of
>>    interconnect you're using between your processes: shared memory,
>>    Ethernet, InfiniBand, NumaLink, whatever...
>>
>>    I assume you're using Ethernet (100/1000 Mbit?); you can tune this to
>>    some extent, as described in:
>>
>>    Kutzner, C.; van der Spoel, D.; Fechner, M.; Lindahl, E.; Schmitt,
>>    U. W.; de Groot, B. L. & Grubmüller, H. Speeding up parallel GROMACS
>>    on high-latency networks. Journal of Computational Chemistry, 2007.
>>
>>    ...but be aware that the principal limitations of Ethernet remain. To
>>    get around them, you might consider investing in the interconnect. If
>>    you can get by with <16 cores, shared-memory nodes will give you the
>>    "biggest bang for the buck".
>>
>>    Best
>>             Martin
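
Concretely, the difference Martin describes shows up in how the job is
launched. A sketch, assuming OpenMPI; the hostfile name and rank counts are
illustrative:

    # 20 nodes with 1 MPI rank per node: every step exercises the interconnect
    mpirun -np 20 --hostfile nodes.txt mdrun_mpi -s topol.tpr

    # all ranks on one shared-memory node: communication stays in memory
    mpirun -np 8 mdrun_mpi -s topol.tpr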
>
> --
> ========================================
>
> Justin A. Lemkul
> Graduate Research Assistant
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> ========================================