[gmx-users] gromacs memory usage

Roland Schulz roland at utk.edu
Wed Mar 3 06:47:28 CET 2010


Hi,

Last time I checked (summer) I got
40 bytes per atom
and 294 bytes per atom/core (RF with a 12 Å cut-off).

100 M atoms works with that cut-off on 128 nodes with 16 GB and 8 cores each. I haven't
tried on fewer than 128 nodes. (See http://cmb.ornl.gov/research/petascale-md
)

We could relatively easily fix the 40 bytes per atom (no one has had time to work
on it so far), but I don't think much can be done about the
294 bytes per atom/core.
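
For a rough sanity check, here is a minimal back-of-the-envelope sketch (Python) using
those figures. How the 40 bytes/atom term is shared among the ranks on a node is an
assumption here (it is counted once per node), and the node/core counts are just the
example layout above, so treat the output only as a ballpark:

    # Rough per-node memory estimate from the figures quoted above:
    #   ~40  bytes per atom           (cost that scales with the total system size)
    #   ~294 bytes per atom per core  (domain-decomposition cost, RF, 12 A cut-off)
    # Assumption: the 40 bytes/atom structure is counted once per node.

    natoms         = 100_000_000   # ~100 M atom water system
    nodes          = 128           # example layout from above
    cores_per_node = 8

    ncores = nodes * cores_per_node

    global_gb   = 40.0 * natoms / 1e9             # whole-system term, per node (assumed)
    per_core_gb = 294.0 * natoms / ncores / 1e9   # local term, per core

    per_node_gb = global_gb + cores_per_node * per_core_gb
    print(f"whole-system term : {global_gb:6.2f} GB per node")
    print(f"decomposition term: {cores_per_node * per_core_gb:6.2f} GB per node")
    print(f"estimated total   : {per_node_gb:6.2f} GB per node (16 GB available)")

Under these assumptions the whole-system term dominates at this node count, but both
figures are only rough, as noted above.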

On how many nodes do you want to simulate? That is, are you limited by the
40 bytes per atom or by the 294 bytes per atom/core?

Roland



On Tue, Mar 2, 2010 at 11:31 PM, Amit Choubey <kgp.amit at gmail.com> wrote:

> Hi Mark,
>
> Yes, that's one way to go about it. But it would have been great if I could
> get a rough estimate.
>
> Thank you.
>
> amit
>
>
>
> On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
>
>> On 3/03/2010 12:53 PM, Amit Choubey wrote:
>>
>>>    Hi Mark,
>>>
>>>    I quoted the memory usage requirements from a presentation by Berk
>>>    Hess. Following is the link to it:
>>>
>>>
>>>
>>> http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf
>>>
>>>    In that presentation, on pages 27-28, Berk does talk about memory
>>>    usage, but I am not sure whether he was referring to any other specific
>>>    setup.
>>>
>>>    My system only contains SPC water. I want Berendsen T coupling and
>>>    Coulomb interaction with Reaction Field.
>>>
>>>    I just want a rough estimate of how large a water system can be
>>>    simulated on our supercomputers.
>>>
>>
>> Try increasingly large systems until it runs out of memory. There's your
>> answer.
>>
>> Mark
>>
>>  On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham <mark.abraham at anu.edu.au> wrote:
>>>
>>>    ----- Original Message -----
>>>    From: Amit Choubey <kgp.amit at gmail.com>
>>>    Date: Saturday, February 27, 2010 10:17
>>>    Subject: Re: [gmx-users] gromacs memory usage
>>>    To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>>>
>>>     > Hi Mark,
>>>     > We have a few nodes with 64 GB of memory and many others with 16 GB of
>>>    memory. I am attempting a simulation of around 100 M atoms.>
>>>
>>>    Well, try some smaller systems and work upwards to see if you have a
>>>    limit in practice. 50K atoms can be run in less than 32GB over 64
>>>    processors. You didn't say whether your simulation system can run on
>>>    1 processor... if it does, then you can be sure the problem really
>>>    is related to parallelism.
>>>
>>>     > I did find some document which says one needs (50 bytes)*NATOMS on the
>>>    master node, and also
>>>     >  (100+4*(no. of atoms in cutoff)*(NATOMS/nprocs) for the compute
>>>    nodes. Is this true?>
>>>
>>>    In general, no. It will vary with the simulation algorithm you're
>>>    using. Quoting such numbers without attributing the source or describing the
>>>    context is next to useless. You also dropped a parenthesis.
>>>
>>>    Mark
>
>
>
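
For completeness, here is a minimal sketch evaluating the rule of thumb quoted in the
thread above ((50 bytes)*NATOMS on the master node plus a per-process term). As Mark
points out, the rule is unattributed and is missing a parenthesis; the reading below,
which closes the parenthesis after the atoms-in-cutoff count and takes the result in
bytes, is only an assumption, as is the neighbour-count guess:

    # Illustrative evaluation of the rule of thumb quoted above. The placement of
    # the missing parenthesis, the result being in bytes, and the neighbour-count
    # guess are all assumptions, not an established GROMACS formula.

    natoms          = 100_000_000   # total atoms
    nprocs          = 1024          # example: 128 nodes x 8 cores
    atoms_in_cutoff = 650           # rough guess for water within a 12 A cut-off

    master_gb  = 50 * natoms / 1e9
    compute_gb = (100 + 4 * atoms_in_cutoff) * (natoms / nprocs) / 1e9

    print(f"master node  : ~{master_gb:.1f} GB")
    print(f"compute node : ~{compute_gb:.2f} GB per process")

Either way, directly trying increasingly large systems, as suggested in the thread,
remains the reliable check.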



-- 
ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
865-241-1537, ORNL PO BOX 2008 MS6309

