[gmx-users] gromacs memory usage
Roland Schulz
roland at utk.edu
Thu Mar 4 02:23:59 CET 2010
Hi,
a couple of points:
1) You will need some additional memory for the system, MPI, the binary,
etc. How much this is does not depend on GROMACS (ask e.g. your sysadmin).
2) You might want to try to run only 1 rank on the first node (how to do
this depends on your MPI implementation and should be asked on the specific
MPI list).
3) By setting limits (e.g. ulimit with bash) you can prevent the system
from freezing (again, ask your sysadmin about how to use limits).
4) You can compile GROMACS with CFLAGS="-DPRINT_ALLOC_KB" and it will print
debug information about memory usage. This can be used to verify my/Berk's
numbers.
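Points 3) and 4) can be sketched concretely. The limit value below is only an
illustration, and the configure line assumes the stock GROMACS 4.x autoconf
build; adjust both to your cluster:

```shell
# 3) Cap this shell's virtual memory (soft limit, in KiB) so that a
#    runaway mdrun is killed by the OS instead of freezing the node.
#    4 GiB here is just an example value; pick one below your node's RAM.
ulimit -S -v 4194304
ulimit -S -v    # confirm the limit now in effect

# 4) Rebuild with the allocation-tracing define (illustrative configure
#    invocation for the GROMACS 4.x autoconf build):
# CFLAGS="-DPRINT_ALLOC_KB" ./configure --enable-mpi && make
```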
Roland
On Wed, Mar 3, 2010 at 7:15 PM, Amit Choubey <kgp.amit at gmail.com> wrote:
> Hi Roland,
>
> I was using 32 nodes with 8 cores each, and 16 GB memory per node. The
> system was about 154 M particles. This should be feasible according to the
> numbers. Assuming that it takes 50 bytes per atom plus 1.76 kB per atom per
> core, then:
>
> Master node -> (50 B * 154 M + 8 * 1.06 GB) ~ 16 GB (there is no leverage here)
> All other nodes -> 8 * 1.06 GB ~ 8.5 GB
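The arithmetic above can be reproduced with a short script. The 50 bytes/atom
and 1.76 kB per atom per core figures are this thread's own rough assumptions,
not documented GROMACS constants:

```python
# Rough GROMACS memory estimate for 154 M atoms on 32 nodes x 8 cores,
# using this thread's assumed figures (not official GROMACS numbers):
#   ~50 bytes per atom of global data on the master rank,
#   ~1.76 kB per atom per core of per-rank data.
natoms = 154e6
nodes, cores_per_node = 32, 8
ncores = nodes * cores_per_node

per_core = (natoms / ncores) * 1760     # bytes held by each rank
per_node = cores_per_node * per_core    # 8 ranks share one node's RAM
master_node = 50 * natoms + per_node    # master rank adds the global arrays

print(f"per node:    {per_node / 1e9:.1f} GB")
print(f"master node: {master_node / 1e9:.1f} GB")
```

With 16 GB per node, this leaves the master node essentially no headroom,
which matches the crash described above.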
>
> I am planning to try the same run on 64 nodes with 8 cores each, but not
> until I am a little more confident. The problem is that if GROMACS crashes
> due to memory, it makes the nodes hang and people have to power-cycle them.
>
>
> Thank you,
>
> amit
>
> On Wed, Mar 3, 2010 at 7:34 AM, Roland Schulz <roland at utk.edu> wrote:
>
>> Hi,
>>
>> OK, then it is compiled in 64-bit.
>>
>> You didn't say how many cores each node has and on how many nodes you want
>> to run.
>>
>> Roland
>>
>>
>> On Wed, Mar 3, 2010 at 4:32 AM, Amit Choubey <kgp.amit at gmail.com> wrote:
>>
>>> Hi Roland,
>>>
>>> It says
>>>
>>> gromacs/4.0.5/bin/mdrun_mpi: ELF 64-bit LSB executable, AMD x86-64,
>>> version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared
>>> libs), for GNU/Linux 2.6.9, not stripped
>>>
>>>
>>> On Tue, Mar 2, 2010 at 10:34 PM, Roland Schulz <roland at utk.edu> wrote:
>>>
>>>> Amit,
>>>>
>>>> try the full line (with the "file" command)
>>>>
>>>> Roland
>>>>
>>>> On Wed, Mar 3, 2010 at 1:22 AM, Amit Choubey <kgp.amit at gmail.com> wrote:
>>>>
>>>>> Hi Roland
>>>>>
>>>>> I tried 'which mdrun' but it only gives the installation path name.
>>>>> Is there any other way to know whether the installation is 64-bit or not?
>>>>>
>>>>> Thank you,
>>>>> Amit
>>>>>
>>>>>
>>>>> On Tue, Mar 2, 2010 at 10:03 PM, Roland Schulz <roland at utk.edu> wrote:
>>>>>
>>>>>> Hi,
>>>>>>
>>>>>> do:
>>>>>> file `which mdrun`
>>>>>> and it should give:
>>>>>> /usr/bin/mdrun: ELF 64-bit LSB executable, x86-64, version 1 (SYSV),
>>>>>> dynamically linked (uses shared libs), for GNU/Linux 2.6.15, stripped
>>>>>>
>>>>>> If it is not 64-bit you need to compile in 64-bit and have a 64-bit
>>>>>> kernel. Since you asked before about files larger than 2 GB, this
>>>>>> might indeed be your problem.
>>>>>>
>>>>>> Roland
>>>>>>
>>>>>> On Wed, Mar 3, 2010 at 12:48 AM, Amit Choubey <kgp.amit at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Tsjerk,
>>>>>>>
>>>>>>> I tried to do a test run based on the presentation, but there was a
>>>>>>> memory-related error (I had allowed a margin of more than 2 GB).
>>>>>>>
>>>>>>> I did not understand the 64-bit issue; could you let me know where
>>>>>>> the documentation is? I need to look into that.
>>>>>>>
>>>>>>> Thank you,
>>>>>>> amit
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Mar 2, 2010 at 9:14 PM, Tsjerk Wassenaar <tsjerkw at gmail.com> wrote:
>>>>>>>
>>>>>>>> Hi Amit,
>>>>>>>>
>>>>>>>> I think the presentation gives exactly what you want: a rough
>>>>>>>> estimate. Now, as Berk pointed out, to allocate more than 2 GB of
>>>>>>>> memory you need to compile in 64-bit. Then, if you want a real feel
>>>>>>>> for the memory usage, there's no other way than trying. But
>>>>>>>> fortunately, the memory requirements of a (very) long simulation are
>>>>>>>> equal to those of a very short one, so it doesn't need to cost much
>>>>>>>> time.
>>>>>>>>
>>>>>>>> Cheers,
>>>>>>>>
>>>>>>>> Tsjerk
>>>>>>>>
>>>>>>>> On Wed, Mar 3, 2010 at 5:31 AM, Amit Choubey <kgp.amit at gmail.com> wrote:
>>>>>>>> > Hi Mark,
>>>>>>>> >
>>>>>>>> > Yes, that's one way to go about it. But it would have been great
>>>>>>>> > if I could get a rough estimate.
>>>>>>>> >
>>>>>>>> > Thank you.
>>>>>>>> >
>>>>>>>> > amit
>>>>>>>> >
>>>>>>>> >
>>>>>>>> > On Tue, Mar 2, 2010 at 8:06 PM, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
>>>>>>>> >>
>>>>>>>> >> On 3/03/2010 12:53 PM, Amit Choubey wrote:
>>>>>>>> >>>
>>>>>>>> >>> Hi Mark,
>>>>>>>> >>>
>>>>>>>> >>> I quoted the memory usage requirements from a presentation
>>>>>>>> >>> by Berk Hess. Following is the link to it:
>>>>>>>> >>>
>>>>>>>> >>> http://www.csc.fi/english/research/sciences/chemistry/courses/cg-2009/berk_csc.pdf
>>>>>>>> >>>
>>>>>>>> >>> In that presentation, on pp. 27-28, Berk does talk about
>>>>>>>> >>> memory usage, but I am not sure whether he referred to any
>>>>>>>> >>> other specific thing.
>>>>>>>> >>>
>>>>>>>> >>> My system contains only SPC water. I want Berendsen T
>>>>>>>> >>> coupling and Coulomb interaction with reaction field.
>>>>>>>> >>>
>>>>>>>> >>> I just want a rough estimate of how big a water system can
>>>>>>>> >>> be simulated on our supercomputers.
>>>>>>>> >>
>>>>>>>> >> Try increasingly large systems until it runs out of memory.
>>>>>>>> >> There's your answer.
>>>>>>>> >>
>>>>>>>> >> Mark
>>>>>>>> >>
>>>>>>>> >>> On Fri, Feb 26, 2010 at 3:56 PM, Mark Abraham <mark.abraham at anu.edu.au> wrote:
>>>>>>>> >>>
>>>>>>>> >>> ----- Original Message -----
>>>>>>>> >>> From: Amit Choubey <kgp.amit at gmail.com>
>>>>>>>> >>> Date: Saturday, February 27, 2010 10:17
>>>>>>>> >>> Subject: Re: [gmx-users] gromacs memory usage
>>>>>>>> >>> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
>>>>>>>> >>>
>>>>>>>> >>> > Hi Mark,
>>>>>>>> >>> > We have a few nodes with 64 GB memory and many others with 16
>>>>>>>> >>> > GB of memory. I am attempting a simulation of around 100 M atoms.
>>>>>>>> >>>
>>>>>>>> >>> Well, try some smaller systems and work upwards to see if
>>>>>>>> >>> you have a limit in practice. 50K atoms can be run in less
>>>>>>>> >>> than 32 GB over 64 processors. You didn't say whether your
>>>>>>>> >>> simulation system can run on 1 processor... if it does, then
>>>>>>>> >>> you can be sure the problem really is related to parallelism.
>>>>>>>> >>>
>>>>>>>> >>> > I did find some document which says one needs
>>>>>>>> >>> > (50bytes)*NATOMS on master node; also one needs
>>>>>>>> >>> > (100+4*(no. of atoms in cutoff)*(NATOMS/nprocs) for compute
>>>>>>>> >>> > nodes. Is this true?
>>>>>>>> >>>
>>>>>>>> >>> In general, no. It will vary with the simulation algorithm
>>>>>>>> >>> you're using. Quoting such without attributing the source or
>>>>>>>> >>> describing the context is next to useless. You also dropped a
>>>>>>>> >>> parenthesis.
>>>>>>>> >>>
>>>>>>>> >>> Mark
>>>>>>>> >>> --
>>>>>>>> >>> gmx-users mailing list    gmx-users at gromacs.org
>>>>>>>> >>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>>>>>>>> >>> Please search the archive at http://www.gromacs.org/search
>>>>>>>> >>> before posting!
>>>>>>>> >>> Please don't post (un)subscribe requests to the list. Use the
>>>>>>>> >>> www interface or send it to gmx-users-request at gromacs.org.
>>>>>>>> >>> Can't post? Read http://www.gromacs.org/mailing_lists/users.php
>>>>>>>> >>>
>>>>>>>> >>>
>>>>>>>> >
>>>>>>>> >
>>>>>>>> >
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Tsjerk A. Wassenaar, Ph.D.
>>>>>>>>
>>>>>>>> Computational Chemist
>>>>>>>> Medicinal Chemist
>>>>>>>> Neuropharmacologist
>>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
>>>>>> 865-241-1537, ORNL PO BOX 2008 MS6309
>>>>>>
>>>>>>
>>>>>
>>>>>
>>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>
>>>
>>>
>>
>>
>>
>>
>>
>
>
>