[gmx-users] Re: Re: Can't allocate memory problem

Szilárd Páll pall.szilard at gmail.com
Fri Jul 18 23:02:32 CEST 2014


On Fri, Jul 18, 2014 at 8:54 PM, Yunlong Liu <yliu120 at jhmi.edu> wrote:
> Hi Szilard,
>
> Thank you for your comments.
> I really learned a lot from that. Could you please explain a bit more about the -nb gpu_cpu option?

It's a command line option that tells mdrun to use the hybrid mode in
which the local non-bonded interactions are calculated on the GPU and
the non-local (communicated) ones on the CPU. Unfortunately, this is
not very flexible, as the local/non-local ratio depends on the domain
decomposition, but you may get lucky and achieve a convenient split of
the workload that avoids both CPU wait and GPU idling.
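
For example, taking your current command line as a starting point, the
hybrid mode could be requested along these lines (a sketch only; whether
it actually helps depends on how the domain decomposition splits the
local/non-local work):

  ibrun mdrun_mpi_gpu -pin on -ntomp 8 -deffnm pi3k-wt-1 -gpu_id 00 -nb gpu_cpu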

> As far as I know, a Stampede node contains 16 Intel Xeon cores with only one Tesla K20m GPU, but you mentioned two Xeon CPUs. I am a little confused about this.

I wrote "two Xeon E5 2680-s", meaning two 8-core/16-thread E5 2680
CPUs (http://goo.gl/amcQG3) in each node.

Cheers,
--
Szilárd

> Yunlong
>
> ________________________________________
> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Szilárd Páll <pall.szilard at gmail.com>
> Sent: 19 July 2014 2:41
> To: Discussion list for GROMACS users
> Subject: Re: [gmx-users] Re: Can't allocate memory problem
>
> On Fri, Jul 18, 2014 at 7:31 PM, Yunlong Liu <yliu120 at jhmi.edu> wrote:
>> Hi,
>>
>> Thank you for your reply.
>> I am actually not doing anything unusual, just a common MD simulation of a protein. My system contains ~250000 atoms, more or less depending on how many water molecules I put in it.
>>
>> The way I called mdrun is
>> ibrun mdrun_mpi_gpu -pin on -ntomp 8 -deffnm pi3k-wt-1 -gpu_id 00
>>
>> I pinned 8 threads to each MPI task (this is the optimal way to run simulations on Stampede).
>
> FYI: That can't be universally true. The best run configuration will
> always depend at least on the machine characteristics and on the
> parallelization capabilities and behavior of the software/algorithms
> used, and often on the settings/size of the input too (especially as
> different types of runs may use different algorithms).
>
> More concretely, GROMACS will not always perform best with 8
> threads/rank - even though that's the number of cores/socket on
> Stampede. My guess is that you'll be better off with 2-4 threads per
> rank.
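>
> As a sketch only (how to place four MPI tasks per node with
> SLURM/ibrun is an assumption here, not something covered in this
> thread): with 4 ranks per node and 4 OpenMP threads each, all sharing
> the single K20, the command could look roughly like
>
>   ibrun mdrun_mpi_gpu -pin on -ntomp 4 -gpu_id 0000 -deffnm pi3k-wt-1
>
> where the four "0" digits of -gpu_id map each of the four PP ranks on
> a node to GPU 0.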
>
> One thing you may have noticed is that the single K20 that Stampede's
> visualization nodes seem to have (based on http://goo.gl/9fG7Vd) will
> probably not be enough to keep up with two Xeon E5 2680-s, so a
> considerable amount of runtime will be lost as the CPU idles while
> waiting for the GPU to complete the non-bonded calculation. You may
> want to give the "-nb gpu_cpu" option a try.
>
> Cheers,
> --
> Szilárd
>
>> It has been a problem with other systems, like lysozyme, as well. But my system is a little unusual, and I don't really understand what is unusual about it.
>>
>> The systems run fine when I use only the CPU, but as soon as I turn on the GPU, the simulation frequently fails. One of my guesses is that the GPU is more sensitive in dealing with the non-bonded interactions.
>>
>> Thank you.
>> Yunlong
>>
>> ________________________________________
>> From: gromacs.org_gmx-users-bounces at maillist.sys.kth.se <gromacs.org_gmx-users-bounces at maillist.sys.kth.se> on behalf of Mark Abraham <mark.j.abraham at gmail.com>
>> Sent: 18 July 2014 23:52
>> To: Discussion list for GROMACS users
>> Subject: Re: [gmx-users] Can't allocate memory problem
>>
>> Hi,
>>
>> That's highly unusual, and suggests you are doing something highly unusual,
>> like trying to run on huge numbers of threads, or very large numbers of
>> bonded interactions. How are you setting up to call mdrun, and what is in
>> your tpr?
>>
>> Mark
>> On Jul 17, 2014 10:13 PM, "Yunlong Liu" <yliu120 at jhmi.edu> wrote:
>>
>>> Hi,
>>>
>>>
>>> I am currently experiencing a "Can't allocate memory" problem with Gromacs
>>> 4.6.5 with GPU acceleration.
>>>
>>> I am running my simulations on the Stampede/TACC supercomputer in their
>>> GPU queue. My first experience was that once the simulation length exceeded
>>> 10 ns, the run started to throw the "Can't allocate memory" error, as
>>> follows:
>>>
>>>
>>> Fatal error:
>>> Not enough memory. Failed to realloc 1403808 bytes for f_t->f,
>>> f_t->f=0xa912a010
>>> (called from file
>>> /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
>>> line 3840)
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> -------------------------------------------------------
>>>
>>> "These Gromacs Guys Really Rock" (P.J. Meulenhoff)
>>> : Cannot allocate memory
>>> Error on node 0, will try to stop all the nodes
>>> Halting parallel program mdrun_mpi_gpu on CPU 0 out of 4
>>>
>>> -------------------------------------------------------
>>> Program mdrun_mpi_gpu, VERSION 4.6.5
>>> Source code file:
>>> /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/smalloc.c,
>>> line: 241
>>>
>>> Fatal error:
>>> Not enough memory. Failed to realloc 1403808 bytes for f_t->f,
>>> f_t->f=0xaa516e90
>>> (called from file
>>> /admin/build/admin/rpms/stampede/BUILD/gromacs-4.6.5/src/gmxlib/bondfree.c,
>>> line 3840)
>>> For more information and tips for troubleshooting, please check the GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> -------------------------------------------------------
>>>
>>> Recently, this error occurs even when I run a short NVT equilibration. The
>>> problem also exists when I use Gromacs 5.0 with GPU acceleration. I looked
>>> up the GROMACS errors website to check the possible reasons, but it seems
>>> that none of them fits this situation: I use a very good machine (Stampede),
>>> I run short simulations, and I know GROMACS uses nanometers as units. I have
>>> tried all the solutions I could think of, but the problem has only become
>>> more severe.
>>>
>>> Does anybody have an idea how to solve this issue?
>>>
>>> Thank you.
>>>
>>> Yunlong
>>>
>>> Davis Yunlong Liu
>>>
>>> BCMB - Second Year PhD Candidate
>>>
>>> School of Medicine
>>>
>>> The Johns Hopkins University
>>>
>>> E-mail: yliu120 at jhmi.edu


More information about the gromacs.org_gmx-users mailing list