[gmx-developers] gmx_d nmeig command Not enough memory

Berk Hess hess at kth.se
Tue Jul 5 14:46:54 CEST 2016


Yes, it will still take very long.

Supporting more than 2^31 total eigenvector elements requires code 
changes through all our matrix and I/O routines. I don't think this is 
worth the effort with the current diagonalization times. If we add 
OpenMP parallelization, we can reconsider doing this.
For now I will add checks that ensure we don't ask for more than 2^31 
matrix or eigenvector elements.
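
Roughly along these lines (a sketch only, not the final patch; the
helper name is illustrative):

    #include <inttypes.h>
    #include <stdint.h>
    #include "gromacs/utility/fatalerror.h"

    /* Refuse requests whose total eigenvector element count cannot be
     * held in a 32-bit signed integer, which is what the allocation
     * and I/O paths currently assume. */
    static void check_eigvec_elements(int nrow, int first, int last)
    {
        int64_t nelem = (int64_t)nrow * (int64_t)(last - first + 1);

        if (nelem > INT32_MAX)
        {
            gmx_fatal(FARGS,
                      "%" PRId64 " eigenvector elements requested, but at "
                      "most 2^31-1 are supported; reduce the -first/-last "
                      "range", nelem);
        }
    }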

Cheers,

Berk

On 2016-07-05 14:36, Adrien Nicolaï wrote:
> Dear Berk,
>
> Yes, I changed the default value from 50 to 3N because I need all the 
> modes for further analyses. And it looks like it’s still taking 
> “forever” to diagonalise even the first 50 modes...
>
> Best,
>
> ************************************************************
> Adrien Nicolaï / Maître de conférences
> Laboratoire Interdisciplinaire Carnot de Bourgogne, UMR 6303 CNRS
> Département NANO - Equipe Physique appliquée aux protéines
> Université de Bourgogne Franche-Comté / Faculté des Sciences et 
> Techniques Mirande
> 9, Av. Savary - B.P. 47 870 21078. DIJON CEDEX - France
> Email : adrien.nicolai at u-bourgogne.fr
> Tél: 03 80 39 60 93
> URL : 
> https://icb.u-bourgogne.fr/fr/axes-scientifiques/nano/physique-appliquee-aux-proteines.html
> ************************************************************
>
>> On 05 Jul 2016, at 14:26, Berk Hess <hess at kth.se> wrote:
>>
>> This is using the sparse matrix code. The default value for -last 
>> (end) is 50, which should work. I assume you increased it.
>>
>> We should fix the code by casting the column and row size to size_t 
>> before multiplying them.
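>>
>> Roughly like this (an untested sketch; the function and variable 
>> names are illustrative):
>>
>>     #include "gromacs/utility/smalloc.h"
>>
>>     /* In 32-bit int arithmetic, 63225 * 63225 = 3,997,400,625 wraps
>>      * to -297,566,671, which is exactly the negative element count
>>      * in the calloc error below. Promoting both factors to size_t
>>      * before multiplying keeps the product exact. */
>>     static double *alloc_eigenvectors(int nrow, int ncol)
>>     {
>>         double *ev;
>>
>>         snew(ev, (size_t)nrow * (size_t)ncol);
>>         return ev;
>>     }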
>>
>> Cheers,
>>
>> Berk
>>
>> On 2016-07-05 14:21, Erik Lindahl wrote:
>>> Seems like a bug - we should use a size_t to calculate the size of 
>>> the matrix storage required.
>>>
>>> However, before you get your hopes up: the reason this happens is 
>>> that you are asking the code to do a full diagonalization of a 
>>> 63225^2 matrix. That will take at least ~100 GB of memory and 
>>> probably several months of compute time. I would recommend using 
>>> cutoff-based electrostatics/VdW instead, to enable sparse-matrix 
>>> diagonalization.
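>>>
>>> (Back-of-the-envelope, for the size: 63225^2 = 3,997,400,625 
>>> elements at 8 bytes per double is ~32 GB for one dense matrix, so 
>>> the Hessian plus a full eigenvector matrix alone is ~64 GB before 
>>> any LAPACK workspace is counted.)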
>>>
>>> Cheers,
>>>
>>> Erik
>>>
>>>
>>>
>>>
>>>> On 05 Jul 2016, at 14:02, Adrien Nicolaï 
>>>> <adrien.nicolai at u-bourgogne.fr> wrote:
>>>>
>>>> Dear GROMACS developers,
>>>>
>>>> I’m performing a normal mode analysis using GROMACS 5.1. The 
>>>> system I study is a protein dimer surrounded by water molecules 
>>>> and comprises 21075 atoms (corresponding to 63225 modes). After a 
>>>> proper minimisation using the L-BFGS algorithm and a normal mode 
>>>> calculation in double precision, I tried to diagonalise the entire 
>>>> Hessian matrix using the nmeig command.
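>>>>
>>>> For reference, the sequence of commands was along these lines 
>>>> (file names are placeholders):
>>>>
>>>>     gmx_d grompp -f nm.mdp -c minimized.gro -p topol.top -o nm.tpr
>>>>     gmx_d mdrun -s nm.tpr -mtx hessian.mtx
>>>>     gmx_d nmeig -f hessian.mtx -s nm.tpr -first 1 -last 63225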
>>>>
>>>> Running gmx_d nmeig produces the following error:
>>>>
>>>> Reading double precision matrix generated by GROMACS VERSION 5.1
>>>>
>>>> -------------------------------------------------------
>>>> Program gmx nmeig, VERSION 5.1
>>>> Source code file: /tmp3/gromacs-5.1-20150825/gromacs-5.1/src/gromacs/utility/smalloc.c, line: 182
>>>>
>>>> Fatal error:
>>>> Not enough memory. Failed to calloc -297566671 elements of size 8 for eigenvectors
>>>> (called from file /tmp3/gromacs-5.1-20150825/gromacs-5.1/src/gromacs/gmxana/gmx_nmeig.c, line 401)
>>>> For more information and tips for troubleshooting, please check the GROMACS
>>>> website at http://www.gromacs.org/Documentation/Errors
>>>> -------------------------------------------------------
>>>> : Cannot allocate memory
>>>> Using begin = 1 and end = 63225
>>>> Sparse matrix storage format, nrow=63225, ncols=63225
>>>> Starter(339849): Return code=1
>>>> Starter end(339849)
>>>>
>>>> Could you help me with the meaning of this error? Is my system too 
>>>> big for NMA using GROMACS 5.1?
>>>>
>>>> Thank you in advance for your help.
>>>>
>>>> Best regards,
>>>>
>>>> Adrien Nicolaï
