[gmx-users] Too many LINCS warning
vivek sharma
viveksharma.iitb at gmail.com
Fri Sep 26 15:33:07 CEST 2008
Hi Justin,
Thanks for your reply. I am using GROMACS 3.3.3 compiled with HP-MPI.
My system has 45999 atoms in total, of which the protein accounts for 2627 atoms.
So, according to your rule of thumb, it should scale up to roughly 45999/2627, i.e. about
17-18 CPUs, and that is about what I see in my tests. The problem is that the run takes
around 28 hours for 5 ns. If I am interested in longer runs, on the microsecond scale
(please correct me if a microsecond timescale does not make sense for MD simulation),
that will not be feasible with GROMACS.
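At the current rate that works out, assuming the cost simply scales linearly with
simulated time, to roughly:

    28 h / 5 ns = 5.6 h per ns  ->  ~5600 h (about 230 days) for 1 microsecond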
Please also advise on the following: if I use a bigger molecule, say four times the size
of the one I am using now (around 10000 atoms), will the scaling get even worse, and if I
take a bigger box and add more water, will it scale better?
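By the same (total atoms)/(protein atoms) rule of thumb I would estimate, for example
(the 180000-atom box below is only an illustrative figure, not a system I have built):

    ~10000-atom protein in a  46000-atom box  ->  ~4-5 CPUs
    ~10000-atom protein in a 180000-atom box  ->  ~18 CPUs

Is that the right way to think about it?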
Please suggest...
with thanks,
Vivek
2008/9/26 Justin A. Lemkul <jalemkul at vt.edu>
>
>
> vivek sharma wrote:
>
>> Hi There,
>> I am trying to scale my system (about 45000 atoms: one protein molecule in a
>> water box) to run on a larger number of processors.
>> I have asked a number of related queries before, but now I am getting the
>> warning pasted below:
>>
>> Fatal error:
>> Too many LINCS warnings (11587) - aborting to avoid logfile runaway.
>> This normally happens when your system is not sufficiently equilibrated, or
>> if you are changing lambda too fast in free energy simulations.
>> If you know what you are doing you can adjust the lincs warning threshold
>> in your mdp file, but normally it is better to fix the problem.
>>
>>
>>
>> The same run works fine on 20 processors, but I got the error pasted above
>> when I attempted the same problem on 40 processors, and it was followed by
>> the writing of the intermediate step.pdb files.
>>
>> Can anybody suggest how I should tackle this problem, and what other options
>> I could try in this scenario?
>>
>
> I have seen this too, when I try to use too many processors. I don't know
> the reason. Did you adhere to my previous advice (regarding Gromacs 3.3.x,
> you still haven't told us which version of Gromacs you're using):
>
> Max # of CPU = (total atoms)/(protein atoms)?
>
> The goal is to keep the load evenly distributed over the nodes. The above is
> not necessarily required under 4.0rc1, with the advent of P-LINCS.
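>
> If you really do need to raise the warning threshold the error message
> mentions, I believe the relevant .mdp parameter is lincs_warnangle (it
> defaults to 30 degrees), e.g.:
>
>   lincs_warnangle = 30   ; default, in degrees; raising it only hides the symptom
>
> but as the message says, it is normally better to fix the underlying problem.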
>
> -Justin
>
>
>> With Thanks,
>> Vivek
>>
>>
>
> --
> ========================================
>
> Justin A. Lemkul
> Graduate Research Assistant
> Department of Biochemistry
> Virginia Tech
> Blacksburg, VA
> jalemkul[at]vt.edu | (540) 231-9080
> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>
> ========================================