[gmx-developers] Regd. Electrostatic calc. time

Anjan Raghunathan araghuna at uiuc.edu
Sun Aug 10 02:59:49 CEST 2003


Hi,
   Thanks for the observation. No, I haven't tested a parallel implementation yet; I shall try it on a larger system, though.
   One more question: are the f and phi variables in force.c (the force and the potential at the particles) in gromacs units, or are they converted later when written to the output?
   The reason I ask is that the calculated potential energy in the output suddenly increases, a complaint that water cannot be settled appears, and the simulation crashes (though the same simulation with the normal PME implementation works fine). Also, is the 'phi' variable the potential at the particle locations, or the potential energy?
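   To make the question concrete, here is a minimal sketch of the relation I have in mind, assuming phi[i] is indeed the electrostatic potential at particle i in gromacs units (kJ mol^-1 e^-1) and the charges are in units of e. The function and variable names below are mine, not the ones in force.c:

#include <stdio.h>

/* Illustration only, not gromacs code: if phi[i] is the potential AT
 * particle i (kJ mol^-1 e^-1), the total electrostatic potential energy
 * (kJ mol^-1) is half the sum of q[i]*phi[i]; the factor 1/2 corrects
 * for counting every pair twice. */
static double epot_from_phi(int n, const double q[], const double phi[])
{
    double epot = 0.0;
    int    i;

    for (i = 0; i < n; i++)
    {
        epot += q[i]*phi[i];
    }
    return 0.5*epot;
}

int main(void)
{
    /* +1e and -1e charges 1 nm apart: phi at each particle is
     * -/+ 138.935 kJ mol^-1 e^-1 (gromacs electric conversion factor
     * 138.935 kJ mol^-1 nm e^-2), so the expected energy is
     * -138.935 kJ/mol. */
    double q[]   = {  1.0,     -1.0    };
    double phi[] = { -138.935, 138.935 };

    printf("Epot = %g kJ/mol\n", epot_from_phi(2, q, phi));
    return 0;
}

If phi instead already holds per-particle potential energies, or is converted to different units only when written to the output, that would of course change the bookkeeping above.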

Anjan


---- Original message ----
>Date: 08 Aug 2003 09:13:56 +0200
>From: David <spoel at xray.bmc.uu.se>  
>Subject: Re: [gmx-developers] Regd. Electrostatic calc. time  
>To: gmx-developers at gromacs.org
>
>On Fri, 2003-08-08 at 04:30, Anjan Raghunathan wrote:
>> Hi,
>>    I'd like to know whether any particular optimisation flags or code tricks are involved in the electrostatics calculation in gromacs, particularly in PME.
>>    The reason I ask is that I have written a patch that can use the Fast Multipole Method to compute the electrostatic interactions. Although FMM is theoretically supposed to compute the electrostatics faster than PME, it turns out to be nearly 15 times slower than PME (for approx. 10^4 particles). Any suggestions on how to optimise this for gromacs, either at compile time (some flags) or by reusing some optimised code from gromacs?
>> 
>> thanks,
>> Anjan
>This is great news! I've been meaning to look at it for a long while.
>There is, however, some indication that the crossover point lies a bit
>higher (around 100,000 atoms; see:
>@Article{Petersen95,
>  author  = {H. G. Petersen},
>  title   = {Accuracy and efficiency of the particle mesh {E}wald method},
>  journal = {J. Chem. Phys.},
>  year    = 1995,
>  volume  = 103,
>  pages   = {3668-3679}
>}
>)
>
>
>As far as I know there is nothing special in PME, apart from a few
>assembly tricks for truncating a real to an integer. Furthermore, the
>performance of the FFT is important.
>
>However, FMM is also supposed to scale much better in parallel, so do
>you have a parallel implementation and have you tested that?
>
>
>
>-- 
>Groeten, David.
>________________________________________________________________________
>Dr. David van der Spoel, 	Dept. of Cell and Molecular Biology
>Husargatan 3, Box 596,  	75124 Uppsala, Sweden
>phone:	46 18 471 4205		fax: 46 18 511 755
>spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
>++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
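P.S. Regarding the real-to-integer truncation mentioned above: below is a minimal illustration of the kind of trick I understand is meant (my own sketch, not code taken from the gromacs source), comparing a plain cast with C99 lrintf():

#include <math.h>
#include <stdio.h>

/* A plain C cast truncates toward zero; on many platforms it forces the
 * FPU rounding mode to be changed back and forth, which is slow inside
 * an inner loop. */
int real_to_int_cast(float x)
{
    return (int) x;
}

/* lrintf() (C99) rounds using the current rounding mode and usually maps
 * to a single instruction; note it rounds to nearest instead of
 * truncating, so the two are not interchangeable in all cases. */
int real_to_int_lrint(float x)
{
    return (int) lrintf(x);
}

int main(void)
{
    float x = 2.7f;

    printf("cast: %d  lrintf: %d\n", real_to_int_cast(x), real_to_int_lrint(x));
    return 0;
}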
_______________________________________

Anjan V. Raghunathan
Graduate Research Assistant
Computational Electronics Group
3213 Beckman Institute,
Univ. of Illinois at Urbana-Champaign.

E-mail: araghuna at uiuc.edu
Off: 217-244-1964
Res: 217-355-5418


