[gmx-developers] neighbor list, Monte Carlo

David spoel at xray.bmc.uu.se
Thu Sep 11 08:31:18 CEST 2003


On Thu, 2003-09-11 at 00:48, Jason DeJoannis wrote:
> Hi, 
>  
> Let's say we wish to add a Monte Carlo move into mdrun. 
> It is simpler to consider only the class of fixed-topology moves. 
> That is to say, only change the position of one or more atoms. 
> This should be easy; we just have to make sure the neighbor list 
> gets updated afterwards. 
>  
> Here is my first effort. I have added the following lines just 
> before do_force() is called. 
>  
>       if (MASTER(cr)) 
>         if (step%777 == 0) { 
>           printf("\nYippie! JASON WAS HERE on Step %d\n\n",step); 
>           do_swap(nsb->natoms,x); 
>           bNS = TRUE; 
>         } 
>  
> where do_swap() swaps the positions of two randomly selected atoms. 
> I hacked it into md.c as follows: 
>  
> static void do_swap(int natom, rvec x[]) 
> { 
>   static bool bFirst = TRUE; 
>   static bool bDebug = TRUE; 
>   static int swap_seed; 
>   int i,j; 
>   rvec temp; 
>  
>   if (bFirst) { 
>     swap_seed = make_seed(); 
>     if (bDebug) 
>       printf("Initial do_swap seed: %d\n",swap_seed); 
>     bFirst = FALSE; 
>   } 
>  
>   i = (int)(natom*rando(&swap_seed)); 
>   j = (int)(natom*rando(&swap_seed)); 
>  
>   if (bDebug) 
>     printf("Swap atoms: %d %d\n",i,j); 
>  
>   copy_rvec(x[i],temp); 
>   copy_rvec(x[j],x[i]); 
>   copy_rvec(temp,x[j]); 
> } 
>  
> By setting bNS = TRUE I am hoping to force a complete 
> list regeneration immediately after the swap move. 

First, you have to know that not all processors have all coordinates,
which means that when you modify coordinates on one processor you may
not have done so on the others. In more detail: the coordinate array is
not shared but has to be communicated explicitly; alternatively, you
have to swap exactly the same atoms on every processor, by running the
random number generator with the same seed on all of them.
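
The toy program below illustrates the shared-seed idea; it is only a
sketch, not GROMACS code (rand_r() stands in for rando(), plain doubles
for real, and the names are made up), but after the MPI_Bcast every
node draws the same pair of indices and therefore makes the identical
swap:

/* Toy illustration, not GROMACS code: node 0 picks a seed and
 * broadcasts it, so every node generates the same random pair
 * and performs the same swap.  Compile with mpicc, run with
 * mpirun -np 2. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <mpi.h>

typedef double rvec[3];

static void swap_atoms(int node, int natom, rvec x[], unsigned int *seed)
{
  int    i = rand_r(seed) % natom;
  int    j = rand_r(seed) % natom;
  int    d;
  double tmp;

  for (d = 0; d < 3; d++) {
    tmp = x[i][d];  x[i][d] = x[j][d];  x[j][d] = tmp;
  }
  printf("node %d: swapped atoms %d and %d\n", node, i, j);
}

int main(int argc, char *argv[])
{
  rvec         x[10];
  unsigned int seed = 0;
  int          rank, k;

  MPI_Init(&argc, &argv);
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank == 0)
    seed = (unsigned int) time(NULL);
  /* the crucial step: every node ends up with the master's seed */
  MPI_Bcast(&seed, 1, MPI_UNSIGNED, 0, MPI_COMM_WORLD);

  for (k = 0; k < 10; k++) {
    x[k][0] = k;  x[k][1] = 0;  x[k][2] = 0;
  }
  swap_atoms(rank, 10, x, &seed);

  MPI_Finalize();
  return 0;
}

Applied to your patch that would mean broadcasting the swap seed from
the master once, before the MD loop, and then calling do_swap() on all
nodes rather than only inside MASTER(cr).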

To test it, print the coordinates before the swap, after the swap, and
after the update step on all processors, and compare what the different
nodes report.
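
Something along these lines should do; it is just a sketch, and it
assumes the rvec type from typedefs.h and that cr->nodeid is what
identifies the node in your version of md.c:

/* Sketch only: dump all coordinates with a label and the node id,
 * so that the output of the nodes can be diffed before the swap,
 * after the swap and after update(). */
#include <stdio.h>
#include "typedefs.h"

static void dump_x(FILE *fp, int nodeid, const char *label,
                   int natoms, rvec x[])
{
  int i;

  for (i = 0; (i < natoms); i++)
    fprintf(fp, "node %d %-12s atom %5d  %10.5f %10.5f %10.5f\n",
            nodeid, label, i, x[i][XX], x[i][YY], x[i][ZZ]);
  fflush(fp);
}

/* e.g. in md.c:
 *   dump_x(stderr, cr->nodeid, "before-swap", nsb->natoms, x);
 *   do_swap(nsb->natoms, x);
 *   dump_x(stderr, cr->nodeid, "after-swap",  nsb->natoms, x);
 */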

> To test this I am using an Argon/Krypton mixture. It always 
> works on one processor and it fails just after the swap about  
> half the time on two processors. Here is a copy of the crash  
> message. 
>  
> MPI_Wait: message truncated (rank 1, MPI_COMM_WORLD) 
> Rank (1, MPI_COMM_WORLD): Call stack within LAM: 
> Rank (1, MPI_COMM_WORLD):  - MPI_Wait() 
> Rank (1, MPI_COMM_WORLD):  - main() 
> ----------------------------------------------------------------------------- 
>  
> One of the processes started by mpirun has exited with a nonzero exit 
> code.  This typically indicates that the process finished in error. 
> If your process did not finish in error, be sure to include a "return 
> 0" or "exit(0)" in your C code before exiting the application. 
>  
> PID 5095 failed on node n0 with exit status 1. 
> ----------------------------------------------------------------------------- 
> make: *** [run] Error 1 
>  
> 
> Maybe it crashes when the atoms are not in the same slab. 
> Could someone describe the structure of the neighbor search 
> routines to me? I have read everything in the manual about  
> them.  
>  
>   Thanks, 
>  
> --- 
> Jason de Joannis, Ph.D. 
> Chemistry Department, Emory University 
> 1515 Pierce Dr. NE, Atlanta, GA 30322 
> Phone: (404) 712-2983 
> Email: jdejoan at emory.edu 
> http://userwww.service.emory.edu/~jdejoan
-- 
Groeten, David.
________________________________________________________________________
Dr. David van der Spoel, 	Dept. of Cell and Molecular Biology
Husargatan 3, Box 596,  	75124 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://xray.bmc.uu.se/~spoel
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++



