[gmx-users] REMD

David van der Spoel spoel at xray.bmc.uu.se
Thu Apr 13 16:13:26 CEST 2006


Dongsheng Zhang wrote:
> Dear David,
> 
> I forgot to mention one more piece of information. The computer
> administrator told me I did not need to use mpirun. As I told you in the
> previous email, I have done tests of parallel computing, and it worked fine.
> 
> 
Try it with mpirun anyway.

I suspect you are just running a single process, which crashes as soon as
it needs to do REMD communication.
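
For example, with two replicas the whole job could be launched along these
lines (a sketch only; the exact launcher name and its flags depend on your
MPI installation, and mdrun_mpi still takes its own -np):

  mpirun -np 2 mdrun_mpi -np 2 -multi -replex 500 -s replica -deffnm replica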


> Dongsheng
> 
> On Thu, 2006-04-13 at 09:52 +0200, David van der Spoel wrote:
>> Dongsheng Zhang wrote:
>>> Dear Mark,
>>>
>>> Thank you very much for your prompt reply. I tried parallel computing,
>>> and it works fine. For example:
>>>              grompp -f -c -p -o replica0 -np 2 -sort -shuffle
>>>              (to get replica0.tpr), then
>>>              mdrun_mpi -np 2 -s replica0.tpr
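>>> (For reference, with explicit input files that test would read roughly
>>> as follows; the md.mdp/conf.gro/topol.top names are placeholders only:
>>>              grompp -f md.mdp -c conf.gro -p topol.top -o replica0 -np 2 -sort -shuffle
>>>              mdrun_mpi -np 2 -s replica0.tpr )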
>>>
>> Please give the EXACT command line, and do use mpirun.
>>
>>
>>> The error message mentioned in the previous email looks very strange to
>>> me. MPI works fine, and the individual tpr files run fine. The error
>>> message comes out before any replica exchange: replica0 stops at step
>>> 500, replica1 stops at step 400, and even the output information isn't
>>> completed. The last lines in replica1.log are
>>>
>>>         Step           Time         Lambda
>>>             400        0.80000        0.00000
>>>
>>>    Rel. Constraint Deviation:  Max    between atoms     RMS
>>>
>>>
>>> I hope this further information helps you figure out what the problem
>>> is.
>>>
>>> Best Wishes!
>>>
>>> Dongsheng
>>>
>>>
>>> On Thu, 2006-04-13 at 15:17 +1000, Mark Abraham wrote: 
>>>>> Dear gmx users:
>>>>>
>>>>> I am trying to run REMD with two replicas (for testing). I used
>>>>> grompp -f -c -p -o replica0 to get replica0.tpr
>>>>> grompp -f -c -p -o replica1 to get replica1.tpr
>>>>>
>>>>> Then I used
>>>>> mdrun_mpi -np 2 -multi -replex 500 -reseed -1 -s replica -deffnm replica
>>>>> -v -N 2
>>>>> to run it.
mdrun doesn't take '-N 2', but I'm not sure that is the problem.
Otherwise it looks fine to me.
>>>>
>>>>> I got an error message "Segmentation fault" from my script output, but
>>>>> no error message in either log file. When I tried to run an individual
>>>>> tpr file, it worked fine.
>>>>>
>>>>> Could someone comment on why I got the "Segmentation fault"? Thank you
>>>>> for your help!
>>>> Your MPI setup might require you to use a command like "mpirun -N 2
>>>> mdrun_mpi ..." to make it work; the segfault might be a GROMACS-MPI
>>>> interaction problem. If you can run an MPI process from the command
>>>> line, you may get more helpful feedback.
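>>>>
>>>> A minimal sketch of the full invocation, assuming an mpirun that takes
>>>> -np (some MPI setups use -N instead) and dropping the unsupported -N 2:
>>>>
>>>>   mpirun -np 2 mdrun_mpi -np 2 -multi -replex 500 -reseed -1 \
>>>>          -s replica -deffnm replica -v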
>>>>
>>>> Mark
>>>>
>>


-- 
David.
________________________________________________________________________
David van der Spoel, PhD, Assoc. Prof., Molecular Biophysics group,
Dept. of Cell and Molecular Biology, Uppsala University.
Husargatan 3, Box 596,  	75124 Uppsala, Sweden
phone:	46 18 471 4205		fax: 46 18 511 755
spoel at xray.bmc.uu.se	spoel at gromacs.org   http://folding.bmc.uu.se
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++


