[gmx-users] Parallel running of mdrun

Dallas B. Warren Dallas.Warren at pharm.monash.edu.au
Thu Feb 5 00:49:22 CET 2009


It is one of those forehead-slapping days again, d'oh.

Previously on this supercomputer you did not have to use the _mpi suffix on the
end of the mdrun binary name, but now you do. Problem solved, I think.

  <slap>Yep </slap>
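
For the record, the command that works on this system ends up looking something
like the following (mdrun_mpi is just what the MPI-enabled binary happens to be
called here; the exact name will depend on how your site built GROMACS):

mpirun -np 4 mdrun_mpi -deffnm md_01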

The log file now shows nnodes: 2 and all the domain decomposition details are
there. That is more like it :)
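
In case it helps anyone searching the archive later, a quick way to confirm a
run really is parallel (assuming the log is named from -deffnm, e.g. md_01.log)
is something like:

grep -E "nnodes|Domain decomposition" md_01.log

which should report nnodes equal to the number of ranks you gave mpirun, plus
the domain decomposition grid.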

Catch ya,
Dallas Warren

A polar bear is a Cartesian bear that has undergone a polar  
transformation

On 05/02/2009, at 10:36 AM, "Chris Neale" <chris.neale at utoronto.ca>  
wrote:

> I get one line like the following for each core in an mpi job:
> 5723 ?        RL     0:38 /work/cneale/exe/gromacs-4.0.2/exec/bin/mdrun_mpi -deffnm ./md5_running/s6117B2_md5 -cpt 600
> 5724 ?        RL     0:39 /work/cneale/exe/gromacs-4.0.2/exec/bin/mdrun_mpi -deffnm ./md5_running/s6117B2_md5 -cpt 600
> ...
>
> And you mean mdrun_mpi, right?
>
> Chris.
>
> -- original message --
>
>
> I must be missing something here.  Now we don't need to use the -np
> switch for grompp or mdrun; you simply specify it with mpirun (4.0.3), e.g.
>
> mpirun -np 4 mdrun -deffnm md_01
>
> Doing that I end up with four separate processes.  Does that mean
> that the mdrun being used is not MPI enabled?
>
> This is on a supercomputer system with a GROMACS installation that I did not compile myself.
>
> Catch ya,
> Dr Dallas Warren
>
> A polar bear is a Cartesian bear that has undergone a polar   
> transformation
>


