[gmx-users] LAM/MPI

Mark Abraham mark.abraham at anu.edu.au
Sun May 23 12:25:58 CEST 2010


----- Original Message -----
From: delara aghaie <d_aghaie at yahoo.com>
Date: Sunday, May 23, 2010 18:15
Subject: [gmx-users] LAM/MPI
To: gmx-users at gromacs.org

-----------------------------------------------------------
> Dear GROMACS users,
> Our university network runs LAM/MPI.
> 
> I connect to the network from my office computer to submit simulation jobs in GROMACS.
> In order to boot the MPI environment I use the lamboot command:
> $ lamboot -v lamhosts
> lamhosts is a text file in which I list the processors (hosts) that I want to boot; it has to include the machine (lnx-server) that I connect to from my office computer, because that is where I run the lamboot command.
>  
> Now when I want to submit a simulation job I use the grompp command, which is in the path
> /usr/local/gromacs/bin/grompp, with the necessary options, including -np (to specify the number of processors).
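
For reference, a lamhosts boot schema of the kind described above is just a plain text file listing the hosts to boot, one per line. A minimal sketch, with placeholder host names and optional cpu= counts:

  # lamhosts -- LAM/MPI boot schema (placeholder host names)
  lnx-server
  node1 cpu=2
  node2 cpu=2

  $ lamboot -v lamhosts
  $ lamnodes    # list the nodes that were actually booted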

Thus you're using GROMACS 3.x, which is long superseded. Unless you know of a scientific reason for continuing to use it, you will get much better performance from GROMACS 4.0.7.

More to the point, grompp is not the simulation. It prepares the simulation input. The .tpr file is binary-portable. You can create that file anywhere, copy it to the compute machine, and then run it there with mdrun_mpi under mpirun.
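
As a hedged sketch of that workflow for GROMACS 3.x under LAM/MPI (assuming four processes, example file names, and that the MPI-enabled mdrun on your system is installed as mdrun_mpi -- the suffix can differ):

  # prepare the run input anywhere GROMACS is installed
  grompp -np 4 -f md.mdp -c conf.gro -p topol.top -o topol.tpr

  # copy topol.tpr to the compute machine; after lamboot there, run
  mpirun -np 4 mdrun_mpi -np 4 -s topol.tpr

In GROMACS 4.x the -np option is gone from grompp; the number of processes comes from mpirun alone, which is one more reason to upgrade.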

> Here again it seems that I should count the server among the processors, and the same goes for the mpiexec command.

Unlikely.

> I think that in this way the server becomes one of the processors that my job runs on, and because the server is always very busy with other jobs, the speed I get is very, very low!

Sure, so you don't want to do it.

> Please let me know if there is a way to tell the system not to run the job on the server.
> I think this would solve the speed problem.

This will be a standard problem (independent of GROMACS), for which the administrators for your compute environment will have a standard solution. Talk to them.
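
That said, one common LAM/MPI arrangement is to keep the front-end in the boot schema (lamboot wants the machine it is run from to be listed) but mark it as not schedulable, so that mpirun only places processes on the compute nodes. A sketch to check against the lamboot/bhost documentation of your LAM version, again with placeholder host names:

  # lamhosts -- the server is booted but gets no MPI processes
  lnx-server schedule=no
  node1 cpu=2
  node2 cpu=2

  $ lamboot -v lamhosts
  $ mpirun C mdrun_mpi -np 4 -s topol.tpr    # C = one process per schedulable CPU

If your LAM installation does not support that key, then it really is a question for the administrators, as above.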

Mark


