[gmx-users] handling particle decomposition with distance restraints
chris.neale at utoronto.ca
Thu Jun 25 22:31:10 CEST 2009
Let me re-emphasize that the pull code may be a good solution for you.
As per your request, I currently use the following without any problems:
gromacs 3.3.1 or 4.0.4
Be especially aware that openmpi 1.3.0 and 1.3.1 are broken, as I posted previously.
To be clear, I have never experienced any openmpi-based problems with
any version of gromacs 4 and openmpi 1.2.6.
I posted the original notice of our problems with openmpi (1.2.1) that
were solved by using LAM here.
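For reference, a minimal umbrella-pull setup in the .mdp (gromacs 4.0
option names; the group names here are placeholders that you would swap
for your own index groups) could look something like this:

    pull             = umbrella
    pull_geometry    = distance
    pull_ngroups     = 1
    pull_group0      = Protein    ; placeholder reference group
    pull_group1      = Ligand     ; placeholder pulled group
    pull_k1          = 1000       ; force constant, kJ mol^-1 nm^-2
    pull_rate1       = 0.0        ; zero rate = hold at the reference distance
    pull_start       = yes        ; take the starting distance as the reference

With pull_rate1 = 0 this simply restrains the group-group distance,
much like a distance restraint would, but it runs fine under domain
decomposition so you can avoid particle decomposition entirely.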
jayant james wrote:
> Thanks for your mail. Could you please share what OS and which versions
> of fftw, openmpi, and gmx you are currently using?
> Thank you
> On Thu, Jun 25, 2009 at 12:28 PM, <chris.neale at utoronto.ca
> <mailto:chris.neale at utoronto.ca>> wrote:
> Why not use the pull code? If you have to use distance restraints,
> then try LAM mpi with your pd run. We had similar error messages
> with vanilla .mdp files using openmpi with large and complex
> systems that went away when we switched to LAM MPI. Our problems
> disappeared in gmx 4 so we went back to openmpi for all systems as
> that mdrun_mpi version is faster in our hands.
> I admit, there is no good reason why LAM would work and openMPI
> would not, but I have seen it happen before so it's worth a shot.
> -- original message--
> The energy minimization went on without any problem on 4 processors,
> but the problem occurs when I perform the MD run. Also, I did not get
> any message relevant to LINCS etc...
> Jayasundar Jayant James