[gmx-users] Parallel running problem
Alan Dodd
anoddlad at yahoo.com
Fri Sep 16 14:44:21 CEST 2005
For completeness, this is the configure command I'm
using for installing fftw:
./configure --prefix=/home/ad0303/fftw --enable-float --enable-type-prefix --enable-mpi
And this is the result:
checking for mpicc... mpicc
checking for MPI_Init... no
checking for MPI_Init in -lmpi... no
configure: error: couldn't find mpi library for --enable-mpi
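My best guess is that configure finds the mpicc wrapper but then fails to
link a test program against the MPICH libraries, so I may need to point it
at them explicitly. Something along these lines is what I plan to try next
(the /usr/local/mpich path is only a placeholder for wherever mpich
actually lives on this system):

  CC=mpicc \
  CPPFLAGS="-I/usr/local/mpich/include" \
  LDFLAGS="-L/usr/local/mpich/lib" \
  ./configure --prefix=/home/ad0303/fftw --enable-float \
              --enable-type-prefix --enable-mpi

If that's the wrong incantation for mpich, corrections are very welcome.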
I'm certain there's some critical command I'm not
specifying here - what did you mean by "I also linked
FFTW against MPICH"?
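For what it's worth, my understanding is that once fftw is built with MPI
support, gromacs itself also has to be reconfigured with MPI enabled
before mdrun will run in parallel. Roughly what I have in mind, assuming
the mpich wrappers are on the PATH and fftw ends up under
/home/ad0303/fftw (please correct me if this is off):

  export CPPFLAGS="-I/home/ad0303/fftw/include"
  export LDFLAGS="-L/home/ad0303/fftw/lib"
  ./configure --enable-mpi --program-suffix=_mpi
  make && make install

and then ldd on the resulting mdrun binary should list the mpich
libraries, as David suggested.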
--- "Peter C. Lai" <sirmoo at cowbert.2y.net> wrote:
> On Fri, Sep 16, 2005 at 03:23:07AM -0700, Alan Dodd wrote:
> > Thanks for all your help, I thought I had compiled with MPI but from
> > trying to reinstall, it appears not. The system I'm trying to install
> > on is using mpich, rather than lammpi. I wouldn't have thought this
> > would be a problem, but installing fftw locally doesn't work - it
> > can't find the mpi libraries. Both using the rpms and compiling from
> > source seem to produce similar errors. I'm pretty sure others have
> > used mpich, so have any of you come across a similar problem (and,
> > ideally, a solution?)
> > Thanks,
> > Alan
> >
>
> I dunno, I recently (a few months ago, really) compiled 3.2.1 against
> mpich1.2.5.2 initially and then against mpich1.2.6 on a dual-socket p3
> with gcc3.4.2 and ran all the test suites with no problems (I was
> actually running gromacs-mpich to debug the new FreeBSD kernel
> scheduler while getting some side work out of the otherwise idle
> cpus), so I really don't know what your problem is either [i.e.
> WORKSFORME albeit on a different platform] :(
>
> Note that I also linked FFTW against MPICH - I think this is a
> critical step (and everything was built as single precision, but I
> vaguely remember running double over mpich without any problems
> either).
>
> > --- David van der Spoel <spoel at xray.bmc.uu.se> wrote:
> >
> > > On Wed, 2005-09-07 at 05:18 -0700, Alan Dodd wrote:
> > > > Hello Gromacs users,
> > > > I gather this problem is similar to many previous, but can't see
> > > > an obvious solution in the replies to any of them. I've been
> > > > trying to get GROMACS to run on this sample dual-core,
> > > > dual-socket opteron box that we have on loan. Despite my best
> > > > efforts, I seem unable to get mdrun to understand that it's
> > > > supposed to run on more than one node. I'm telling it to do so,
> > > > and it even appreciates it's supposed to in the output (see
> > > > below), but then decides I've told it to run on just the one and
> > > > dies. Has anyone any idea what's going wrong? Is it just some
> > > > kind of incompatibility with mpich/the hardware?
> > > Have you compiled with MPI?
> > >
> > > you can check by typing
> > > ldd `which mdrun`
> > > It should show some MPI libraries.
> > >
> > > Dual core opterons run fine by the way. We have a brand new
> > > cluster humming along at 85 decibels.
> > > >
> > > > Input:
> > > > mpirun -np 4 -machinefile machines mdrun -np 4
> > > >
> > > > mdrun output:
> > > > for all file options
> > > >   -np          int      4  Number of nodes, must be the same as
> > > >                              used for grompp
> > > >   -nt          int      1  Number of threads to start on each node
> > > >   -[no]v       bool    no  Be loud and noisy
> > > >   -[no]compact bool   yes  Write a compact log file
> > > >   -[no]multi   bool    no  Do multiple simulations in parallel
> > > >                              (only with -np > 1)
> > > >   -[no]glas    bool    no  Do glass simulation with special long
> > > >                              range corrections
> > > >   -[no]ionize  bool    no  Do a simulation including the effect of
> > > >                              an X-Ray bombardment on your system
> > > >
> > > >
> > > > Back Off! I just backed up md.log to ./#md.log.5#
> > > > Reading file short.tpr, VERSION 3.2.1 (single precision)
> > > > Fatal error: run input file short.tpr was made for 4 nodes,
> > > > while mdrun expected it to be for 1 nodes.
> > > >
> > > >
> > > > Alan Dodd (University of Bristol)
>
> --
> Peter C. Lai
> Cesium Hyperfine Enterprises
> http://cowbert.2y.net/