[gmx-users] Problems running in parallel on IBM SP

David L. Bostick dbostick at physics.unc.edu
Fri Mar 8 15:20:22 CET 2002


Hi Erik,

Thanks for the response on the IBM.  Let me be more specific.  The SP I use
has a 32-bit MPI.  I did the following because I kept getting errors
pertaining to "large files":

setenv CC xlc
setenv CFLAGS "-O3 -bmaxdata:2147483648 -I/gpfs/dbostick/fftw/include"
setenv CPPFLAGS -I/gpfs/dbostick/fftw/include
setenv F77 mpxlf
setenv FFLAGS "-O3 -bmaxdata:2147483648 -I/gpfs/dbostick/fftw/include"
setenv LDFLAGS "-L/gpfs/dbostick/fftw/lib -L/usr/local/lib -bmaxdata:2147483648"

Then I ran configure like this:

./configure --enable-mpi --prefix=/gpfs/dbostick/gmx-3.1 \
            --disable-largefile --program-suffix="_mpi"

Then I ran make mdrun followed by make install-mdrun.
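
As a sanity check that the -bmaxdata value actually made it into the binary,
I believe something like this works (this assumes the standard AIX dump
utility and that the installed mdrun_mpi is on my PATH; otherwise point it
at the binary directly):

dump -ov `which mdrun_mpi`

and then look at the maxDATA field in the auxiliary header, which should
show 2147483648 if the linker flag was picked up.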

The FFTW libraries were installed using the same environment variables.
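
In case it matters, the FFTW configure line was roughly the following; the
--enable-float and --enable-type-prefix options are what produce the
sfftw/sfftw_mpi library names that GROMACS looks for.  I'm reconstructing
this from memory, so treat it as a sketch rather than a transcript:

./configure --enable-mpi --enable-float --enable-type-prefix \
            --prefix=/gpfs/dbostick/fftw
make
make install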


The non-MPI stuff seems to have compiled fine.  A few notes, however:

I have a test system that worked in my gmx-2.0 installation: basically a
4-helix alamethicin bundle in a hydrated POPC bilayer.

1) The tutorial system of 216 SPC waters in a box works with the MPI gmx-3.1
I compiled as above.

2) The test POPC/alamethicin system does not work (the commands I used are
sketched after this list).  The log files are started, but standard error
gives:

ERROR: 0031-250  task 5: Segmentation fault
ERROR: 0031-250  task 6: Segmentation fault
ERROR: 0031-250  task 4: Segmentation fault

and so on; there are many more such lines, depending on the number of
processors used.

3) I tried using a .tpr file for the test system, generated by gmx-2.0
grompp for 10 processors (and known to run under gmx-2.0), as input for
mdrun-3.1.  This also did not work; it gave the same result.

4) I ran grompp-3.1 for the test system without the -np option, i.e. for a
single processor, and ported the .tpr file from the SP to my desktop Linux
machine with gmx-3.1 installed.  It worked like a charm.

So I'm pretty sure there are no problems with the way the simulation is set
up in the .mdp file, and no problems with the preprocessor on my IBM SP.
Could the environment variables I am setting for the compilation be causing
this problem with running the MPI version of 3.1 on the SP?  If not, do you
have any other ideas?

Thanks,
David



-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
David Bostick					Office: 262 Venable Hall
Dept. of Physics and Astronomy			Phone:  (919)962-0165 
Program in Molecular and Cellular Biophysics 
UNC-Chapel Hill					
CB #3255 Phillips Hall				dbostick at physics.unc.edu	
Chapel Hill, NC 27599	           		http://www.unc.edu/~dbostick	
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

On Thu, 7 Mar 2002, Erik Lindahl wrote:

> Hi David,
> 
> >
> >
> >1)
> >
> >libtool: link: warning: library
> >`/gpfs/dbostick/fftw_IBMSP/lib/libsfftw_mpi.la' was moved.
> >
> >I assume this is okay because I have the compiled libraries in a directory
> >that I move from place to place.  As long as the path can be found for
> >linking I imagine this is fine..
> >
> Yes - this is just a bug in the FFTW configuration that we can't do much 
> about. It's just libtool telling you that the fftw library was installed 
> to a different path in the first place. It won't cause any problems.
> 
> >
> >
> >2)
> >
> >ld: 0711-230 WARNING: Nested archives are not supported.
> >        Archive member ../mdlib/.libs/libmd_mpi.a[libc.a] is being ignored.
> >
> >I suspect this may be causing my problems when running mdrun_mpi.  How do I
> >fix this?   
> >
> I don't see this on my SP system, but I don't think it will cause any 
> problems. In theory, libtool has the capability to link all libraries 
> into the library you are creating (i.e. include the parts of libc we 
> need into libmd), but since I'm pretty sure your linker/compiler will 
> include libc at the link stage it should probably work. This is a 
> libtool bug ;-)
> 
> >
> >
> >Also in the configure script I changed mpcc to the threadsafe mpcc_r.  Is
> >this okay?
> >
> Sure. We're not using multithreading parallelization yet, but when it's 
> working I will be changing the compiler detection part to use the 
> threadsafe ones by default.
> 
> 
> Cheers,
> 
> Erik