[gmx-users] gromacs 4.0 on CRAY-XT4

BIN ZHANG zhngbn at gmail.com
Wed Mar 17 20:31:54 CET 2010


Dear Roland:

Thank you very much for the reply. Since I'm planning to play with
the source code a little bit, I would prefer to compile my own
version ;-) Also, I don't think I have permission to look at your
config.log file.

After using the configuration you suggested, I got the strange error
attached below. I will contact the help desk for more compiler info.

Sincerely,
Bin

===============================================================

/opt/cray/xt-asyncpe/3.3/bin/cc: INFO: linux target is being used
/opt/cray/xt-asyncpe/3.3/bin/cc: INFO: WARNING: gcc/4.2.3 does not support barcelona options.
/opt/cray/xt-asyncpe/3.3/bin/cc: INFO: Swap gcc/4.2.3 for gcc/420quadcore or a post 4.3 gcc version.
/usr/bin/ld: attempted static link of dynamic object `/opt/fftw/3.2.2/lib/libfftw3f.so'
collect2: ld returned 1 exit status
make[3]: *** [grompp] Error 1
make[3]: Leaving directory `/global/u2/b/bingo/bin/gromacs-4.0.7/src/kernel'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/global/u2/b/bingo/bin/gromacs-4.0.7/src'
make[1]: *** [all] Error 2
make[1]: Leaving directory `/global/u2/b/bingo/bin/gromacs-4.0.7/src'
make: *** [all-recursive] Error 1
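
My guess is that the linker message means the Cray compiler wrappers
link statically by default, so they refuse the shared libfftw3f.so.
If the fftw install also ships a static library (an assumption on my
part; only the lib path appears in the error above), something like
this might work instead of -lfftw3f:

    # untested sketch: name the static FFTW archive explicitly so the
    # linker never reaches for the .so
    module load fftw/3.2.2
    ./configure --enable-mpi --with-fft=fftw3 \
        CC=cc CXX=CC F77=ftn \
        CPPFLAGS="-I/opt/fftw/3.2.2/include" \
        LDFLAGS="-L/opt/fftw/3.2.2/lib" \
        LIBS="/opt/fftw/3.2.2/lib/libfftw3f.a"

Naming the .a file directly avoids -lfftw3f, which would otherwise
resolve to the .so again.
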
On Mar 17, 2010, at 11:04 AM, Roland Schulz wrote:

> Hi,
>
> GROMACS is installed on Franklin (module load gromacs), but I don't
> recommend the installed version: it is compiled only in double
> precision, with FFTW2 and PGI.
>
> You are welcome to use the version I have installed there:
> /global/homes/r/rschulz/software/gromacs (4.0.7)
> /global/homes/r/rschulz/software/gromacs.head/ (GIT master)
>
> You can also look at the config.log and env.sh in
> /global/homes/r/rschulz/download/ to see how it was compiled.
>
> Of course, if you do that, it is at your own risk, and I don't
> support it there. Also, I recompile it there frequently, so you
> probably want to make a copy if you want to keep the same binary
> during your simulation.
>
> It is probably better to ask the help desk to compile it with
> single precision, FFTW3 (to make it faster), and gcc (there have
> been problems with GROMACS and PGI before).
>
> Roland
>
> On Wed, Mar 17, 2010 at 1:27 PM, BIN ZHANG <zhngbn at gmail.com> wrote:
> Dear Mark and Roland:
>
> Thanks for all the suggestions.
> I confused the gromacs4 paper with another one by Erik Lindahl
> (http://www.ncbi.nlm.nih.gov/pubmed/19229308), since there they
> studied the same system (the Kv1.2 voltage-gated ion channel) and
> reported about the same speed as the benchmark, on a Cray XT4
> machine. Here is also a link to the machine I'm working on:
> http://www.nersc.gov/nusers/systems/franklin/
>
> Bin.
>
>
> On Mar 17, 2010, at 1:39 AM, Roland Schulz wrote:
>
>>
>>
>> On Wed, Mar 17, 2010 at 3:55 AM, Mark Abraham <Mark.Abraham at anu.edu.au> wrote:
>> On 17/03/2010 6:19 PM, BIN ZHANG wrote:
>> Dear Mark:
>>
>> Thanks for your reply, and sorry for giving too little info.
>>
>> I'm confused about your statement "I was trying to build gromacs  
>> 4.0 on a cray-xt4 machine, the same one as the benchmark in the  
>> gromacs4 paper" because I can't see any mention of results on such  
>> a machine in J. Chem. Theory Comput. 2008, 4, 435-447
>>
>>
>> Here is the configuration line I used:
>> module load fftw/2.1.5.1
>> module load libxml2
>> CXX=CC CC=cc F77=ftn MPICC=cc LIBS="$XML2_LIBDIR -L/usr/X11/lib64" \
>>   ./configure --enable-mpi --enable-double --enable-fortran \
>>   --prefix=$HOME/gmx/para --disable-nice --with-fft=fftw2 \
>>   --program-suffix=_mpi
>>
>> The use of double precision and the absence of hardware-optimized  
>> inner loops (--enable-fortran, since AFAIK there's no alternative  
>> on XT4) will slow things down compared to an Intel machine.
>>
>> No. These are standard AMD CPUs, and thus SSE2 works fine.
>>
>> I use
>> module swap PrgEnv-pgi PrgEnv-gnu
>> module swap gcc gcc/4.2.4
>> ./configure --enable-mpi --without-x --without-xml \
>>   CC=cc CPP=cpp CXXCPP=cpp CXX=CC \
>>   CPPFLAGS=-I/sw/xt5/fftw/3.1.2/sles10.1_gnu4.2.4/include \
>>   LDFLAGS='-L/sw/xt5/fftw/3.1.2/sles10.1_gnu4.2.4/lib -lfftw3f' \
>>   CFLAGS='-O3 -fomit-frame-pointer -finline-functions -Wall -Wno-unused -msse2 -funroll-all-loops -std=gnu99'
>>
>>
>> Also, I searched the mailing list and added -ddorder to the mdrun
>> line, but that doesn't seem to help much:
>> aprun -n 128 $gmxdir/mdrun_mpi -deffnm npt_02 -g npt_02.log \
>>   -ddorder cartesian -npme 32
>>
>> You will need to experiment with -npme and -ddorder to find what's
>> best. Also consider g_tune_pme in the git codebase.
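>> For instance, a quick scan over -npme values could look something
>> like this (just a sketch; -maxh keeps each trial short, and npt_02
>> is your run from above):
>>
>>     # try a few PME rank counts and log each trial separately
>>     for npme in 16 24 32 48; do
>>         aprun -n 128 $gmxdir/mdrun_mpi -deffnm npt_02 \
>>             -npme $npme -g npme_${npme}.log -maxh 0.1
>>     done
>>
>> Then compare the ns/day numbers at the end of each npme_*.log.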
>>
>> If you want to scale to a larger number of CPUs you probably need
>> 2D PME. It is available in the 2dpme git branch. (That it is in a
>> branch means it is at the moment considered experimental - even
>> more so than master.)
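>> Getting that branch would be something along these lines (assuming
>> the standard gromacs git repository URL):
>>
>>     git clone git://git.gromacs.org/gromacs.git
>>     cd gromacs
>>     git checkout -b 2dpme origin/2dpme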
>>
>> Roland
>>
>>
>> Mark
>>
>>
>> The system is a membrane protein, and I'm using the Berger
>> lipid/OPLS-AA force field.
>>
>> Thanks
>> Bin
>>
>>
>> On Mar 17, 2010, at 12:02 AM, Mark Abraham wrote:
>>
>> On 17/03/2010 5:43 PM, BIN ZHANG wrote:
>> Dear all:
>>
>> I was trying to build gromacs 4.0 on a Cray XT4 machine, the same
>> one used for the benchmark in the gromacs4 paper (). However, my
>> timing for a similar system, ~100,000 atoms, is almost 3 times
>> slower than in the paper. For example, I only got ~25 ns/day with
>> 128 CPUs. So are there any special flags or tricks I can use
>> during configuration for detailed tuning?
>>
>> Probably. You've made it hard to help you by not providing (at
>> least) your configure command, compiler and version, mdrun command
>> line, and simulation system composition.
>>
>> Mark
>>
>
>
>
>
>
> -- 
> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> 865-241-1537, ORNL PO BOX 2008 MS6309
