[gmx-users] Re: Gromacs 4.6 Installation under Cygwin

toma0052 at umn.edu
Tue Feb 26 03:15:23 CET 2013


Hi,
     I have run the three scenarios that you mentioned. The commands and
output are pasted below.

Thanks,
Mike


***Trial 1***
mdrun -v -deffnm Clp_Test -ntmpi 1 -ntomp 1
Reading file Clp_Test.tpr, VERSION 4.6 (single precision)
Using 1 MPI thread

Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should support
setting affinities, contact the GROMACS developers.

starting mdrun 'Martini system for ClpX'
10000 steps,    200.0 ps.

--Relevant output from log file--
Log file opened on Mon Feb 25 20:26:57 2013
Host: Theory-Monster  pid: 8176  nodeid: 0  nnodes:  1
Gromacs version:    VERSION 4.6
Precision:          single
Memory model:       32 bit
MPI library:        thread_mpi
OpenMP support:     enabled
GPU support:        disabled
invsqrt routine:    gmx_software_invsqrt(x)
CPU acceleration:   AVX_256
FFT library:        fftw-3.3.3-sse2
Large file support: enabled
RDTSCP usage:       enabled
Built on:           Mon, Feb 25, 2013 10:38:04 AM
Built by:           Mike at Theory-Monster [CMAKE]
Build OS/arch:      CYGWIN_NT-6.1-WOW64 1.7.17(0.262/5/3) i686
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz
Build CPU family:   6   Model: 45   Stepping: 7
Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
sse4.2 ssse3 tdt x2apic
C compiler:         /usr/bin/gcc.exe GNU gcc (GCC) 4.5.3
C compiler flags: -mavx -Wextra -Wno-missing-field-initializers 
-Wno-sign-compare -Wall -Wno-unused -Wunused-value -fomit-frame-pointer
 -funroll-all-loops -fexcess-precision=fast  -O3 -DNDEBUG

Using 1 MPI thread

Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz
Family:  6  Model: 45  Stepping:  7
Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
tdt x2apic
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: AVX_256

Table routines are used for coulomb: TRUE
Table routines are used for vdw:     TRUE
Using shifted Lennard-Jones, switch between 0.9 and 1.2 nm
Cut-off's:   NS: 1.4   Coulomb: 1.2   LJ: 1.2
System total charge: 0.000
Generated table with 1200 data points for Shift.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ6Shift.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ12Shift.
Tabscale = 500 points/nm
Potential shift: LJ r^-12: 0.000 r^-6 0.000
Removing pbc first time

Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should support
setting affinities, contact the GROMACS developers.

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess
P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 116-122
-------- -------- --- Thank You --- -------- --------

The number of constraints is 840
414 constraints are involved in constraint triangles,
will apply an additional matrix expansion of order 4 for couplings
between constraints inside triangles
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, J. P. M. Postma, A. DiNola and J. R. Haak
Molecular dynamics with coupling to an external bath
J. Chem. Phys. 81 (1984) pp. 3684-3690
-------- -------- --- Thank You --- -------- --------


***Trial 2***
mdrun -v -deffnm Clp_Test -ntmpi 1
Reading file Clp_Test.tpr, VERSION 4.6 (single precision)
Using 1 MPI thread

Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should support
setting affinities, contact the GROMACS developers.
starting mdrun 'Martini system for ClpX'
10000 steps,    200.0 ps.

--Relevant output from log file--
Log file opened on Mon Feb 25 20:40:32 2013
Host: Theory-Monster  pid: 4624  nodeid: 0  nnodes:  1
Gromacs version:    VERSION 4.6
Precision:          single
Memory model:       32 bit
MPI library:        thread_mpi
OpenMP support:     enabled
GPU support:        disabled
invsqrt routine:    gmx_software_invsqrt(x)
CPU acceleration:   AVX_256
FFT library:        fftw-3.3.3-sse2
Large file support: enabled
RDTSCP usage:       enabled
Built on:           Mon, Feb 25, 2013 10:38:04 AM
Built by:           Mike at Theory-Monster [CMAKE]
Build OS/arch:      CYGWIN_NT-6.1-WOW64 1.7.17(0.262/5/3) i686
Build CPU vendor:   GenuineIntel
Build CPU brand:    Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz
Build CPU family:   6   Model: 45   Stepping: 7
Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
sse4.2 ssse3 tdt x2apic
C compiler:         /usr/bin/gcc.exe GNU gcc (GCC) 4.5.3
C compiler flags: -mavx -Wextra -Wno-missing-field-initializers 
-Wno-sign-compare -Wall -Wno-unused -Wunused-value -fomit-frame-pointer
 -funroll-all-loops -fexcess-precision=fast  -O3 -DNDEBUG

Using 1 MPI thread

Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: GenuineIntel
Brand:  Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz
Family:  6  Model: 45  Stepping:  7
Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
tdt x2apic
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: AVX_256
Table routines are used for coulomb: TRUE
Table routines are used for vdw:     TRUE
Using shifted Lennard-Jones, switch between 0.9 and 1.2 nm
Cut-off's:   NS: 1.4   Coulomb: 1.2   LJ: 1.2
System total charge: 0.000
Generated table with 1200 data points for Shift.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ6Shift.
Tabscale = 500 points/nm
Generated table with 1200 data points for LJ12Shift.
Tabscale = 500 points/nm
Potential shift: LJ r^-12: 0.000 r^-6 0.000
Removing pbc first time

Can not set thread affinities on the current platform. On NUMA systems this
can cause performance degradation. If you think your platform should support
setting affinities, contact the GROMACS developers.

Initializing LINear Constraint Solver

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess
P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 116-122
-------- -------- --- Thank You --- -------- --------

The number of constraints is 840
414 constraints are involved in constraint triangles,
will apply an additional matrix expansion of order 4 for couplings
between constraints inside triangles
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
  0:  rest

++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, J. P. M. Postma, A. DiNola and J. R. Haak
Molecular dynamics with coupling to an external bath
J. Chem. Phys. 81 (1984) pp. 3684-3690
-------- -------- --- Thank You --- -------- --------


***Trial 3***
mdrun -v -deffnm Clp_Test -debug 1

Output of mdrun.debug uploaded to http://pastebin.com/mqujNsWW
I deleted some parts of the file to stay under the pastebin size limit.
Let me know if I removed something helpful and should try again.
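
A minimal sketch of trimming or compressing the file before upload,
assuming standard Cygwin coreutils (the line count below is an arbitrary
illustration, not a limit pastebin actually enforces):

  # Keep only the first part of the potentially huge debug output.
  head -n 5000 mdrun.debug > mdrun.debug.trimmed
  # Or compress the whole file instead of cutting pieces out of it:
  gzip -c mdrun.debug > mdrun.debug.gz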

On Feb 25 2013, Szilárd Páll wrote:

>That's strange; it seems that mdrun gets stuck somewhere. This should not
>happen, but as we don't actively test cygwin, we can't be sure what's
>happening. It would be great if you could help us figure out what is going
>wrong.
>
>Could you try doing the following:
>- run with -ntmpi 1 -ntomp 1 (i.e. single-threaded);
>- run with OpenMP(-only) multithreading, that is with -ntmpi 1 (this
>should start with 8 OpenMP threads);
>- run with -debug 1, which will produce the mdrun.debug output; please
>upload this to e.g. pastebin and post a link.
>
>On a side note: you would be better off using a newer gcc; 4.7 should be
>considerably faster than 4.5, especially on this Sandy Bridge Xeon
>processor.
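
A minimal sketch of how such a compiler switch might look at configure
time (gcc-4.7 and g++-4.7 are assumed binary names; they depend on what
the Cygwin gcc packages actually install):

  # Point CMake at a newer toolchain when (re)configuring the build;
  # CMAKE_C_COMPILER/CMAKE_CXX_COMPILER are standard CMake cache variables.
  cmake -DCMAKE_C_COMPILER=gcc-4.7 \
        -DCMAKE_CXX_COMPILER=g++-4.7 \
        -DCMAKE_INSTALL_PREFIX=/usr/local \
        -DGMX_GPU=OFF ../gromacs-4.6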
>
>Cheers,
>
>--
>Szilárd
>
>
>On Mon, Feb 25, 2013 at 5:25 PM, <toma0052 at umn.edu> wrote:
>
>> Hello,
>>     Thanks for the help. After setting the library path properly, I seem
>> to be able to get gromacs up and running. However, I have run into another
>> problem with mdrun and actually running any jobs. When I execute
>> mdrun -v -deffnm Clp_Test -nt
>> The output is: Reading file Clp_Test.tpr, VERSION 4.6 (single precision)
>> Using 8 MPI threads
>>
>> Followed by several occurrences of:
>> Can not set thread affinities on the current platform. On NUMA systems this
>> can cause performance degradation. If you think your platform should
>> support
>> setting affinities, contact the GROMACS developers.
>>
>> Then:
>> starting mdrun 'Martini system for ClpX'
>> 10000 steps,    200.0 ps.
>>
>> After this, however, the simulation never actually begins. I can get rid of
>> the error messages by using -pin off, but that doesn't seem to actually fix
>> anything. Is there something that has not been installed properly? Below are
>> the seemingly relevant portions of the log file generated by the above
>> mdrun command.
>>
>>
>> Log file opened on Mon Feb 25 11:21:08 2013
>> Host: Theory-Monster  pid: 3192  nodeid: 0  nnodes:  1
>> Gromacs version:    VERSION 4.6
>> Precision:          single
>> Memory model:       32 bit
>> MPI library:        thread_mpi
>> OpenMP support:     enabled
>> GPU support:        disabled
>> invsqrt routine:    gmx_software_invsqrt(x)
>> CPU acceleration:   AVX_256
>> FFT library:        fftw-3.3.3-sse2
>> Large file support: enabled
>> RDTSCP usage:       enabled
>> Built on:           Mon, Feb 25, 2013 10:38:04 AM
>> Built by:           Mike at Theory-Monster [CMAKE]
>> Build OS/arch:      CYGWIN_NT-6.1-WOW64 1.7.17(0.262/5/3) i686
>> Build CPU vendor:   GenuineIntel
>> Build CPU brand:    Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz
>> Build CPU family:   6   Model: 45   Stepping: 7
>> Build CPU features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr
>> nonstop_tsc pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1
>> sse4.2 ssse3 tdt x2apic
>> C compiler:         /usr/bin/gcc.exe GNU gcc (GCC) 4.5.3
>> C compiler flags: -mavx -Wextra -Wno-missing-field-initializers
>> -Wno-sign-compare -Wall -Wno-unused -Wunused-value -fomit-frame-pointer
>> -funroll-all-loops -fexcess-precision=fast -O3 -DNDEBUG
>>
>> Initializing Domain Decomposition on 8 nodes
>> Dynamic load balancing: auto
>> Will sort the charge groups at every domain (re)decomposition
>> Initial maximum inter charge-group distances:
>>    two-body bonded interactions: 0.988 nm, Bond, atoms 997 1005
>>  multi-body bonded interactions: 1.042 nm, G96Angle, atoms 1938 1942
>> Minimum cell size due to bonded interactions: 1.146 nm
>> Maximum distance for 5 constraints, at 120 deg. angles, all-trans: 0.810 nm
>> Estimated maximum distance required for P-LINCS: 0.810 nm
>> Scaling the initial minimum size with 1/0.8 (option -dds) = 1.25
>> Optimizing the DD grid for 8 cells with a minimum initial size of 1.433 nm
>> The maximum allowed number of cells is: X 11 Y 11 Z 9
>> Domain decomposition grid 4 x 2 x 1, separate PME nodes 0
>> Domain decomposition nodeid 0, coordinates 0 0 0
>>
>> Using 8 MPI threads
>>
>> Detecting CPU-specific acceleration.
>> Present hardware specification:
>> Vendor: GenuineIntel
>> Brand:  Intel(R) Xeon(R) CPU E5-2687W 0 @ 3.10GHz
>> Family:  6  Model: 45  Stepping:  7
>> Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
>> pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
>> tdt x2apic
>> Acceleration most likely to fit this hardware: AVX_256
>> Acceleration selected at GROMACS compile time: AVX_256
>>
>> Table routines are used for coulomb: TRUE
>> Table routines are used for vdw:     TRUE
>> Using shifted Lennard-Jones, switch between 0.9 and 1.2 nm
>> Cut-off's:   NS: 1.4   Coulomb: 1.2   LJ: 1.2
>> System total charge: 0.000
>> Generated table with 1200 data points for Shift.
>> Tabscale = 500 points/nm
>> Generated table with 1200 data points for LJ6Shift.
>> Tabscale = 500 points/nm
>> Generated table with 1200 data points for LJ12Shift.
>> Tabscale = 500 points/nm
>> Potential shift: LJ r^-12: 0.000 r^-6 0.000
>> Removing pbc first time
>>
>> Can not set thread affinities on the current platform. On NUMA systems this
>> can cause performance degradation. If you think your platform should
>> support
>> setting affinities, contact the GROMACS developers.
>>
>> Initializing Parallel LINear Constraint Solver
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> B. Hess
>> P-LINCS: A Parallel Linear Constraint Solver for molecular simulation
>> J. Chem. Theory Comput. 4 (2008) pp. 116-122
>> -------- -------- --- Thank You --- -------- --------
>>
>> The number of constraints is 840
>> There are inter charge-group constraints,
>> will communicate selected coordinates each lincs iteration
>> 414 constraints are involved in constraint triangles,
>> will apply an additional matrix expansion of order 4 for couplings
>> between constraints inside triangles
>>
>> Linking all bonded interactions to atoms
>>
>> The initial number of communication pulses is: X 1 Y 1
>> The initial domain decomposition cell size is: X 4.02 nm Y 8.04 nm
>>
>> The maximum allowed distance for charge groups involved in interactions is:
>>                 non-bonded interactions           1.400 nm
>> (the following are initial values, they could change due to box
>> deformation)
>>            two-body bonded interactions  (-rdd)   1.400 nm
>>          multi-body bonded interactions  (-rdd)   1.400 nm
>>  atoms separated by up to 5 constraints  (-rcon)  4.021 nm
>>
>> When dynamic load balancing gets turned on, these settings will change to:
>> The maximum number of communication pulses is: X 1 Y 1
>> The minimum size for domain decomposition cells is 1.400 nm
>> The requested allowed shrink of DD cells (option -dds) is: 0.80
>> The allowed shrink of domain decomposition cells is: X 0.35 Y 0.17
>> The maximum allowed distance for charge groups involved in interactions is:
>>                 non-bonded interactions           1.400 nm
>>            two-body bonded interactions  (-rdd)   1.400 nm
>>          multi-body bonded interactions  (-rdd)   1.400 nm
>>  atoms separated by up to 5 constraints  (-rcon)  1.400 nm
>>
>>
>> Making 2D domain decomposition grid 4 x 2 x 1, home cell index 0 0 0
>>
>> Center of mass motion removal mode is Linear
>> We have the following groups for center of mass motion removal:
>>  0:  rest
>>
>> ++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
>> H. J. C. Berendsen, J. P. M. Postma, A. DiNola and J. R. Haak
>> Molecular dynamics with coupling to an external bath
>> J. Chem. Phys. 81 (1984) pp. 3684-3690
>> -------- -------- --- Thank You --- -------- --------
>>
>>
>>
>>
>>
>>
>>
>>
>>> Hello,
>>>     I am trying to install Gromacs 4.6 on a Windows workstation under
>>> cygwin. After I install everything, when executing g_luck I come up with
>>> the error: 'error while loading shared libraries: cyggmx-6.dll: cannot
>>> open shared object file: No such file or directory'. The file
>>> cyggmx-6.dll does exist in /usr/local/gromacs/lib, and I have tried: export
>>> LD_LIBRARY_PATH=/usr/local/gromacs/lib as well as using the flag
>>> -DBUILD_SHARED_LIBS=OFF with cmake, but neither seems to help. What could
>>> be the cause of this?
>>>
>>
>>
>> For cygwin, you need to have the .dll directory path, which is located at
>> the position given by:
>>   -DCMAKE_INSTALL_PREFIX=/my/install/path  trailed by /lib
>> in your PATH variable:
>>   PATH=/my/install/path/lib:$PATH   mdrun
>> (or set it permanently somewhere)
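
A minimal sketch of making that permanent, assuming the install prefix
/usr/local/gromacs mentioned earlier in this thread and a standard Cygwin
~/.bashrc (adjust the path to your own -DCMAKE_INSTALL_PREFIX):

  # Append the Gromacs .dll directory to PATH for every new Cygwin shell.
  echo 'export PATH=/usr/local/gromacs/lib:$PATH' >> ~/.bashrc
  source ~/.bashrc   # also apply it to the current shell

On Cygwin the Windows loader searches PATH for .dll files, which is why
PATH is used here rather than LD_LIBRARY_PATH.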
>>
>> BTW: with the current cygwin (1.7.17-1), Gromacs 4.6 *does* indeed compile
>> fine:
>> cmake  -DCMAKE_INSTALL_PREFIX=/usr/local -DGMX_GPU=OFF
>> -DGMX_BUILD_OWN_FFTW=OFF  ../gromacs-4.6
>> and, more interestingly, runs multithreaded at performance similar to a
>> Visual Studio build (tested on Win8/x64).
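
For completeness, a minimal sketch of the full out-of-tree build sequence
around that cmake line (the build directory name and -j 8 are assumptions;
the GMXRC location simply follows the /usr/local prefix above):

  mkdir build && cd build
  cmake -DCMAKE_INSTALL_PREFIX=/usr/local -DGMX_GPU=OFF \
        -DGMX_BUILD_OWN_FFTW=OFF ../gromacs-4.6
  make -j 8                    # -j 8 assumes the 8-core Xeon in this thread
  make install
  source /usr/local/bin/GMXRC  # puts mdrun and the other tools on PATH

Sourcing GMXRC is the standard way to set up the shell environment for a
Gromacs 4.6 installation.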