[gmx-users] Dyn. Load Balance changes Cut off
Johnny Lu
johnny.lu128 at gmail.com
Fri Nov 7 19:27:35 CET 2014
I don't see this on every machine that I've used.
The machine that produced that log file has 4 Tesla K40 GPUs and 16 Xeon CPU cores.
Using 4 MPI threads
Using 8 OpenMP threads per tMPI thread
Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: GenuineIntel
Brand: Intel(R) Xeon(R) CPU E5-2660 0 @ 2.20GHz
Family: 6 Model: 45 Stepping: 7
Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
pcid pclmuldq pdcm pdpe1gb popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3
tdt x2apic
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: AVX_256
4 GPUs detected:
#0: NVIDIA Tesla K40m, compute cap.: 3.5, ECC: no, stat: compatible
#1: NVIDIA Tesla K40m, compute cap.: 3.5, ECC: no, stat: compatible
#2: NVIDIA Tesla K40m, compute cap.: 3.5, ECC: no, stat: compatible
#3: NVIDIA Tesla K40m, compute cap.: 3.5, ECC: no, stat: compatible
4 GPUs auto-selected for this run.
Mapping of GPUs to the 4 PP ranks in this node: #0, #1, #2, #3
Will do PME sum in reciprocal space.
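
For what it's worth, that is 4 thread-MPI ranks x 8 OpenMP threads = 32 threads
with one K40 per PP rank, presumably filling the two 8-core E5-2660 sockets with
hyper-threading enabled. A minimal Python sketch of that sanity check, pulled
straight from the md.log lines above (not anything GROMACS provides, and the
exact log wording varies between versions):

    import re

    # md.log excerpt as pasted above (GROMACS 4.6.x wording).
    log = """
    Using 4 MPI threads
    Using 8 OpenMP threads per tMPI thread
    4 GPUs detected:
    """

    ranks = int(re.search(r"Using (\d+) MPI threads", log).group(1))
    omp = int(re.search(r"Using (\d+) OpenMP threads per tMPI thread", log).group(1))
    gpus = int(re.search(r"(\d+) GPUs detected", log).group(1))

    # Assumed node: 2 sockets x 8 cores x 2 hardware threads (E5-2660, HT on) = 32.
    assert ranks * omp == 32, "not all hardware threads are in use"
    assert gpus == ranks, "expected one GPU per PP rank"
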
On Fri, Nov 7, 2014 at 1:21 PM, Johnny Lu <johnny.lu128 at gmail.com> wrote:
> Hi.
>
> When I read the log file, I see:
>
> PP/PME load balancing changed the cut-off and PME settings:
>              particle-particle                  PME
>               rcoulomb  rlist        grid       spacing   1/beta
>    initial    1.000 nm  1.090 nm   64  64  64   0.117 nm  0.320 nm
>    final      1.302 nm  1.392 nm   48  48  48   0.156 nm  0.417 nm
>    cost-ratio             2.08           0.42
>    (note that these numbers concern only part of the total PP and PME load)
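
To see why the PME settings move together with the cut-off: when mdrun scales
rcoulomb up, it also coarsens the PME grid and re-derives the Ewald splitting
parameter beta so that erfc(beta * rcoulomb) stays equal to ewald-rtol (1e-5 by
default), i.e. the relative strength of the direct-space interaction at the
cut-off is unchanged. A minimal Python check of the 1/beta column above, under
that assumption (ewald_beta below is just an illustrative helper, not a GROMACS
function):

    from math import erfc

    def ewald_beta(rc_nm, rtol=1e-5):
        # Bisect for beta (in 1/nm) such that erfc(beta * rc) == rtol.
        lo, hi = 0.0, 20.0
        for _ in range(100):
            mid = 0.5 * (lo + hi)
            if erfc(mid * rc_nm) > rtol:
                lo = mid   # beta too small: interaction at the cut-off still too strong
            else:
                hi = mid
        return 0.5 * (lo + hi)

    for rc in (1.000, 1.302):
        print(f"rcoulomb = {rc:.3f} nm  ->  1/beta = {1 / ewald_beta(rc):.3f} nm")

    # rcoulomb = 1.000 nm  ->  1/beta = 0.320 nm   (the "initial" row)
    # rcoulomb = 1.302 nm  ->  1/beta = 0.417 nm   (the "final" row)

Note also that rlist - rcoulomb stays at 0.090 nm in both rows, so the
neighbour-list buffer is unchanged; what moves is only how the electrostatics
work is split between direct space (PP) and reciprocal space (PME).
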
>
> So the cut-off that I set in the .mdp file was changed.
>
> Will that affect the result of the simulation? I'm using GROMACS 4.6.7.
> Or will any cut-off be fine, as long as it is long enough?
>
> The force field paper for Amber99SB-ILDN used 1.0 nm for both the VdW and
> the PME electrostatic cut-offs.
>
> Thanks again.
>