[gmx-users] MPI oversubscription
Christian H.
hypolit at googlemail.com
Tue Feb 5 13:45:02 CET 2013
From the .log file:
Present hardware specification:
Vendor: GenuineIntel
Brand: Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
Family: 6 Model: 42 Stepping: 7
Features: aes apic avx clfsh cmov cx8 cx16 htt lahf_lm mmx msr nonstop_tsc
pcid pclmuldq pdcm popcnt pse rdtscp sse2 sse3 sse4.1 sse4.2 ssse3 tdt
Acceleration most likely to fit this hardware: AVX_256
Acceleration selected at GROMACS compile time: AVX_256
Table routines are used for coulomb: FALSE
Table routines are used for vdw: FALSE
From /proc/cpuinfo (8 entries like this in total):
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel(R) Core(TM) i7-2600K CPU @ 3.40GHz
stepping : 7
microcode : 0x28
cpu MHz : 1600.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3
cx16 xtpr pdcm pcid
sse4_1 sse4_2 popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb
xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid
bogomips : 6784.04
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
It also does not work on the local cluster; the output in the .log file there is:
Detecting CPU-specific acceleration.
Present hardware specification:
Vendor: AuthenticAMD
Brand: AMD Opteron(TM) Processor 6220
Family: 21 Model: 1 Stepping: 2
Features: aes apic avx clfsh cmov cx8 cx16 fma4 htt lahf_lm misalignsse mmx
msr nonstop_tsc pclmuldq pdpe1gb popcnt pse rdtscp sse2 sse3 sse4a sse4.1
sse4.2 ssse3 xop
Acceleration most likely to fit this hardware: AVX_128_FMA
Acceleration selected at GROMACS compile time: AVX_128_FMA
Table routines are used for coulomb: FALSE
Table routines are used for vdw: FALSE
I am not entirely sure about the details of that setup, but the CPU brand looks
about right.
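For reference, the vendor and feature lines above can be reproduced outside of
GROMACS with a couple of CPUID queries. This is only an illustration using
GCC's <cpuid.h> (not the actual GROMACS detection code), but it prints the
same kind of information that the log reports:

/* Illustration only (not the GROMACS detection code): query the CPU vendor
 * string and a few feature bits with GCC's <cpuid.h>.
 * Build: gcc cpuid_check.c -o cpuid_check
 */
#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;
    char         vendor[13] = "";

    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
    {
        fprintf(stderr, "CPUID not supported\n");
        return 1;
    }
    /* The 12-byte vendor string is returned in EBX, EDX, ECX (in that order). */
    memcpy(vendor,     &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);
    printf("Vendor: %s\n", vendor);

    if (__get_cpuid(1, &eax, &ebx, &ecx, &edx))
    {
        printf("avx:    %s\n", (ecx & (1u << 28)) ? "yes" : "no");
        printf("sse4.2: %s\n", (ecx & (1u << 20)) ? "yes" : "no");
        printf("popcnt: %s\n", (ecx & (1u << 23)) ? "yes" : "no");
    }
    return 0;
}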
Do you need any other information?
Thanks for looking into it!
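Regarding the "Detected 0 processors ... but 8 was returned by CPU_COUNT"
lines in my debug output quoted below: my guess (and it is only a guess, I
have not dug through the detection code) is that the processor count comes
from something like sysconf(_SC_NPROCESSORS_ONLN), while the cross-check uses
CPU_COUNT() on the affinity mask from sched_getaffinity(). A minimal
standalone comparison of the two values on Linux/glibc looks like this:

/* Minimal comparison of the two core counts mentioned in the debug output.
 * This is my own sketch, not the GROMACS code. Assumes Linux/glibc.
 * Build: gcc count_cpus.c -o count_cpus
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long      n_onln = sysconf(_SC_NPROCESSORS_ONLN);
    cpu_set_t mask;

    CPU_ZERO(&mask);
    if (sched_getaffinity(0, sizeof(mask), &mask) != 0)
    {
        perror("sched_getaffinity");
        return 1;
    }

    printf("sysconf(_SC_NPROCESSORS_ONLN) = %ld\n", n_onln);
    printf("CPU_COUNT(affinity mask)      = %d\n", CPU_COUNT(&mask));
    return 0;
}

If the sysconf value also comes out as 0 or -1 here, the problem would seem
to be in the environment rather than in GROMACS itself.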
2013/2/5 Berk Hess <gmx3 at hotmail.com>
>
> Hi,
>
> This looks like our CPU detection code failed and the result is not
> handled properly.
>
> What hardware are you running on?
> Could you mail the 10 lines from the md.log file following: "Detecting
> CPU-specific acceleration."?
>
> Cheers,
>
> Berk
>
>
> ----------------------------------------
> > Date: Tue, 5 Feb 2013 11:38:53 +0100
> > From: hypolit at googlemail.com
> > To: gmx-users at gromacs.org
> > Subject: [gmx-users] MPI oversubscription
> >
> > Hi,
> >
> > I am using the latest git version of gromacs, compiled with gcc 4.6.2 and
> > openmpi 1.6.3.
> > I start the program using the usual mpirun -np 8 mdrun_mpi ...
> > This always leads to a warning:
> >
> > Using 1 MPI process
> > WARNING: On node 0: oversubscribing the available 0 logical CPU cores per
> > node with 1 MPI processes.
> >
> > Checking the processes confirms that only one of the 8 available cores is
> > being used.
> > Running mdrun_mpi with the additional debug option -1 gives:
> >
> > Detected 0 processors, will use this as the number of supported hardware
> > threads.
> > hw_opt: nt 0 ntmpi 0 ntomp 1 ntomp_pme 1 gpu_id ''
> > 0 CPUs detected, but 8 was returned by CPU_COUNT
> > In gmx_setup_nodecomm: hostname 'myComputerName', hostnum 0
> > ...
> > 0 CPUs detected, but 8 was returned by CPU_COUNT
> > On rank 0, thread 0, core 0 the affinity setting returned 0
> >
> > I also tried compiling GROMACS with an experimental version of gcc 4.8,
> > which did not help either.
> > Is this a known problem? GROMACS obviously obtains the correct value via
> > CPU_COUNT, so why does it not simply use that value?
> >
> >
> > Best regards,
> > Christian
> --
> gmx-users mailing list gmx-users at gromacs.org
> http://lists.gromacs.org/mailman/listinfo/gmx-users
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
> * Please don't post (un)subscribe requests to the list. Use the
> www interface or send it to gmx-users-request at gromacs.org.
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>