[gmx-users] low cpu usage
Dr. Bernd Rupp
rupp at fmp-berlin.de
Fri Jun 27 09:21:07 CEST 2008
I think so too. It's a system-specific problem.
The benchmark runs and finishes with the following output:
               NODE (s)   Real (s)      (%)
       Time:    782.000    782.000    100.0
                       13:02
               (Mnbf/s)   (GFlops)   (ns/day)  (hour/ns)
Performance:     56.745      5.416      1.105     21.722
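(For reference: the 13:02 line is the 782 s wall time written as mm:ss, and
the last two columns are reciprocals, ns/day = 24 / (hours/ns); indeed
24 / 21.722 ≈ 1.105.)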
But with longer MD runs the machine hangs after a while. The hang does not
occur at a reproducible point in the dynamics, and the machine writes no
message to the system logs. At the moment we have no idea how to solve this
problem.
Regards,
Bernd
On Wednesday, 25 June 2008, Yang Ye wrote:
> It could be system-specific. Could you try out dppc in tutor/gmxbench, or
> download gmxbench from the GROMACS website (Benchmarks section)?
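>
> (For reference, a typical dppc benchmark run with GROMACS 3.3 would look
> roughly like this; the paths and process count are assumptions:
>
> cd gmxbench/dppc
> grompp -np 4 -f grompp.mdp -c conf.gro -p topol.top -o topol.tpr
> mpirun -np 4 mdrun -np 4 -s topol.tpr -g bench.log
>
> and the Performance: line at the end of bench.log is the number to
> compare.)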
>
> Regards,
> Yang Ye
>
> Dr. Bernd Rupp wrote:
> > Same problem as with mpich2.
> >
> > regards,
> > Bernd
> >
> > On Wednesday, 25 June 2008, Yang Ye wrote:
> >> I don't think Python is to blame.
> >> How about LAM/MPI?
> >>
> >> Regards,
> >> Yang Ye
> >>
> >> Dr. Bernd Rupp wrote:
> >>> Dear all,
> >>>
> >>>
> >>> CPU: Intel(R) Core(TM)2 Extreme CPU Q6850 @ 3.00GHz
> >>> System: Fedora 8
> >>> Kernel: 2.6.25.6-27.fc8 #1 SMP
> >>> GROMACS: 3.3.3, compiled correctly
> >>> MPI: mpich or mpich2
> >>>
> >>> We have the same problem with mpich2:
> >>> single-processor run: CPU load 100%
> >>> dual-processor run: CPU load around 70%
> >>> quad-processor run: CPU load around 40%
> >>>
> >>> With mpich we have no problem:
> >>> quad-processor run: CPU load around 95%
> >>>
> >>> We suspect that the Python implementation is the reason for the bad
> >>> scaling with mpich2, because mpiexec and mpdboot in mpich2 are Python
> >>> scripts.
> >>>
> >>> Maybe we are wrong, but mpich does not use Python and runs well.
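> >>>
> >>> (For reference, the mpd-based launch in question looks roughly like
> >>> this, with the hostfile name being an assumption:)
> >>>
> >>> mpdboot -n 2 -f mpd.hosts
> >>> mpiexec -n 4 mdrun -np 4 -s topol.tpr
> >>> mpdallexit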
> >>>
> >>> see you
> >>>
> >>> Bernd
> >>>
> >>> On Saturday, 21 June 2008, ha salem wrote:
> >>>>>> Dear users,
> >>>>>> my GROMACS is 3.3.3, my CPUs are Intel Core2 Quad 2.4 GHz, and my
> >>>>>> MPI is LAM 7.0.6.
> >>>>>> I can get full CPU usage of the 4 cores on one node, but when I run
> >>>>>> on 2 nodes the CPU usage of the cores is low.
> >>>>>> I have installed GROMACS with these instructions:
> >>>>>> Compile LAM 7:
> >>>>>> ./configure --prefix=/usr/local/share/lam7 --enable-static
> >>>>>> make | tee make.log
> >>>>>> make install
> >>>>>> make clean
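> >>>>>>
> >>>>>> (A quick sanity check, as a sketch: confirm the LAM install landed
> >>>>>> where expected:)
> >>>>>> /usr/local/share/lam7/bin/laminfo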
> >>>>>>
> >>>>>> Compile FFTW:
> >>>>>>
> >>>>>> export MPI_HOME=/usr/local/share/lam7
> >>>>>> export LAMHOME=/usr/local/share/lam7
> >>>>>> export PATH=/usr/local/share/lam7/bin:$PATH
> >>>>>> ./configure --prefix=/usr/local/share/fftw3 --enable-mpi
> >>>>>> make | tee make.log
> >>>>>> make install
> >>>>>> make distclean
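> >>>>>>
> >>>>>> (Note: GROMACS is configured with --disable-float below, i.e. double
> >>>>>> precision, so FFTW must be double precision too. The plain
> >>>>>> ./configure above builds double precision by default; a
> >>>>>> single-precision GROMACS would instead need FFTW built with
> >>>>>> --enable-float.)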
> >>>>>>
> >>>>>> Compile GROMACS:
> >>>>>>
> >>>>>> export MPI_HOME=/usr/local/share/lam7
> >>>>>> export LAMHOME=/usr/local/share/lam7
> >>>>>> export PATH=/usr/local/share/lam7/bin:$PATH
> >>>>>>
> >>>>>> ./configure --prefix=/usr/local/share/gromacs_333 \
> >>>>>>   --exec-prefix=/usr/local/share/gromacs_333 --program-prefix="" \
> >>>>>>   --program-suffix="" --enable-static --enable-mpi --disable-float
> >>>>>> make | tee make.log
> >>>>>> make install
> >>>>>> make distclean
> >>>>>>
> >>>>>> lamboot -v lamhosts
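> >>>>>>
> >>>>>> (The lamhosts file itself is not shown; for two quad-core machines a
> >>>>>> LAM boot schema would look something like this, hostnames assumed:)
> >>>>>>
> >>>>>> node1 cpu=4
> >>>>>> node2 cpu=4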
> >>>>>>
> >>>>>>
> >>>>>> Run GROMACS on 2 machines (each machine has 1 Core2 Quad):
> >>>>>>
> >>>>>> /usr/local/share/gromacs_333/bin/grompp -f md.mdp -po mdout.mdp \
> >>>>>>   -c md.gro -r md_out.gro -n md.ndx -p md.top -o topol.tpr -np 2
> >>>>>> mpirun -np 2 /usr/local/share/gromacs_333/bin/mdrun -np 2 \
> >>>>>>   -s topol.tpr -o md.trr -c md_out.gro -e md.edr -g md.log &
> >>>>>>
> >>>>>> I also tested with -np 8, but my CPU usage is low and the speed is
> >>>>>> less than a single run!
> >>>>>> Thank you in advance.
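> >>>>>>
> >>>>>> (Note: in GROMACS 3.3 the node count is fixed into topol.tpr by
> >>>>>> grompp -np, and mdrun's -np has to match it, so going to -np 8 also
> >>>>>> means rerunning grompp with -np 8.)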
--
Dr. Bernd F. Rupp
Leibniz-Institut für Molekulare Pharmakologie (FMP)
Abt. NMR-unterstützte Strukturforschung
AG Molecular Modeling/ Drug Design
Robert-Roessle-Str. 10
13125 Berlin
Germany
Tel. +49/0-30-94793-279
FAX +49/0-30-94793-169
Web www.fmp-berlin.info/drug_design.html
E-Mail rupp at fmp-berlin.de