[gmx-developers] Re: gmx-developers Digest, Vol 50, Issue 5

Carsten Kutzner ckutzne at gwdg.de
Mon Jun 23 09:41:36 CEST 2008


Yang Ye wrote:
> Hi, Xuji
> 
> Please trim the quoted digest from your email; it confuses people.
> 
> Could you test your system with 1, 2, 4, 8, 16, and 24 CPUs and report
> the corresponding speed-up ratios?
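> A minimal sketch of such a scaling run (hypothetical log names; it
> assumes the "Performance:" summary that mdrun prints at the end of
> its log file):
> 
> for n in 1 2 4 8 16 24; do
>     mpiexec -machinefile ./mf -n $n mdrun -s md1.tpr -g scale_np$n.log
>     grep 'Performance' scale_np$n.log   # ns/day column -> speed-up ratio
> done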
Hi, Xuji

These numbers would be interesting to see. Is there any particular
reason that you chose 6 PME nodes? Typically I would first determine
the performance *without* separate PME nodes and then check whether I
can get better performance *with* them.
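
A rough sketch of that scan (log names here are made up; -npme is the
flag from the current CVS mdrun, with -npme 0 meaning no dedicated PME
nodes):

mpiexec -machinefile ./mf -n 24 mdrun -npme 0 -s md1.tpr -g pme0.log
# then try a few dedicated PME node counts, e.g. a quarter to a third of 24
for npme in 4 6 8; do
    mpiexec -machinefile ./mf -n 24 mdrun -npme $npme -s md1.tpr -g pme$npme.log
done
grep 'Performance' pme*.log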

Carsten

> 
> Regards,
> Yang Ye
> 
> xuji wrote:
>> Hello, gmx-developers-request!
>>
>> The interconnect is Gigabit Ethernet. The simulation system has 102,136
>> SPC waters and about 40,000 protein atoms, and I use PME for the
>> long-range electrostatics. The command line is
>> "mpiexec -machinefile ./mf -n 24 mdrun -v -dd 6 3 1 -npme 6 -dlb -s md1.tpr -o md1.trr -g md1.log -e md1.edr -x md1.xtc >& md1.job &"
>> The machinefile "mf" lists all 24 CPUs of the three nodes. The CVS
>> version I use is "3.3.99_development_200800503".
>> Carsten, could you take a look and help me further?
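>> As a quick sanity check of those numbers (plain shell arithmetic, purely
>> illustrative): the -dd 6 3 1 grid sets the number of particle-particle
>> nodes, which together with -npme has to match the 24 MPI processes:
>>
>> pp=$(( 6 * 3 * 1 ))                       # 18 particle-particle nodes
>> echo "$pp PP + 6 PME = $(( pp + 6 )) of 24 MPI processes"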
>>
>> Thanks!
>>
>> ======= On 2008-06-21 12:00:00 you wrote: =======
>>
>>   
>>> Message: 2
>>> Date: Fri, 20 Jun 2008 20:13:52 +0800
>>> From: "xuji"<xuji at home.ipe.ac.cn>
>>> Subject: [gmx-developers] problem about Gromacs CVS version
>>> 	3.3.99_development_200800503 parallel efficiency
>>> To: "gmx-developers" <gmx-developers at gromacs.org>
>>> Message-ID: <20080620121421.B083D165 at colibri.its.uu.se>
>>> Content-Type: text/plain; charset="gb2312"
>>>
>>> Hi all
>>>
>>> I have 3 nodes with 8 CPUs each, and I run mdrun with 24 processes
>>> across the three nodes under MPICH2. But the efficiency of mdrun is
>>> very low: the occupancy of each CPU is only about 10%. I don't know
>>> why. Can someone give me some help?
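>>> (For reference, a hypothetical way to watch the per-process CPU share
>>> on each node with standard tools: ps -C mdrun -o pid,pcpu,comm)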
>>>
>>> Thanks in advance for any help!
>>>
>>> Best wishes!
>>>
>>> Ji Xu
>>> xuji at home.ipe.ac.cn
>>>
>>> 2008-06-20
>>>               
>>>
>>> ------------------------------
>>>
>>> Message: 3
>>> Date: Fri, 20 Jun 2008 14:22:11 +0200
>>> From: Carsten Kutzner <ckutzne at gwdg.de>
>>> Subject: Re: [gmx-developers] problem about Gromacs CVS	version
>>> 	3.3.99_development_200800503 parallel efficiency
>>> To: Discussion list for GROMACS development
>>> 	<gmx-developers at gromacs.org>
>>> Message-ID: <485BA0F3.8020800 at gwdg.de>
>>> Content-Type: text/plain; charset=GB2312
>>>
>>> xuji wrote:
>>>     
>>>> Hi all
>>>>  
>>>> I have 3 nodes with 8 CPUs each, and I run mdrun with 24 processes
>>>> across the three nodes under MPICH2. But the efficiency of mdrun is
>>>> very low: the occupancy of each CPU is only about 10%. I don't know
>>>> why. Can someone give me some help?
>>>>       
>>> Hi,
>>>
>>> what kind of interconnect do you have? It should be *at least* Gigabit
>>> Ethernet! How big is your system? Are you using PME? Please give more
>>> information, also on what was your command line to start the runs and
>>> what CVS version (date) you are using.
>>>
>>> Carsten
>>>
>>>
>>> -- 
>>> Dr. Carsten Kutzner
>>> Max Planck Institute for Biophysical Chemistry
>>> Theoretical and Computational Biophysics Department
>>> Am Fassberg 11
>>> 37077 Goettingen, Germany
>>> Tel. +49-551-2012313, Fax: +49-551-2012302
>>> http://www.mpibpc.mpg.de/research/dep/grubmueller/
>>> http://www.gwdg.de/~ckutzne
>>>
>>>
>>> ------------------------------
>>>
>>> Message: 4
>>> Date: Fri, 20 Jun 2008 21:04:09 +0800
>>> From: Yang Ye <leafyoung at yahoo.com>
>>> Subject: Re: [gmx-developers] problem about Gromacs CVS	version
>>> 	3.3.99_development_200800503 parallel efficiency
>>> To: Discussion list for GROMACS development
>>> 	<gmx-developers at gromacs.org>
>>> Message-ID: <485BAAC9.9050809 at yahoo.com>
>>> Content-Type: text/plain; charset=GB2312
>>>
>>> Hi,
>>>
>>> This is related to the type of your network connection.
>>>
>>> If it is plain Ethernet, such low CPU utilization is expected. Also, if
>>> your system has only a small number of atoms, the speed-up from running
>>> in parallel will be small.
>>>
>>> So: reduce the number of parallel processes, wait for Gromacs 4, or
>>> switch to a faster inter-node network (InfiniBand, etc.), in increasing
>>> order of the time required, IMHO.
>>>
>>> Regards,
>>> Yang Ye
>>>
>>> xuji wrote:
>>>     
>>>> Hi all
>>>> I have 3 nodes with 8 CPUs each, and I run mdrun with 24 processes
>>>> across the three nodes under MPICH2. But the efficiency of mdrun is
>>>> very low: the occupancy of each CPU is only about 10%. I don't know
>>>> why. Can someone give me some help?
>>>> Thanks in advance for any help!
>>>> Best wishes!
>>>> Ji Xu
>>>> xuji at home.ipe.ac.cn <mailto:xuji at home.ipe.ac.cn>
>>>>
>>>> 2008-06-20
> 
> _______________________________________________
> gmx-developers mailing list
> gmx-developers at gromacs.org
> http://www.gromacs.org/mailman/listinfo/gmx-developers
> Please don't post (un)subscribe requests to the list. Use the 
> www interface or send it to gmx-developers-request at gromacs.org.

-- 
Dr. Carsten Kutzner
Max Planck Institute for Biophysical Chemistry
Theoretical and Computational Biophysics Department
Am Fassberg 11
37077 Goettingen, Germany
Tel. +49-551-2012313, Fax: +49-551-2012302
http://www.mpibpc.mpg.de/research/dep/grubmueller/
http://www.gwdg.de/~ckutzne


