[gmx-users] Help: Gromacs Installation

Mark Abraham Mark.Abraham at anu.edu.au
Thu Apr 28 01:37:53 CEST 2011


On 4/28/2011 4:44 AM, Hrachya Astsatryan wrote:
> Dear Roland,
>
> We need to run GROMACS across the nodes of our cluster (in 
> order to use all of its computational resources), which is why 
> we need MPI (instead of using threads or OpenMP within one SMP node).
> I can run simple MPI examples, so I guess the problem is in the 
> installation of GROMACS.

I agree with Roland that the problem most likely lies in the 
configuration and function of the MPI library. RHEL4, being at least 5 
years old, is probably using some ancient MPI library version that is 
not up to the job. This is a frequently occurring problem. Roland asked 
about your MPI library... if you want free help, you'll do yourself 
favours by providing answers to the questions of people who are offering 
help :-)
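
As a concrete illustration, the p4_error / net_send messages in your log 
look like they come from the old MPICH-1 (ch_p4) family. A rough sketch of 
how you might check what MPI you actually have, and rebuild against a 
newer one (the wrapper names and paths here are assumptions; adjust them 
for your site):

    # What MPI is on the PATH, and what does its compiler wrapper use?
    which mpicc mpirun
    mpicc -show        # MPICH-style wrappers print the underlying compiler/libs
    mpicc -showme      # Open MPI spells the same option -showme
    mpirun -version    # if your launcher supports it, this reports name/version

    # Rebuild GROMACS against a newer MPI (e.g. Open MPI or MPICH2) by
    # pointing configure at its compiler wrapper:
    export CC=/path/to/new/mpi/bin/mpicc
    ./configure --prefix=/localuser/armen/gromacs --enable-mpi
    make && make install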

Mark

> On 4/27/11 11:29 PM, Roland Schulz wrote:
>> This seems to be a problem with your MPI library. Test whether other 
>> MPI programs have the same problem. If it is not GROMACS specific, 
>> please ask on the mailing list of your MPI library. If it only happens 
>> with GROMACS, be more specific about what your setup is (which MPI 
>> library, what hardware, ...).
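>>
>> For example (a sketch, assuming your MPI provides mpicc and mpirun in
>> the usual places), a trivial job across all the cores you request is a
>> good first test:
>>
>>     mpirun -np 8 hostname        # should print one line per process
>>     mpicc -o hello hello.c       # any minimal MPI "hello world" source
>>     mpirun -np 8 ./hello
>>
>> If that hangs or fails in the same way, the problem is in the MPI
>> setup, not in GROMACS.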
>>
>> Also, you could use the latest GROMACS 4.5.x. It has built-in thread 
>> support and doesn't need MPI as long as you only run on the cores 
>> within one SMP node.
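>>
>> For example, on one 8-core node something like this should keep all
>> cores busy with no MPI involved at all (the -deffnm name is just a
>> placeholder for your own input files):
>>
>>     mdrun -nt 8 -deffnm d.dppc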
>>
>> Roland
>>
>> On Wed, Apr 27, 2011 at 2:13 PM, Hrachya Astsatryan <hrach at sci.am> wrote:
>>
>>     Dear Mark Abraham & all,
>>
>>     We used other benchmark systems, such as d.dppc on 4
>>     processors, but we have the same problem (one process uses about
>>     100% CPU, the others 0%).
>>     After a while we receive the following error:
>>
>>     Working directory is /localuser/armen/d.dppc
>>     Running on host wn1.ysu-cluster.grid.am
>>     Time is Fri Apr 22 13:55:47 AMST 2011
>>     Directory is /localuser/armen/d.dppc
>>     ____START____
>>     Start: Fri Apr 22 13:55:47 AMST 2011
>>     p2_487:  p4_error: Timeout in establishing connection to remote process: 0
>>     rm_l_2_500: (301.160156) net_send: could not write to fd=5, errno = 32
>>     p2_487: (301.160156) net_send: could not write to fd=5, errno = 32
>>     p0_32738:  p4_error: net_recv read:  probable EOF on socket: 1
>>     p3_490: (301.160156) net_send: could not write to fd=6, errno = 104
>>     p3_490:  p4_error: net_send write: -1
>>     p3_490: (305.167969) net_send: could not write to fd=5, errno = 32
>>     p0_32738: (305.371094) net_send: could not write to fd=4, errno = 32
>>     p1_483:  p4_error: net_recv read:  probable EOF on socket: 1
>>     rm_l_1_499: (305.167969) net_send: could not write to fd=5, errno = 32
>>     p1_483: (311.171875) net_send: could not write to fd=5, errno = 32
>>     Fri Apr 22 14:00:59 AMST 2011
>>     End: Fri Apr 22 14:00:59 AMST 2011
>>     ____END____
>>
>>     We tried a newer version of GROMACS, but received the same error.
>>     Please help us to overcome the problem.
>>
>>
>>     With regards,
>>     Hrach
>>
>>
>>     On 4/22/11 1:41 PM, Mark Abraham wrote:
>>
>>         On 4/22/2011 5:40 PM, Hrachya Astsatryan wrote:
>>
>>             Dear all,
>>
>>             I would like to inform you that I have installed the
>>             GROMACS 4.0.7 package on the cluster (the nodes of the
>>             cluster are 8-core Intel, OS: RHEL4 Scientific Linux) with
>>             the following steps:
>>
>>             yum install fftw3 fftw3-devel
>>             ./configure --prefix=/localuser/armen/gromacs --enable-mpi
>>
>>             I have also downloaded the gmxbench-3.0 package and tried
>>             to run d.villin to test it.
>>
>>             Unfortunately, it works fine only for np = 1, 2 or 3; if I
>>             use more than 3 processes I see poor CPU balancing and the
>>             process hangs.
>>
>>             Could you, please, help me to overcome the problem?
>>
>>
>>         Probably you have only four physical cores (hyperthreading is
>>         not normally useful), or your MPI is configured to use only
>>         four cores, or these benchmarks are too small to scale usefully.
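>>
>>         A quick way to check what you actually have (a sketch; the
>>         /proc/cpuinfo fields vary a little between kernels):
>>
>>             grep -c ^processor /proc/cpuinfo                     # logical CPUs
>>             grep 'physical id' /proc/cpuinfo | sort -u | wc -l   # sockets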
>>
>>         Choosing to do a new installation of a GROMACS version that
>>         is several years old is normally less productive than
>>         installing the latest version.
>>
>>         Mark
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> -- 
>> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
>> 865-241-1537, ORNL PO BOX 2008 MS6309
>



