[gmx-users] Problem mdrun_mpi GROMACS 4.6.1 in Intel Cluster

Agnivo Gosai agnivogromacs14 at gmail.com
Thu Nov 20 02:32:51 CET 2014


Dear Users

I am back again with my issue regarding GROMACS 4.6.x and the Intel cluster
used for my research.

This time I compiled a serial build (without MPI or thread-MPI) of
GROMACS 4.6.7 with the following command line:

/work/gb_lab/agosai/GROMACS/cmake-2.8.11/bin/cmake .. -DGMX_GPU=OFF
-DGMX_MPI=OFF -DGMX_OPENMP=OFF -DGMX_THREAD_MPI=OFF -DGMX_OPENMM=OFF
-DCMAKE_C_COMPILER=icc -DCMAKE_CXX_COMPILER=icpc -DGMX_BUILD_OWN_FFTW=ON
-DCMAKE_INSTALL_PREFIX=/work/gb_lab/agosai/gmx467ag -DGMX_DOUBLE=ON
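
(For completeness, the build and install after this cmake step were just the
usual ones, roughly as below; the -j value is arbitrary and the GMXRC path
simply follows from the install prefix above.)

make -j 8
make install
source /work/gb_lab/agosai/gmx467ag/bin/GMXRC   # puts the 4.6.7 tools in the PATH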

Now I am not getting any problems with "genbox", "grompp" and the other
GROMACS tools. I hope I will not face any errors like the one mentioned in
the *forwarded mail* below.

For mdrun I use the mdrun_mpi binary installed by a colleague of mine from
the GROMACS 4.6.1 package. So I am using the serial tools from 4.6.7 for pre-
and post-processing, and mdrun_mpi from 4.6.1 for the molecular dynamics runs.

Each node of my cluster has 16 processors. Running on a single node with
mpirun -np 16 mdrun_mpi ..... gives the desired result, with an average
performance of about 6.5 ns/day.
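
(For context, a typical single-node job of mine looks roughly like the
following; the file names are only placeholders, not my actual ones:

grompp -f md.mdp -c solvated.gro -p topol.top -o md.tpr   # 4.6.7 serial tool
mpirun -np 16 mdrun_mpi -deffnm md                        # 4.6.1 MPI binary
)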

However, when I use more than 1 node (4 * 16 = np = 64), no output files
are generated by mdrun_mpi. Nothing happens. I deleted the job and found
the following errors:
[mpiexec@node094] HYD_pmcd_pmiserv_send_signal
(./pm/pmiserv/pmiserv_cb.c:221): assert (!closed) failed
[mpiexec@node094] ui_cmd_cb (./pm/pmiserv/pmiserv_pmci.c:128): unable to
send SIGUSR1 downstream
[mpiexec@node094] HYDT_dmxu_poll_wait_for_event
(./tools/demux/demux_poll.c:77): callback returned error status
[mpiexec@node094] HYD_pmci_wait_for_completion
(./pm/pmiserv/pmiserv_pmci.c:388): error waiting for event
[mpiexec@node094] main (./ui/mpich/mpiexec.c:718): process manager error
waiting for completion

I believe that the mdrun_mpi from version 4.6.1 has not been properly
compiled to run across several nodes of the cluster. However, I would like
to run on multiple nodes to speed up my simulations; a quick check I am
considering is sketched below.
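
(One quick check, only a sketch: before blaming the mdrun_mpi build I can test
whether the MPI launcher itself works across the four nodes from the same kind
of job script, e.g.

mpirun -np 64 hostname   # should print the names of all 4 allocated nodes

If even this fails with the same mpiexec/hydra errors, the problem lies in the
MPI or job environment rather than in how mdrun_mpi was compiled.)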

Any suggestions, please?

Thanks & Regards
Agnivo Gosai
Grad Student, Iowa State University.

---------- Forwarded message ----------
From: Mark Abraham <mark.j.abraham at gmail.com>
To: Discussion list for GROMACS users <gmx-users at gromacs.org>
Cc:
Date: Mon, 10 Nov 2014 02:26:09 +0100
Subject: Re: [gmx-users] New problem GROMACS 4.6.7 in Intel Cluster
On Mon, Nov 10, 2014 at 2:17 AM, Justin Lemkul <jalemkul at vt.edu> wrote:

>
>
> On 11/9/14 8:15 PM, Agnivo Gosai wrote:
>
>> Dear Users
>>
>> Firstly, thanks (especially Drs. Szilard and Mark) for helping me out with
>> the installation of GROMACS 4.6.7 on my university Intel cluster. However,
>> I ran into a new problem while using it.
>>
>> 1. Firstly I used pdb2gmx to process a pdb file.
>> 2. Then I used editconf.
>> 3. Then I used genbox.
>>
>> But I encountered the following error.
>>
>> Initialising van der waals distances...
>> Will generate new solvent configuration of 5x5x9 boxes
>> Generating configuration
>> Sorting configuration
>> Found 1 molecule type:
>>      SOL (   3 atoms): 48600 residues
>> Calculating Overlap...
>> box_margin = 0.315
>> Removed 53454 atoms that were outside the box
>> OMP: Error #178: Function pthread_getattr_np failed:
>> OMP: System error #12: Cannot allocate memory
>> Aborted
>>
>> Now, upon searching on the web, it seems to be an OpenMP error. Again I am
>> in a fix, as this is something I have little or no idea about.
>>
>>
> Preprocessing tools (and most analysis tools) are not parallelized in any
> way. They run on one core only. Generally you do not carry out these sorts
> of operations on a cluster, as there is no benefit to doing so.
>
> -Justin

True, and the particular problem is likely that Agnivo has not prepared the
environment correctly, e.g. by sourcing Intel's compilervars.sh script,
and/or following the cluster's usage guide for loading its Intel module.
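
For example, something along these lines near the top of the job script; the
module name and the compilervars.sh path are cluster-specific, so check the
local documentation:

module load intel                               # or whatever the local Intel module is called
source /opt/intel/bin/compilervars.sh intel64   # path varies between installations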

Mark

