[gmx-users] RHEL OS, g_mdrun

Szilárd Páll szilard.pall at cbr.su.se
Wed May 18 22:51:00 CEST 2011


Hi,

You should really use fftw, it's *much* faster! (Btw: fftpack comes
with Gromacs.)

I'm not sure what versions of Gromacs are available through the RHEL
packages, but since installing from source is not very difficult
(especially for a sysadmin) you might be better off getting a
fresh installation. You can find the instructions here:
http://www.gromacs.org/Downloads/Installation_Instructions
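A minimal sketch of such a source build with fftw, for the 4.5.x autotools
build system; the version numbers and install prefixes below are
illustrative assumptions, not the poster's actual setup:

```shell
# Illustrative only -- adjust versions and prefixes to your site.
# Build single-precision FFTW 3.x (the precision Gromacs uses by default):
tar xzf fftw-3.2.2.tar.gz && cd fftw-3.2.2
./configure --prefix=$HOME/opt/fftw --enable-float --enable-sse
make -j4 && make install
cd ..

# Point the Gromacs configure script at it and install:
tar xzf gromacs-4.5.3.tar.gz && cd gromacs-4.5.3
export CPPFLAGS=-I$HOME/opt/fftw/include
export LDFLAGS=-L$HOME/opt/fftw/lib
./configure --prefix=$HOME/opt/gromacs
make -j4 && make install
```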

Cheers,
--
Szilárd



On Wed, May 18, 2011 at 10:24 PM, ram bio <rmbio861 at gmail.com> wrote:
> Dear Justin,
>
> Thanks for the suggestion.
> Do we also need to install the FFTW libraries when installing gromacs?
> I think fftpack got installed by default when the system admin
> installed gromacs using yum, i.e.:
>
> yum install gromacs
> yum install gromacs_openmpi
> yum install gromacs_mpich2
>
> I ask because when I asked my system administrator, he mentioned that
> he installed the mpich2 version of gromacs, but when I tried to locate
> g_mdrun_mpich2, I could not find it in the path; instead I found
> g_mdrun_openmpi. So I tried to run the job with g_mdrun_openmpi, but
> the job would not run.
>
> So it is now difficult to figure out why I cannot use
> g_mdrun_openmpi, even though the file exists at
> /usr/lib64/openmpi/bin/g_mdrun_openmpi
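[One common cause of this symptom is mixing MPI stacks: a binary built
against OpenMPI must be launched with OpenMPI's own mpirun, not MPICH2's.
A sketch of a matching launch; the paths are assumptions taken from the
binary location above, so verify them with `which mpirun` on your cluster:

```shell
# Hypothetical paths -- an _openmpi binary needs OpenMPI's mpirun,
# not the MPICH2 mpirun that may come first in $PATH:
/usr/lib64/openmpi/bin/mpirun -np 8 \
    /usr/lib64/openmpi/bin/g_mdrun_openmpi -deffnm testnpt3ns -v
```
]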
>
> Ram
>
> On Wed, May 18, 2011 at 3:00 PM, Justin A. Lemkul <jalemkul at vt.edu> wrote:
>>
>>
>> ram bio wrote:
>>>
>>> Dear Gromacs Users,
>>>
>>> I am trying to run a gromacs mdrun job (PBS script) on the cluster
>>> (RHEL operating system) using the recently installed gromacs version
>>> 4.5.3, but I do not think it is running in parallel across the
>>> nodes, as the job is taking a long time to finish.
>>>
>>> The information regarding the  installed gromacs on the server is as
>>> follows:
>>>
>>> Scheduler: Moab/Torque
>>> Mpi Version: Mpich2
>>> OS: RHEL 5.5
>>> Compiler: gcc  version 4.4 enterprise linux 5
>>> FFT – FFT Pack
>>>
>>> I am running a batch job using the script as under:
>>>
>>> #PBS -S /bin/bash
>>> #PBS -N aarontest
>>> #PBS -o aaronout.txt
>>> #PBS -j oe
>>> #PBS -l nodes=2:ppn=4
>>> #PBS -l walltime=336:00:00
>>>
>>> #cat $PBS_NODEFILE
>>> NP=`wc -l < $PBS_NODEFILE`
>>> cd $PBS_O_WORKDIR
>>> #export MPI_GROUP_MAX=128
>>>
>>>
>>> /usr/local/bin/mpirun -np 8 /usr/bin/g_mdrun -nt 8 -deffnm testnpt3ns
>>> -v -dlb yes -rdd 1.2
>>>
>>> Please let me know your suggestions for correcting the error.
>>>
>>
>> MPI and threading are mutually exclusive; you can't launch a multi-threaded
>> process (mdrun -nt) via mpirun.  The mdrun executable must be MPI-enabled to
>> run on this system.  Perhaps the pre-compiled binary is not, in which case
>> you should install from source with --enable-mpi and --disable-threads.
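[A sketch of what that source build and the corrected job script might look
like for the 4.5.x autotools build; the program suffix and the use of $NP
from the PBS node file are illustrative assumptions:

```shell
# Sketch only -- suffix and prefix choices are assumptions.
cd gromacs-4.5.3
./configure --enable-mpi --disable-threads --program-suffix=_mpi
make -j4 && make install

# Then in the PBS script, drop -nt and let mpirun set the rank count:
NP=`wc -l < $PBS_NODEFILE`
mpirun -np $NP g_mdrun_mpi -deffnm testnpt3ns -v -dlb yes -rdd 1.2
```
]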
>>
>> -Justin
>>
>> --
>> ========================================
>>
>> Justin A. Lemkul
>> Ph.D. Candidate
>> ICTAS Doctoral Scholar
>> MILES-IGERT Trainee
>> Department of Biochemistry
>> Virginia Tech
>> Blacksburg, VA
>> jalemkul[at]vt.edu | (540) 231-9080
>> http://www.bevanlab.biochem.vt.edu/Pages/Personal/justin
>>
>> ========================================
>> --
>> gmx-users mailing list    gmx-users at gromacs.org
>> http://lists.gromacs.org/mailman/listinfo/gmx-users
>> Please search the archive at
>> http://www.gromacs.org/Support/Mailing_Lists/Search before posting!
>> Please don't post (un)subscribe requests to the list. Use the www interface
>> or send it to gmx-users-request at gromacs.org.
>> Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>>
>
