[gmx-users] gromacs on glacier
Justin A. Lemkul
jalemkul at vt.edu
Wed Jul 15 02:45:31 CEST 2009
Payman Pirzadeh wrote:
> Hi Justin,
> Regarding your suggestion to use spc216: when I tried to energy-minimize
> spc216 with grompp, I got the following error:
> Program grompp, VERSION 4.0.4
> Source code file: topio.c, line: 415
> Fatal error:
> Syntax error - File spce.itp, line 1
> Last line read:
> '[ moleculetype ]'
> Invalid order for directive moleculetype
> Same thing happened when I tried TIP4P. Where is the problem? I did not have
> this issue with my own model.
Something is malformed in your .top, but it's impossible to comment any further
unless you post your entire .top file.
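For what it's worth, grompp throws "Invalid order for directive moleculetype"
when a [ moleculetype ] is read before the force field's [ defaults ] section,
so the include order in the .top matters. A minimal water-box topology looks
something like this (a sketch assuming the OPLS-AA force field - substitute
whichever force field you are actually using):

#include "ffoplsaa.itp"    ; force field first - defines [ defaults ]
#include "spce.itp"        ; the SPC/E [ moleculetype ] must come after it

[ system ]
216 SPC/E waters

[ molecules ]
; molecule   count
SOL          216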
> -----Original Message-----
> From: gmx-users-bounces at gromacs.org [mailto:gmx-users-bounces at gromacs.org]
> On Behalf Of Justin A. Lemkul
> Sent: June 8, 2009 1:53 PM
> To: Gromacs Users' List
> Subject: Re: [gmx-users] gromacs on glacier
> Payman Pirzadeh wrote:
>> Dear Justin,
>> Here is the mpich version:
>> MPICH Version: 1.2.7p1
>> MPICH Release date: $Date: 2005/11/04 11:54:51$
>> MPICH Patches applied: none
>> MPICH configure: -prefix=/share/apps/intel/mpich/
>> MPICH Device: ch_p4
> Well, I think that MPICH is your problem then. There are several reports of
> sporadic bugs (the one that you're seeing!) with that version. Since it's
> several years old, getting a fix is probably a bit unlikely :)
> Perhaps you can sort out with the sysadmins what you can do. Like I said
> before, OpenMPI is probably a better bet - we've never had a problem with it.
> You can probably install it yourself in your home directory, and point to it
> with environment variables during the Gromacs installation.
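> As a rough sketch (the OpenMPI version and the paths below are only
> examples), that could look like:
>
> tar xzf openmpi-1.3.3.tar.gz && cd openmpi-1.3.3
> ./configure --prefix=$HOME/openmpi
> make && make install
>
> # make the new mpicc/mpirun visible before configuring Gromacs
> export PATH=$HOME/openmpi/bin:$PATH
> export LD_LIBRARY_PATH=$HOME/openmpi/lib:$LD_LIBRARY_PATH
> ./configure --enable-mpi --prefix=$HOME/gromacs-4.0.4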
>> Also the gcc compiler:
>> gcc (GCC) 3.4.6 20060404 (Red Hat 3.4.6-9)
>> Copyright (C) 2006 Free Software Foundation, Inc.
>> About the system, I have a water model with 3 atom sites and 3 virtual sites.
> Probably not a very good test case for diagnosing problems, but I think it's
> unrelated in this case. When testing, keep it simple - run a simulation using
> spc216.gro from the Gromacs installation; it should pretty much always work.
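> Something along these lines should do as a sanity check (em.mdp here is a
> minimal energy-minimization parameter file that you supply yourself, and
> topol.top is a plain water topology):
>
> # spc216.gro ships with Gromacs (share/gromacs/top); copy it over if needed
> grompp -f em.mdp -c spc216.gro -p topol.top -o em.tpr
> mpirun -np 4 mdrun -deffnm em   # your MPI mdrun may carry a suffix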
>> -----Original Message-----
>> From: gmx-users-bounces at gromacs.org [mailto:gmx-users-bounces at gromacs.org]
>> On Behalf Of Justin A. Lemkul
>> Sent: June 8, 2009 11:56 AM
>> To: Gromacs Users' List
>> Subject: Re: [gmx-users] gromacs on glacier
>> Payman Pirzadeh wrote:
>>> Hi Justin,
>>> Since the manual itself was not sufficient, I asked some other people who
>>> are running GROMACS in our group (but they run only on 2 CPUs). Here are the
>>> steps I took to compile the parallel version (I have included my notes on
>>> what they told me as well):
>>> 1. ./configure --prefix=/global/home/pirzadeh/gromacs-4.0.4
>>> This line specifies the installation path
>>> 2. make
>>> 3. make install
>>> These two commands build and install 'grompp' and the analysis tools of
>>> GROMACS.
>>> 4. make clean
>>> This command removes files generated during the build which are no longer
>>> needed.
>>> 5. ./configure --enable-mpi --disable-nice
>>> Here we compile the code for the parallel version of GROMACS
>>> 6. make mdrun
>>> 7. make install-mdrun
>>> Now the parallel version of mdrun is built. The analysis tools are located
>>> in the 'bin' folder, accompanied by 'GMXRC'.
>>> 8. Before running 'grompp' to produce the run input file for the simulation,
>>> we should run 'source /global/home/pirzadeh/gromacs-4.0.4/bin/GMXRC' to set
>>> the paths for the current installation.
>> This is pretty much the standard installation procedure. What we also need
>> to know are the compilers used, etc. for the installation. The error you're
>> getting is from MPICH. Which version is installed on the cluster? It may be
>> old and buggy. In any case, you can try to install something newer, like a
>> recent version of OpenMPI (which we have on our cluster); it may be more
>> reliable. Only a random thought - it means nothing unless we know what you
>> have installed :)
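>> For the record, the procedure you listed boils down to this transcript. One
>> aside: if the second configure run omits --prefix, 'make install-mdrun' will
>> most likely install to the default location (/usr/local/gromacs) instead of
>> your home directory:
>>
>> ./configure --prefix=/global/home/pirzadeh/gromacs-4.0.4
>> make && make install               # serial grompp + analysis tools
>> make clean
>> ./configure --enable-mpi --disable-nice \
>>             --prefix=/global/home/pirzadeh/gromacs-4.0.4
>> make mdrun && make install-mdrun   # MPI-enabled mdrun
>> source /global/home/pirzadeh/gromacs-4.0.4/bin/GMXRC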
>> What about the contents of your system, as I asked before? Do you really
>> have a 100% virtual site system?
>>> Sorry for tons of e-mails.
>>> -----Original Message-----
>>> From: gmx-users-bounces at gromacs.org [mailto:gmx-users-bounces at gromacs.org]
>>> On Behalf Of Justin A. Lemkul
>>> Sent: June 8, 2009 11:12 AM
>>> To: Discussion list for GROMACS users
>>> Subject: Re: [gmx-users] gromacs on glacier
>>> Payman Pirzadeh wrote:
>>>> I had the chance to run GROMACS 4.0.4 on another cluster. The same
>>>> problem still persists. What I found is that it can be run on one node
>>>> with 2 CPUs, but as soon as the number of nodes is increased to 2, 3, ...
>>>> it will crash. Following are the last lines reported in different files:
>>>> "In the log file of the code":
>>>> There are: 1611 Atoms
>>>> There are: 1611 VSites
>>> All of your atoms are virtual sites? If so, I would try a simpler test
>>> case, to rule out stumbling across some obscure bug.
>>>> p2_22627: p4_error: Timeout in establishing connection to remote
>>> This is an error message from MPICH, not Gromacs. See, for example:
>>>> To me, it seems that the code cannot communicate across more than one
>>>> node. I suspect I did something wrong during installation! I tried the
>>>> wiki, but I cannot find the documents as before, and I really do not
>>>> know in which step I might have gone wrong.
>>> If you suspect you have done something wrong, then post the details of your
>>> system configuration (hardware, compilers, OS, etc.) as well as a complete
>>> record of what you did to compile the software. If your procedure is sound,
>>> then it helps rule out the possibility that you messed something up.
Justin A. Lemkul
ICTAS Doctoral Scholar
Department of Biochemistry
jalemkul[at]vt.edu | (540) 231-9080