[gmx-users] gromacs.org_gmx-users Digest, Vol 145, Issue 4

Pabitra Mohan uniquepabs at gmail.com
Tue May 3 10:31:36 CEST 2016


Dear users,
Got it. I was running REMD on my laptop with 8 processors, but when I tried on
the HPCC with 20 processors it is now running.
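
For reference, assuming the run on the HPCC used the same five simulation
directories as before, a command along these lines gives 20 ranks, which divide
evenly over the 5 replicas (4 ranks per replica), as the earlier error message
requires:

  mpirun -np 20 mdrun_mpi -v -multidir sim0 sim1 sim2 sim3 sim4 -replex 500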

Thank you very much for your suggestions

On Tue, May 3, 2016 at 1:53 PM, <
gromacs.org_gmx-users-request at maillist.sys.kth.se> wrote:

> Send gromacs.org_gmx-users mailing list submissions to
>         gromacs.org_gmx-users at maillist.sys.kth.se
>
> To subscribe or unsubscribe via the World Wide Web, visit
>         https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users
> or, via email, send a message with subject or body 'help' to
>         gromacs.org_gmx-users-request at maillist.sys.kth.se
>
> You can reach the person managing the list at
>         gromacs.org_gmx-users-owner at maillist.sys.kth.se
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of gromacs.org_gmx-users digest..."
>
>
> Today's Topics:
>
>    1. Re: Using ORCA with Gromacs (bharat gupta)
>    2. problems in MPI run for REMD in GROMACS (Pabitra Mohan)
>    3. Building and Equilibrating a United-Atom
>       DOPC-DPPC-Cholesterol Bilayer (John Smith)
>    4. Re: problems in MPI run for REMD in GROMACS (Terry)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 3 May 2016 13:37:56 +0900
> From: bharat gupta <bharat.85.monu at gmail.com>
> To: Discussion list for GROMACS users <gmx-users at gromacs.org>
> Subject: Re: [gmx-users] Using ORCA with Gromacs
> Message-ID:
>         <CAAh+zSU9XxJ-YuNEtx0Ght=
> kKUzBbLPsXpMKnwcF_cPReMn7ZA at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Dear Gmx Users,
>
> I am trying to run a QM/MM optimization using ORCA and GROMACS 5.1.2. I
> compiled GROMACS 5.1.2 with ORCA as the QM/MM program. Here is the relevant
> output from the CMakeCache.txt file for the QM package:
>
> //QM package for QM/MM. Pick one of: none, gaussian, mopac, gamess,
> // orca
> GMX_QMMM_PROGRAM:STRING=ORCA
>
> When I run the optimization using mdrun, I get the following error:
>
> INIT_EWALD_F_TABLE: scale=905.814328, maxr=2.433000, size=2205
> SAVING EWALD TABLE
> Layer 0
> nr of QM atoms 103
> QMlevel: RHF/STO-3G
>
>
> -------------------------------------------------------
> Program gmx_d, VERSION 5.0
> Source code file:
> /home/Bharat/Documents/gromacs-5.0/src/gromacs/mdlib/qmmm.c, line: 1006
>
> Fatal error:
> Ab-initio calculation only supported with Gamess, Gaussian or ORCA.
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
>
> I had previously set the path for ORCA, i.e. $ORCA_PATH, but I don't know
> how to set the path for $BASENAME using the set command.
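>
> (In a bash shell, environment variables are set with export rather than set,
> for example
>
>   export ORCA_PATH=/path/to/orca    # directory containing the orca executable
>   export BASENAME=mysystem          # base name for the ORCA input/output files
>
> where the values are placeholders. This assumes that $ORCA_PATH and $BASENAME,
> the names used in that thread, are the variables your build actually reads;
> the expected names may differ between GROMACS versions, so please check the
> documentation for your build.)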
>
>
> I searched the list and found this thread:
>
> https://www.mail-archive.com/gromacs.org_gmx-users@maillist.sys.kth.se/msg03280.html
>
> However, I am not able to run the first command from that thread. Here is
> what I get when I execute it:
>
> git clone https://gerrit.gromacs.org/gromacs*
> Initialized empty Git repository in /home/Bharat/gromacs*/.git/
> fatal: remote error: Git repository not found
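>
> (The trailing '*' looks like formatting carried over from the archived page
> rather than being part of the URL; assuming the address given in that thread
> is otherwise correct, the clone would presumably be
>
>   git clone https://gerrit.gromacs.org/gromacs
>
> without the asterisk.)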
>
> Also, I would like to know whether ORCA with GROMACS runs the calculation
> in parallel or not.
>
> --
> *Best Regards*
> BM
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 3 May 2016 12:20:48 +0530
> From: Pabitra Mohan <uniquepabs at gmail.com>
> To: gmx-users at gromacs.org
> Subject: [gmx-users] problems in MPI run for REMD in GROMACS
> Message-ID:
>         <
> CABC5K_U_gokKMP404uiF-Lv8K6weOw17XpgSOoazbTYd60qx6w at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> Dear Gromacs Users,
> Can anybody suggest a solution to the problem I am seeing during an REMD
> run in GROMACS?
>
> While executing the following command, I get these error messages:
>
> pabitra at pabitra-Dell-System-XPS-L502X:~/Desktop/ctld_remd/stage2$ mpirun
> -np 4 mdrun_mpi -v -multidir sim0 sim1 sim2 sim3 sim4 -replex 500
> GROMACS:    mdrun_mpi, VERSION 5.0.6
>
> GROMACS is written by:
> Emile Apol         Rossen Apostolov   Herman J.C. Berendsen Par
> Bjelkmar
> Aldert van Buuren  Rudi van Drunen    Anton Feenstra     Sebastian Fritsch
> Gerrit Groenhof    Christoph Junghans Peter Kasson       Carsten Kutzner
> Per Larsson        Justin A. Lemkul   Magnus Lundborg    Pieter Meulenhoff
> Erik Marklund      Teemu Murtola      Szilard Pall       Sander Pronk
> Roland Schulz      Alexey Shvetsov    Michael Shirts     Alfons Sijbers
> Peter Tieleman     Christian Wennberg Maarten Wolf
> and the project leaders:
> Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
>
> Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> Copyright (c) 2001-2014, The GROMACS development team at
> Uppsala University, Stockholm University and
> the Royal Institute of Technology, Sweden.
> check out http://www.gromacs.org for more information.
>
> GROMACS is free software; you can redistribute it and/or modify it
> under the terms of the GNU Lesser General Public License
> as published by the Free Software Foundation; either version 2.1
> of the License, or (at your option) any later version.
>
> GROMACS:      mdrun_mpi, VERSION 5.0.6
> Executable:   /usr/bin/mdrun_mpi.openmpi
> Library dir:  /usr/share/gromacs/top
> Command line:
>   mdrun_mpi -v -multidir sim0 sim1 sim2 sim3 sim4 -replex 500
>
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 5.0.6
> Source code file:
> /build/gromacs-EDw7D5/gromacs-5.0.6/src/gromacs/gmxlib/main.cpp, line: 383
>
> Fatal error:
> The number of ranks (4) is not a multiple of the number of simulations (5)
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Error on rank 2, will try to stop all ranks
> Halting parallel program mdrun_mpi on CPU 2 out of 4
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 5.0.6
> Source code file:
> /build/gromacs-EDw7D5/gromacs-5.0.6/src/gromacs/gmxlib/main.cpp, line: 383
>
> Fatal error:
> The number of ranks (4) is not a multiple of the number of simulations (5)
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Error on rank 0, will try to stop all ranks
> Halting parallel program mdrun_mpi on CPU 0 out of 4
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 5.0.6
> Source code file:
> /build/gromacs-EDw7D5/gromacs-5.0.6/src/gromacs/gmxlib/main.cpp, line: 383
>
> Fatal error:
> The number of ranks (4) is not a multiple of the number of simulations (5)
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Error on rank 1, will try to stop all ranks
> Halting parallel program mdrun_mpi on CPU 1 out of 4
>
> -------------------------------------------------------
> Program mdrun_mpi, VERSION 5.0.6
> Source code file:
> /build/gromacs-EDw7D5/gromacs-5.0.6/src/gromacs/gmxlib/main.cpp, line: 383
>
> Fatal error:
> The number of ranks (4) is not a multiple of the number of simulations (5)
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
>
> Error on rank 3, will try to stop all ranks
> Halting parallel program mdrun_mpi on CPU 3 out of 4
>
> gcq#11: "She's Not Bad, She's Just Genetically Mean" (Captain Beefheart)
>
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 2 in communicator MPI_COMM_WORLD
> with errorcode -1.
>
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
>
> gcq#13: "If Life Seems Jolly Rotten, There's Something You've Forgotten !"
> (Monty Python)
>
>
> gcq#268: "It's Not Dark Yet, But It's Getting There" (Bob Dylan)
>
>
> gcq#49: "I'll Master Your Language, and In the Meantime I'll Create My Own"
> (Tricky)
>
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 2 with PID 3232 on
> node pabitra-Dell-System-XPS-L502X exiting improperly. There are two
> reasons this could occur:
>
> 1. this process did not call "init" before exiting, but others in
> the job did. This can cause a job to hang indefinitely while it waits
> for all processes to call "init". By rule, if one process calls "init",
> then ALL processes must call "init" prior to termination.
>
> 2. this process called "init", but exited without calling "finalize".
> By rule, all processes that call "init" MUST call "finalize" prior to
> exiting or it will be considered an "abnormal termination"
>
> This may have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
> --------------------------------------------------------------------------
> [pabitra-Dell-System-XPS-L502X:03229] 3 more processes have sent help
> message help-mpi-api.txt / mpi-abort
> [pabitra-Dell-System-XPS-L502X:03229] Set MCA parameter
> "orte_base_help_aggregate" to 0 to see all help / error messages
>
>
> Thank you very much for helping
>
> --
> Dr. Pabitra Mohan Behera
> National Post-Doctoral Fellow (DST-SERB)
> Computational Biology and Bioinformatics Lab
> Institute of Life Sciences, Bhubaneswar
> An Autonomous Organization of
> Department of Biotechnology, Government of India
> Nalco Square, Bhubaneswar
> Odisha, Pin-751023 (India)
> Mobile No: +91-9776503664, +91-9439770247
> E-mail: pabitra at ils.res.in, uniquepabs at gmail.com
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 3 May 2016 06:59:10 +0000 (UTC)
> From: John Smith <nimzster at yahoo.com>
> To: "gromacs.org_gmx-users at maillist.sys.kth.se"
>         <gromacs.org_gmx-users at maillist.sys.kth.se>
> Subject: [gmx-users] Building and Equilibrating a United-Atom
>         DOPC-DPPC-Cholesterol Bilayer
> Message-ID:
>         <564426000.6499935.1462258750136.JavaMail.yahoo at mail.yahoo.com>
> Content-Type: text/plain; charset=UTF-8
>
> Hello all,
> I have been attempting for a while to build a proper united-atom
> DOPC-DPPC-Cholesterol bilayer for use in simulations. At first I used
> CHARMM-GUI and then backward after equilibration to convert the
> coarse-grained, equilibrated bilayer into a UA representation, but the
> resulting membrane system was not usable for my simulations: a couple of
> DOPC molecules popped out of the bilayer, cholesterol molecules sat in the
> middle of the bilayer, and so on. This does not seem like physically
> reasonable behavior.
>
> To get around this problem, I decided to keep the centers of mass of all
> molecules the same in the XY plane, but reorient the molecules to point
> along the z-axis and recenter them in z, so that all molecules in the top
> and bottom leaflets, respectively, would have the same z center of mass. I
> then ran several rounds of energy minimization on this recentered system.
> However, when I use the result and attempt to run equilibration, I get
> LINCS warnings (relative constraint deviation after LINCS) and then fatal
> errors of the form "1 particles communicated to PME node 15 are more than
> 2/3 times the cut-off out of the domain decomposition cell of their charge
> group in dimension y." From what I understand, LINCS warnings are usually a
> sign that the system is not well equilibrated. But how can I correct this
> without even being able to equilibrate my system? Running the energy
> minimizations for even longer does not seem to fix the issue.
>
> Please help, as I am really not sure how to proceed.
>
> Thanks
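>
> (One general approach when energy minimization alone is not enough is to
> follow it with a short, gentle equilibration stage, using a reduced time
> step and position restraints on the lipids, before moving to the intended
> settings. A minimal sketch, where em.mdp, eq.mdp, recentered.gro and
> topol.top are placeholder file names rather than files from this thread:
>
>   gmx grompp -f em.mdp -c recentered.gro -p topol.top -o em.tpr
>   gmx mdrun -deffnm em
>   gmx grompp -f eq.mdp -c em.gro -r em.gro -p topol.top -o eq.tpr
>   gmx mdrun -deffnm eq
>
> Here eq.mdp would use a small dt and define position restraints, which must
> also be present in the topology.)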
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 3 May 2016 16:23:34 +0800
> From: Terry <terrencesun at gmail.com>
> To: gmx-users <gmx-users at gromacs.org>
> Subject: Re: [gmx-users] problems in MPI run for REMD in GROMACS
> Message-ID:
>         <
> CAM9kr8ij+rWGQb20j2CYP13KQF8U1uyT4wawbFVgZ6vigJTAuQ at mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Tue, May 3, 2016 at 2:50 PM, Pabitra Mohan <uniquepabs at gmail.com>
> wrote:
>
> > Dear Gromacs Users,
> > Can any body suggest the solutions to my problem appearing during REMD
> run
> > in GROMACS
> >
> > While executing the following command these error messages are coming
> >
> > pabitra at pabitra-Dell-System-XPS-L502X:~/Desktop/ctld_remd/stage2$ mpirun
> > -np 4 mdrun_mpi -v -multidir sim0 sim1 sim2 sim3 sim4 -replex 500
> > GROMACS:    mdrun_mpi, VERSION 5.0.6
> >
> > GROMACS is written by:
> > Emile Apol         Rossen Apostolov   Herman J.C. Berendsen Par
> > Bjelkmar
> > Aldert van Buuren  Rudi van Drunen    Anton Feenstra     Sebastian
> Fritsch
> > Gerrit Groenhof    Christoph Junghans Peter Kasson       Carsten Kutzner
> > Per Larsson        Justin A. Lemkul   Magnus Lundborg    Pieter
> Meulenhoff
> > Erik Marklund      Teemu Murtola      Szilard Pall       Sander Pronk
> > Roland Schulz      Alexey Shvetsov    Michael Shirts     Alfons Sijbers
> > Peter Tieleman     Christian Wennberg Maarten Wolf
> > and the project leaders:
> > Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
> >
> > Copyright (c) 1991-2000, University of Groningen, The Netherlands.
> > Copyright (c) 2001-2014, The GROMACS development team at
> > Uppsala University, Stockholm University and
> > the Royal Institute of Technology, Sweden.
> > check out http://www.gromacs.org for more information.
> >
> > GROMACS is free software; you can redistribute it and/or modify it
> > under the terms of the GNU Lesser General Public License
> > as published by the Free Software Foundation; either version 2.1
> > of the License, or (at your option) any later version.
> >
> > GROMACS:      mdrun_mpi, VERSION 5.0.6
> > Executable:   /usr/bin/mdrun_mpi.openmpi
> > Library dir:  /usr/share/gromacs/top
> > Command line:
> >   mdrun_mpi -v -multidir sim0 sim1 sim2 sim3 sim4 -replex 500
> >
> >
> > -------------------------------------------------------
> > Program mdrun_mpi, VERSION 5.0.6
> > Source code file:
> > /build/gromacs-EDw7D5/gromacs-5.0.6/src/gromacs/gmxlib/main.cpp, line:
> 383
> >
> >
> Hi,
>
> The real problem is here: your command-line arguments will not work as
> given. This is by design, not a bug. Do what the error message says and use
> a number of ranks that is a multiple of the number of simulations.
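>
> For example, with the five -multidir directories from your command, one rank
> per replica would be
>
>   mpirun -np 5 mdrun_mpi -v -multidir sim0 sim1 sim2 sim3 sim4 -replex 500
>
> and any multiple of 5 ranks should likewise be accepted.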
>
> Cheers
> Terry
>
>
> ------------------------------
>
> --
> Gromacs Users mailing list
>
> * Please search the archive at
> http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before
> posting!
>
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
>
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or
> send a mail to gmx-users-request at gromacs.org.
>
> End of gromacs.org_gmx-users Digest, Vol 145, Issue 4
> *****************************************************
>



-- 
Dr. Pabitra Mohan Behera
National Post-Doctoral Fellow (DST-SERB)
Computational Biology and Bioinformatics Lab
Institute of Life Sciences, Bhubaneswar
An Autonomous Organization of
Department of Biotechnology, Government of India
Nalco Square, Bhubaneswar
Odisha, Pin-751023 (India)
Mobile No: +91-9776503664, +91-9439770247
E-mail: pabitra at ils.res.in, uniquepabs at gmail.com

