[gmx-users] Re: orca and Segmentation fault (xi zhao)
Gerrit Groenhof
ggroenh at gwdg.de
Mon Nov 14 13:19:40 CET 2011
The error message is clear: your spin multiplicity is 0, which is
impossible; the multiplicity is 2S+1, so it is at least 1 (a singlet).
Please make sure you understand the basics of electronic structure
theory. To test this, you can run the QM system on its own in a
standalone QM package.
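In GROMACS 4.5, the multiplicity that mdrun writes into the ORCA input
comes from the QMmult field of the .mdp file (together with QMcharge);
if it is left at 0, ORCA aborts exactly as above. A minimal sketch of
the QM/MM block, assuming a closed-shell QM region and a hypothetical
index group named QMatoms:

    QMMM       = yes
    QMMM-grps  = QMatoms   ; index group holding the 22 QM atoms
    QMmethod   = B3LYP
    QMbasis    = 3-21G
    QMcharge   = 0         ; net charge of the QM region
    QMmult     = 1         ; 2S+1; must be at least 1 (singlet)
    QMMMscheme = normal

After regenerating the .tpr with grompp, the "multiplicity is zero"
error should disappear if an unset QMmult was the cause.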
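To test the QM region standalone, one can feed the same geometry to
ORCA directly, with the charge and multiplicity given explicitly on the
coordinate line. A minimal sketch of such an input, with placeholder
coordinates standing in for the real QM atoms:

    # standalone test of the QM region at the level used in the .mdp
    ! B3LYP 3-21G
    * xyz 0 1
      C   0.000   0.000   0.000
      O   0.000   0.000   1.220
    *

Here "0 1" is the charge and multiplicity; if ORCA runs this cleanly,
the electronic-structure setup is sound and the problem lies in what
GROMACS passes to it.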
gerrit
>
> ./configure --without-qmmm-orca --without-qmmm-gaussian --enable-mpi
> make
> make install
> I installed GROMACS in parallel (MPI) mode, not with threading. When I run "mpirun -np 1 mdrun_dd -v -s pyp.tpr &" or "mdrun_dd -nt 1 -v -s pyp.tpr",
> it still fails with:
> Back Off! I just backed up md.log to ./#md.log.20#
> Getting Loaded...
> Reading file pyp.tpr, VERSION 4.5.1 (single precision)
> Loaded with Money
> QM/MM calculation requested.
> there we go!
> Layer 0
> nr of QM atoms 22
> QMlevel: B3LYP/3-21G
> orca initialised...
> Back Off! I just backed up traj.trr to ./#traj.trr.1#
> Back Off! I just backed up traj.xtc to ./#traj.xtc.1#
> Back Off! I just backed up ener.edr to ./#ener.edr.2#
> starting mdrun 'PHOTOACTIVE YELLOW PROTEIN in water'
> 500 steps,     0.5 ps.
> Calling 'orca pyp.inp>> pyp.out'
> Error : multiplicity (Mult:=2*S+1) is zero
> -------------------------------------------------------
> Program mdrun_dd, VERSION 4.5.1
> Source code file: qm_orca.c, line: 393
> Fatal error:
> Call to 'orca pyp.inp>> pyp.out' failed
> For more information and tips for troubleshooting, please check the GROMACS
> website at http://www.gromacs.org/Documentation/Errors
> -------------------------------------------------------
> "The Carpenter Goes Bang Bang" (The Breeders)
> Halting program mdrun_dd
> gcq#129: "The Carpenter Goes Bang Bang" (The Breeders)
> --------------------------------------------------------------------------
> MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
> with errorcode -1.
> NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
> You may or may not see output from other processes, depending on
> exactly when Open MPI kills them.
> --------------------------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun has exited due to process rank 0 with PID 18080 on
> node localhost.localdomain exiting without calling "finalize". This may
> have caused other processes in the application to be
> terminated by signals sent by mpirun (as reported here).
>
>
> --- On Monday, 14 Nov 2011, Christoph Riplinger <cri at thch.uni-bonn.de> wrote:
>
>
> From: Christoph Riplinger <cri at thch.uni-bonn.de>
> Subject: Re: [gmx-users] orca and Segmentation fault
> To: "Discussion list for GROMACS users" <gmx-users at gromacs.org>
> Date: Monday, 14 Nov 2011, 6:51 PM