[gmx-users] help needed
Mark Abraham
mark.j.abraham at gmail.com
Mon Jun 2 00:11:45 CEST 2014
Hi,
What's the smallest relevant neutral system you can think of? Test your
implementation of your model on that, on a local machine, not a
supercomputer ;-) If there's any diagnosis of why the crash happens, it
will be in the log file. But it sounds like you have far too many variables in
play to know where the major problem lies.
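
For example, something along these lines (untested, with the file names as
placeholders for whatever small test case you build):

  grompp -f eq.mdp -c small.gro -p small.top -o small.tpr
  mdrun -nt 1 -deffnm small -v

Running on a single thread keeps MPI and domain decomposition out of the
picture, and any diagnostic ends up in one small.log rather than scattered
across ranks.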
Mark
On Sun, Jun 1, 2014 at 11:55 PM, Chetan Mahajan <chetanvm10 at gmail.com>
wrote:
> More information:
>
> As the log file shows, the Coulomb interaction energy shoots to NaN during
> equilibration, which is where the error occurs. The Coulomb interaction
> energy during minimization stays finite.
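>
> For reference, the point of divergence can be pinned down by pulling the
> short-range Coulomb term out of the energy file with g_energy, e.g. (the
> .edr name is a placeholder for the one from my equilibration run):
>
>   g_energy -f eq.edr -o coulSR.xvg   # select the "Coulomb (SR)" term
>
> and checking the step at which the values in coulSR.xvg diverge.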
>
> Thanks
> Chetan
>
>
> On Sun, Jun 1, 2014 at 4:44 PM, Chetan Mahajan <chetanvm10 at gmail.com>
> wrote:
>
> > Dear All,
> >
> > I am testing my GROMACS code using user-specified potential functions (LJ
> > as well as Buckingham as the non-bonded potentials); a sketch of the sort
> > of mdp settings involved is below. *The minimization run with these
> > potentials runs fine*. However, the equilibration run gives the following
> > error, about which I am clueless. Could anyone help? Is it because the
> > total charge is -0.018 and not zero?
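> >
> > A minimal sketch of the table-related mdp settings (an assumption about the
> > setup rather than the actual input; the cut-off values are placeholders):
> >
> >   vdwtype          = user   ; LJ/Buckingham taken from the supplied table.xvg
> >   coulombtype      = user   ; tabulated Coulomb (PME-User is an alternative)
> >   rvdw             = 1.0
> >   rcoulomb         = 1.0
> >   table-extension  = 1.0    ; nm beyond the cut-off the table must cover
> >
> > If the energies blow up during equilibration, a hard crash in the MPI layer
> > like the one below often follows, so the MPI messages themselves say little
> > about the underlying cause.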
> >
> > ...........
> >
> > Non-default thread affinity set probably by the OpenMP library,
> > disabling internal thread affinity
> > starting mdrun 'tio2_formate'
> > 1000000 steps, 1000.0 ps.
> > [c418-701.stampede.tacc.utexas.edu:mpi_rank_5][error_sighandler] Caught
> > error: Segmentation fault (signal 11)
> > [c418-701.stampede.tacc.utexas.edu:mpispawn_0][readline] Unexpected
> > End-Of-File on file descriptor 7. MPI process died?
> > [c418-701.stampede.tacc.utexas.edu:mpispawn_0][mtpmi_processops] Error
> > while reading PMI socket. MPI process died?
> > [c418-701.stampede.tacc.utexas.edu:mpispawn_0][child_handler] MPI process
> > (rank: 5, pid: 122921) terminated with signal 11 -> abort job
> > [c418-701.stampede.tacc.utexas.edu:mpirun_rsh][process_mpispawn_connection]
> > mpispawn_0 from node c418-701 aborted: Error while reading a PMI socket (4)
> > [c428-301.stampede.tacc.utexas.edu:mpispawn_1][read_size] Unexpected
> > End-Of-File on file descriptor 21. MPI process died?
> > [c428-301.stampede.tacc.utexas.edu:mpispawn_1][read_size] Unexpected
> > End-Of-File on file descriptor 21. MPI process died?
> > [c428-301.stampede.tacc.utexas.edu:mpispawn_1][handle_mt_peer] Error
> > while reading PMI socket. MPI process died?
> > [c428-902.stampede.tacc.utexas.edu:mpispawn_2][read_size] Unexpected
> > End-Of-File on file descriptor 21. MPI process died?
> > [c428-902.stampede.tacc.utexas.edu:mpispawn_2][read_size] Unexpected
> > End-Of-File on file descriptor 21. MPI process died?
> > [c428-902.stampede.tacc.utexas.edu:mpispawn_2][handle_mt_peer] Error
> > while reading PMI socket. MPI process died?
> > TACC: MPI job exited with code: 1
> >
> > TACC: Shutdown complete. Exiting.
> >
> >
> > NOTE 1 [file lnanopf3.top, line 4422]:
> > System has non-zero total charge: -0.018000
> > Total charge should normally be an integer. See
> > http://www.gromacs.org/Documentation/Floating_Point_Arithmetic
> > for discussion on how close it should be to an integer.
> >
> >
> >
> >
> > -------------------------------------------------------
> > Program grompp, VERSION 4.6.5
> > Source code file:
> > /work/01906/xzhu216/gromacs/source/gromacs-4.6.5/src/gmxlib/futil.c, line: 593
> >
> > File input/output error:
> > /work/01019/cmahajan/postdoc/mapotruns/r1/eq/lnanop.gro
> >
> > ...........
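> >
> > Regarding the -0.018 total charge in NOTE 1 above: a fractional residual
> > like that usually means the per-atom charges in the topology do not sum to
> > an integer, rather than that counterions are missing. A rough way to see
> > where it comes from is to sum the charge column of an [ atoms ] section,
> > e.g. (a sketch, assuming the usual "nr type resnr residue atom cgnr charge
> > mass" layout; any charges living in #included .itp files would need the
> > same check):
> >
> >   awk '/\[ *atoms *\]/ {in_sec = 1; next}
> >        /\[/            {in_sec = 0}
> >        in_sec && $1 !~ /^;/ && NF >= 7 {total += $7}
> >        END {printf "total charge: %.6f\n", total}' lnanopf3.top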
> >
> > Thanks a lot!
> >
> > regards
> > Chetan
> >