[gmx-users] Magic number error

zazeri zazeri at yahoo.com.br
Sun Jul 15 22:52:46 CEST 2007


I forgot to mention a few things...

1) the trajectory I'm analyzing comes from a
simulation that is still running;
2) the machine running the simulation belongs
to a cluster whose "/home" is NFS-mounted from the
master node;
3) md.log and nohup.out do not report any errors;
4) the edr (energy) file is fine; I've calculated the
system volume and the file provides values beyond
the point at which the xtc file produces the error.
Also, the system volume looks good, the system did not
explode! :D
5) I'm saving the trajectory every 8 ps;
6) all the analyses, the creation of the tpr file, and
the simulation were done on the same machine. (The
simulation is still running.)

Other considerations:

(gmxcheck -f system.md.xtc -c system.md.tpr -e
ener.edr -n index.ndx)

Checking file system.md.xtc
Reading frame       0 time    0.000
# Atoms  23924
Precision 0.001 (nm)
Reading frame     110 time  880.000  
Program gmxcheck, VERSION 3.3.1
Source code file: xtcio.c, line: 83

Fatal error:
Magic Number Error in XTC file (read 0, should be

g_potential, g_order and "mdrun -rerun" also give the
same error.
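Since the file lives on an NFS-mounted /home and is still being written, a reader can see a zero-filled or truncated tail, which would show up exactly as a magic number of 0. As a rough sketch (this is not a GROMACS tool; 1995 is the XTC magic number, and the frame layout here is my reading of the XDR-encoded XTC format), one can scan the raw bytes for plausible frame starts to locate the last intact frame:

```python
import struct

XTC_MAGIC = 1995  # first field of every XTC frame header (XDR, big-endian)
MAGIC_BYTES = struct.pack(">i", XTC_MAGIC)  # b"\x00\x00\x07\xcb"

def find_magic_offsets(data: bytes, natoms: int) -> list:
    """Return byte offsets that look like XTC frame starts.

    A candidate offset is accepted only if the magic number is
    immediately followed by the expected atom count, which filters
    out chance occurrences of the 4-byte pattern inside compressed
    coordinate data.
    """
    expected = struct.pack(">i", natoms)
    offsets = []
    pos = data.find(MAGIC_BYTES)
    while pos != -1:
        if data[pos + 4:pos + 8] == expected:
            offsets.append(pos)
        pos = data.find(MAGIC_BYTES, pos + 1)
    return offsets
```

If the last frame start found sits well before the end of the file, trjconv with -e set to the last good time (872 ps here, given the 8 ps output spacing and the failure at 880 ps) may let you salvage the readable part; treat that as a guess until gmxcheck passes on the result.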
(mpirun -np 2 mdrun_mpi.lam -s system.md.tpr -rerun
system.md.xtc -o rerun.trr -x rerun.xtc -e
ener_rerun.edr -g rerun.log -v)
(The tpr file was generated for 2 nodes, one job for
each processor. There are 2 processors in each
I've also verified the atom count and it's OK: 23924.
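As a cross-check of the "# Atoms 23924 / Precision 0.001 (nm)" lines that gmxcheck prints, here is a minimal sketch that decodes the fixed-size start of the first frame. The field layout is my reading of the XDR-encoded XTC format, and note that the stored precision is the inverse of what gmxcheck reports (1000.0 stored for 0.001 nm):

```python
import struct

def read_first_xtc_header(data: bytes) -> dict:
    """Decode the fixed-size start of the first XTC frame.

    Assumed layout (all XDR big-endian):
      int magic (1995), int natoms, int step, float time,
      9 floats for the box, int natoms repeated, float precision.
    """
    magic, natoms, step, time = struct.unpack(">iiif", data[:16])
    box = struct.unpack(">9f", data[16:52])          # 3x3 box matrix
    natoms2, precision = struct.unpack(">if", data[52:60])
    return {"magic": magic, "natoms": natoms, "step": step,
            "time": time, "box": box, "precision": precision}
```

Running this on the first 60 bytes of system.md.xtc should reproduce the atom count (23924) and a precision of 1000.0; a mismatch already at frame 0 would point to corruption from the very start rather than a truncated tail.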

Thank you for your attention!

--- Chris Neale <chris.neale at utoronto.ca> wrote:
> Sounds like unrecoverable data loss. Can you
> reproduce the problem with a new mdrun? If so,
> please post more data including the
> entire output from the program (e.g. g_potential)
> that gives problems (but let's stick to gmxcheck for
> now). Also the log file from the
> mdrun would be good. Also the exact commands (copy
> and paste please instead of re-writing) that you used
> for the mdrun and for the 
> gmxcheck/g_potential.
> Since you're going to be re-running mdrun anyway,
> keep the nstlog to a large number so that log file
> doesn't become huge.
> Other things to question: are you running analysis
> on the same computer that you ran mdrun on? same
> compilation? same precision?

