[gmx-users] Magic number error

Yang Ye leafyoung at yahoo.com
Mon Jul 16 08:05:26 CEST 2007


It would be better to run the simulation for a longer time and see whether you can reproduce this error.
If the problem persists, suspect NFS first and test writing the trajectory to a local disk. If the error still occurs even then (which should be very rare), faulty memory is the likely culprit.
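For example, you could point mdrun's output files at a node-local
directory (the /scratch path below is only an illustration; use
whatever local disk your nodes have):

    mpirun -np 2 mdrun_mpi.lam -s system.md.tpr \
        -o /scratch/system.md.trr -x /scratch/system.md.xtc \
        -e /scratch/ener.edr -g /scratch/md.log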

NFS is also better mounted with the "hard" option.
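For example, in /etc/fstab on the compute nodes (the server name and
mount point here are illustrative):

    master:/home  /home  nfs  hard,intr  0  0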
 
Regards,
Yang Ye

----- Original Message ----
From: zazeri <zazeri at yahoo.com.br>
To: Discussion list for GROMACS users <gmx-users at gromacs.org>
Sent: Monday, July 16, 2007 4:52:46 AM
Subject: Re: Re: [gmx-users] Magic number error

Chris,

I forgot to tell you some things...

1) the trajectory I'm evaluating is part of a
simulation that is still running;
2) the machine where the simulation is running belongs
to a cluster whose "/home" is mounted over NFS from
the master machine;
3) md.log and nohup.out do not report any error;
4) the edr (energy) file is OK; I noticed this because
I calculated the system volume and the file supplied
values beyond the instant at which the xtc file
produced the error. Also, the system volume is fine,
the system did not explode! :D
5) I'm saving the trajectory every 8 ps;
6) all the analyses, the creation of the tpr file and
the simulation were done on the same machine. (The
simulation is still running.)

Other considerations:

gmxcheck:
(gmxcheck -f system.md.xtc -c system.md.tpr -e
ener.edr -n index.ndx)

Checking file system.md.xtc
Reading frame       0 time    0.000
# Atoms  23924
Precision 0.001 (nm)
Reading frame     110 time  880.000  
-------------------------------------------------------
Program gmxcheck, VERSION 3.3.1
Source code file: xtcio.c, line: 83

Fatal error:
Magic Number Error in XTC file (read 0, should be
1995)
-------------------------------------------------------
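(For reference: every XTC frame starts with the 4-byte big-endian
(XDR) integer 1995, so "read 0" means the bytes at that frame's
offset are zeros, i.e. the frame was truncated or never completely
written out. The check that fails is essentially the one below; this
is only an illustrative sketch, not the actual xtcio.c code, and it
only inspects the first frame:

    #include <stdio.h>

    #define XTC_MAGIC 1995  /* expected at the start of every XTC frame */

    /* Read one 4-byte big-endian (XDR) integer from fp. */
    static int read_be_int(FILE *fp, int *value)
    {
        unsigned char b[4];

        if (fread(b, 1, 4, fp) != 4)
            return 0;  /* short read: file is truncated */
        *value = (int) (((unsigned) b[0] << 24) | ((unsigned) b[1] << 16) |
                        ((unsigned) b[2] << 8)  |  (unsigned) b[3]);
        return 1;
    }

    int main(int argc, char *argv[])
    {
        FILE *fp;
        int   magic;

        if (argc < 2)
        {
            fprintf(stderr, "usage: %s file.xtc\n", argv[0]);
            return 1;
        }
        fp = fopen(argv[1], "rb");
        if (fp != NULL && read_be_int(fp, &magic))
            printf("first magic = %d (should be %d)\n", magic, XTC_MAGIC);
        if (fp != NULL)
            fclose(fp);
        return 0;
    }

Walking forward to frame 110 would mean decoding every frame's
variable-length compressed block, which is what gmxcheck already
does.)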

g_potential, g_order and "mdrun -rerun" also give the
same error.
(mpirun -np 2 mdrun_mpi.lam -s system.md.tpr -rerun
system.md.xtc -o rerun.trr -x rerun.xtc -e
ener_rerun.edr -g rerun.log -v)
(The tpr file was generated for 2 nodes, one job for
each processor. There are 2 processors in each
machine (node)).

I've also verified the number of atoms, and it's OK: 23924.

Thank you for your attention!




--- Chris Neale <chris.neale at utoronto.ca> wrote:
> Sounds like unrecoverable data loss. Can you
> reproduce the problem with a new mdrun? If so,
> please post more data including the
> entire output from the program (e.g. g_potential)
> that gives problems (but let's stick to gmxcheck for
> now). Also the log file from the
> mdrun would be good. Also the exact commands (copy
> and paste, please, instead of re-writing) that you used
> for the mdrun and for the
> gmxcheck/g_potential.
> 
> Since you're going to be re-running mdrun anyway,
> set nstlog to a large number so that the log file
> doesn't become huge.
> 
> Other things to question: are you running analysis
> on the same computer that you ran mdrun on? same
> compilation? same precision?



       