[gmx-users] problem with vsite and particle decomposition

Serena Donnini sdonnin at gwdg.de
Wed Jan 20 11:19:14 CET 2010


Great! Thanks.

Serena



> Hi,
> 
> I fixed the bug for 4.0.8 (if it is ever released).
> 
> You can work around it by putting your protein in a big box and using long
> cut-offs with domain decomposition (or simply running in serial).
> 
> The next version of Gromacs will also support domain decomposition
> for vacuum simulations, even without cut-offs.
> 
> Berk
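
For reference, a minimal sketch of that workaround with the 4.0.x tools; the
file names, the 2.0 nm box margin and the cut-off values below are only
illustrative, not values suggested in the thread:

  # put the protein in a large cubic box, at least 2.0 nm from the box edge
  editconf -f conf.gro -o big.gro -bt cubic -d 2.0

  # in the .mdp, use correspondingly long cut-offs, e.g.
  #   rlist    = 2.0
  #   rcoulomb = 2.0
  #   rvdw     = 2.0

  grompp -f md.mdp -c big.gro -p topol.top -o run.tpr

  # run with the default domain decomposition (i.e. without -pd) ...
  mpirun -np 4 mdrun -s run.tpr
  # ... or simply in serial
  mdrun -s run.tpr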

> Date: Wed, 20 Jan 2010 09:34:45 +0100
> From: sdonnin at gwdg.de
> To: gmx-users at gromacs.org
> Subject: [gmx-users] problem with vsite and particle decomposition
> 
> Hi,
> 
> yes, I've just filed a bugzilla and attached the tpr.
> Thanks,
> 
> Serena
> 
> > Hi,
> >
> > This is clearly a bug.
> > Could you file a bugzilla and attach the tpr file?
> > 
> > Thanks,
> > 
> > Berk
> 
> > Date: Wed, 20 Jan 2010 08:52:27 +0100
> > From: sdonnin at gwdg.de
> > To: gmx-users at gromacs.org
> > Subject: [gmx-users] problem with vsite and particle decomposition
> > 
> > Hello,
> > 
> > I ran into a problem when running with virtual sites (-vsite hydrogens) and
> > particle decomposition (mdrun -pd) on more than one processor with
> > gromacs 4.0.7 (commands sketched below this message). The same files run
> > without problems when the virtual site description is left out. The error
> > message looks like this:
> > 
> > [node036:28870] *** An error occurred in MPI_Wait
> > [node036:28870] *** on communicator MPI_COMM_WORLD
> > [node036:28870] *** MPI_ERR_TRUNCATE: message truncated
> > [node036:28870] *** MPI_ERRORS_ARE_FATAL (goodbye)
> > 
> > It seems that the error message comes from move_cgcm() in
> > sim_util.c, line 157, but the cause could just as well be
> > MPI communication that happens before move_cgcm().
> > 
> > Has anybody experienced similar problems, or can anyone help with this issue?
> > Thanks,
> > 
> > Serena
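
For completeness, a sketch of the command sequence described above that
triggers the error; the file names, .mdp settings and processor count are
only placeholders:

  # build the topology with virtual hydrogen sites
  pdb2gmx -f protein.pdb -vsite hydrogens -o conf.gro -p topol.top

  grompp -f md.mdp -c conf.gro -p topol.top -o run.tpr

  # particle decomposition on more than one processor gives MPI_ERR_TRUNCATE
  mpirun -np 4 mdrun -pd -s run.tpr
  # the same run without -vsite hydrogens (or without -pd) works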



