[gmx-users] About system requirement to gromacs

rama david ramadavidgroup at gmail.com
Wed Aug 1 17:02:44 CEST 2012


Thank you for the reply and your suggestions.

As I mentioned earlier, I installed the gromacs-openmpi 4.5.5 package from the
Ubuntu software package manager.

I simply want to check whether it can perform REMD or not.

Following Mark's earlier suggestion, I made two .tpr files with two different
temperatures, 300 K and 310 K: topol0.tpr and topol1.tpr, respectively.
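
(For reference, a minimal way to prepare two such files would be to keep two
.mdp files identical except for the coupling temperature, e.g. ref_t = 300 in
one and ref_t = 310 in the other, and then run grompp twice. The .mdp and
input file names below are placeholders, not necessarily the ones I used:

 grompp -f remd_300.mdp -c conf.gro -p topol.top -o topol0.tpr
 grompp -f remd_310.mdp -c conf.gro -p topol.top -o topol1.tpr
)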


I used the command line:

 mpirun -c 4   mdrun_mpi  -v -multi 2 -replex 10

The output was as follows:


node 0 par_fn 'topol0.tpr'
node 0 par_fn 'topol1.tpr'
node 0 par_fn 'traj1.trr'
node 0 par_fn 'traj1.xtc'
node 0 par_fn 'state1.cpt'
node 0 par_fn 'state1.cpt'
node 0 par_fn 'confout1.gro'
node 0 par_fn 'ener1.edr'
node 0 par_fn 'traj0.trr'
node 0 par_fn 'md1.log'
log
node 0 par_fn 'traj0.xtc'
node 0 par_fn 'dhdl1.xvg'
node 0 par_fn 'field1.xvg'
node 0 par_fn 'state0.cpt'
node 0 par_fn 'rerun1.xtc'
node 0 par_fn 'tpi1.xvg'
node 0 par_fn 'tpidist1.xvg'
node 0 par_fn 'state0.cpt'
node 0 par_fn 'sam1.edo'
node 0 par_fn 'confout0.gro'
node 0 par_fn 'bam1.gct'
node 0 par_fn 'gct1.xvg'
node 0 par_fn 'ener0.edr'
node 0 par_fn 'deviatie1.xvg'
node 0 par_fn 'runaver1.xvg'
node 0 par_fn 'md0.log'
node 0 par_fn 'pullx1.xvg'
node 0 par_fn 'pullf1.xvg'
node 0 par_fn 'nm1.mtx'
log
node 0 par_fn 'dipole1.ndx'
node 0 par_fn 'dhdl0.xvg'

Back Off! I just backed up md1.log to ./#md1.log.4#
Getting Loaded...
Reading file topol1.tpr, VERSION 4.5.5 (single precision)
node 0 par_fn 'field0.xvg'
node 0 par_fn 'rerun0.xtc'
node 0 par_fn 'tpi0.xvg'
node 0 par_fn 'tpidist0.xvg'
node 0 par_fn 'sam0.edo'
node 0 par_fn 'bam0.gct'
node 0 par_fn 'gct0.xvg'
node 0 par_fn 'deviatie0.xvg'
node 0 par_fn 'runaver0.xvg'
node 0 par_fn 'pullx0.xvg'
node 0 par_fn 'pullf0.xvg'
node 0 par_fn 'nm0.mtx'
node 0 par_fn 'dipole0.ndx'

Back Off! I just backed up md0.log to ./#md0.log.4#
Getting Loaded...
Reading file topol0.tpr, VERSION 4.5.5 (single precision)
Loaded with Money

Loaded with Money

Making 1D domain decomposition 2 x 1 x 1

Back Off! I just backed up traj0.trr to ./#traj0.trr.4#

Back Off! I just backed up ener0.edr to ./#ener0.edr.4#
Making 1D domain decomposition 2 x 1 x 1

Back Off! I just backed up traj1.trr to ./#traj1.trr.4#

Back Off! I just backed up ener1.edr to ./#ener1.edr.4#

-------------------------------------------------------
Program mdrun_mpi, VERSION 4.5.5
Source code file: /build/buildd/gromacs-4.5.5/src/kernel/repl_ex.c, line: 177

Fatal error:
The properties of the 2 systems are all the same, there is nothing to exchange
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

"I Do It All the Time" (Magnapop)


-------------------------------------------------------
Program mdrun_mpi, VERSION 4.5.5
Source code file: /build/buildd/gromacs-4.5.5/src/kernel/repl_ex.c, line: 177

Fatal error:
The properties of the 2 systems are all the same, there is nothing to exchange
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

"I Do It All the Time" (Magnapop)

Error on node 2, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 2 out of 4
Error on node 0, will try to stop all the nodes
Halting parallel program mdrun_mpi on CPU 0 out of 4

gcq#197: "I Do It All the Time" (Magnapop)


gcq#197: "I Do It All the Time" (Magnapop)

--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD
with errorcode -1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 9919 on
node VPCEB34EN exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[VPCEB34EN:09918] 1 more process has sent help message
help-mpi-api.txt / mpi-abort
[VPCEB34EN:09918] Set MCA parameter "orte_base_help_aggregate" to 0 to
see all help / error messag



Thanks a lot for hearing my problem.

From the above output, is it able to perform REMD or not?
Is the GROMACS installation on my system right for Open MPI?
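
(For reference, a quick way I could check whether the two .tpr files really
differ, assuming the gmxdump tool from the same 4.5.5 installation, would be

 gmxdump -s topol0.tpr | grep ref-t
 gmxdump -s topol1.tpr | grep ref-t

which should print 300 and 310 respectively if the reference temperatures were
set as intended.)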


With Best Wishes and regards

Rama david


