[gmx-users] About gpu
陈照云
chenzhaoyun06 at gmail.com
Thu Apr 11 08:14:03 CEST 2013
I have been testing GROMACS 4.6.1 with a Tesla K20, but I run into some
problems when I run mdrun.
1. Configure options: -DGMX_MPI=ON -DGMX_DOUBLE=ON -DGMX_GPU=OFF. With this
build, running in parallel with mpirun fails (my build and launch steps are
sketched after the error output):
"Note: file tpx version 58, software tpx version 83
Fatal error in PMPI_Bcast: Invalid buffer pointer, error stack:
PMPI_Bcast(2011): MPI_Bcast(buf=(nil), count=56, MPI_BYTE, root=0,
MPI_COMM_WORLD) failed
PMPI_Bcast(1919): Null buffer pointer
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)
"
2. Configure options: -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_DOUBLE=OFF. With this
build, running on the GPU also fails.
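That is, roughly (same placeholders as above, single precision this time):

    cmake ../gromacs-4.6.1 -DGMX_MPI=ON -DGMX_GPU=ON -DGMX_DOUBLE=OFF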
Running one MPI process with the GPU:
"Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single precision)
Note: file tpx version 73, software tpx version 83
NOTE: GPU(s) found, but the current simulation can not use GPUs
To use a GPU, set the mdp option: cutoff-scheme = Verlet
(for quick performance testing you can use the -testverlet option)
Using 1 MPI process
1 GPU detected on host node11:
#0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible
Back Off! I just backed up ener.edr to ./#ener.edr.4#
starting mdrun 'Protein'
-1 steps, infinite ps.
Segmentation Fault (core dumped)
Running eight MPI processes with the GPU:
Reading file topol.tpr, VERSION 4.5.1-dev-20100917-b1d66 (single precision)
Note: file tpx version 73, software tpx version 83
NOTE: GPU(s) found, but the current simulation can not use GPUs
To use a GPU, set the mdp option: cutoff-scheme = Verlet
(for quick performance testing you can use the -testverlet option)
Non-default thread affinity set, disabling internal thread affinity
Using 8 MPI processes
1 GPU detected on host node11:
#0: NVIDIA Tesla K20c, compute cap.: 3.5, ECC: yes, stat: compatible
Back Off! I just backed up ener.edr to ./#ener.edr.6#
starting mdrun 'Protein'
-1 steps, infinite ps.
APPLICATION TERMINATED WITH THE EXIT STRING: Hangup (signal 1)"
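If I read the NOTE correctly, the GPU will only be used once I regenerate
topol.tpr with the Verlet cut-off scheme in the mdp file, something like:

    ; switch from the group scheme so mdrun can use the GPU
    cutoff-scheme = Verlet

or, for a quick test, by running mdrun with -testverlet as the log suggests.
Still, even while it falls back to the CPU, I would not expect a segfault or
a hangup.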
Thanks for your help!