[gmx-users] gromacs 4.0.7 compilation problem
Mark Abraham
Mark.Abraham at anu.edu.au
Thu Feb 11 04:01:18 CET 2010
On 11/02/10 01:55, sarbani chattopadhyay wrote:
> Hi,
> I want to install GROMACS 4.0.7 in double precision on a 64-bit Mac
> computer with 8 nodes.
> I got the LAM 7.1.4 source code and installed it using the
> following commands
> ./configure --without-fc (it was giving an error for the Fortran compiler)
> make
> make install
>
> Then I got the GROMACS 4.0.7 source code and installed it as
> ./configure --disable-float
> make
> make install
>
> After that I tried to get the "mpi" version of mdrun:
> make clean
> ./configure --enable-mpi --disable-nice --program-suffix="_mpi"
> make mdrun
> I get an error at this step, with the error message:
> undefined symbols:
> "_lam_mpi_double", referenced from:
Apparently the linker can find some MPI libraries during configure, but
can't find the right ones during compilation.
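
A quick way to check (the commands below are only a sketch; adjust the
paths for your machine) is to see which MPI the wrapper compiler and
configure actually picked up:

   which mpicc             # which wrapper compiler is first in your PATH
   mpicc -showme           # print the underlying compile/link line
                           # (both LAM and OpenMPI wrappers support this)
   laminfo                 # if LAM is the one in use, show how it was built
   grep -i mpi config.log  # in the GROMACS build dir: what configure detected

If those point at different MPI installations, a link failure like the one
above is the usual result.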
I suggest checking for and removing other MPI libraries, or using
OpenMPI rather than the deprecated LAM, and reading their documentation
for how to install it correctly on your OS. Either way, this is not a
problem specific to GROMACS.
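
As a rough sketch only (the OpenMPI prefix below is an assumption, and the
exact options depend on your setup), once a single MPI such as OpenMPI is
installed, the rebuild would look something like:

   export PATH=/usr/local/openmpi/bin:$PATH   # put the intended mpicc first
   cd gromacs-4.0.7
   make distclean                             # clear the earlier configure results
   ./configure --enable-mpi --disable-float --disable-nice \
               --program-suffix="_mpi" CC=mpicc
   make mdrun
   make install-mdrun

Keep --disable-float if you want the MPI mdrun in double precision like your
serial build, and check config.log afterwards to confirm that configure found
the MPI installation you intended.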
Mark
> _gmx_sumd_sim in libgmx_mpi.a(network.o)
> _gmx_sumd in libgmx_mpi.a(network.o)
> _gmx_sumd in libgmx_mpi.a(network.o)
> _wallcycle_sum in libmd_mpi.a(gmx_wallcycle.o)
> "_lam_mpi_byte", referenced from:
> _exchange_rvecs in repl_ex.o
> _replica_exchange in repl_ex.o
> _replica_exchange in repl_ex.o
> _replica_exchange in repl_ex.o
> _finish_run in libmd_mpi.a(sim_util.o)
> _dd_collect_vec in libmd_mpi.a(domdec.o)
> _dd_collect_vec in libmd_mpi.a(domdec.o)
> _set_dd_cell_sizes in libmd_mpi.a(domdec.o)
> _dd_distribute_vec in libmd_mpi.a(domdec.o)
> _dd_distribute_vec in libmd_mpi.a(domdec.o)
> _dd_partition_system in libmd_mpi.a(domdec.o)
> _partdec_init_local_state in libmd_mpi.a(partdec.o)
> _partdec_init_local_state in libmd_mpi.a(partdec.o)
> _gmx_rx in libmd_mpi.a(partdec.o)
> _gmx_tx in libmd_mpi.a(partdec.o)
> _gmx_bcast_sim in libgmx_mpi.a(network.o)
> _gmx_bcast in libgmx_mpi.a(network.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _gmx_pme_do in libmd_mpi.a(pme.o)
> _write_traj in libmd_mpi.a(stat.o)
> _write_traj in libmd_mpi.a(stat.o)
> _gmx_pme_receive_f in libmd_mpi.a(pme_pp.o)
> _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_send_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_send_force_vir_ener in libmd_mpi.a(pme_pp.o)
> _gmx_pme_send_force_vir_ener in libmd_mpi.a(pme_pp.o)
> _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
> _gmx_pme_recv_q_x in libmd_mpi.a(pme_pp.o)
> _dd_gatherv in libmd_mpi.a(domdec_network.o)
> _dd_scatterv in libmd_mpi.a(domdec_network.o)
> _dd_gather in libmd_mpi.a(domdec_network.o)
> _dd_scatter in libmd_mpi.a(domdec_network.o)
> _dd_bcastc in libmd_mpi.a(domdec_network.o)
> _dd_bcast in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv2_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv_rvec in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
> _dd_sendrecv_int in libmd_mpi.a(domdec_network.o)
> "_lam_mpi_prod", referenced from:
> _gprod in do_gct.o
> _do_coupling in do_gct.o
> _do_coupling in do_gct.o
> _do_coupling in do_gct.o
> "_lam_mpi_float", referenced from:
> _gprod in do_gct.o
> _do_coupling in do_gct.o
> _do_coupling in do_gct.o
> _do_coupling in do_gct.o
> _gmx_tx_rx_real in libmd_mpi.a(partdec.o)
> _gmx_sumf_sim in libgmx_mpi.a(network.o)
> _gmx_sumf in libgmx_mpi.a(network.o)
> _gmx_sumf in libgmx_mpi.a(network.o)
> _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid_dd in libmd_mpi.a(pme.o)
> _pmeredist in libmd_mpi.a(pme.o)
> _gmx_pme_init in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid in libmd_mpi.a(pme.o)
> _gmx_sum_qgrid in libmd_mpi.a(pme.o)
> _gmx_parallel_transpose_xy in libmd_mpi.a(gmx_parallel_3dfft.o)
> _gmx_parallel_transpose_xy in libmd_mpi.a(gmx_parallel_3dfft.o)
> "_lam_mpi_int", referenced from:
> _make_dd_communicators in libmd_mpi.a(domdec.o)
> _make_dd_communicators in libmd_mpi.a(domdec.o)
> _make_dd_communicators in libmd_mpi.a(domdec.o)
> _gmx_sumi_sim in libgmx_mpi.a(network.o)
> _gmx_sumi in libgmx_mpi.a(network.o)
> _gmx_sumi in libgmx_mpi.a(network.o)
> _pmeredist in libmd_mpi.a(pme.o)
> "_lam_mpi_sum", referenced from:
> _make_dd_communicators in libmd_mpi.a(domdec.o)
> _make_dd_communicators in libmd_mpi.a(domdec.o)
> _make_dd_communicators in libmd_mpi.a(domdec.o)
> _gmx_sumi_sim in libgmx_mpi.a(network.o)
> _gmx_sumf_sim in libgmx_mpi.a(network.o)
> _gmx_sumd_sim in libgmx_mpi.a(network.o)
> _gmx_sumi in libgmx_mpi.a(network.o)
> _gmx_sumi in libgmx_mpi.a(network.o)
> _gmx_sumf in libgmx_mpi.a(network.o)
> _gmx_sumf in libgmx_mpi.a(network.o)
> _gmx_sumd in libgmx_mpi.a(network.o)
> _gmx_sumd in libgmx_mpi.a(network.o)
> _gmx_sum_qgrid in libmd_mpi.a(pme.o)
> _wallcycle_sum in libmd_mpi.a(gmx_wallcycle.o)
> "_lam_mpi_comm_world", referenced from:
> _init_par in libgmx_mpi.a(main.o)
> _init_multisystem in libgmx_mpi.a(main.o)
> _gmx_finalize in libgmx_mpi.a(network.o)
> _gmx_abort in libgmx_mpi.a(network.o)
> _gmx_node_num in libgmx_mpi.a(network.o)
> _gmx_node_rank in libgmx_mpi.a(network.o)
> _gmx_setup in libgmx_mpi.a(network.o)
> ld: symbol(s) not found
> collect2: ld returned 1 exit status
>
>
> I have not been able to solve this problem.
> I know that in GROMACS 4 the "grompp" command does not take the -np
> flag, but then I will have to specify the number of nodes to mpirun.
> It could be that the "mpi" environment has not been set up correctly.
> Any suggestion regarding this would be very helpful.
> Thanks in advance,
> Sarbani
>