[gmx-users] question re; building Gromacs 4.6
Szilárd Páll
szilard.pall at cbr.su.se
Tue Jan 29 19:27:30 CET 2013
On Tue, Jan 29, 2013 at 4:39 PM, Susan Chacko <susanc at helix.nih.gov> wrote:
>
> Sorry for a newbie question -- I've built several versions of Gromacs in
> the
> past but am not very familiar with the new cmake build system.
>
> In older versions, the procedure was:
> - build the single-threaded version
>
FYI, there is really no point in compiling GROMACS with all parallelization
disabled: by default you get (thread-)MPI and OpenMP, which work just
fine within a node (and mdrun, as well as a few tools, supports
multi-threading).
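For reference, a minimal sketch of such a default configure (the install
prefix is just an example, and -DGMX_BUILD_OWN_FFTW=ON assumes you want
CMake to download and build FFTW for you rather than using a system copy):

    # out-of-source build with the default thread-MPI + OpenMP setup
    mkdir build && cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6 -DGMX_BUILD_OWN_FFTW=ON
    make -j 8
    make install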
> - then build the MPI version of mdrun only. No need to build the other
> executables with MPI.
>
> Is this still how it should be done, or should one just build everything
> once with MPI?
>
That is how I usually do it, so that I have the tools and a multi-threaded
mdrun without an MPI dependency; then I configure again with MPI and install
only mdrun to the same location (make install-mdrun), which by default
will have the _mpi suffix.
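Concretely, that two-pass recipe might look like the following (a sketch;
the install prefix is a placeholder, and GMX_MPI=ON assumes CMake can find
your MPI compilers or wrappers):

    # pass 1: tools + thread-MPI mdrun, no MPI dependency
    mkdir build && cd build
    cmake .. -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
    make -j 8 && make install
    cd ..

    # pass 2: MPI-enabled mdrun only, installed into the same prefix
    # as mdrun_mpi (the default _mpi suffix)
    mkdir build-mpi && cd build-mpi
    cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
    make -j 8 mdrun && make install-mdrun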
> Likewise, if I want a separate GPU version (only a few nodes on our
> cluster have GPUs), do I build the whole tree separately with -DGMX_GPU=ON,
> or just a GPU-enabled version of mdrun?
>
You only need to build a separate GPU-enabled version of mdrun if you cannot
have CUDA available on the non-GPU nodes (as a GPU-enabled mdrun links against
the CUDA libraries). Otherwise, a GPU-acceleration-enabled mdrun works just
fine on a node without GPUs.
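If you do decide to keep a separate GPU-enabled mdrun around, the same
mdrun-only trick works; a sketch, where the suffix options
(GMX_DEFAULT_SUFFIX/GMX_BINARY_SUFFIX/GMX_LIBS_SUFFIX) are my assumption
for keeping the binaries distinguishable - check cmake's option list for
your version:

    # sketch: GPU-enabled mdrun installed alongside the others as mdrun_gpu
    # (suffix flags are assumptions; verify them for your GROMACS version)
    mkdir build-gpu && cd build-gpu
    cmake .. -DGMX_GPU=ON \
             -DGMX_DEFAULT_SUFFIX=OFF -DGMX_BINARY_SUFFIX=_gpu -DGMX_LIBS_SUFFIX=_gpu \
             -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
    make -j 8 mdrun && make install-mdrun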
Cheers,
--
Szilard
> Thanks for any suggestions,
> Susan.