[gmx-users] question re: building Gromacs 4.6
Susan Chacko
susanc at helix.nih.gov
Tue Jan 29 17:07:06 CET 2013
Thanks for the info! Our cluster is somewhat heterogeneous, with
some 32-core GigE-connected nodes, some older 8-core InfiniBand-connected
nodes, and some GPU nodes. So we need pretty much every variation
of mdrun :-).
On Jan 29, 2013, at 11:00 AM, Mark Abraham wrote:
> On Tue, Jan 29, 2013 at 4:39 PM, Susan Chacko <susanc at helix.nih.gov> wrote:
>
>>
>> Sorry for a newbie question -- I've built several versions of Gromacs
>> in the past but am not very familiar with the new cmake build system.
>>
>> In older versions, the procedure was:
>> - build the single-threaded version
>> - then build the MPI version of mdrun only. No need to build the other
>> executables with MPI.
>>
>> Is this still how it should be done, or should one just build everything
>> once with MPI?
>>
>
> You can still follow this workflow if you need mdrun with real MPI to run
> on your hardware (i.e. multiple physical nodes with network connections
> between them).
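>
> For what it's worth, a minimal sketch of that two-pass build with the
> 4.6 CMake system (the install prefix and -j width here are just
> examples, not recommendations):
>
>   # Pass 1: the full set of serial tools (grompp, trjconv, ...)
>   mkdir build-serial && cd build-serial
>   cmake .. -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
>   make -j 8 && make install
>
>   # Pass 2: an MPI-enabled mdrun only; with the default suffix
>   # settings this installs as mdrun_mpi alongside the serial tools
>   mkdir ../build-mpi && cd ../build-mpi
>   cmake .. -DGMX_MPI=ON -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
>   make -j 8 mdrun && make install-mdrun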
>
>
>> Likewise, if I want a separate GPU version (only a few nodes on our
>> cluster have GPUs), do I build the whole tree separately with -DGMX_GPU=ON,
>> or just a GPU-enabled version of mdrun?
>>
>
> Only mdrun is GPU-aware, so that's all you'd need/want. I'll update the
> installation instructions accordingly. Thanks!
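>
> Something like this should do it, again only as a sketch (CUDA needs
> to be installed where CMake can find it, and the _gpu suffix is just
> one way to keep this mdrun from clobbering the other builds):
>
>   mkdir build-gpu && cd build-gpu
>   cmake .. -DGMX_GPU=ON -DGMX_BINARY_SUFFIX=_gpu -DGMX_LIBS_SUFFIX=_gpu \
>         -DCMAKE_INSTALL_PREFIX=/opt/gromacs-4.6
>   make -j 8 mdrun && make install-mdrun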
>
> Mark
Susan Chacko
Helix/Biowulf Staff