[gmx-users] Multithread run issues

Mahmood Naderan mahmood.nt at gmail.com
Mon Oct 10 13:02:26 CEST 2016


Sorry for the previous incomplete email.

Program mdrun, VERSION 5.1
Source code file:
/share/apps/chemistry/gromacs-5.1/src/programs/mdrun/resource-division.cpp,
line: 746

Fatal error:
OpenMP threads have been requested with cut-off scheme Group, but these are
only supported with cut-off scheme Verlet
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors



I read that document on the website, but it is still not clear to me how to
fix this. The error seems to say that OpenMP threads were requested (via
-ntomp or OMP_NUM_THREADS) while the run uses the group cut-off scheme, and
OpenMP is only supported with the Verlet scheme.
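If I read the error correctly, a minimal sketch of the fix, assuming my
input files are md.mdp, conf.gro, and topol.top (placeholder names), is to
switch the cut-off scheme in the .mdp file and regenerate the .tpr:

; in the .mdp file
cutoff-scheme = Verlet

gmx grompp -f md.mdp -c conf.gro -p topol.top -o topol.tpr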
Thanks

Regards,
Mahmood



On Mon, Oct 10, 2016 at 2:29 PM, Mahmood Naderan <mahmood.nt at gmail.com>
wrote:

> >mpirun -np 1 mdrun_mpi -v -ntomp 2
>
> Agree with that.
>
> >This is not a problem to solve by running applications differently. You
> >will have users running jobs with single ranks/processes that use
> threading
> >of various kinds to fill the cores. That's a feature, not a bug. Either
> >configure PBS to cope with decades-old technology, or don't worry about
> it.
>
> I found this document:
> https://wiki.anl.gov/cnm/HPC/Submitting_and_Managing_Jobs/Advanced_node_selection#Multithreading_using_OpenMP
> I want to be sure that the number of threads and cores GROMACS uses
> matches what PBS reports.
>
> Instead of the variable, I wrote:
>
> #PBS -l nodes=1:ppn=2
> export OMP_NUM_THREADS=2
> mpirun mdrun -v
>
> So that should use two cores with four threads in total, and PBS should
> report four processors as occupied. However, GROMACS failed with the
> error quoted at the top of this message.
>
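> (Assuming the .mdp is switched to the Verlet scheme, as sketched near the
> top of this thread, a script matching Mark's one-rank suggestion would be
> the following; mdrun_mpi is the MPI-enabled binary from his message:
>
> #PBS -l nodes=1:ppn=2
> export OMP_NUM_THREADS=2
> mpirun -np 1 mdrun_mpi -v -ntomp 2
>
> If the run must stay on the group scheme, the alternative is plain MPI
> ranks with no OpenMP request at all:
>
> #PBS -l nodes=1:ppn=2
> mpirun -np 2 mdrun_mpi -v
>
> Either way, the job fills exactly the two cores that PBS allocated.)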
> Regards,
> Mahmood
>
>
>
> On Mon, Oct 10, 2016 at 1:41 PM, Mark Abraham <mark.j.abraham at gmail.com>
> wrote:
>
>> Hi,
>>
>> Yeah, but that run is very likely
>>
>> a) useless, because you're just running two copies of the same simulation
>> (it's not an MPI-enabled mdrun)
>> b) and even if not, less efficient than the thread-MPI version
>>
>> mdrun -v -nt 2
>>
>> c) and even if not, likely slightly less efficient than the real-MPI
>> version
>>
>> mpirun -np 1 mdrun_mpi -v -ntomp 2
>>
>> top isn't necessarily reporting anything relevant. A CPU can be nominally
>> idle while waiting for communication, but what does top think about that?
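>>
>> (As a sketch of one way to check what is really running: ps can report
>> the per-process thread count directly; mdrun_mpi here stands for whatever
>> the binary is called:
>>
>> ps -C mdrun_mpi -o pid,nlwp,comm
>>
>> nlwp is the number of threads in each matching process.)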
>>
>> Mark
>>
>> On Mon, Oct 10, 2016 at 11:47 AM Mahmood Naderan <mahmood.nt at gmail.com>
>> wrote:
>>
>> > OK, I understood the documents.
>> > What I want is to see two processes (for example), each consuming 100%
>> > CPU. The command for that is:
>> >
>> > mpirun -np 2 mdrun -v -nt 1
>> >
>> > Thanks Mark.
>> >
>> > Regards,
>> > Mahmood