pall.szilard at gmail.com
Wed Sep 13 19:14:22 CEST 2017
My guess is that the two jobs are using the same cores -- either all
cores/threads or only half of them, but the same set.
You should use -pinoffset; see:
- Docs and example:
- More explanation on the thread pinning behavior on the old website:
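To make the suggestion concrete, here is a hedged sketch of how the two jobs could be pinned to disjoint core sets (assuming a node with at least 16 hardware threads, and that Slurm already sets CUDA_VISIBLE_DEVICES per job as reported below; the offsets are illustrative and must be adjusted to the actual node layout):

    # Job 1: 4 ranks x 2 OpenMP threads, pinned to hardware threads 0-7
    mpirun -np 4 gmx_mpi mdrun -deffnm test -ntomp 2 -maxh 0.12 \
           -pin on -pinoffset 0 -pinstride 1

    # Job 2: same layout, pinned to hardware threads 8-15, so the two
    # jobs never share cores
    mpirun -np 4 gmx_mpi mdrun -deffnm test -ntomp 2 -maxh 0.12 \
           -pin on -pinoffset 8 -pinstride 1

Without explicit offsets, both jobs default to the same pinning and end up time-sharing the same cores, which matches the observed slowdown.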
On Wed, Sep 13, 2017 at 6:35 PM, gromacs query <gromacsquery at gmail.com> wrote:
> Sorry, forgot to add: we thought the two jobs were using the same GPU
> ids, but CUDA_VISIBLE_DEVICES shows the two jobs are using different
> ids (0,1 and 2,3).
> On Wed, Sep 13, 2017 at 5:33 PM, gromacs query <gromacsquery at gmail.com> wrote:
>> Hi All,
>> I have a performance issue with GROMACS. There are many nodes, each
>> node has a number of GPUs, and batch scheduling is controlled by Slurm.
>> I get good performance with certain combinations of GPU count and
>> nprocs, but when I submit the same job twice on the same node the
>> performance drops drastically. E.g.:
>> With 2 GPUs I get 300 ns/day when no other job is running on the node.
>> When I submit the same job twice on the same node at the same time, I
>> get only 17 ns/day for each of the two jobs. I am using this:
>> mpirun -np 4 gmx_mpi mdrun -deffnm test -ntomp 2 -maxh 0.12
>> Any suggestions highly appreciated.
> Gromacs Users mailing list
> * Please search the archive at http://www.gromacs.org/Support/Mailing_Lists/GMX-Users_List before posting!
> * Can't post? Read http://www.gromacs.org/Support/Mailing_Lists
> * For (un)subscribe requests visit
> https://maillist.sys.kth.se/mailman/listinfo/gromacs.org_gmx-users or send a mail to gmx-users-request at gromacs.org.