[gmx-users] Problem with running REMD in Gromacs 4.6.3
szilard.pall at cbr.su.se
Wed Jul 10 09:47:36 CEST 2013
Is affinity setting (pinning) on? Which compiler are you using? There
are some known issues with Intel OpenMP interfering with the
internal affinity setting. To check whether this is causing the
problem, try turning off pinning (-pin off).
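For instance, the mpiexec line from the quoted script could be retried with pinning disabled (a sketch based on the command in the original post; only -pin off is added, all other flags are unchanged):

```shell
# Same REMD launch as in the quoted script, but with mdrun's
# internal thread affinity (pinning) disabled via -pin off:
mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 \
    -ntomp 4 -replex 2000 -pin off
```

If the run then proceeds normally, the hang is likely the known interaction between the compiler's OpenMP runtime and mdrun's affinity setting.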
On Tue, Jul 9, 2013 at 5:29 PM, gigo <gigo at poczta.ibb.waw.pl> wrote:
> Dear GMXers,
> With Gromacs 4.6.2 I was running REMD with 144 replicas. The replicas were
> separate MPI jobs, of course (OpenMPI 1.6.4), and I ran each replica on 4
> cores with OpenMP. The cluster, built of 12-core nodes, runs Torque, so I
> used the following script:
> #!/bin/tcsh -f
> #PBS -S /bin/tcsh
> #PBS -N test
> #PBS -l nodes=48:ppn=12
> #PBS -l walltime=300:00:00
> #PBS -l mem=288Gb
> #PBS -r n
> cd $PBS_O_WORKDIR
> mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4
> -replex 2000
> It was working just great with 4.6.2, but it does not work with 4.6.3. The
> new version was compiled with the same options in the same environment.
> Mpiexec spreads the replicas evenly over the cluster, and each replica forks
> 4 threads, but only one of them uses any CPU. The logs end at the citations.
> Some empty energy and trajectory files are created, but nothing is written
> to them.
> Please let me know if you have any immediate suggestion on how to make it
> work (maybe based on some differences between the versions), or whether I
> should file a bug report with all the technical details.
> Best Regards,
> Grzegorz Wieczorek