[gmx-users] Problems with REMD in Gromacs 4.6.3
Mark Abraham
mark.j.abraham at gmail.com
Fri Jul 12 10:15:24 CEST 2013
What does --loadbalance do? What do the .log files say about
OMP_NUM_THREADS, thread affinities, pinning, etc?
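For example, something like this should pull those lines out of every
replica's log (a minimal sketch, assuming the default per-replica log names
md0.log, md1.log, ... that -multi produces):

foreach f (md*.log)
    echo "==== $f ===="
    grep -iE 'thread|affinit|pinn' $f
end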
Mark
On Fri, Jul 12, 2013 at 3:46 AM, gigo <gigo at poczta.ibb.waw.pl> wrote:
> Dear GMXers,
> With Gromacs 4.6.2 I was running REMD with 144 replicas. The replicas were
> separate MPI processes, of course (OpenMPI 1.6.4), and each replica ran on
> 4 cores with OpenMP. The cluster, which runs Torque, is built of 12-core
> nodes, so I used the following script:
>
> #!/bin/tcsh -f
> #PBS -S /bin/tcsh
> #PBS -N test
> #PBS -l nodes=48:ppn=12
> #PBS -l walltime=300:00:00
> #PBS -l mem=288Gb
> #PBS -r n
> cd $PBS_O_WORKDIR
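> # 144 replicas x 4 OpenMP threads = 576 cores = 48 nodes x 12 cores/node;
> # --loadbalance asks mpiexec to spread the 144 ranks evenly over the nodes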
> mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 \
>     -replex 2000
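> #
> # A minimal variant to try, using mdrun 4.6's -pin option to request
> # explicit thread pinning (commented out; the affinity question in the
> # reply above is about exactly this kind of setting):
> # mpiexec -np 144 --loadbalance mdrun_mpi -v -cpt 20 -multi 144 -ntomp 4 \
> #     -pin on -replex 2000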
>
> It was working just great with 4.6.2, but it does not work with 4.6.3,
> although the new version was compiled with the same options in the same
> environment. Mpiexec spreads the replicas evenly over the cluster, and each
> replica forks 4 threads, but only one of them uses any CPU. The log files
> end at the citations. Some empty energy and trajectory files are created,
> but nothing is written to them.
> Please let me know if you have any immediate suggestion on how to make it
> work (perhaps based on some difference between the versions), or whether I
> should file a bug report with all the technical details.
> Best Regards,
>
> Grzegorz Wieczorek
>