[gmx-users] GPU performance
mark.j.abraham at gmail.com
Wed Apr 10 04:02:56 CEST 2013
On Apr 10, 2013 3:34 AM, "Benjamin Bobay" <bgbobay at ncsu.edu> wrote:
> Szilárd -
> First, many thanks for the reply.
> Second, I am glad that I am not crazy.
> Ok so based on your suggestions, I think I know what the problem is/was.
> There was a sander process running on one of the CPUs. Clearly GROMACS was
> trying to use all 4 with "Using 4 OpenMP threads"; I just did not catch that.
> Sorry! Rookie mistake.
> Which I guess leads me to my next question (sorry if it's too naive):
> (1) When running GROMACS (or, I guess, any other CUDA-based program), it's
> best to have all the CPUs free, right? Based on my results I have pretty
> much answered that question, although I thought that as long as I had one
> CPU available to drive the GPU it would be fine: would setting
> "-ntmpi 1 -ntomp 1" help, or would I take a major hit in ns/day as well?
Some codes might treat the CPU as an "I/O, MPI and memory-serving
co-processor" for the GPU; those codes tend to be insensitive to the CPU
configuration. GROMACS goes to great lengths to use all the hardware in a
dynamically load-balanced way, so CPU load and configuration tend to affect
the bottom line immediately.
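Since CPU load feeds straight into GROMACS performance, it is worth checking
that the cores are actually idle before launching. A minimal sketch, assuming
a Linux machine with the standard nproc and uptime utilities; the mdrun line
is left as a comment and reuses the -ntmpi/-ntomp flags from the question,
with "topol" as a hypothetical run name:

```shell
# Count the logical cores GROMACS would spread its OpenMP threads across
cores=$(nproc)
echo "logical cores: $cores"

# The 1/5/15-minute load averages should sit well below $cores,
# otherwise another process (like the sander run above) will steal cycles
uptime

# Once the cores are idle, launch with explicit thread counts, e.g.:
# mdrun -ntmpi 1 -ntomp "$cores" -v -deffnm topol
```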
> If I just try the benchmarks again (for fun) with "Using 4 OpenMP
> threads", under top I see the following, so I think the CPU load is fine:
> PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
> 24791 bobayb 20 0 48.3g 51m 7576 R 299.1 0.2 11:32.90
> When I have a chance (after this sander run is done - hopefully soon) I'll
> try the benchmarks again.
> Thanks again for the help!