[gmx-users] load problem
Justin Lemkul
jalemkul at vt.edu
Fri Jul 22 22:52:12 CEST 2016
On 7/22/16 4:50 PM, Stephen Chan wrote:
> Hi Justin,
>
> My system has ~55,000 atoms. I tried reducing the number of nodes but the
> situation didn't improve. I wonder if it's related to plumed performance.
>
That's a rather important detail. Try a vanilla GROMACS run to do benchmarking,
then try using PLUMED. If things slow down with PLUMED, then there's a
different forum you should be visiting to report such problems :)
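One way to run that comparison (a sketch only; the `-deffnm` name, rank count, and MPI launcher are placeholders for your setup, and `-plumed` exists only in a PLUMED-patched GROMACS build):

```shell
# Vanilla GROMACS benchmark: short run, timers reset halfway so
# startup load balancing doesn't skew the reported performance.
mpirun -np 112 gmx_mpi mdrun -deffnm npt -nsteps 10000 -resethway

# Same run with PLUMED enabled (patched build required); compare
# the ns/day printed at the end of each md.log.
mpirun -np 112 gmx_mpi mdrun -deffnm npt -nsteps 10000 -resethway \
       -plumed plumed.dat
```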
-Justin
> Stephen
>
>
> On 07/22/16 22:17, Justin Lemkul wrote:
>>
>>
>> On 7/22/16 3:58 PM, Stephen Chan wrote:
>>> Hello all,
>>>
>>> I'm running an NPT MD simulation on 4 nodes, each with 28 cores. I've noticed
>>> the performance is quite low:

>>> vol 0.20! imb F 4% pme/F 0.17 step 7800, remaining wall clock time: 191 s
>>>
>>> I tried a couple of values for -npme (12, 28, 32, 40). None of these improved
>>> the situation.
>>>
>>> I also added 'optimize-fft = yes' to the mdp file and reran without the -npme
>>> option. The problem persists.
>>>
>>> I wonder if anyone could offer some help.
>>>
>>
>> How many atoms are in your system? The volume reported in the output suggests
>> that each domain is extremely small, so you may have simply hit a
>> parallelization bottleneck and you may be able to run faster using fewer
>> nodes/cores. Is the interconnect between the nodes suitably fast (e.g.
>> InfiniBand)?
>>
>> -Justin
>>
>
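For scale, the domain-size point above can be made with quick arithmetic: ~55,000 atoms over 4 nodes x 28 cores leaves only a few hundred atoms per rank, which (as a rough rule of thumb, not a guarantee) is below the regime where GROMACS domain decomposition usually scales well:

```shell
# Back-of-the-envelope: atoms per domain-decomposition rank
# for the setup discussed in this thread.
echo $(( 55000 / (4 * 28) ))   # atoms per rank, ~491
```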
--
==================================================
Justin A. Lemkul, Ph.D.
Ruth L. Kirschstein NRSA Postdoctoral Fellow
Department of Pharmaceutical Sciences
School of Pharmacy
Health Sciences Facility II, Room 629
University of Maryland, Baltimore
20 Penn St.
Baltimore, MD 21201
jalemkul at outerbanks.umaryland.edu | (410) 706-7441
http://mackerell.umaryland.edu/~jalemkul
==================================================