[gmx-users] The simulation of a big protein in implicit solvent

Yi Isaac Yang yesterday.young at gmail.com
Thu Sep 1 16:35:32 CEST 2016


Thank you very much!

But when I run this command:

gmx mdrun -ntomp 4 -ntmpi 1 -deffnm lrepressor_bal

It shows:
Fatal error:
OpenMP threads have been requested with cut-off scheme Group, but these are
only supported with cut-off scheme Verlet
For more information and tips for troubleshooting, please check the GROMACS
website at http://www.gromacs.org/Documentation/Errors
-------------------------------------------------------

However, pbc = no is not supported by cutoff-scheme = Verlet.
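
If I understand the two error messages correctly, the only way I can run this
at the moment is a single (thread-)MPI rank with no OpenMP threads, i.e.
something like

gmx mdrun -ntmpi 1 -deffnm lrepressor_bal

though I am not sure whether that is the intended way. In case it matters, the
implicit-solvent part of my .mdp is roughly the following (I only adapted it
from examples I found, so the exact values are my own guess rather than
anything you recommended):

cutoff-scheme    = group
pbc              = no
implicit-solvent = GBSA
nstlist          = 0    ; neighbour list built only once
rlist            = 0    ; 0 = infinite cut-offs, as suggested for implicit solvent
rcoulomb         = 0
rvdw             = 0

Is there still any way to run such an implicit-solvent simulation in parallel
with GROMACS 5?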


2016-09-01 16:20 GMT+02:00 Yi Isaac Yang <yesterday.young at gmail.com>:

> Thank you very much!
>
> Now I know how to run in serial mode. However, I still don't know how to
> run the MD simulation in parallel. I have read the website you
> recommended, but I still don't know how to change the domain decomposition
> algorithm. Searching the Internet, I found some people saying to add the
> flag -pd to switch to particle decomposition; however, in GROMACS 5 the
> flag "-pd" has been removed.
>
> So do you know how to run the simulation in implicit solvent in parallel
> using GROMACS 5?
>
> Thank you and best regards,
> Isaac
>
>
> 2016-09-01 15:30 GMT+02:00 Justin Lemkul <jalemkul at vt.edu>:
>
>>
>>
>> On 9/1/16 9:28 AM, Yi Isaac Yang wrote:
>>
>>> Thank you very much! But I just ran the minimization on my own
>>> computer:
>>>
>>> yangy at magadino:/mnt/storage2/yangy/l_repressor$ gmx mdrun -v -deffnm
>>> lrepressor_min
>>>
>>>                    :-) GROMACS - gmx mdrun, VERSION 5.1.2 (-:
>>>
>>>                             GROMACS is written by:
>>>      Emile Apol      Rossen Apostolov  Herman J.C. Berendsen    Par
>>> Bjelkmar
>>>  Aldert van Buuren   Rudi van Drunen     Anton Feenstra   Sebastian
>>> Fritsch
>>>   Gerrit Groenhof   Christoph Junghans   Anca Hamuraru    Vincent
>>> Hindriksen
>>>  Dimitrios Karkoulis    Peter Kasson        Jiri Kraus      Carsten
>>> Kutzner
>>>     Per Larsson      Justin A. Lemkul   Magnus Lundborg   Pieter
>>> Meulenhoff
>>>    Erik Marklund      Teemu Murtola       Szilard Pall       Sander Pronk
>>>    Roland Schulz     Alexey Shvetsov     Michael Shirts     Alfons
>>> Sijbers
>>>    Peter Tieleman    Teemu Virolainen  Christian Wennberg    Maarten Wolf
>>>                            and the project leaders:
>>>         Mark Abraham, Berk Hess, Erik Lindahl, and David van der Spoel
>>>
>>> Copyright (c) 1991-2000, University of Groningen, The Netherlands.
>>> Copyright (c) 2001-2015, The GROMACS development team at
>>> Uppsala University, Stockholm University and
>>> the Royal Institute of Technology, Sweden.
>>> check out http://www.gromacs.org for more information.
>>>
>>> GROMACS is free software; you can redistribute it and/or modify it
>>> under the terms of the GNU Lesser General Public License
>>> as published by the Free Software Foundation; either version 2.1
>>> of the License, or (at your option) any later version.
>>>
>>> GROMACS:      gmx mdrun, VERSION 5.1.2
>>> Executable:   /home/yangy/opt/bin/gmx
>>> Data prefix:  /home/yangy/opt
>>> Command line:
>>>   gmx mdrun -v -deffnm lrepressor_min
>>>
>>>
>>> Running on 1 node with total 4 cores, 8 logical cores
>>> Hardware detected:
>>>   CPU info:
>>>     Vendor: GenuineIntel
>>>     Brand:  Intel(R) Xeon(R) CPU E3-1246 v3 @ 3.50GHz
>>>     SIMD instructions most likely to fit this hardware: AVX2_256
>>>     SIMD instructions selected at GROMACS compile time: AVX2_256
>>>
>>> Reading file lrepressor_min.tpr, VERSION 5.1.2 (single precision)
>>>
>>> -------------------------------------------------------
>>> Program gmx mdrun, VERSION 5.1.2
>>> Source code file:
>>> /home/yangy/Downloads/gromacs-5.1.2-patched/src/gromacs/domdec/domdec.cpp, line: 6542
>>>
>>> Fatal error:
>>> Domain decomposition does not support simple neighbor searching, use grid
>>> searching or run with one MPI rank
>>> For more information and tips for troubleshooting, please check the
>>> GROMACS
>>> website at http://www.gromacs.org/Documentation/Errors
>>> -------------------------------------------------------
>>>
>>>
>>> In fact, I'm not very familiar with GROMACS; before this I always used
>>> AMBER. So could you tell me how to run mdrun with "one MPI rank"?
>>> Although I run it directly on my own computer, it still shows "Running
>>> on 1 node with total 4 cores, 8 logical cores". And what is "domain
>>> decomposition"? Can I change it? Although I can run the minimization
>>> in serial mode, I must use MPI to run the MD simulation.
>>>
>>>
>> Start here:
>>
>> http://www.gromacs.org/Documentation/Acceleration_and_parallelization
>>
>> The mdrun help description has all the rest of what you need to know
>> (e.g. mdrun -ntmpi 1 in this case).
>>
>>
>> -Justin
>>
>> --
>> ==================================================
>>
>> Justin A. Lemkul, Ph.D.
>> Ruth L. Kirschstein NRSA Postdoctoral Fellow
>>
>> Department of Pharmaceutical Sciences
>> School of Pharmacy
>> Health Sciences Facility II, Room 629
>> University of Maryland, Baltimore
>> 20 Penn St.
>> Baltimore, MD 21201
>>
>> jalemkul at outerbanks.umaryland.edu | (410) 706-7441
>> http://mackerell.umaryland.edu/~jalemkul
>>
>> ==================================================
>>
>
>
>
> --
> Yesterday Young
> College of Chemistry and Molecular Engineering
> Peking University
>



-- 
Yesterday Young
College of Chemistry and Molecular Engineering
Peking University

