[gmx-users] Re: Running Gromacs in Clusters

Dr. Vitaly Chaban vvchaban at gmail.com
Wed Nov 7 23:53:21 CET 2012


On Wed, Nov 7, 2012 at 11:48 PM, Dr. Vitaly Chaban <vvchaban at gmail.com> wrote:
> On Wed, Nov 7, 2012 at 11:24 PM, Marcelo Depolo <marcelodepolo at gmail.com> wrote:
>> I thought that at first, but other software runs in parallel. If there is a
>> problem, it is somehow in the PBS.
>>
>> My guess is that my PBS doesn't allow the LAM library to "see" the other
>> nodes. But I have no clue where the problem could be.
>
> I would be very surprised if this were true. The "normal" sequence of
> events during the submission process is the following:
>
> 1) The system looks into your submission script and finds out the
> resource requirements.
>
> 2) If the requirements are met, the job gets "R" status and the
> remaining commands (which do not start with #PBS) are executed.
>
> 3) If there is a problem with the Message Passing Interface (MPI) or the
> [scientific] code, the job dies with an MPI-specific error message, a
> code-specific message, or, usually, both.
>
> From what I see in your report, your "error" message comes from PBS, i.e.
> neither MPI nor gromacs is ever launched.
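The sequence above can be sketched as a minimal PBS submission script for a LAM/MPI build of GROMACS. The node counts, walltime, and file names below are hypothetical placeholders, not values taken from the original report:

```shell
#!/bin/bash
# Hypothetical PBS submission script (a sketch, not the poster's actual file).
# Lines starting with #PBS state the resource requirements read in step 1;
# the remaining commands are executed only once the job reaches "R" (step 2).
#PBS -N gmx_test
#PBS -l nodes=2:ppn=8
#PBS -l walltime=01:00:00
#PBS -j oe

cd "$PBS_O_WORKDIR"

# Boot the LAM runtime on the nodes PBS allocated, run mdrun, then shut LAM down.
# If MPI or GROMACS fails here, the error message comes from them, not from PBS
# (step 3) -- which is how one can tell the two failure modes apart.
lamboot "$PBS_NODEFILE"
mpirun -np 16 mdrun_mpi -deffnm topol
lamhalt
```

If the job is rejected before any of these commands run, the complaint necessarily comes from PBS itself, which is the situation described above.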
>


Are you stating that other programs on your cluster run successfully
on multiple nodes using the same submission script (the #PBS part) and
that only the GROMACS jobs complain about a lack of resources? I can
hardly believe that...


-- 
Dr. Vitaly V. Chaban
MEMPHYS - Center for Biomembrane Physics
Department of Physics, Chemistry and Pharmacy
University of Southern Denmark
Campusvej 55, 5230 Odense M, Denmark
