[gmx-developers] GROMACS OpenCL on Gallium

Szilárd Páll pall.szilard at gmail.com
Thu Nov 26 20:51:13 CET 2015


On Thu, Nov 26, 2015 at 8:25 PM, Roland Schulz <roland at utk.edu> wrote:

>
>
> On Thu, Nov 26, 2015 at 2:07 PM, Szilárd Páll <pall.szilard at gmail.com>
> wrote:
>
>> Hi,
>>
>> Besides FPGA folks, what about Apple, embedded and mobile platforms
>> (Qualcomm, ARM, Samsung, etc.)?
>>
> All the embedded GPUs are, for the foreseeable future, uninteresting for
> HPC.
>

Note that HPC does not, in most cases, drive programming
language/framework/standard adoption.


> And I don't think anyone is interested in programming ARM CPUs in OpenCL.
>

For the above reason, stable libraries and compilers may drive the
ecosystem forward even without HPC interest.

Plus there's the converging phone-tablet-laptop segment, where applications
can gain a lot from CPU+GPU heterogeneous execution.


>
>
>> I'm not sure Intel is totally uninterested. In the latest release they've
>> moved their OpenCL SDK back out of the silly bundle they had before, AFAIK
>> because people complained.
>>
> Both the comments from Intel people and their performance tell me that it
> is a low to very low priority. They might be more interested in it for
> Iris but not for Phi.
>

Exactly: think Photoshop, AutoCAD, etc. For the above reasons, I would not
dismiss OpenCL as irrelevant. As far as I understood while chatting with
someone from Intel a few months ago, Adobe and similar big players do use
OpenCL quite a bit.


> And again Iris will probably not be relevant for HPC.
>

In the short run, likely not. And that's unfortunate, I think.


>
> NVIDIA: no comment.
>>
>> HIP and CUDA support seems like a desperate move from AMD to lower the
>> barrier of entry and make things (seem) easier. Attracting dev/user
>> interest to stay afloat is crucial for them. It would however be a major
>> mistake for AMD to move away from OpenCL, I think - unless they want to
>> shoot themselves in the foot by encouraging people to only write CUDA
>> kernels for AMD. I still need to look into this more closely to understand
>> what the direction is.
>>
> Well, that's what they told me: go ahead and write HIP/CUDA and it'll work
> on AMD.
>
>
>> Overall, I feel like this is the time not to take a back seat. Rather
>> than letting others decide whether it's going to be open standards or
>> vendor lock-in that defines low-level accelerator programming for the
>> coming years, I feel that we, through GROMACS, can show that we care and
>> perhaps make a difference. That's why I wrote the previous mail. Don't
>> get me wrong, I do not have the illusion that tomorrow we can just drop
>> CUDA support just to make a point. However, providing a decent
>> alternative based on OpenCL and pointing out that we want the open
>> alternative to work as well as the closed one does require effort, but it
>> is realistic.
>>
> Given that we and others won't be willing to drop CUDA as long as it is
> faster, there is no reason for NVIDIA to change. And as long as NVIDIA
> doesn't change, OpenCL isn't useful for AMD. Hoping that it is different, or
> asking pretty please, won't change that.
>

Shifting some developer focus away from CUDA may be just enough to send a
message; there is no need to drop CUDA.

Given that substantial effort is needed to get from implementing some cool
new feature to getting it into a production release, one could postpone
bringing such a feature into a production version (keeping it in an unmerged,
under-review state) and spend the time saved on, e.g., tuning OpenCL
(e.g. for Intel CPU+iGPU or AMD APUs). This could send a message that may
be heard, and perhaps taken seriously, if done repeatedly by multiple OSS
projects.

Cheers,
--
Szilárd


>
> Roland
>
>
>>
>> Cheers,
>>
>> --
>> Szilárd
>>
>> On Thu, Nov 26, 2015 at 7:37 PM, Roland Schulz <roland at utk.edu> wrote:
>>
>>> Hi,
>>>
>>> I wouldn't be surprised if OpenCL is fading out. NVidia and Intel have
>>> very little to no interest. And AMD has realized that a standard only
>>> really supported by them isn't going to be used and they now push HIP (
>>> http://www.amd.com/en-us/press-releases/Pages/boltzmann-initiative-2015nov16.aspx)
>>> instead. People from AMD I talked to at SC recommended using HIP over
>>> OpenCL because they claim this will allow performance-portable code. This
>>> might leave the FPGA guys as the only ones providing performant OpenCL
>>> implementations. Of course, having a true standard would be nicer than
>>> having to rely on HIP/CUDA, but in practice it might very well be that those
>>> are the only useful (= performant) options in the future.
>>>
>>> Roland
>>>
>>> On Thu, Nov 26, 2015 at 1:04 PM, Szilárd Páll <pall.szilard at gmail.com>
>>> wrote:
>>>
>>>> One more thing!
>>>>
>>>> Let me take the opportunity to invite everyone interested to contribute
>>>> (with code, testing, or docs) and help improve the features and
>>>> performance of our truly portable GPU/accelerator OpenCL code path!
>>>>
>>>> Our OpenCL implementation is stable and solid, but it lacks thorough
>>>> tuning for AMD GPUs, and support for integrated CPU+GPU architectures would
>>>> be great too. There are a number of extensions & optimizations known to be
>>>> useful (and probably even more that we have not thought of) that could be
>>>> pursued, but due to the lack of time/resources we have not done so yet.
>>>>
>>>> I'd be happy to share ideas and collaborate with the goal of improving
>>>> the OpenCL support for the next release!
>>>>
>>>> So if you're interested, get in touch!
>>>>
>>>> Cheers,
>>>>
>>>> --
>>>> Szilárd
>>>>
>>>> On Thu, Nov 26, 2015 at 6:52 PM, Szilárd Páll <pall.szilard at gmail.com>
>>>> wrote:
>>>>
>>>>> Hi!
>>>>>
>>>>> My reply got quite delayed, sorry about that.
>>>>>
>>>>> Just wanted to let you know that I am personally interested in getting
>>>>> Gallium support to work. I can't drive the work; ATM I have very limited time
>>>>> to put into this, but I would love to help with fixing small things and
>>>>> with code review!
>>>>>
>>>>> It would be nice to be able to use GROMACS on GPUs without any
>>>>> proprietary stuff. I'm sure distros will be happy to be able to provide a
>>>>> GROMACS package with no proprietary dependencies for GPUs. Of course,
>>>>> performance matters too, but the first thing is to get it to work.
>>>>>
>>>>> If somebody is interested in taking up the task of driving the work,
>>>>> please file a Redmine issue (or several), listing the concrete tasks if
>>>>> they're known!
>>>>>
>>>>> Cheers,
>>>>> --
>>>>> Szilárd
>>>>>
>>>>>
>>>>> On Tue, Oct 20, 2015 at 8:45 PM, Vedran Miletić <rivanvx at gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> is there any interest in extending GROMACS OpenCL support to include
>>>>>> Gallium for Radeon cards and perhaps others?
>>>>>>
>>>>>> (Background: We have a machine in our lab with Debian
>>>>>> unstable/experimental and the latest Kernel/DRM/LLVM/Mesa and an AMD
>>>>>> Caicos card, set up a couple of years ago in the hope that AMD would make
>>>>>> a completely open-source OpenCL stack work at some point. After recent
>>>>>> updates, we managed to run hello world examples and parts of the ViennaCL
>>>>>> benchmark.)
>>>>>>
>>>>>> Running gmx mdrun on Radeon HD 7450 on Kernel 4.2.3 and Mesa 11.0.2
>>>>>> results in
>>>>>>
>>>>>> Fatal error:
>>>>>> Failed to compile NBNXN kernels for GPU #AMD CAICOS (DRM 2.43.0, LLVM
>>>>>> 3.7.0)
>>>>>>
>>>>>> This creates a file named nbnxn_ocl_kernels.cl.FAILED with the
>>>>>> following information:
>>>>>>
>>>>>> Compilation of source file failed!
>>>>>> -- Used build options: -DWARP_SIZE_TEST=64 -D_AMD_SOURCE_
>>>>>> -DGMX_OCL_FASTGEN_ADD_TWINCUT -DEL_EWALD_ANA -DEELNAME=_ElecEw
>>>>>> -DVDWNAME=_VdwLJ -DCENTRAL=22 -DNBNXN_GPU_NCLUSTER_PER_SUPERCLUSTER=8
>>>>>> -DNBNXN_GPU_CLUSTER_SIZE=8 -DNBNXN_GPU_JGROUP_SIZE=4
>>>>>> -DNBNXN_AVOID_SING_R2_INC=1.0e-12f
>>>>>> -I"/usr/local/gromacs/share/gromacs/opencl"
>>>>>> --------------LOG START---------------
>>>>>> input.cl:59:10: fatal error:
>>>>>> 'nbnxn_ocl_kernels_fastgen_add_twincut.clh' file not found
>>>>>> input.cl:45:36: note: expanded from macro 'FLAVOR_LEVEL_GENERATOR'
>>>>>> ---------------LOG END----------------
>>>>>>
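
A quick way to narrow this down is to check whether the Clover/Gallium runtime
honors -I include paths in clBuildProgram at all, since that is the mechanism
the NBNXN kernel build relies on to locate the .clh headers. Below is a minimal
standalone sketch; the header name test_inc.clh, the macro MY_CONST it defines,
and the directory /tmp/oclinc are hypothetical, purely for illustration
(compile with e.g. gcc include_test.c -lOpenCL):

/* Sketch: does clBuildProgram find headers passed via -I on this runtime? */
#include <stdio.h>
#include <CL/cl.h>

/* Kernel source that includes a header, like nbnxn_ocl_kernels.cl does. */
static const char *src =
    "#include \"test_inc.clh\"\n"  /* hypothetical header defining MY_CONST */
    "__kernel void dummy(__global float *x) { x[0] = MY_CONST; }\n";

int main(void)
{
    cl_platform_id plat;
    cl_device_id   dev;
    cl_int         err;

    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx  = clCreateContext(NULL, 1, &dev, NULL, NULL, &err);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

    /* Same mechanism GROMACS uses: pass the header directory with -I. */
    err = clBuildProgram(prog, 1, &dev, "-I/tmp/oclinc", NULL, NULL);

    if (err != CL_SUCCESS)
    {
        char log[4096];
        clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG,
                              sizeof(log), log, NULL);
        printf("Build failed (%d):\n%s\n", err, log);
    }
    else
    {
        printf("Build succeeded: -I include paths are honored.\n");
    }
    return 0;
}

If this minimal build also fails with a "file not found" error, the problem is
in the runtime's handling of build options rather than in the GROMACS kernels
or the install layout.
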
>>>>>> Is there any interest in supporting this configuration? Is there
>>>>>> anyone besides us who would run GROMACS on Gallium and Radeon cards?
>>>>>>
>>>>>> Regards,
>>>>>> Vedran
>>>>>>
>>>>>> --
>>>>>> Vedran Miletić
>>>>>> http://vedranmileti.ch/
>>>>>
>>>>>
>>>>>
>>>>
>>>
>>>
>>> --
>>> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
>>> 865-241-1537, ORNL PO BOX 2008 MS6309
>>>
>>>
>>
>>
>
>
> --
> ORNL/UT Center for Molecular Biophysics cmb.ornl.gov
> 865-241-1537, ORNL PO BOX 2008 MS6309
>
>