This repository has been archived by the owner on May 27, 2021. It is now read-only.

Common interface between backends #26

Closed
SimonDanisch opened this issue Jan 15, 2017 · 4 comments

SimonDanisch commented Jan 15, 2017

Now that I have a working prototype for GLSL transpilation, it'd be nice to have the same Julia code compile to GLSL and CUDAnative without hassle!

Shared Memory

In GLSL, it seems qualifiers like shared are just one keyword out of a set of storage qualifiers. So I had the idea of creating an intrinsic type Qualified{Qualifier, Type}.
So you could create shared memory like this:

Qualified{:shared, StaticVector{10, Float32}}()

I'm not sure how well this can work with CUDAnative's code generation...
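
A minimal sketch of what that could look like, assuming a hypothetical Qualified wrapper type (none of these names exist in CUDAnative or the transpiler; purely illustrative):

```julia
# Hypothetical sketch of the proposed intrinsic type.
struct Qualified{Qualifier, T}
    data::T
end

# A GLSL transpiler could dispatch on the qualifier symbol to emit the
# matching storage qualifier, e.g. `shared` for work-group local memory:
glsl_qualifier(::Type{<:Qualified{:shared}}) = "shared"
glsl_qualifier(::Type{<:Qualified{:uniform}}) = "uniform"
```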

Intrinsics

There are a lot of shared intrinsics like memory barriers, work-group index getters, etc.
The problem with them is that we'd need to dispatch on some backend type to select the correct intrinsic name for the backend.
I could in theory just mirror the CUDA names, since I go through the Julia code anyway and can just replace them with the correct names for GLSL.
Any thoughts on this?
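
A hedged sketch of what such backend dispatch could look like (the backend types and accessor names are illustrative, not an existing interface; the intrinsic strings are the real CUDA/GLSL names):

```julia
# Illustrative only: dispatch on a backend type to pick intrinsic names.
abstract type Backend end
struct CUDABackend <: Backend end
struct GLSLBackend <: Backend end

# Memory/execution barrier:
barrier_name(::CUDABackend) = "__syncthreads"
barrier_name(::GLSLBackend) = "barrier"

# Work-group index getter:
workgroup_index_name(::CUDABackend) = "blockIdx"
workgroup_index_name(::GLSLBackend) = "gl_WorkGroupID"
```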


maleadt commented Jan 16, 2017

Shared memory: I haven't put too much thought into a good language construct, hence the current macro, but I'd be willing to replace it with something more portable between CUDAnative and GLSL. Problem is, I'm abusing llvmcall pretty horribly already, so I'd first need to get the proper functionality into Base before figuring out how to build an abstraction with it.
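
For context, a hedged illustration of the llvmcall mechanism mentioned above: Base.llvmcall splices raw LLVM IR into Julia code. This minimal, CPU-safe sketch shows the basic form; the shared-memory macro additionally needs module-level globals (in addrspace(3)), which is the part llvmcall doesn't properly support yet.

```julia
# Minimal Base.llvmcall example: %0 is bound to the first argument.
# The shared-memory macro needs more than this (module-level globals
# in addrspace(3)), which is where llvmcall gets "abused".
add_one(x::Int64) = Base.llvmcall(
    """%y = add i64 %0, 1
       ret i64 %y""",
    Int64, Tuple{Int64}, x)

add_one(41)  # returns 42
```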

What do you propose? Could you elaborate on the Qualified example?

Intrinsics: I need something similar in order to specialize for hardware generations, e.g. for hardware-specific intrinsics, optimized implementations (like the Kepler-specific reduction), or to allow just using Base.sin but have it resolve to CUDAnative.sin when doing CUDA codegen. I had been toying with inference hooks, but got side-tracked upstreaming more critical parts. See demo.jl, where the call_hook currently just matches child; it could use something like a global current_backend to perform backend-specific dispatch.
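
A rough sketch of that current_backend idea (hypothetical: the real call_hook in demo.jl operates during inference, and DeviceMath below merely stands in for CUDAnative's device-side methods):

```julia
# Hypothetical illustration of global-backend dispatch.
module DeviceMath
    sin(x) = Base.sin(x)  # placeholder for a GPU-optimized implementation
end

const current_backend = Ref(:cuda)

# A call hook could consult the active backend to redirect Base.sin:
resolve(f) = (f === Base.sin && current_backend[] === :cuda) ? DeviceMath.sin : f

resolve(Base.sin)  # returns DeviceMath.sin while the backend is :cuda
```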

@vchuravy

> I'd first need to get the proper functionality into Base before figuring out how to build an abstraction with it.

This would require Base to expose qualified pointers with an address space, right? It might be good to think about a second use case where address spaces might be useful for Base, so that CUDAnative is not the only user.
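
For illustration, such a qualified pointer might look like the following hypothetical type (Base has nothing like this; the address-space tag is the key addition):

```julia
# Hypothetical address-space-qualified pointer; AS is an integer tag,
# e.g. 3 for CUDA shared memory. Not a real Base proposal.
struct QualifiedPtr{T, AS}
    ptr::Ptr{T}
end

addrspace(::QualifiedPtr{T, AS}) where {T, AS} = AS
```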


maleadt commented Jan 16, 2017

> This would require Base to expose qualified pointers with an address space, right? It might be good to think about a second use case where address spaces might be useful for Base, so that CUDAnative is not the only user.

No: since we can propagate/infer address-space information at the LLVM level, we just need proper AS info at the 'end' (i.e. where we do an llvmcall).

I meant the support for introducing globals in the LLVM module. I think this is going to be covered by vtjnash's cglobal proposal, where IIUC the entire foreign module will be embedded and linked.

maleadt added the design label Jan 17, 2017

maleadt commented Jun 9, 2017

Closing some of the speculative/too-ambitious issues. I don't think it should be CUDAnative's task both to support all of CUDA C and to export it through a shared interface. That could be tackled by a Plots.jl-like package.
