
META: CUDA external language implementation #18338

Closed

Conversation

@maleadt (Member) commented on Sep 2, 2016

Summary: make inference and codegen modular and configurable so that packages can reuse them, e.g., when generating code for a different platform. This way we avoid bloating the compiler, and make it possible to develop and load new hardware support from packages without requiring modifications to the compiler or Base.

Concretely, this PR tracks the necessary additions to Julia master, while the remaining diff implementing the CUDA compiler is hosted over at CUDAnative.jl.
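
For context, here is a sketch of the kind of user-facing kernel programming this is meant to enable. The snippet uses the present-day CUDA.jl descendant of CUDAnative.jl, so the exact launch syntax (which has changed since 2016) is an assumption rather than what this PR shipped:

```julia
# Vector addition written as a plain Julia function and launched on the GPU.
# Requires a CUDA-capable device; API as in current CUDA.jl, not the 2016 package.
using CUDA

function vadd!(c, a, b)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x  # global thread index
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing  # kernels must not return a value
end

a = CUDA.fill(1.0f0, 1024)
b = CUDA.fill(2.0f0, 1024)
c = similar(a)

@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
@assert Array(c) == fill(3.0f0, 1024)
```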

Inference

We need to be able to influence inference in order to select alternative functions for some stdlib functionality (think sin, which currently calls out to libm but needs to call into a different library on the GPU), sometimes even depending on the GPU version.
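
As a concrete illustration, below is a minimal sketch of the kind of replacement involved, assuming the NVPTX backend and CUDA's libdevice math library; the helper name device_sin is hypothetical, and the point of this section is that inference needs a hook to resolve sin to such a method when compiling for the GPU:

```julia
# Hypothetical device-side replacements for Base.sin. The `llvmcall` calling
# convention with an "extern" target declares an external function in the
# generated IR; __nv_sin / __nv_sinf only resolve when the module is compiled
# for NVPTX and linked against CUDA's libdevice, so these are not CPU-callable.
device_sin(x::Float64) = ccall("extern __nv_sin",  llvmcall, Cdouble, (Cdouble,), x)
device_sin(x::Float32) = ccall("extern __nv_sinf", llvmcall, Cfloat,  (Cfloat,),  x)
```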

Codegen

Much of this is similar to inference:

Longer-term:

  • Make codegen sane: pure entry-points, less global state

Support

@maleadt added the kind:julep (Julia Enhancement Proposal), compiler:codegen (Generation of LLVM IR and native code), and compiler:inference (Type inference) labels on Sep 2, 2016
@maleadt force-pushed the tb/ext_langimpl branch 2 times, most recently from fb9155a to 796d182 on September 6, 2016 at 17:31
@maleadt force-pushed the tb/ext_langimpl branch 2 times, most recently from 11a8e58 to bf6d1c6 on September 13, 2016 at 19:11
@maleadt (Member, Author) commented on Sep 22, 2016

Small update: LLVM.jl is now powerful enough to fully implement the PTX JIT (which is pretty simple).
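
For reference, a rough sketch of what that boils down to: handing an LLVM IR module to the NVPTX backend through LLVM.jl and asking for assembly. The API names below follow what I believe is the current LLVM.jl interface and are assumptions on my part (the 2016-era API differed), so treat this as an outline rather than code from this PR:

```julia
# Lower a trivial LLVM IR module to PTX via LLVM.jl's wrappers around the
# NVPTX backend. Names follow the current LLVM.jl API as far as I know.
using LLVM

ir = """
define void @kernel() {
  ret void
}
"""

# The NVPTX backend has to be registered before creating a target machine.
InitializeAllTargets()
InitializeAllTargetMCs()
InitializeAllAsmPrinters()

ctx = Context()
mod = parse(LLVM.Module, ir)
triple!(mod, "nvptx64-nvidia-cuda")

target = Target(triple="nvptx64-nvidia-cuda")
tm = TargetMachine(target, "nvptx64-nvidia-cuda", "sm_35", "")
ptx = String(emit(tm, mod, LLVM.API.LLVMAssemblyFile))
print(ptx)

dispose(tm)
dispose(ctx)
```

The resulting PTX string can then be loaded through the CUDA driver's module-loading API (cuModuleLoadData) to obtain a callable kernel.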

@datnamer commented

Would be cool to keep a WebAssembly use case in mind for this:

https://github.com/WebAssembly/wasm-jit-prototype
https://github.com/WebAssembly/binaryen

@maleadt (Member, Author) commented on Oct 2, 2016

Yes, definitely. Implementation-wise, we can look at Rust for inspiration, as they have a functional LLVM-based wasm target nowadays. Do you have any use cases in mind?

@datnamer commented on Oct 5, 2016

Well, my (dream/pony) high-level use cases include distributing fully client-side interactive reports and machine-learning mobile apps.

@maleadt force-pushed the tb/ext_langimpl branch 2 times, most recently from 4c4e504 to 767bf64 on October 12, 2016 at 21:28
@maleadt (Member, Author) commented on Oct 12, 2016

Rebased on top of #18496. I've been working on similar 'codegen params' in the JuliaGPU/julia repo, but that still needs to be fleshed out.

@maleadt force-pushed the tb/ext_langimpl branch 2 times, most recently from 8536254 to 0487033 on October 19, 2016 at 14:09
@maleadt changed the title from "Julep: external language implementations" to "RFC/WIP: CUDA external language implementation" on Oct 20, 2016
@maleadt force-pushed the tb/ext_langimpl branch 9 times, most recently from edca252 to 00c0db3 on October 31, 2016 at 19:12
@maleadt force-pushed the tb/ext_langimpl branch 5 times, most recently from 070a9c7 to c7115eb on November 8, 2016 at 22:39
@maleadt closed this on Nov 10, 2016
@maleadt changed the title from "RFC/WIP: CUDA external language implementation" to "META: CUDA external language implementation" on Nov 10, 2016
@maleadt (Member, Author) commented on Nov 10, 2016

I've removed the commits from this PR, turning it into a tracking issue instead. The code has moved to the tb/cuda branch.

@maleadt reopened this on Nov 10, 2016
@maleadt (Member, Author) commented on Sep 8, 2017

This is outdated.

@maleadt closed this on Sep 8, 2017
@DilumAluthge deleted the tb/ext_langimpl branch on March 25, 2021
Labels: compiler:codegen (Generation of LLVM IR and native code), compiler:inference (Type inference), domain:gpu (Affects running Julia on a GPU), kind:julep (Julia Enhancement Proposal)
3 participants