Enhancements #41

Merged: 22 commits into main from ap/intel on Jun 7, 2024

Conversation

@avik-pal (Member) commented Jun 5, 2024

  • Extend get_device to arbitrary structures (a usage sketch follows this list)
  • Add extensions for Intel oneAPI
    • Add Buildkite CI for Intel GPUs
  • Check if macOS GitHub runners have GPUs (M1 Macs are only available for GitHub Teams and Enterprise)
    • Metal.jl only partially supports Intel hardware, but that is good enough for us to run on the GitHub Actions runners for coverage purposes.
  • Deprecate uses of *Adaptor; everything is now defined on the corresponding *Device instead. We keep the bindings and exports for semver.
  • LuxAMDGPU usage removed; now just using AMDGPU will trigger all the required dependencies.
  • functional and loaded are now part of the public API (see the sketch after this list).
  • Removed unwanted dependencies
  • get_device for wrapper array types (ReverseDiff & Tracker) still needs to be fixed
  • More comprehensive tests:
    • get_device
    • Sparse arrays for CUDA
    • Stub tests for array types: Fill, OneElement, SparseArrays
    • Add multi-GPU tests
    • RecursiveArrayTools
  • set_device! tests (an assumed call shape is sketched after this list)
  • More coverage for the get_device function with tuples and custom structs.
  • Test the preferences mechanism (see the sketch after this list)
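
Quick usage sketch for the get_device and *Adaptor-to-*Device items above. This is not code from the PR; gpu_device/cpu_device, the callable-device data movement, and the recursive get_device behaviour are assumptions based on the package's documented API:

```julia
# Minimal sketch; the calls below assume the documented LuxDeviceUtils API.
using LuxDeviceUtils

gdev = gpu_device()   # a loaded & functional GPU backend, or the CPU device as fallback
cdev = cpu_device()

# Data movement goes through the *Device objects directly (the *Adaptor path is deprecated):
ps = (weight = rand(Float32, 4, 4), bias = rand(Float32, 4))
ps_gpu = gdev(ps)     # applied recursively through the NamedTuple

# get_device works on arbitrary structures, tuples, and custom structs, not just arrays:
get_device(ps_gpu)                          # device that all leaves live on
get_device((ps_gpu.weight, ps_gpu.bias))    # same device for a tuple of arrays
```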
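
The functional/loaded item and the preferences test item can be read together. The sketch below assumes the checks take a device type and that the preferred backend is selected by name through gpu_backend!; both are assumptions, not taken from the diff:

```julia
# Sketch only: the signature of functional/loaded (device type vs. instance) and the
# backend string passed to gpu_backend! are assumptions.
using LuxDeviceUtils

if LuxDeviceUtils.loaded(LuxCUDADevice) && LuxDeviceUtils.functional(LuxCUDADevice)
    @info "CUDA backend is loaded and functional"
end

# Preferences mechanism: persist the preferred backend (via Preferences.jl) so that
# gpu_device() picks it up in a fresh Julia session.
gpu_backend!("CUDA")
```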
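
For the set_device! tests, a hypothetical call shape (device type plus an integer id) is shown below; only the function name comes from the PR text and the actual signature may differ:

```julia
# Assumed call shape for set_device!; treat this as illustrative, not authoritative.
using LuxDeviceUtils

# Select a specific GPU before moving data, e.g. in the multi-GPU tests mentioned above:
LuxDeviceUtils.set_device!(LuxCUDADevice, 1)

x = gpu_device()(rand(Float32, 3))   # subsequent transfers land on the selected device
```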

We need to wait for AMDGPU CI to come back online before this can be merged.

@avik-pal force-pushed the ap/intel branch 2 times, most recently from e0c5e6a to 537b738 on June 6, 2024 02:17
test/oneapi.jl: review comment (outdated, resolved)
@avik-pal force-pushed the ap/intel branch 4 times, most recently from 654a565 to 7265edf on June 6, 2024 03:14
@avik-pal force-pushed the ap/intel branch 2 times, most recently from 36914f9 to 520266e on June 6, 2024 03:27
test/amdgpu.jl: review comment (outdated, resolved)
test/cuda.jl: review comment (outdated, resolved)
test/metal.jl: review comment (outdated, resolved)
test/oneapi.jl: review comment (outdated, resolved)

codecov bot commented Jun 7, 2024

Codecov Report

Attention: Patch coverage is 91.19497% with 14 lines in your changes missing coverage. Please review.

Project coverage is 87.75%. Comparing base (8d1b76a) to head (bd7ce93).

Files                            Patch %   Lines
src/LuxDeviceUtils.jl            81.53%    12 Missing ⚠️
ext/LuxDeviceUtilsAMDGPUExt.jl   96.15%    1 Missing ⚠️
ext/LuxDeviceUtilsoneAPIExt.jl   93.75%    1 Missing ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##             main      #41       +/-   ##
===========================================
+ Coverage   42.22%   87.75%   +45.53%     
===========================================
  Files          11       13        +2     
  Lines         270      294       +24     
===========================================
+ Hits          114      258      +144     
+ Misses        156       36      -120     


@avik-pal force-pushed the ap/intel branch 5 times, most recently from 983075a to a6f9faf on June 7, 2024 05:54
@avik-pal force-pushed the ap/intel branch 2 times, most recently from 82a3315 to dba99cf on June 7, 2024 06:49
@avik-pal force-pushed the ap/intel branch 2 times, most recently from 8c2c3ae to 3636a82 on June 7, 2024 07:17
test/cuda.jl: review comment (outdated, resolved)
@avik-pal merged commit 82a99fc into main on Jun 7, 2024
20 checks passed
@avik-pal deleted the ap/intel branch on June 7, 2024 19:47