-
Yes, your summary is accurate. However, I want to point out that you don't necessarily need to use cupy to wrap the views. You can still use numpy by creating a mirror view:

```python
view = kokkos.array(..., space=kokkos.CudaSpace, ...)

# create a mirror on the host
mirror = view.create_mirror_view()

# numpy wrapper around the same memory allocation
arr = np.array(mirror, copy=False)

# modify the numpy array
arr[0] = ...

# the changes to the numpy array 'arr' will be present in 'mirror' since
# they both reference the same memory allocation, so you can now just
# deep-copy the mirror back into the view
view.deep_copy(mirror)
```
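The zero-copy aliasing that makes this workflow possible can be demonstrated with plain numpy, no kokkos installation required. In this sketch a host-resident numpy array stands in for the object that `view.create_mirror_view()` would return:

```python
import numpy as np

# stand-in for the host mirror view; in the real workflow this would be
# the object returned by view.create_mirror_view()
mirror = np.zeros(4)

# wrap the same memory allocation without copying
arr = np.array(mirror, copy=False)

# writes through the wrapper are visible in the original buffer
arr[0] = 42.0
print(mirror[0])                      # 42.0
print(np.shares_memory(arr, mirror))  # True
```

Because `arr` and `mirror` alias the same buffer, no data movement happens until the final `deep_copy` back to device memory.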
-
Hey Jonathan,
This module looks very promising; great work! I have a few questions about your vision/direction for it.
Based on what I see so far, one should be able to write cupy/numpy-agnostic code very easily. As in:

1. Create "arrays", either on the Python side via numpy/cupy, via the kokkos module with kokkos.array, or through a bound C++ function like your generate_views.
2. If numpy was used, the data resides in Host/CudaUVMSpace; if cupy was used, the data resides in Cuda/UVM space...
   or
   If kokkos.array/generate_view was used with HostSpace, wrap the View with numpy, or alternatively with cupy if CudaSpace was used.
3. Do some calculations on the Python side and the view data changes; do some calculations with Kokkos and the Python side sees them.
That makes writing large-scale Python code really appealing: a single variable can toggle between HostSpace and CudaSpace Kokkos calculations (with the heavy calculations done through the C++ Kokkos code).
Is that an accurate summary of at least one use case you envision?
Again, awesome stuff, I'll be following along.
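The single-variable toggle described above can be sketched in plain Python. Note that `get_array_module` is a hypothetical helper written for illustration, not part of the kokkos module; it simply picks numpy or cupy at runtime and falls back to numpy when cupy is unavailable:

```python
import numpy as np

def get_array_module(use_gpu: bool):
    """Hypothetical helper: choose the array library at runtime.

    Falls back to numpy when cupy (or a GPU) is unavailable.
    """
    if use_gpu:
        try:
            import cupy
            return cupy
        except ImportError:
            pass
    return np

# one flag toggles between host (numpy) and device (cupy) arrays
xp = get_array_module(use_gpu=False)
a = xp.arange(5, dtype=xp.float64)
b = xp.sqrt(a)  # the same call works for numpy and cupy
```

Because numpy and cupy share most of their API surface, code written against `xp` stays agnostic to where the data actually lives, which is exactly what makes the HostSpace/CudaSpace toggle attractive.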
kyle