-
I have some tensors with larger numbers of dimensions, and I noticed that permute support ends at 4-D tensors. Other than forward compatibility with optimizations on ggml_permute, is there any reason I can't just permute the tensor in-place? I noticed that ggml_permute involves a copy, though I didn't dig far enough to tell how shallow or deep that copy is (or whether there is a real cost to the alloc/free associated with it). I will use the answer to decide whether to implement a higher-dimensional permute operation under the ggml API, or a small function specific to my usage that permutes the dims in-place. Thanks!
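For concreteness, here is a minimal, hypothetical sketch of what I mean by permuting the dims in-place: swapping the tensor's `ne`/`nb` entries directly instead of allocating a view. The helper name `permute_dims_inplace` is made up, and it assumes no other view depends on the tensor's current layout.

```c
#include "ggml.h"

// Hypothetical helper: permute a tensor's logical dimensions in-place by
// swapping its shape (ne) and stride (nb) entries. No tensor data moves.
// perm[i] is the source axis that becomes axis i of the result.
// Assumes no other view relies on t's current layout.
static void permute_dims_inplace(struct ggml_tensor * t, const int perm[GGML_MAX_DIMS]) {
    int64_t ne[GGML_MAX_DIMS];
    size_t  nb[GGML_MAX_DIMS];
    for (int i = 0; i < GGML_MAX_DIMS; ++i) {
        ne[i] = t->ne[perm[i]];
        nb[i] = t->nb[perm[i]];
    }
    for (int i = 0; i < GGML_MAX_DIMS; ++i) {
        t->ne[i] = ne[i];
        t->nb[i] = nb[i];
    }
}
```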
-
That's already the way ggml_permute works: it creates a view of the original tensor, but it doesn't copy the tensor data. A new ggml_tensor struct is allocated while doing so, but this is a relatively small object.
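For illustration, a minimal sketch of that behavior (the function name is made up, and it assumes `ctx` is an initialized `ggml_context`):

```c
#include "ggml.h"

// Demonstrates that ggml_permute returns a view: no tensor data is copied,
// only a small ggml_tensor struct is allocated for the permuted layout.
static void permute_is_a_view(struct ggml_context * ctx) {
    struct ggml_tensor * a = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, 8, 16, 4, 2);

    // Swap the first two axes; p's ne/nb describe the permuted layout,
    // but p views a's data rather than owning a copy of it.
    struct ggml_tensor * p = ggml_permute(ctx, a, 1, 0, 2, 3);

    // A real data copy only happens if you ask for one, e.g. to get a
    // contiguous layout when the graph is computed:
    struct ggml_tensor * q = ggml_cont(ctx, p);
    (void) q;
}
```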
-
Thanks! As a follow-up: if I'm writing permutations for more than 4 dimensions, do you think the convention would be to provide a new operator, or to handle it directly as a view? I noticed that N-d permutations would be a bit problematic to implement.
ggml tensors only support 4 dimensions, so you may have to find another way to do what you want. Since every op in ggml works on at most 4 dimensions, you should be able to merge some of the dimensions into a single one to achieve the same result.
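A hedged sketch of the folding idea (the shapes are illustrative and the helper name is made up; `ggml_new_tensor_4d` and `ggml_permute` are real ggml calls):

```c
#include "ggml.h"

// Stores a conceptual 5-D shape {a, b, c, d, e} in a 4-D ggml tensor by
// folding the two innermost axes into one: ne = {a*b, c, d, e}. This works
// only when the permutation never separates the folded axes; they move
// together as a single dimension.
static struct ggml_tensor * permute_5d_via_fold(struct ggml_context * ctx) {
    const int64_t a = 2, b = 3, c = 4, d = 5, e = 6;

    struct ggml_tensor * t = ggml_new_tensor_4d(ctx, GGML_TYPE_F32, a*b, c, d, e);

    // The 5-D permutation (a,b,c,d,e) -> (a,b,d,c,e) keeps a and b adjacent,
    // so it reduces to swapping axes 1 and 2 of the folded 4-D tensor:
    return ggml_permute(ctx, t, 0, 2, 1, 3);
}
```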