Support for explicit Einstein summation #91
NumPy's explicit mode is not conventional Einstein summation; it is provided for convenience. If you want to disable summation over an index, use two distinct indices instead of one and take the diagonal of the result:

Tensor<double,10> result = diag(einsum<Index<k,n>,Index<k,m>,Index<m,o>>(d, inv_cov, d));

Also, if you want to re-order a tensor to a specific type, use

Tensor<double,10,2> e = einsum<Index<k,n>,Index<k,m>>(d, inv_cov);
Tensor<double,2,10> f = permutation<1,0>(e);

You can manually do almost everything with Fastor's einsum. I have updated the documentation on einsum here. Please ask if you have any questions.
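For reference, a self-contained sketch of the workaround above might look like the following. The shapes chosen for d and inv_cov, the fill() initialisation, and the print() calls are assumptions added for illustration; the diag and permutation calls mirror the snippet above.

```cpp
// Sketch of the diag/permutation workaround described above.
// Assumed shapes: d is 2x10 (index k over 2, n over 10), inv_cov is 2x2 (k,m).
#include <Fastor/Fastor.h>
using namespace Fastor;

enum { k, m, n, o };  // index labels used as template arguments to Index<...>

int main() {
    Tensor<double,2,10> d;       // e.g. 10 column vectors of dimension 2 (assumed)
    Tensor<double,2,2>  inv_cov; // e.g. an inverse covariance matrix (assumed)
    d.fill(1.0);
    inv_cov.fill(0.5);

    // The NumPy-style "keep index n" contraction is emulated by introducing a
    // second free index o for the last operand and taking the diagonal of the
    // resulting (n,o) tensor.
    Tensor<double,10> result = diag(einsum<Index<k,n>,Index<k,m>,Index<m,o>>(d, inv_cov, d));

    // Re-ordering the output of a contraction via permutation.
    Tensor<double,10,2> e = einsum<Index<k,n>,Index<k,m>>(d, inv_cov);
    Tensor<double,2,10> f = permutation<1,0>(e);

    print(result);
    print(f);
    return 0;
}
```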
I am also looking for a C++ tensor library with Einstein summation and the possibility to exclude indices from the summation. I often write NumPy code that eliminates another explicit loop that way. I agree that this is not canonical notation, but it is incredibly useful nevertheless.
Okay, so I do find the argument for letting the user specify the shape of the output tensor quite convincing, and we should/can support this relatively easily. We can implement something like the following, which will be closer to NumPy's style:

Tensor<double,2,3> a;
Tensor<double,4,3,5> b;

Using default ordering:

// default ordering of the output tensor based on the free indices i,k,l
Tensor<double,2,4,5> c = einsum<Index<i,j>,Index<k,j,l>>(a,b); // i,k,l

Using user-specified ordering based on a new output Index:

// user-specified ordering based on OIndex as the last argument of einsum
Tensor<double,5,2,4> c = einsum<Index<i,j>,Index<k,j,l>,OIndex<l,i,k>>(a,b); // l,i,k

What will be tricky with the current implementation (although possible with some work) is disabling the summation over certain indices based on the output shape, as that pretty much means combining broadcasting and Einstein summation together. I will see what I can do.
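A compilable version of the two variants above might look roughly like the sketch below. It assumes a Fastor build in which the proposed OIndex is available (a later comment in this thread says the feature was merged); the initialisation values and the element-correspondence note in the comments are additions for illustration only.

```cpp
// Compilable sketch of the default-ordering vs. OIndex-ordering variants above.
// Assumes a Fastor version in which the proposed OIndex is available.
#include <Fastor/Fastor.h>
using namespace Fastor;

enum { i, j, k, l };  // index labels

int main() {
    Tensor<double,2,3>   a;
    Tensor<double,4,3,5> b;
    a.fill(1.0);
    b.fill(2.0);

    // Default ordering: output indices follow the free indices i,k,l.
    Tensor<double,2,4,5> c_default = einsum<Index<i,j>,Index<k,j,l>>(a, b);

    // User-specified ordering: OIndex<l,i,k> reorders the output to l,i,k.
    Tensor<double,5,2,4> c_ordered = einsum<Index<i,j>,Index<k,j,l>,OIndex<l,i,k>>(a, b);

    // The two results hold the same values in different index orders,
    // e.g. c_default(0,1,2) corresponds to c_ordered(2,0,1).
    print(c_default);
    print(c_ordered);
    return 0;
}
```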
It would be very nice if you could help with that. I really appreciate it!
The explicit mode is in. You can now control the type/shape of the output explicitly through the OIndex argument to einsum, as proposed above.

However, note that Fastor's einsum still sums over repeated indices, so excluding an index from the summation is not covered by this. The feature is available through the latest clone of Fastor and will be part of the next release.
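Under the same assumptions about the OIndex syntax shown earlier in this thread, the permutation<1,0> step from the first comment could then presumably be expressed directly in the einsum call. A hedged sketch, with the shapes of d and inv_cov again assumed:

```cpp
// Sketch: using OIndex to request the output order (m,n) directly, instead of
// permuting the deduced (n,m) result afterwards. Assumes the OIndex feature
// described above is available.
#include <Fastor/Fastor.h>
using namespace Fastor;

enum { k, m, n };

int main() {
    Tensor<double,2,10> d;       // assumed shape: index k over 2, n over 10
    Tensor<double,2,2>  inv_cov; // assumed shape: k,m
    d.fill(1.0);
    inv_cov.fill(0.5);

    // Default (deduced) ordering: free indices n,m -> a 10x2 tensor.
    Tensor<double,10,2> e = einsum<Index<k,n>,Index<k,m>>(d, inv_cov);

    // Explicit ordering: OIndex<m,n> asks for a 2x10 tensor, replacing the
    // separate permutation<1,0>(e) step.
    Tensor<double,2,10> f = einsum<Index<k,n>,Index<k,m>,OIndex<m,n>>(d, inv_cov);

    print(e);
    print(f);
    return 0;
}
```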
Hi Romeric,

Thanks for your great work! I am using your library for my project. However, I ran into a problem when using Einstein summation for tensor contraction: I wanted to do a calculation similar to what NumPy allows in Python.

NumPy's einsum supports explicit Einstein summation through the "->" identifier, which specifies the output subscript labels. In this explicit mode, one can disable the summation over the index "n" (see the documentation: https://numpy.org/doc/stable/reference/generated/numpy.einsum.html).

As for your library, the output is automatically deduced and the summation over a given index cannot be disabled, which limits the options for many applications. I am wondering whether you could also support this functionality in your library, or maybe it is already supported but just not mentioned in the Wiki?
Best regards,
Yujie