Support for explicit Einstein summation #91

Open
Yujie95 opened this issue May 25, 2020 · 5 comments

Comments

Yujie95 commented May 25, 2020

Hi Romeric,
thanks for your great work! I am using your library for my project. However, I ran into a problem when using Einstein summation for tensor contraction. I want to do a calculation similar to the following NumPy example in Python:

import numpy as np
d = np.ones([2, 10])
inv_cov = np.ones([2, 2])
result = np.einsum("kn, km, mn->n", d, inv_cov, d)

numpy.einsum supports explicit Einstein summation: the "->" identifier specifies the output subscript labels. In this explicit mode one can disable the summation over the index "n" (see the documentation: https://numpy.org/doc/stable/reference/generated/numpy.einsum.html).

In your library the output is deduced automatically and summation over a particular index cannot be disabled, which rules out a lot of options for many applications. I am wondering if you could also support this feature, or maybe it is already supported but just not mentioned in the Wiki?

Best regards,
Yujie

romeric commented May 26, 2020

NumPy's explicit mode is not conventional Einstein summation; it is provided for convenience. If you want to disable summation over an index, use two distinct indices instead of one and apply the diag or sum functions to the resulting output. In your specific case this would be:

Tensor<double,10> result = diag(einsum<Index<k,n>,Index<k,m>,Index<m,o>>(d, inv_cov, d));
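
For reference, here is a self-contained sketch of this workaround (an illustration, not taken from the Fastor docs). It assumes the single-header include Fastor/Fastor.h, that the index labels are declared as plain enumerators as in the Fastor wiki examples, and that the fill/print calls are only for demonstration:

#include <Fastor/Fastor.h>
using namespace Fastor;

// index labels used as Index<> template parameters
enum {k, n, m, o};

int main() {
    // same data as the NumPy example: d = np.ones([2, 10]), inv_cov = np.ones([2, 2])
    Tensor<double,2,10> d;       d.fill(1.0);
    Tensor<double,2,2>  inv_cov; inv_cov.fill(1.0);

    // Relabel the second d with a fresh index o so that n is not summed over.
    // Contracting over k and m gives a 10x10 tensor whose diagonal (n == o)
    // equals np.einsum("kn,km,mn->n", d, inv_cov, d).
    Tensor<double,10> result = diag(einsum<Index<k,n>, Index<k,m>, Index<m,o>>(d, inv_cov, d));

    print(result);
    return 0;
}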

Also, if you want to re-order a tensor into a specific shape (type), use permutation:

Tensor<double,10,2> e = einsum<Index<k,n>,Index<k,m>>(d, inv_cov);
Tensor<double,2,10> f = permutation<1,0>(e);

You can do almost everything manually with Fastor's einsum that you can do with NumPy's einsum.

I have updated the documentation on einsum here.

Please ask if you have any questions.

@rempferg

I am also looking for a C++ tensor library with Einstein summation and the possibility to exclude indices from summation. I often write NumPy code that eliminates another explicit loop this way. I agree that this is not canonical notation, but it is incredibly useful nevertheless.

romeric commented May 26, 2020

Okay, I do find the argument for letting the user specify the shape of the output tensor quite convincing, and we should/can support this relatively easily.

We can implement something like this, which will be closer to NumPy's style.

Tensor<double,2,3> a;
Tensor<double,4,3,5> b;

Using the default ordering:

// default ordering of the output tensor based on free indices i,k,l
Tensor<double,2,4,5> c = einsum<Index<i,j>,Index<k,j,l>>(a,b);
//            i,k,l

Using a user-specified ordering based on a new output index sequence, OIndex:

// user-specified ordering based on OIndex as the last argument of einsum
Tensor<double,5,2,4> c = einsum<Index<i,j>,Index<k,j,l>,OIndex<l,i,k>>(a,b);
//            l,i,k

What will be tricky with the current implementation (although possible with some work) is disabling summation over certain indices based on the output shape, as that pretty much means combining broadcasting and Einstein summation. I will see what I can do.

Yujie95 commented May 27, 2020

It would be very nice if you could help with that! I really appreciate it!
Looking forward to your update!

romeric commented Jun 1, 2020

The explicit einsum mode is now implemented in Fastor. This feature, however, requires C++17 support for the time being (it will be backported to C++14 in the future), so you need to use -std=c++17 or /std:c++17 depending on your compiler.

You can now control the type/shape of the output explicitly through the OIndex sequence. To cut it short and illustrate this concisely, here is the NumPy-to-Fastor cheat sheet:

NumPy                              Fastor
einsum("ii",A)                     einsum<Index<i,i>>(A)
einsum("ji",A)                     permute<Index<j,i>>(A)
einsum("ijk->kij",A)               permute<Index<k,i,j>>(A)
einsum("ij,jk",A,B)                einsum<Index<i,j>,Index<j,k>>(A,B)
einsum("ij,jk->ki",A,B)            einsum<Index<i,j>,Index<j,k>,OIndex<k,i>>(A,B)
einsum("mij,jk,lk->mli",A,B,C)     einsum<Index<m,i,j>,Index<j,k>,Index<l,k>,OIndex<m,l,i>>(A,B,C)

However, note that Fastor's einsum is still not fully equivalent to NumPy's einsum: in particular, explicitly disabling summation over an index is still not supported. That will take a bit more refactoring; I will keep this issue open for that.

The feature is available in the latest clone of Fastor and will be part of the V0.6.3 release.

@romeric romeric changed the title Support of explicit Einstain summation Support of explicit Einstein summation Aug 6, 2020
@romeric romeric changed the title Support of explicit Einstein summation Support for explicit Einstein summation Aug 6, 2020