
implement cov #11193

Closed
wants to merge 0 commits into from

Conversation

Hokchris
Contributor

Closes #6269

Implemented cov for the linear algebra section.

@ivy-leaves added the Array API and Ivy Functional API labels on Feb 28, 2023
of weights can be used to assign probabilities to observation vectors.
dtype
optional variable to set data-type of the result. By default, data-type
will have at least ``numpy.float64`` precision.

Contributor:

float64 would be more accurate as we are in an ivy function and the dtype will not necessarily be translated to numpy.
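
For illustration, that docstring fragment could be reworded roughly as follows (a sketch; the exact phrasing is the author's call):

```
dtype
    optional variable to set the data-type of the result. By default, the
    returned data-type will have at least ``float64`` precision.
```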

an array containing the covariance matrix of an input matrix, or the
covariance matrix of two variables. The returned array must have a
floating-point data type determined by Type Promotion Rules and must be
a square matrix of shape (N, N), where N is the number of rows in the

Contributor:

I think that "N is the number of variables in the input(s)" would be more accurate. rowvar will determine whether that's rows or columns in practice.
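
As a minimal sketch of what that means in practice (shown with numpy.cov, assuming ivy.cov follows the same semantics, which the parameter names in the diff suggest):

```python
import numpy as np

x = np.array([[0., 1., 2.],
              [2., 1., 0.]])

np.cov(x, rowvar=True).shape   # (2, 2): variables are the 2 rows
np.cov(x, rowvar=False).shape  # (3, 3): variables are the 3 columns
```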

/,
*,
rowVar: Optional[bool] = True,
bias: Optional[bool] = False,

Contributor:

The Optional type-hint is supposed to be used only for arguments that can be None.

ddof: int = None,
fweights: ivy.Array = None,
aweights: ivy.Array = None,
dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,

Contributor:

As per my previous comment, please add Optional wherever it's needed.
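
To make the two comments above concrete, here is a sketch of how the keyword arguments could be annotated; the positional part of the signature is an assumption, not taken from the diff:

```python
from typing import Optional, Union
import ivy

def cov(
    x1: Union[ivy.Array, ivy.NativeArray],
    x2: Optional[Union[ivy.Array, ivy.NativeArray]] = None,
    /,
    *,
    rowVar: bool = True,       # never None, so no Optional
    bias: bool = False,        # never None, so no Optional
    ddof: Optional[int] = None,
    fweights: Optional[ivy.Array] = None,
    aweights: Optional[ivy.Array] = None,
    dtype: Optional[Union[ivy.Dtype, ivy.NativeDtype]] = None,
) -> ivy.Array:
    ...
```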


Contributor:

The same goes for all the argument definitions (backends included).

floating-point data type determined by Type Promotion Rules and must be
a square matrix of shape (N, N), where N is the number of rows in the
input(s).
Examples

Contributor:

Please apply the same changes to this function's docstrings in the other scripts as well.

fweights=fweights,
aweights=aweights,
)


Contributor:

You're not handling out anywhere, and it is not supported by any native framework for this function. I think you should remove it from the definitions.


w = None
if fweights is not None:
    fweights = tf.cast(fweights, dtype=tf.float64)

Contributor:

Same comment as for the jax backend.


Contributor:

I'm not sure casting fweights and aweights makes any difference in the results. Is there a specific reason you have added it in all backends?

b: ivy.array([ 1., -1.]
[-1., 1.])
}
"""

Contributor:

It would be great if you added some examples using more of the supported arguments, not just x1 and x2.
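
For instance, a hypothetical docstring example exercising bias and fweights might look like this (the numbers are what numpy.cov returns for the same inputs, rounded; the formatting should follow the repo's docstring conventions):

```python
>>> x = ivy.array([[0., 1., 2.], [2., 1., 0.]])
>>> ivy.cov(x, bias=True)                      # normalise by N instead of N - 1
ivy.array([[ 0.667, -0.667],
           [-0.667,  0.667]])
>>> ivy.cov(x, fweights=ivy.array([1, 1, 2]))  # count the third observation twice
ivy.array([[ 0.917, -0.917],
           [-0.917,  0.917]])
```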

# modifiers: rowVar, bias, ddof
rowVar = draw(st.booleans())
bias = draw(st.booleans())
ddof = draw(helpers.ints(min_value=0, max_value=1))

Contributor:

I think we should test for ddof values greater than 1 as well.
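
(With numpy semantics, ddof=0 matches bias=True and ddof=1 is the unbiased default, so values above 1 exercise a genuinely different normalisation.) A sketch of the relaxed draw, mirroring the line above; the upper bound is a judgment call:

```python
ddof = draw(helpers.ints(min_value=0, max_value=3))  # also exercise ddof > 1
```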

large_abs_safety_factor=2,
safety_factor_scale="log",
),
test_gradients=st.just(False),

Contributor:

If you remove the out argument, you should add test_with_out=st.just(False), here.

@Hokchris
Contributor Author

Hokchris commented Mar 6, 2023

Hey! I just went through the requested changes and I think they should be done now. The one thing I wasn't sure about changing was some of the datatype casts, namely in the Jax, Numpy and Tensorflow backends.

Jax's default datatype is float32, and I get a datatype warning that it doesn't match the ground truth backend (Tensorflow and Numpy both give the same complaint, I think).
The three backends give an error if I get rid of the fweights datatype casting to int64 (I removed the aweights casts and they all still worked). Jax and Numpy both say that fweights must be integer (it should be, given the dtype specified in the test file, but it looks like fweights is being cast to floats when passed into the backends), and Tensorflow doesn't allow w * aweights on line 117 without the casting ("cannot compute Mul as input #1(zero-based) was expected to be a double tensor but is a float tensor").
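
(For reference, a minimal reproduction of the TensorFlow behaviour described above; the variable names and values are illustrative:)

```python
import tensorflow as tf

w = tf.constant([1., 2., 1.], dtype=tf.float64)  # e.g. fweights after casting to float64
aweights = tf.constant([0.5, 1.0, 0.5])          # float32 by default

# w * aweights                # raises: "expected to be a double tensor but is a float tensor"
w = w * tf.cast(aweights, tf.float64)            # casting to a common dtype avoids the error
```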

Is there another flag that I could use in the test file to prevent the datatype casting? The datatypes for fweights and aweights are currently hardcoded into the dtypes input array.

I assume I should make another PR, due to the recent force push, to resolve the conflicts above?

@AnnaTz
Contributor

AnnaTz commented Mar 6, 2023


Hi!
Since the castings are there to prevent certain errors you were getting, you shouldn't remove them, of course. I just wasn't sure why you had added them.
As far as I know, there is no flag to prevent that. That's why we are using the input_dtypes argument.
Yes, after the force-push all PRs need to be re-created.

@Hokchris
Contributor Author

Hokchris commented Mar 7, 2023

Hey, I think I might've accidentally closed this yesterday when trying to re-commit the changes to a new branch. Should I contact someone about issue 6269 (#6269) being closed by ivy-leaves yesterday (I assume automatically) or can these changes be done via this PR?

@AnnaTz
Contributor

AnnaTz commented Mar 7, 2023


No worries, I opened the issue again and re-linked it to the ToDo list.

@ivy-seed added the Stale label on Mar 15, 2023
@ivy-seed

This PR has been labelled as stale because it has been inactive for more than 7 days. If you would like to continue working on this PR, then please add another comment or this PR will be closed in 7 days.

@Hokchris
Contributor Author

Hey, just made a small change removing a flag from the test. After the previous force push I had an issue running the tests locally: regardless of the function I was trying to test, I got the assertion error "ground truth backend (tensorflow) returned array on device gpu:0 but target backend (torch) returned array on device cpu". I then set up the environment with a docker container and it worked as intended.

Are there still some changes I should make, or is this version fine? The tests look to pass locally, at least for me.

@AnnaTz
Contributor

AnnaTz commented Mar 20, 2023

Hi @Hokchris, unfortunately there's been another force push in our repository so you'll need to create a new PR. Apologies for the inconvenience.
If the tests are passing locally I guess the new PR will be able to get merged fairly quickly.

Labels: Array API, Ivy Functional API, Stale
Successfully merging this pull request may close these issues: cov
4 participants