Refactor ivy.clip function #21997
Conversation
Thanks for contributing to Ivy! 😊👏
Hey @NripeshN! Thanks for working on this issue. The way you have tried to fix this problem is not the correct approach. Since `clip` is a primary function, you need to make the change in all the backend implementations of `clip`. For example, numpy already supports one of `x_min` or `x_max` being `None`, so no update is needed there, but the jax backend is implemented using `jnp.where`, so you should first check that both values are not `None` at the same time, like:

```python
if x_min is None and x_max is None:
    raise ValueError("At least one of x_min or x_max must be provided")
```

And then clip the values like:

```python
if x_max is not None:
    x = jnp.where(x > x_max, x_max, x)
if x_min is not None:
    x = jnp.where(x < x_min, x_min, x)
return x
```

You can use the same idea to update the other backend implementations as well if they don't natively support one of the arguments being `None`. Finally, you should update the test of `ivy.clip` such that one of `x_min` or `x_max` can be `None`.
Feel free to request another review when you have made the changes.
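Putting the two snippets from the review together, here is a minimal runnable sketch of the pattern (shown with numpy, since `np.where` mirrors `jnp.where` for this purpose; the function name and signature are illustrative, not Ivy's actual backend code):

```python
import numpy as np


def clip_with_optional_bounds(x, x_min=None, x_max=None):
    # Illustrative stand-in for the jax backend pattern described above:
    # reject the all-None case, then apply each bound only if given.
    if x_min is None and x_max is None:
        raise ValueError("At least one of x_min or x_max must be provided")
    if x_max is not None:
        x = np.where(x > x_max, x_max, x)
    if x_min is not None:
        x = np.where(x < x_min, x_min, x)
    return x


print(clip_with_optional_bounds(np.array([1, 10, 13]), x_max=11))  # [ 1 10 11]
```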
ivy-gardener
If you are working on an open task, please edit the PR description to link to the issue you've created. For more information, please check the ToDo List Issues Guide. Thank you 🤗
Could you please update the type hints in the numpy backend and update the test to allow one of `x_min` or `x_max` to be `None`?
Co-authored-by: Anwaar Khalid <[email protected]>
Awesome work so far! Would you please update the tests as well :)
ivy_tests/test_ivy/test_functional/test_core/test_manipulation.py
ivy-gardener
Using `tf.experimental.numpy` when one of the values is `None`, since `tf.clip_by_value` doesn't accept `None`.

Hi @hello-fri-end, tensorflow differs from all other frameworks when
Co-authored-by: Shreyansh Bardia <[email protected]>
Hey @ShreyanshBardia! Thanks for pointing this out 🚀 It seems that

```python
import numpy as np
import tensorflow as tf
import torch

print(tf.clip_by_value(tf.constant([1, 10, 13]), 12, 11))
print(np.clip([1, 10, 13], 12, 11))
print(torch.clamp(torch.tensor([1, 10, 13]), min=12, max=11))
```

Would you please update our implementations to match this behavior? @NripeshN In the tensorflow backend implementation, we can use
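For reference, numpy (and torch) resolve `x_min > x_max` in favour of the upper bound, because `np.clip` is equivalent to `np.minimum(a_max, np.maximum(a, a_min))`; a quick check with numpy alone:

```python
import numpy as np

# numpy applies the lower bound first, then the upper bound, so when
# x_min > x_max every element ends up at x_max:
out = np.clip([1, 10, 13], 12, 11)
print(out)  # [11 11 11]
```

tensorflow, by contrast, clips everything to the lower bound in this situation, which is the discrepancy being discussed.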
Co-authored-by: ShreyanshBardia <[email protected]>
Hey @NripeshN! Awesome work so far, just some minor comments this time. Will be happy to merge once these are resolved :)
```diff
 if tf.size(x) == 0:
     ret = x
 elif x.dtype == tf.bool:
     ret = tf.clip_by_value(tf.cast(x, tf.float16), x_min, x_max)
     ret = tf.cast(ret, x.dtype)
 else:
-    ret = tf.clip_by_value(x, x_min, x_max)
+    ret = tf.experimental.numpy.clip(x, x_min, x_max)
```
I'm not sure if I follow why we need an else case here, since the `x_min is not None and x_max is not None and x_min > x_max` case (i.e. `not cond`) can be handled by `tf.experimental.numpy.clip` as well, right? Maybe I'm missing something here..

Also, it looks like the frontend tests are failing because of `cond`, with the following error:

```
E ValueError: The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()
```

This is because `x_min` and `x_max` can also be arrays.
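The error is easy to reproduce with plain numpy; reducing the element-wise comparison with `.any()` avoids the ambiguity (a small sketch, with made-up bound values):

```python
import numpy as np

x_min = np.array([1, 5])
x_max = np.array([3, 4])

# Comparing two arrays yields an element-wise boolean array, so using it
# directly as `if cond:` raises the ValueError quoted above; reduce it
# with .any() (or .all()) to get a single truth value.
cond = x_min > x_max
print(cond)        # [False  True]
print(cond.any())  # True
```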
Oh right, sorry, I missed the fact that they could be arrays too; I think `.any()` should be able to resolve this. `tf.experimental.numpy.clip` also follows `tf.clip_by_value` behaviour when `x_min > x_max`, which we discussed in the above comments. I think the only difference is the numpy namespace and the fact that it can accept `None`. You can verify this from the following code:

```python
>>> print(tf.clip_by_value(tf.constant([1,10,13]),12,11))
tf.Tensor([12 12 12], shape=(3,), dtype=int32)
>>> print(tf.experimental.numpy.clip(tf.constant([1,10,13]),12,11))
tf.Tensor([12 12 12], shape=(3,), dtype=int32)
```
Ah, that's interesting. I expected `tf.experimental.numpy.clip` to behave exactly the same as `np.clip`.
```python
min_val = draw(
    st.one_of(st.just(None), helpers.array_values(dtype=dtype[0], shape=()))
)
max_val = draw(
```
Can we update `min_val` and `max_val` such that these can be arrays as well? They can be of any shape that is broadcastable to the shape of the input `x`. You should use the broadcast helper functions in the Array helpers section here: https://unify.ai/docs/ivy/docs/helpers/ivy_tests.test_ivy.helpers.hypothesis_helpers/ivy_tests.test_ivy.helpers.hypothesis_helpers.array_helpers.html#ivy_tests.test_ivy.helpers.hypothesis_helpers.array_helpers.mutually_broadcastable_shapes
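To illustrate what "broadcastable to the shape of `x`" means for the bounds, a small numpy example (shapes chosen arbitrarily for illustration):

```python
import numpy as np

x = np.arange(6).reshape(2, 3)   # [[0 1 2], [3 4 5]]
x_min = np.array([1, 1, 1])      # shape (3,): one lower bound per column
x_max = np.array([[4], [4]])     # shape (2, 1): one upper bound per row

# Both bound shapes broadcast against x's (2, 3) shape.
out = np.clip(x, x_min, x_max)
print(out)  # [[1 1 2], [3 4 4]]
```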
Just left a comment that will fix your failing numpy test. Another thing to look into is the following error from the frontend tests:

```
promoted_type = ivy.as_native_dtype(ivy.promote_types(x.dtype, x_min.dtype))
E AttributeError: 'float' object has no attribute 'dtype'
```

Since `x_min` and `x_max` can also be `Numbers`, they won't necessarily have the `dtype` attribute. Would you please update all the implementations to check whether this attribute is present before trying to access it?
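A minimal sketch of the kind of guard being asked for, using numpy's promotion utilities (the helper name is illustrative, not Ivy's actual code):

```python
import numpy as np


def promoted_dtype(x, bound):
    # Bounds may be arrays (which carry a .dtype) or plain Python numbers.
    if hasattr(bound, "dtype"):
        return np.promote_types(x.dtype, bound.dtype)
    # np.result_type accepts Python scalars directly, so no .dtype needed.
    return np.result_type(x.dtype, bound)


x = np.array([1, 2, 3], dtype=np.int32)
print(promoted_dtype(x, 2.5))                            # float64
print(promoted_dtype(x, np.array([0], dtype=np.int64)))  # int64
```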
Co-authored-by: Anwaar Khalid <[email protected]>
ivy-gardener

ivy-gardener
Looks like the failures in the CI are unrelated to this function now. Feel free to merge it - thanks @NripeshN @ShreyanshBardia 🚀
Co-authored-by: Anwaar Khalid <[email protected]>
Co-authored-by: Shreyansh Bardia <[email protected]>
Co-authored-by: ShreyanshBardia <[email protected]>
No description provided.