feat(frontend): Added logsumexp function in paddlepaddle frontend #21725
Conversation
Thanks for contributing to Ivy! 😊👏
If you are working on an open task, please edit the PR description to link to the issue you've created. For more information, please check the ToDo List Issues Guide. Thank you 🤗
Hey @shaby112, you should edit the description of the PR to add the closing tag as
`close #issue_number`
to link the PR with the issue that you created. This will close the issue along with the pull request. Thanks!
Okay, I have edited the description. Thank you for letting me know!
```python
@with_supported_dtypes(
    {"2.5.1 and below": ("float32", "float64")}, "paddle"
)
```
The original function supports the following dtypes for `x`; please update the decorator:

> x (Tensor) – The input Tensor with data type float16, float32 or float64, which have no more than 4 dimensions.
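Applying that suggestion, the decorator would presumably become (a sketch of the suggested change only, not necessarily the final committed code):

```python
@with_supported_dtypes(
    {"2.5.1 and below": ("float16", "float32", "float64")}, "paddle"
)
```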
Acknowledged. The changes have been made.
@shaby112 I noticed that you wrote a frontend for `logsumexp`. I would advise you to adopt a compositional implementation similar to the one in jax's frontend for this function.
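For illustration, a compositional `logsumexp` along those lines might look like the sketch below. This is a sketch under assumptions, not the code from this PR: the decorator stack and the `to_ivy_arrays_and_back` import follow Ivy's usual paddle-frontend conventions, and the max-shift trick mirrors the approach in the JAX frontend the reviewer refers to.

```python
import ivy
from ivy.func_wrapper import with_supported_dtypes
from ivy.functional.frontends.paddle.func_wrapper import to_ivy_arrays_and_back


# Sketch only: decorator placement and the signature are assumptions based on
# paddle.logsumexp(x, axis=None, keepdim=False, name=None).
@with_supported_dtypes(
    {"2.5.1 and below": ("float16", "float32", "float64")}, "paddle"
)
@to_ivy_arrays_and_back
def logsumexp(x, axis=None, keepdim=False, name=None):
    # Shift by the per-axis maximum so exp() cannot overflow.
    shift = ivy.max(x, axis=axis, keepdims=True)
    summed = ivy.sum(ivy.exp(x - shift), axis=axis, keepdims=keepdim)
    # Add the shift back, reduced to the same shape as `summed`.
    return ivy.log(summed) + ivy.max(x, axis=axis, keepdims=keepdim)
```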
Sure, I will try my best.
I have changed the implementation; kindly review it.
I had some errors in the previous code that I submitted; I didn't test it properly. This implementation works as expected and passes all 5 tests as well. Kindly review this.
Kindly review this pull request. I would appreciate your valuable insights and thoughts!
Hi @shaby112, could you please resolve the merge conflicts?
```diff
 @handle_frontend_test(
-    fn_tree="paddle.tensor.math.amax",
+    fn_tree="paddle.tensor.math.logsumexp",
     dtype_and_x=helpers.dtype_and_values(
         available_dtypes=helpers.get_dtypes("valid"),
         num_arrays=2,
         allow_inf=False,
         shared_dtype=True,
```
Add `max_num_dims=4` in the decorator; the tests are failing because of it. The paddle docs say:

> x (Tensor) – The input Tensor with data type float16, float32 or float64, which have no more than 4 dimensions.
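Applied to the decorator above, the suggested constraint might look roughly like this (a sketch: the other arguments are kept as they appear in the diff, and only the `max_num_dims` bound is new):

```python
@handle_frontend_test(
    fn_tree="paddle.tensor.math.logsumexp",
    dtype_and_x=helpers.dtype_and_values(
        available_dtypes=helpers.get_dtypes("valid"),
        max_num_dims=4,  # paddle's logsumexp input has at most 4 dimensions
        num_arrays=2,
        allow_inf=False,
        shared_dtype=True,
    ),
)
```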
```python
    return out


@with_supported_dtypes(
```
Fix the indentation issues on Line 495.
Okay, I am doing that. Thank you for letting me know!
I fixed the indentation issue; however, I am still getting the same lint error. I apologize for taking so much of your time.
Hello @shaby112, please update your PR with the proposed changes, verify that all the tests pass, and let me know! Thanks!
Alright, the tests are running; I will let you know how it goes. Thanks!
@saeedashrraf, can you kindly review this? I would appreciate your input and insights.
@sagewhocodes, can you kindly review this PR? I would greatly appreciate your insights.
I closed it accidentally, reopening the PR!
Can you kindly review this, @sagewhocodes @saeedashrraf? I would greatly appreciate your insights!
Thank you for this PR; here are the CI results: This pull request does not result in any additional test failures. Congratulations!
LGTM!!
close #21617