Commit 22c0d50

Merge branch 'main' into add-resize

alec-kr authored Aug 29, 2023
2 parents 8bee326 + 7a048c1

Showing 262 changed files with 57,336 additions and 54,909 deletions.
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -26,6 +26,6 @@ repos:
# Exclude everything in frontends except __init__.py, and func_wrapper.py
exclude: 'ivy/functional/(frontends|backends)/(?!.*/func_wrapper\.py$).*(?!__init__\.py$)'
- repo: https://github.com/unifyai/lint-hook
rev: b9a103a9f7991fec0ed636a2bcd4497691761e78
rev: 27646397c5390f644a645f439535b1061b9c0105
hooks:
- id: ivy-lint
4 changes: 4 additions & 0 deletions docs/overview/contributing.rst
@@ -35,6 +35,9 @@ The contributor guide is split into the sections below, it's best to go from sta
|
| (g) :ref:`Helpful Resources`
| Resources you would find useful when learning Ivy 📖
|
| (h) :ref:`Error Handling`
| Common errors you may face while contributing to Ivy ❌
.. toctree::
:hidden:
@@ -48,6 +51,7 @@ The contributor guide is split into the sections below, it's best to go from sta
contributing/open_tasks.rst
contributing/applied_libraries.rst
contributing/helpful_resources.rst
contributing/error_handling.rst

**Video**

112 changes: 112 additions & 0 deletions docs/overview/contributing/error_handling.rst
@@ -0,0 +1,112 @@
Error Handling
==============

.. _`discord`: https://discord.gg/sXyFF8tDtm
.. _`pycharm channel`: https://discord.com/channels/799879767196958751/942114831039856730
.. _`docker channel`: https://discord.com/channels/799879767196958751/942114744691740772
.. _`pre-commit channel`: https://discord.com/channels/799879767196958751/982725464110034944
.. _`pip packages channel`: https://discord.com/channels/799879767196958751/942114789642080317
.. _`ivy tests channel`: https://discord.com/channels/799879767196958751/982738436383445073

This section, "Error Handling" aims to assist you in navigating through some common errors you might encounter while working with the Ivy's Functional API. We'll go through some common errors which you might encounter while working as a contributor or a developer.

#. This is the case where we pass in a dtype to `torch` which is not actually supported by the native torch framework itself, as in the failing example below.

.. code-block:: python
E RuntimeError: "logaddexp2_cpu" not implemented for 'Half'
E Falsifying example: test_logaddexp2(
E backend_fw='torch',
E on_device='cpu',
E dtype_and_x=(['float16', 'float16'],
E [array([-1.], dtype=float16), array([-1.], dtype=float16)]),
E test_flags=FunctionTestFlags(
E ground_truth_backend='tensorflow',
E num_positional_args=2,
E with_out=False,
E instance_method=False,
E test_gradients=False,
E test_compile=None,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E fn_name='logaddexp2',
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2BkAAMoBaaR2WAAAACVAAY=') as a decorator on your test case
#. This is the case where the value from the ground-truth backend (tensorflow) does not match the value from the backend under test (jax).
.. code-block:: python
E AssertionError: the results from backend jax and ground truth framework tensorflow do not match
E 0.25830078125!=0.258544921875
E
E
E Falsifying example: test_acosh(
E backend_fw='jax',
E on_device='cpu',
E dtype_and_x=(['float16'], [array(4., dtype=float16)]),
E test_flags=FunctionTestFlags(
E ground_truth_backend='tensorflow',
E num_positional_args=1,
E with_out=False,
E instance_method=False,
E test_gradients=True,
E test_compile=None,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E fn_name='acosh',
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2BAABYQwQgiAABDAAY=') as a decorator on your test case
#. This is a similar assertion to the one in point 2, with torch and the ground-truth tensorflow not matching; here, however, the matrices differ substantially, which points to an issue in the backends rather than a numerical instability:
.. code-block:: python
E AssertionError: the results from backend torch and ground truth framework tensorflow do not match
E [[1.41421356 1.41421356 1.41421356]
E [1.41421356 1.41421356 1.41421356]
E [1.41421356 inf 1.41421356]]!=[[1.41421356e+000 1.41421356e+000 1.41421356e+000]
E [1.41421356e+000 1.41421356e+000 1.41421356e+000]
E [1.41421356e+000 1.34078079e+154 1.41421356e+000]]
E
E
E Falsifying example: test_abs(
E backend_fw='torch',
E on_device='cpu',
E dtype_and_x=(['complex128'],
E [array([[-1.-1.00000000e+000j, -1.-1.00000000e+000j, -1.-1.00000000e+000j],
E [-1.-1.00000000e+000j, -1.-1.00000000e+000j, -1.-1.00000000e+000j],
E [-1.-1.00000000e+000j, -1.-1.34078079e+154j, -1.-1.00000000e+000j]])]),
E fn_name='abs',
E test_flags=FunctionTestFlags(
E ground_truth_backend='tensorflow',
E num_positional_args=1,
E with_out=False,
E instance_method=False,
E test_gradients=False,
E test_compile=None,
E as_variable=[False],
E native_arrays=[False],
E container=[False],
E ),
E )
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2ZkYAIiBiBgZIAAxqHEXsAAB7jUQAAAMtEAzQ==') as a decorator on your test case
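Each of these reports ends with a :code:`@reproduce_failure` hint from Hypothesis. As a minimal sketch of how to use it (the decorator must be added to the *exact* failing test, since the blob encodes the inputs drawn by that test's own strategies; the test name and strategy below are illustrative only):

.. code-block:: python

    from hypothesis import given, reproduce_failure, strategies as st

    # Sketch only: in practice the decorator goes on the existing failing
    # test, because the blob encodes draws from that test's strategies.
    # Remember to remove the decorator again before committing.
    @reproduce_failure('6.82.4', b'AXicY2BkAAMoBaaR2WAAAACVAAY=')
    @given(x=st.floats(width=16))
    def test_logaddexp2_repro(x):
        ...  # the real test body calls the relevant Ivy test helper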
**Note**
This section is specifically targeted towards dealing with the Ivy Functional API and the Ivy Experimental API.
**Round Up**
This should hopefully have given you an understanding of how to deal with common errors while working with the functional API.
If you have any questions, please feel free to reach out on `discord`_ in the `ivy tests channel`_, `pycharm channel`_, `docker channel`_, `pre-commit channel`_, or `pip packages channel`_, depending on the question!
14 changes: 7 additions & 7 deletions docs/overview/contributing/setting_up.rst
@@ -146,12 +146,12 @@ Using miniconda
pip install -r requirements/optional.txt
b. On M1 Mac, you will need to use the optional_m1_1 and optional_m1_2 requirements files. To install dependencies.
b. On M1 Mac, you will need to use the optional_apple_silicon_1 and optional_apple_silicon_2 requirements files to install the dependencies:

.. code-block:: none
pip install -r requirements/optional_m1_1.txt
pip install -r requirements/optional_m1_2.txt
pip install -r requirements/optional_apple_silicon_1.txt
pip install -r requirements/optional_apple_silicon_2.txt
Using venv
**********
@@ -224,12 +224,12 @@ This is a builtin package and doesn't require explicit installation.
PS: If the link expires at some point in the future, check http://nz2.archive.ubuntu.com/ubuntu/pool/main/o/openssl/?C=M;O=D for a valid one.

b. On M1 Mac, you will need to use the optional_m1_1 and optional_m1_2 requirements files. To install dependencies.
b. On M1 Mac, you will need to use the optional_apple_silicon_1 and optional_apple_silicon_2 requirements files to install the dependencies:

.. code-block:: none
pip install -r requirements/optional_m1_1.txt
pip install -r requirements/optional_m1_2.txt
pip install -r requirements/optional_apple_silicon_1.txt
pip install -r requirements/optional_apple_silicon_2.txt
#. Installing array API testing dependencies.

@@ -349,7 +349,7 @@ If Docker's latest version causes an error, try using an earlier version by visi

**Important Note**

When setting up on an M1 Mac, you would have to update the Dockerfile to install libraries from :code:`requirements/optional_m1_1.txt` and :code:`requirements/optional_m1_2.txt` instead of :code:`requirements/optional.txt`.
When setting up on an M1 Mac, you would have to update the Dockerfile to install libraries from :code:`requirements/optional_apple_silicon_1.txt` and :code:`requirements/optional_apple_silicon_2.txt` instead of :code:`requirements/optional.txt`.

**Video**

6 changes: 4 additions & 2 deletions docs/overview/deep_dive/containers.rst
@@ -204,7 +204,7 @@ The *nestable* behaviour is added to any function which is decorated with the `h
This wrapper causes the function to be applied at each leaf of any containers passed in the input.
More information on this can be found in the `Function Wrapping <https://github.com/unifyai/ivy/blob/b725ed10bca15f6f10a0e5154af10231ca842da2/docs/partial_source/deep_dive/function_wrapping.rst>`_ section of the Deep Dive.

Additionally, any nestable function which returns multiple arrays, will return the same number of containers for it's container counterpart.
Additionally, any nestable function which returns multiple arrays, will return the same number of containers for its container counterpart.
This property makes the function symmetric with regard to the input-output behavior, irrespective of whether :class:`ivy.Array` or :class:`ivy.Container` instances are used.
Any argument in the input can be replaced with a container without changing the number of inputs, and the presence or absence of ivy.Container instances in the input should not change the number of return values of the function.
In other words, if containers are detected in the input, then we should return a separate container for each array that the function would otherwise return.
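For instance, here is a small sketch of the nestable behaviour (the numpy backend is assumed purely for illustration):

.. code-block:: python

    import ivy

    ivy.set_backend("numpy")  # any backend works here

    c = ivy.Container(a=ivy.array([-2.0, 3.0]), b=ivy.array([5.0, -7.0]))
    # ivy.clip is nestable, so it is applied at each leaf of the container,
    # returning a container with the same structure.
    out = ivy.clip(c, 0.0, 4.0)
    # out.a -> ivy.array([0., 3.]), out.b -> ivy.array([4., 0.])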
@@ -246,8 +246,10 @@ The functions :func:`ivy.clip`, :func:`ivy.log`, :func:`ivy.sum` and :func:`ivy.

Therefore, our approach is to **not** wrap any compositional functions which are already *implicitly nestable* as a result of the *nestable* functions called internally.

**Explicitly Nestable Compositional Functions**

There may be some compositional functions which are not implicitly nestable for some reason, and in such cases adding the explicit `handle_nestable <https://github.com/unifyai/ivy/blob/5f58c087906a797b5cb5603714d5e5a532fc4cd4/ivy/func_wrapper.py#L407>`_ wrapping may be necessary.
One such example is the :func:`ivy.linear` function which is not implicitly nestable despite being compositional. This is because of the use of special functions like :func:`__len__` which is not nestable and can't be made nestable.
One such example is the :func:`ivy.linear` function which is not implicitly nestable despite being compositional. This is because of the use of special functions like :func:`__len__` and :func:`__list__` which, among other functions, are not nestable and can't be made nestable.
But we should try to avoid this, in order to make the flow of computation as intuitive to the user as possible.

When compiling the code, the computation graph is **identical** in either case, and there will be no implications on performance whatsoever.
30 changes: 19 additions & 11 deletions docs/overview/deep_dive/devices.rst
@@ -158,7 +158,7 @@ doesn't care about this, it moves all the tensors to the same device before perf
In Ivy, users can control the device on which the operation is to be executed using the `ivy.set_soft_device_mode`_ flag. There are two cases for this,
either the soft device mode is set to :code:`True` or :code:`False`.

1. When :code:`ivy.set_soft_device_mode(True)`:
**When ivy.set_soft_device_mode(True)**:

a. All the input arrays are moved to :code:`ivy.default_device()` while performing an operation. If the array is already present
in the default device, no device shifting is done.
@@ -174,7 +174,14 @@ are moved to :code:`ivy.default_device()` while performing :code:`ivy.add` opera
y = ivy.array([34], device="gpu:0")
ivy.add(x, y)
2. When :code:`ivy.set_soft_device_mode(False)`:
The priority of device shifting in this mode is as follows:

#. The ``device`` argument.
#. The device the arrays are on.
#. :code:`default_device`.


**When ivy.set_soft_device_mode(False)**:

a. If any of the input arrays are on a different device, a device exception is raised.

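For instance, the following sketch (assuming a machine that actually exposes a :code:`gpu:0` device) raises a device exception:

.. code-block:: python

    import ivy

    ivy.set_soft_device_mode(False)

    x = ivy.array([1], device="cpu")
    y = ivy.array([34], device="gpu:0")
    ivy.add(x, y)  # raises: the input arrays are on different devices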
@@ -226,18 +233,16 @@ The code to handle all these cases is present inside the `@handle_device_shifting`_
all the functions that accept at least one array as input (except mixed and compositional functions) in the `ivy.functional.ivy`_ submodule. The decorator calls
the :code:`ivy.handle_soft_device_variable` function under the hood to handle device shifting for each backend.
**Soft Device Handling Function**
The priority of device shifting is following in this mode:
There is a backend-specific implementation of the :code:`ivy.handle_soft_device_variable` function for numpy and tensorflow. The reason is that numpy
needs no device shifting, as it only supports the 'cpu' device, whereas tensorflow automatically moves inputs to 'gpu' if one is available, with no way
to turn this off globally.
#. The ``device`` argument.
#. :code:`default_device`
The `numpy soft device handling function`_ simply returns the inputs of the operation as they are, without making any changes,
whereas the `tensorflow soft device handling function`_ moves the input arrays to :code:`ivy.default_device()` using the
`tf.device`_ context manager.
**Soft Device Handling Function**
This is a function which plays a crucial role in the :code:`handle_device_shifting` decorator. The purpose of this function is to ensure that the function :code:`fn` passed to it is executed on the device passed in :code:`device_shifting_dev` argument. If it is passed as :code:`None`, then the function will be executed on the default device.
For the rest of the frameworks, the `ivy implementation`_ of the soft device handling function is used, which loops through
the inputs of the function and moves the arrays to :code:`ivy.default_device()`, if they are not already on that device.
Most of the backend implementations are very similar: first they move all the arrays to the desired device using :code:`ivy.nested_map`, and then they execute the function inside the device handling context manager of that native framework. The purpose of executing the function inside the context manager is to handle functions that do not accept any arrays; in that case, the only way to let the native framework know which device the function should be executed on is through the context manager. This approach is used in most backend implementations, the exceptions being tensorflow, where we don't have to move all the tensors to the desired device because using its context manager is enough (it moves all the tensors itself internally), and numpy, since it only accepts `cpu` as a device.
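As a rough sketch of that pattern (with an assumed helper name and simplified logic, not Ivy's actual backend source), a torch version might look like this:

.. code-block:: python

    import torch

    # Move array inputs to the target device, then run `fn` inside torch's
    # device context manager (torch >= 2.0) so that functions which receive
    # no arrays are also placed on that device.
    def handle_soft_device_variable_sketch(fn, device, *args, **kwargs):
        args = [a.to(device) if isinstance(a, torch.Tensor) else a for a in args]
        kwargs = {
            k: v.to(device) if isinstance(v, torch.Tensor) else v
            for k, v in kwargs.items()
        }
        with torch.device(device):
            return fn(*args, **kwargs)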
**Forcing Operations on User Specified Device**
@@ -258,6 +263,9 @@ context manager. So from now on, all the operations will be executed on 'cpu' de
On exiting the context manager (the `__exit__`_ method), the default device and soft device mode are reset to their previous state using `ivy.unset_default_device()`_ and
`ivy.unset_soft_device_mode()`_ respectively.
There are some functions (mostly creation functions) which accept a :code:`device` argument, used to specify the device on which the function is executed and the device of the returned array. :code:`handle_device_shifting` deals with this argument by first checking if it exists, and then setting :code:`device_shifting_dev` to it, which is then passed to the :code:`handle_soft_device_variable` function depending on the :code:`soft_device` mode.
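For instance (a small sketch; the backend choice is only for illustration):

.. code-block:: python

    import ivy

    ivy.set_backend("torch")

    # The `device` argument of a creation function determines both where
    # the operation is executed and where the returned array lives.
    x = ivy.zeros((2, 3), device="cpu")
    print(ivy.dev(x))  # cpu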
**Round Up**
This should have hopefully given you a good feel for devices, and how these are handled in Ivy.
8 changes: 6 additions & 2 deletions docs/overview/deep_dive/ivy_frontends_tests.rst
@@ -629,7 +629,11 @@ for example, :code:`ndarray.__add__` would expect an array as input, despite the
- :code:`init_tree` A full path to the initialization function.
- :code:`method_name` The name of the method to test.
:func:`helpers.test_frontend_method` is used to test frontend instance methods. It is used in the same way as :func:`helpers.test_frontend_function`.
:func:`helpers.test_frontend_method` is used to test frontend instance methods. It is used in the same way as :func:`helpers.test_frontend_function`. A few important arguments for this function are as follows:
- :code:`init_input_dtypes` Input dtypes of the arguments with which the array is initialized.
- :code:`init_all_as_kwargs_np` The data to be passed when initializing; this is a dictionary in which the numpy array containing the data is passed under the :code:`data` key.
- :code:`method_input_dtypes` The input dtypes of the arguments which are to be passed to the instance method after the initialization of the array.
- :code:`method_all_as_kwargs_np` All the arguments which are to be passed to the instance method.
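Putting these together, a call to :func:`helpers.test_frontend_method` loosely follows the shape below (an illustrative sketch; the argument values are hypothetical, and the remaining arguments are assumed to be supplied by the test's decorator and fixtures):

.. code-block:: python

    # Illustrative sketch only -- the argument values here are hypothetical.
    helpers.test_frontend_method(
        init_input_dtypes=input_dtype,            # dtypes used to build the array
        init_all_as_kwargs_np={"data": x[0]},     # initialization data, under "data"
        method_input_dtypes=input_dtype,          # dtypes of the method's arguments
        method_all_as_kwargs_np={"other": x[1]},  # arguments passed to the method
        frontend_method_data=frontend_method_data,
        frontend=frontend,
        on_device=on_device,
    )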
Frontend Instance Method Test Examples
@@ -822,4 +826,4 @@ If you have any questions, please feel free to reach out on `discord`_ in the `i
<iframe width="420" height="315" allow="fullscreen;"
src="https://www.youtube.com/embed/iS7QFsQa9bI" class="video">
</iframe>
</iframe>
4 changes: 2 additions & 2 deletions install_dependencies.sh
@@ -1,7 +1,7 @@
pip install -r requirements/requirements.txt
if [[ $(arch) == 'arm64' ]]; then
pip install -r requirements/optional_m1_1.txt
pip install -r requirements/optional_m1_2.txt
pip install -r requirements/optional_apple_silicon_1.txt
pip install -r requirements/optional_apple_silicon_2.txt
else
pip install -r requirements/optional.txt
fi
2 changes: 1 addition & 1 deletion ivy/__init__.py
@@ -758,7 +758,7 @@ class Node(str):
add_ivy_container_instance_methods,
)
from .data_classes.nested_array import NestedArray
from .data_classes.FactorizedTensor import TuckerTensor, CPTensor
from .data_classes.factorized_tensor import TuckerTensor, CPTensor
from ivy.utils.backend import (
current_backend,
compiled_backends,
2 changes: 1 addition & 1 deletion ivy/data_classes/__init__.py
@@ -1,4 +1,4 @@
from . import array
from . import container
from . import nested_array
from . import FactorizedTensor
from . import factorized_tensor
44 changes: 44 additions & 0 deletions ivy/data_classes/array/experimental/elementwise.py
@@ -1020,3 +1020,47 @@ def digamma(
ivy.array([-0.7549271 0.92278427 0.9988394])
"""
return ivy.digamma(self._data, out=out)

def sparsify_tensor(
self: ivy.Array,
card: int,
/,
*,
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""
ivy.Array class method variant of ivy.sparsify_tensor. This method simply wraps
the function, and so the docstring for ivy.sparsify_tensor also applies to this
method with minimal changes.
Parameters
----------
self : array
The tensor to sparsify.
card : int
The number of values to keep.
out : array, optional
Optional output array, for writing the result to.
Returns
-------
ret : array
The sparsified tensor.
Examples
--------
>>> x = ivy.arange(100)
>>> x = ivy.reshape(x, (10, 10))
>>> x.sparsify_tensor(10)
ivy.array([[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
[90, 91, 92, 93, 94, 95, 96, 97, 98, 99]])
"""
return ivy.sparsify_tensor(self._data, card, out=out)
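For reference, here is a rough sketch of what :code:`ivy.sparsify_tensor` computes, inferred from the docstring example above (this is not the actual implementation): keep the :code:`card` largest-magnitude entries and zero out the rest.

.. code-block:: python

    import ivy

    # Rough sketch inferred from the docstring example; not Ivy's actual
    # implementation. Ties at the threshold may keep more than `card` entries.
    def sparsify_tensor_sketch(tensor, card):
        flat = ivy.reshape(tensor, (-1,))
        if card >= int(flat.shape[0]):
            return tensor
        # The card-th largest magnitude acts as the keep/zero threshold.
        threshold = ivy.sort(ivy.abs(flat), descending=True)[card - 1]
        return ivy.where(ivy.abs(tensor) >= threshold, tensor, ivy.zeros_like(tensor))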