feat(datasets): Add rioxarray and RasterDataset (kedro-org#355)
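The headline change adds a rioxarray-backed dataset for GeoTIFF data. As a rough illustration only — class and argument names here are hypothetical, not the merged implementation — such a dataset typically wraps `rioxarray.open_rasterio` for loading and the `.rio.to_raster` accessor for saving:

```python
from pathlib import PurePosixPath


class RasterDatasetSketch:
    """Illustrative sketch of a GeoTIFF dataset backed by rioxarray."""

    def __init__(self, *, filepath, load_args=None, save_args=None):
        self._filepath = PurePosixPath(filepath)
        self._load_args = load_args or {}
        self._save_args = save_args or {}

    def _load(self):
        # deferred import so the sketch can be read without rioxarray installed
        import rioxarray

        return rioxarray.open_rasterio(str(self._filepath), **self._load_args)

    def _save(self, data):
        # DataArrays gain the .rio accessor once rioxarray is imported
        data.rio.to_raster(str(self._filepath), **self._save_args)

    def _describe(self):
        return {"filepath": str(self._filepath), "load_args": self._load_args}
```

In a real Kedro dataset these `_load`/`_save` hooks would live on an `AbstractVersionedDataset` subclass and be reachable from a `catalog.yml` entry.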
* refactor(datasets): deprecate "DataSet" type names (#328)

* refactor(datasets): deprecate "DataSet" type names (api)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (biosequence)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (dask)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (databricks)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (email)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (geopandas)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (holoviews)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (json)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (matplotlib)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (networkx)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.csv_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.deltatable_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.excel_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.feather_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.gbq_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.generic_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.hdf_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.json_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.parquet_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.sql_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pandas.xml_dataset)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pickle)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (pillow)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (plotly)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (polars)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (redis)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (snowflake)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (spark)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (svmlight)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (tensorflow)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (text)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (tracking)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (video)

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (yaml)

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): ignore TensorFlow coverage issues

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>
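The deprecation commits above rename every `*DataSet` class to `*Dataset` while keeping the old names importable. A minimal sketch of the alias pattern such a rename typically uses (names illustrative, not the exact kedro-datasets code):

```python
import warnings


class CSVDataset:
    """New-style name."""

    def __init__(self, *, filepath):
        self._filepath = filepath


class CSVDataSet(CSVDataset):
    """Deprecated alias kept so existing imports keep working."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "CSVDataSet has been renamed to CSVDataset, and the alias "
            "will be removed in a future release.",
            DeprecationWarning,
            stacklevel=2,
        )
        super().__init__(*args, **kwargs)
```

Because the alias subclasses the new class, `isinstance` checks and catalog entries referencing the old name continue to work during the deprecation window.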

* added basic code for geotiff

Signed-off-by: tgoelles <[email protected]>

* renamed to xarray

Signed-off-by: tgoelles <[email protected]>

* renamed to xarray

Signed-off-by: tgoelles <[email protected]>

* added load and self args

Signed-off-by: tgoelles <[email protected]>

* only local files

Signed-off-by: tgoelles <[email protected]>

* added empty test

Signed-off-by: tgoelles <[email protected]>

* added test data

Signed-off-by: tgoelles <[email protected]>

* added rioxarray requirements

Signed-off-by: tgoelles <[email protected]>

* reformat with black

Signed-off-by: tgoelles <[email protected]>

* rioxarray 0.14

Signed-off-by: tgoelles <[email protected]>

* rioxarray 0.15

Signed-off-by: tgoelles <[email protected]>

* rioxarray 0.12

Signed-off-by: tgoelles <[email protected]>

* rioxarray 0.9

Signed-off-by: tgoelles <[email protected]>

* fixed dataset typo

Signed-off-by: tgoelles <[email protected]>

* fixed docstring for sphinx

Signed-off-by: tgoelles <[email protected]>

* run black

Signed-off-by: tgoelles <[email protected]>

* sort imports

Signed-off-by: tgoelles <[email protected]>

* class docstring

Signed-off-by: tgoelles <[email protected]>

* black

Signed-off-by: tgoelles <[email protected]>

* fixed pylint

Signed-off-by: tgoelles <[email protected]>

* added release notes

Signed-off-by: tgoelles <[email protected]>

* added yaml example

Signed-off-by: tgoelles <[email protected]>

* improve testing WIP

Signed-off-by: tgoelles <[email protected]>

* basic test success

Signed-off-by: tgoelles <[email protected]>

* test reloaded

Signed-off-by: tgoelles <[email protected]>

* test exists

Signed-off-by: tgoelles <[email protected]>

* added version

Signed-off-by: tgoelles <[email protected]>

* basic test suite

Signed-off-by: tgoelles <[email protected]>

* run black

Signed-off-by: tgoelles <[email protected]>

* added example and test it

Signed-off-by: tgoelles <[email protected]>

* deleted duplications

Signed-off-by: tgoelles <[email protected]>

* fixed position of example

Signed-off-by: tgoelles <[email protected]>

* black

Signed-off-by: tgoelles <[email protected]>

* style: Introduce `ruff` for linting in all plugins. (#354)

Signed-off-by: Merel Theisen <[email protected]>

* feat(datasets): create custom `DeprecationWarning` (#356)

* feat(datasets): create custom `DeprecationWarning`

Signed-off-by: Deepyaman Datta <[email protected]>

* feat(datasets): use the custom deprecation warning

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): show Kedro's deprecation warnings

Signed-off-by: Deepyaman Datta <[email protected]>

* fix(datasets): remove unused imports in test files

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>
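#356 introduces a custom warning class so the library's deprecation messages stay visible — Python hides plain `DeprecationWarning` by default outside `__main__`. A hedged sketch of the idea (function names are illustrative):

```python
import warnings


class KedroDeprecationWarning(DeprecationWarning):
    """Custom class so the project's deprecations can be filtered selectively."""


# surface our own deprecation warnings even where DeprecationWarning is ignored
warnings.simplefilter("default", KedroDeprecationWarning)


def old_api():
    warnings.warn(
        "old_api is deprecated; use new_api instead.",
        KedroDeprecationWarning,
        stacklevel=2,
    )
    return new_api()


def new_api():
    return "ok"
```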

* docs(datasets): add note about DataSet deprecation (#357)

Signed-off-by: tgoelles <[email protected]>

* test(datasets): skip `tensorflow` tests on Windows (#363)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci: Pin `tables` version (#370)

* Pin tables version

Signed-off-by: Ankita Katiyar <[email protected]>

* Also fix kedro-airflow

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert trying to fix airflow

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(datasets): Release `1.7.1` (#378)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs: Update CONTRIBUTING.md and add one for `kedro-datasets` (#379)

Update CONTRIBUTING.md + add one for kedro-datasets

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): Run tensorflow tests separately from other dataset tests (#377)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat: Kedro-Airflow convert all pipelines option (#335)

* feat: kedro airflow convert --all option

Signed-off-by: Simon Brugman <[email protected]>

* docs: release docs

Signed-off-by: Simon Brugman <[email protected]>

---------

Signed-off-by: Simon Brugman <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): blacken code in rst literal blocks (#362)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs: cloudpickle is an interesting extension of the pickle functionality (#361)

Signed-off-by: H. Felix Wittmann <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Fix secret scan entropy error (#383)

Fix secret scan entropy error

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* style: Rename mentions of `DataSet` to `Dataset` in `kedro-airflow` and `kedro-telemetry` (#384)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Migrated `PartitionedDataSet` and `IncrementalDataSet` from main repository to kedro-datasets (#253)

Signed-off-by: Peter Bludau <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>

* fix: backwards compatibility for `kedro-airflow` (#381)

Signed-off-by: Simon Brugman <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* added metadata

Signed-off-by: tgoelles <[email protected]>

* after linting

Signed-off-by: tgoelles <[email protected]>

* ignore ruff PLR0913

Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Don't warn for SparkDataset on Databricks when using s3 (#341)

Signed-off-by: Alistair McKelvie <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Hot fix for RTD due to bad pip version (#396)

fix RTD

Signed-off-by: Nok <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Pin pip version temporarily (#398)

* Pin pip version temporarily

Signed-off-by: Ankita Katiyar <[email protected]>

* Hive support failures

Signed-off-by: Ankita Katiyar <[email protected]>

* Also pin pip on lint

Signed-off-by: Ankita Katiyar <[email protected]>

* Temporary ignore databricks spark tests

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* perf(datasets): don't create connection until need (#281)

* perf(datasets): delay `Engine` creation until need

Signed-off-by: Deepyaman Datta <[email protected]>

* chore: don't check coverage in TYPE_CHECKING block

Signed-off-by: Deepyaman Datta <[email protected]>

* perf(datasets): don't connect in `__init__` method

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): fix tests to touch `create_engine`

Signed-off-by: Deepyaman Datta <[email protected]>

* perf(datasets): don't connect in `__init__` method

Signed-off-by: Deepyaman Datta <[email protected]>

* style(datasets): exec Ruff on sql_dataset.py :dog:

Signed-off-by: Deepyaman Datta <[email protected]>

* Undo changes to `engines` values type (for Sphinx)

Signed-off-by: Deepyaman Datta <[email protected]>

* Patch Sphinx build by removing `Engine` references

* perf(datasets): don't connect in `__init__` method

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): don't require coverage for import

* chore(datasets): del unused `TYPE_CHECKING` import

* docs(datasets): document lazy connection in README

* perf(datasets): remove create in `SQLQueryDataset`

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): do not return the created conn

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
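#281 defers SQLAlchemy `Engine` creation so merely instantiating a SQL dataset no longer opens a database connection. The gist of the lazy-connection pattern, sketched with a stand-in for `sqlalchemy.create_engine` (class and attribute names illustrative):

```python
class SQLTableDatasetSketch:
    # class-level cache: datasets with the same connection string share an engine
    engines = {}

    def __init__(self, *, table_name, credentials):
        self._table_name = table_name
        self._connection_str = credentials["con"]
        # note: no engine is created here any more

    @classmethod
    def create_connection(cls, connection_str):
        if connection_str not in cls.engines:
            # stand-in for sqlalchemy.create_engine(connection_str)
            cls.engines[connection_str] = object()

    @property
    def engine(self):
        # first access triggers creation; later accesses hit the cache
        if self._connection_str not in type(self).engines:
            type(self).create_connection(self._connection_str)
        return type(self).engines[self._connection_str]
```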

* chore: Drop Python 3.7 support for kedro-plugins (#392)

* Remove references to Python 3.7

Signed-off-by: lrcouto <[email protected]>

* Revert kedro-dataset changes

Signed-off-by: lrcouto <[email protected]>

* Revert kedro-dataset changes

Signed-off-by: lrcouto <[email protected]>

* Add information to release docs

Signed-off-by: lrcouto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): support Polars lazy evaluation  (#350)

* feat(datasets): add PolarsDataset to support Polars' Lazy API

Signed-off-by: Matthias Roels <[email protected]>

* Fix(datasets): rename PolarsDataSet to PolarsDataset

Add PolarsDataSet as an alias for PolarsDataset with
deprecation warning.

Signed-off-by: Matthias Roels <[email protected]>

* Fix(datasets): apply ruff linting rules

Signed-off-by: Matthias Roels <[email protected]>

* Fix(datasets): Correct pattern matching when Raising exceptions

Corrected PolarsDataSet to PolarsDataset in the pattern to match
in test_load_missing_file

Signed-off-by: Matthias Roels <[email protected]>

* fix(datasets): clean up PolarsDataset related code

Remove reference to PolarsDataSet as this is not required for new
dataset implementations.

Signed-off-by: Matthias Roels <[email protected]>

* feat(datasets): Rename Polars Datasets to better describe their intent

Signed-off-by: Matthias Roels <[email protected]>

* feat(datasets): clean up LazyPolarsDataset

Signed-off-by: Matthias Roels <[email protected]>

* fix(datasets): increase test coverage for PolarsDataset classes

Signed-off-by: Matthias Roels <[email protected]>

* docs(datasets): add renamed Polars datasets to docs

Signed-off-by: Matthias Roels <[email protected]>

* docs(datasets): Add new polars datasets to release notes

Signed-off-by: Matthias Roels <[email protected]>

* fix(datasets): load_args not properly passed to LazyPolarsDataset.load

Signed-off-by: Matthias Roels <[email protected]>

* docs(datasets): fix spelling error in release notes

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Matthias Roels <[email protected]>

---------

Signed-off-by: Matthias Roels <[email protected]>
Signed-off-by: Matthias Roels <[email protected]>
Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Matthias Roels <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>
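#350 splits the Polars support into an eager dataset and a lazy one. The essential difference is which polars entry point backs `load`: `read_*` returns a materialised `DataFrame`, while `scan_*` returns a deferred `LazyFrame` that builds a query plan. A sketch with polars imported lazily (class name illustrative):

```python
class LazyPolarsDatasetSketch:
    def __init__(self, *, filepath, file_format="csv", load_args=None):
        self._filepath = filepath
        self._file_format = file_format
        self._load_args = load_args or {}

    def _load(self):
        import polars as pl  # deferred so the sketch imports without polars

        # pl.scan_csv / pl.scan_parquet defer reading until .collect() is called
        scan = getattr(pl, f"scan_{self._file_format}")
        return scan(self._filepath, **self._load_args)

    def _describe(self):
        return {"filepath": self._filepath, "file_format": self._file_format}
```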

* build(datasets): Release `1.8.0` (#406)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(airflow): Release 0.7.0 (#407)

* bump version

Signed-off-by: Ankita Katiyar <[email protected]>

* Update release notes

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(telemetry): Release 0.3.0 (#408)

Bump version

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(docker): Release 0.4.0 (#409)

Bump version

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* style(airflow): blacken README.md of Kedro-Airflow (#418)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Fix missing jQuery (#414)

Fix missing jQuery

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Fix Lazy Polars dataset to use the new-style base class (#413)

* Fix Lazy Polars dataset to use the new-style base class

Fix gh-412

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Update release notes

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert "Update release notes"

This reverts commit 92ceea6d8fa412abf3d8abd28a2f0a22353867ed.

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Sajid Alam <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets):  lazily load `partitions` classes (#411)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): fix code blocks and `data_set` use (#417)

* chore(datasets):  lazily load `partitions` classes

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): run doctests to check examples run

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): keep running tests amidst failures

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): format ManagedTableDataset example

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): ignore breaking mods for doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* style(airflow): black code in Kedro-Airflow README

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): fix example syntax, and autoformat

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `>>> ` prefix for YAML code

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): replace `data_set`s with `dataset`s

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): undo changes for running doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* revert(datasets):  undo lazily load `partitions` classes

Refs: 3fdc5a8efa034fa9a18b7683a942415915b42fb5
Signed-off-by: Deepyaman Datta <[email protected]>

* revert(airflow): undo black code in Kedro-Airflow README

Refs: dc3476ea36bac98e2adcc0b52a11b0f90001e31d

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: TF model load failure when model is saved as a TensorFlow Saved Model format (#410)

* fixes TF model load failure when model is saved as a TensorFlow Saved Model format

When a model is saved in the TensorFlow SavedModel format (the default "tf" option in TF 2.x) via the catalog.yml file, subsequently loading that model for use in a later node fails. The model files don't get copied into the temporary folder, presumably because the _fs.get function treats the provided path as a file rather than a folder. Adding a terminating "/" to the path fixes the issue.

Signed-off-by: Edouard59 <[email protected]>
Signed-off-by: tgoelles <[email protected]>
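The fix described above boils down to normalising the load path: with fsspec, fetching `path/` copies the directory's contents, whereas `path` is treated as a single file, which breaks directory-shaped SavedModels. A minimal sketch (function name hypothetical):

```python
def as_directory_path(load_path: str) -> str:
    """Append the trailing separator that makes fsspec's get() treat the
    source as a directory (a TF SavedModel is a folder, not a file)."""
    return load_path if load_path.endswith("/") else load_path + "/"
```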

* chore: Drop support for Python 3.7 on kedro-datasets (#419)

* Drop support for Python 3.7 on kedro-datasets

Signed-off-by: lrcouto <[email protected]>

* Remove redundant 3.8 markers

Signed-off-by: lrcouto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>
Signed-off-by: Sajid Alam <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>

* test(datasets): run doctests to check examples run (#416)

* chore(datasets):  lazily load `partitions` classes

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): run doctests to check examples run

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): keep running tests amidst failures

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): format ManagedTableDataset example

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): ignore breaking mods for doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* style(airflow): black code in Kedro-Airflow README

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): fix example syntax, and autoformat

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `>>> ` prefix for YAML code

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): replace `data_set`s with `dataset`s

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): run doctests separately

Signed-off-by: Deepyaman Datta <[email protected]>

* separate dataset-doctests

Signed-off-by: Nok <[email protected]>

* chore(datasets): ignore non-passing tests to make CI pass

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): fix comment location

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): fix .py.py

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): don't measure coverage on doctest run

Signed-off-by: Deepyaman Datta <[email protected]>

* build(datasets): fix windows and snowflake stuff in Makefile

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: Nok <[email protected]>
Co-authored-by: Nok <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Add support for `databricks-connect>=13.0` (#352)

Signed-off-by: Miguel Rodriguez Gutierrez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(telemetry): remove double execution by moving to after catalog created hook (#422)

* remove double execution by moving to after catalog created hook

Signed-off-by: Florian Roessler <[email protected]>

* update release notes

Signed-off-by: Florian Roessler <[email protected]>

* fix tests

Signed-off-by: Florian Roessler <[email protected]>

* remove unused fixture

Signed-off-by: Florian Roessler <[email protected]>

---------

Signed-off-by: Florian Roessler <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs: Add python version support policy to plugin `README.md`s (#425)

* Add python version support policy to plugin readmes

Signed-off-by: Merel Theisen <[email protected]>

* Temporarily pin connexion

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(airflow): Use new docs link (#393)

Use new docs link

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Jo Stichbury <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* style: Add shared CSS and meganav to datasets docs (#400)

* Add shared CSS and meganav

Signed-off-by: Jo Stichbury <[email protected]>

* Add end of file

Signed-off-by: Jo Stichbury <[email protected]>

* Add new heap data source

Signed-off-by: Jo Stichbury <[email protected]>

* adjust heap parameter

Signed-off-by: Jo Stichbury <[email protected]>

* Remove nav_version next to Kedro logo in top left; add Kedro logo

* Revise project name and author name

Signed-off-by: Jo Stichbury <[email protected]>

* Use full kedro icon and type for logo

* Add close btn to mobile nav

Signed-off-by: vladimir-mck <[email protected]>

* Add css for mobile nav logo image

Signed-off-by: vladimir-mck <[email protected]>

* Update close button for mobile nav

Signed-off-by: vladimir-mck <[email protected]>

* Add open button to mobile nav

Signed-off-by: vladimir-mck <[email protected]>

* Delete kedro-datasets/docs/source/kedro-horizontal-color-on-light.svg

Signed-off-by: vladimir-mck <[email protected]>

* Update conf.py

Signed-off-by: vladimir-mck <[email protected]>

* Update layout.html

Add links to subprojects

Signed-off-by: Jo Stichbury <[email protected]>

* Remove svg from docs -- not needed??

Signed-off-by: Jo Stichbury <[email protected]>

* linter error fix

Signed-off-by: Jo Stichbury <[email protected]>

---------

Signed-off-by: Jo Stichbury <[email protected]>
Signed-off-by: vladimir-mck <[email protected]>
Co-authored-by: Tynan DeBold <[email protected]>
Co-authored-by: vladimir-mck <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Add Hugging Face datasets (#344)

* Add HuggingFace datasets

Co-authored-by: Danny Farah <[email protected]>
Co-authored-by: Kevin Koga <[email protected]>
Co-authored-by: Mate Scharnitzky <[email protected]>
Co-authored-by: Tomer Shor <[email protected]>
Co-authored-by: Pierre-Yves Mousset <[email protected]>
Co-authored-by: Bela Chupal <[email protected]>
Co-authored-by: Khangjrakpam Arjun <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Apply suggestions from code review

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

Co-authored-by: Joel <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>

* Typo

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Fix docstring

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add docstring for HFTransformerPipelineDataset

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Use intersphinx for cross references in Hugging Face docstrings

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add docstring for HFDataset

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add missing test dependencies

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add tests for huggingface datasets

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Fix HFDataset.save

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add test for HFDataset.list_datasets

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Use new name

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Consolidate imports

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Danny Farah <[email protected]>
Co-authored-by: Kevin Koga <[email protected]>
Co-authored-by: Mate Scharnitzky <[email protected]>
Co-authored-by: Tomer Shor <[email protected]>
Co-authored-by: Pierre-Yves Mousset <[email protected]>
Co-authored-by: Bela Chupal <[email protected]>
Co-authored-by: Khangjrakpam Arjun <[email protected]>
Co-authored-by: Joel <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): fix `dask.ParquetDataset` doctests (#439)

* test(datasets): fix `dask.ParquetDataset` doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): use `tmp_path` fixture in doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): simplify by not passing the schema

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): ignore conftest for doctests cover

Signed-off-by: Deepyaman Datta <[email protected]>

* Create MANIFEST.in

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* refactor: Remove `DataSet` aliases and mentions (#440)

Signed-off-by: Merel Theisen <[email protected]>

* chore(datasets): replace "Pyspark" with "PySpark" (#423)

Consistently write "PySpark" rather than "Pyspark"

Also, fix list formatting

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): make `api.APIDataset` doctests run (#448)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix `pandas.GenericDataset` doctest (#445)

Fix pandas.GenericDataset doctest

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): make datasets arguments keywords only (#358)

* feat(datasets): make `APIDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `BioSequenceDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ParquetDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `EmailMessageDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GeoJSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `HoloviewsWriter.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `MatplotlibWriter.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GraphMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make NetworkX `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `PickleDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ImageDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make plotly `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `PlotlyDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make polars `CSVDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make polars `GenericDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make redis `PickleDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SnowparkTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SVMLightDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `TensorFlowModelDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `TextDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `YAMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ManagedTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `VideoDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `CSVDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `DeltaTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ExcelDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `FeatherDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GBQTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GenericDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make pandas `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make pandas `ParquetDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SQLTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `XMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `HDFDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `DeltaTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkHiveDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkJDBCDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkStreamingDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `IncrementalDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `LazyPolarsDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* docs(datasets): update doctests for HoloviewsWriter

Signed-off-by: Felix Scherz <[email protected]>

* Update release notes

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Felix Scherz <[email protected]>
Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Felix Scherz <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Drop support for python 3.8 on kedro-datasets (#442)

* Drop support for python 3.8 on kedro-datasets

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: Dmitry Sorokin <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): add outputs to matplotlib doctests (#449)

* test(datasets): add outputs to matplotlib doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* Update Makefile

Signed-off-by: Deepyaman Datta <[email protected]>

* Reformat code example, line length is short enough

* Update kedro-datasets/kedro_datasets/matplotlib/matplotlib_writer.py

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix more doctest issues  (#451)

Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): fix failing doctests in Windows CI (#457)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): fix accidental reference to NumPy (#450)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): don't pollute dev env in doctests (#452)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat: Add tools to heap event (#430)

* Add add-on data to heap event

Signed-off-by: lrcouto <[email protected]>

* Move addons logic to _get_project_property

Signed-off-by: Ankita Katiyar <[email protected]>

* Add condition for pyproject.toml

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix tests

Signed-off-by: Ankita Katiyar <[email protected]>

* add tools to mock

Signed-off-by: lrcouto <[email protected]>

* lint

Signed-off-by: lrcouto <[email protected]>

* Update tools test

Signed-off-by: Ankita Katiyar <[email protected]>

* Add after_context_created tools test

Signed-off-by: lrcouto <[email protected]>

* Update rename to tools

Signed-off-by: Ankita Katiyar <[email protected]>

* Update kedro-telemetry/tests/test_plugin.py

Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): install deps in single `pip install` (#454)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(datasets): Bump s3fs (#463)

* Use mocking for AWS responses

Signed-off-by: Merel Theisen <[email protected]>

* Add change to release notes

Signed-off-by: Merel Theisen <[email protected]>

* Update release notes

Signed-off-by: Merel Theisen <[email protected]>

* Use pytest xfail instead of commenting out test

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): make SQL dataset examples runnable (#455)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): correct pandas-gbq as py311 dependency (#460)

* update pandas-gbq dependency declaration

Signed-off-by: Onur Kuru <[email protected]>

* fix fmt

Signed-off-by: Onur Kuru <[email protected]>

---------

Signed-off-by: Onur Kuru <[email protected]>
Co-authored-by: Ahdra Merali <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): Document `IncrementalDataset` (#468)

Document IncrementalDataset

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Update datasets to be arguments keyword only (#466)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Clean up code for old dataset syntax compatibility (#465)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Update scikit-learn version (#469)

Update scikit-learn version

Signed-off-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): support versioning data partitions (#447)

* feat(datasets): support versioning data partitions

Signed-off-by: Deepyaman Datta <[email protected]>

* Remove unused import

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): use keyword arguments when needed

Signed-off-by: Deepyaman Datta <[email protected]>

* Apply suggestions from code review

Signed-off-by: Deepyaman Datta <[email protected]>

* Update kedro-datasets/kedro_datasets/partitions/partitioned_dataset.py

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): Improve documentation index (#428)

Rework documentation index

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): update wrong docstring about `con` (#461)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(datasets): Release `2.0.0`  (#472)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(telemetry): Pin `PyYAML` (#474)

Pin PyYaml

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(telemetry): Release 0.3.1 (#475)

Signed-off-by: tgoelles <[email protected]>

* docs(datasets): Fix broken links in README (#477)

Fix broken links in README

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): replace more "data_set" instances (#476)

Signed-off-by: Deepyaman Datta <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix doctests (#488)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix delta + incremental dataset docstrings (#489)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(airflow): Post 0.19 cleanup (#478)

* bump version

Signed-off-by: Ankita Katiyar <[email protected]>

* Unbump version and clean test

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Split big test into smaller tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update conftest

Signed-off-by: Ankita Katiyar <[email protected]>

* Update conftest

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix coverage

Signed-off-by: Ankita Katiyar <[email protected]>

* Try unpin airflow

Signed-off-by: Ankita Katiyar <[email protected]>

* remove datacatalog step

Signed-off-by: Ankita Katiyar <[email protected]>

* Change node

Signed-off-by: Ankita Katiyar <[email protected]>

* update tasks test step

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert to older airflow and constraint pendulum

Signed-off-by: Ankita Katiyar <[email protected]>

* Update template

Signed-off-by: Ankita Katiyar <[email protected]>

* Update message in e2e step

Signed-off-by: Ankita Katiyar <[email protected]>

* Final cleanup

Signed-off-by: Ankita Katiyar <[email protected]>

* Update kedro-airflow/pyproject.toml

Signed-off-by: Nok Lam Chan <[email protected]>

* Pin apache-airflow again

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Nok Lam Chan <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(airflow): Release 0.8.0 (#491)

Bump version

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: telemetry metadata (#495)

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: Update tests on kedro-docker for 0.5.0 release. (#496)

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* Lint

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* fix test path for e2e tests

Signed-off-by: lrcouto <[email protected]>

* fix requirements path on dockerfiles

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* Remove redundant test

Signed-off-by: lrcouto <[email protected]>

* Alter test for custom GID and UID

Signed-off-by: lrcouto <[email protected]>

* Update release notes

Signed-off-by: lrcouto <[email protected]>

* Revert version bump to put it in a separate PR

Signed-off-by: lrcouto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build: Release kedro-docker 0.5.0 (#497)

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* Lint

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* fix test path for e2e tests

Signed-off-by: lrcouto <[email protected]>

* fix requirements path on dockerfiles

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* Remove redundant test

Signed-off-by: lrcouto <[email protected]>

* Alter test for custom GID and UID

Signed-off-by: lrcouto <[email protected]>

* Update release notes

Signed-off-by: lrcouto <[email protected]>

* Revert version bump to put it in a separate PR

Signed-off-by: lrcouto <[email protected]>

* Bump kedro-docker to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* Add release notes

Signed-off-by: lrcouto <[email protected]>

* Update kedro-docker/RELEASE.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Update partitioned dataset docstring (#502)

Update partitioned dataset docstring

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* Fix GeotiffDataset import + casing

Signed-off-by: Merel Theisen <[email protected]>

* Fix lint

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Relax pandas.HDFDataSet dependencies which are broken on Windows (#426)

* Relax pandas.HDFDataSet dependencies which are broken on Windows (#402)

Signed-off-by: Yolan Honoré-Rougé <[email protected]>

* Update RELEASE.md

Signed-off-by: Yolan Honoré-Rougé <[email protected]>

* Apply suggestions from code review

Signed-off-by: Merel Theisen <[email protected]>

* Update kedro-datasets/setup.py

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Yolan Honoré-Rougé <[email protected]>
Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: airflow metadata (#498)

* Add example pipeline entry to metadata declaration

Signed-off-by: Ahdra Merali <[email protected]>

* Fix entry

Signed-off-by: Ahdra Merali <[email protected]>

* Make entries consistent

Signed-off-by: Ahdra Merali <[email protected]>

* Add tools to config

Signed-off-by: Ahdra Merali <[email protected]>

* fix: telemetry metadata (#495)

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: Ahdra Merali <[email protected]>

* Revert "Add tools to config"

This reverts commit 14732d772a3c2f4787063071a68fdf1512c93488.

Signed-off-by: Ahdra Merali <[email protected]>

* Quick fix

Signed-off-by: Ahdra Merali <[email protected]>

* Lint

Signed-off-by: Ahdra Merali <[email protected]>

* Remove outdated config key

Signed-off-by: Ahdra Merali <[email protected]>

* Use kedro new instead of cookiecutter

Signed-off-by: Ahdra Merali <[email protected]>

---------

Signed-off-by: Ahdra Merali <[email protected]>
Signed-off-by: Dmitry Sorokin <[email protected]>
Co-authored-by: Dmitry Sorokin <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(airflow): Bump `apache-airflow` version (#511)

* Bump apache airflow

Signed-off-by: Ankita Katiyar <[email protected]>

* Change starter

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e test steps

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e test steps

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): Unpin dask (#522)

* Unpin dask

Signed-off-by: Ankita Katiyar <[email protected]>

* Update doctest

Signed-off-by: Ankita Katiyar <[email protected]>

* Update doctest

Signed-off-by: Ankita Katiyar <[email protected]>

* Update kedro-datasets/setup.py

Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Add `MatlabDataset` to `kedro-datasets` (#515)

* Refork and commit kedro matlab datasets

Signed-off-by: samuelleeshemen <[email protected]>

* Fix lint, add to docs

Signed-off-by: Ankita Katiyar <[email protected]>

* Try fixing docstring

Signed-off-by: Ankita Katiyar <[email protected]>

* Try fixing save

Signed-off-by: Ankita Katiyar <[email protected]>

* Try fix docstest

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix unit tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update release notes:

Signed-off-by: Ankita Katiyar <[email protected]>

* Not hardcode load mode

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: samuelleeshemen <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(airflow): Pin `Flask-Session` version (#521)

* Restrict pendulum version

Signed-off-by: Ankita Katiyar <[email protected]>

* Update airflow init step

Signed-off-by: Ankita Katiyar <[email protected]>

* Remove pendulum pin

Signed-off-by: Ankita Katiyar <[email protected]>

* Update create connections step

Signed-off-by: Ankita Katiyar <[email protected]>

* Pin flask session

Signed-off-by: Ankita Katiyar <[email protected]>

* Add comment

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat: `kedro-airflow` group in memory nodes (#241)

* feat: option to group in-memory nodes

Signed-off-by: Simon Brugman <[email protected]>

* fix: MemoryDataset

Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/README.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/README.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/README.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/RELEASE.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/plugin.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/tests/test_node_grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/tests/test_node_grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/grouping.py

Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* fix: tests

Signed-off-by: Simon Brugman <[email protected]>

* Bump minimum kedro version

Signed-off-by: Simon Brugman <[email protected]>

* fixes

Signed-off-by: Simon Brugman <[email protected]>

---------

Signed-off-by: Simon Brugman <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): Update pyproject.toml to pin Kedro 0.19 for kedro-datasets (#526)

Update pyproject.toml

Signed-off-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(airflow): include environment name in DAG filename (#492)

* feat: include environment name in DAG file

Signed-off-by: Simon Brugman <[email protected]>

* doc: add update to release notes

Signed-off-by: Simon Brugman <[email protected]>

---------

Signed-off-by: Simon Brugman <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Enable search-as-you type on Kedro-datasets docs (#532)

* done

Signed-off-by: rashidakanchwala <[email protected]>

* fix lint

Signed-off-by: rashidakanchwala <[email protected]>

---------

Signed-off-by: rashidakanchwala <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Debug and fix `kedro-datasets` nightly build failures (#541)

* pin deltalake

* Update kedro-datasets/setup.py

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Update setup.py

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* sort order and compare

* Update setup.py

* lint

* pin deltalake

* add comment to pin

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Dataset Preview Refactor  (#504)

* test

* done

* change from _preview to preview

* fix lint and tests

* added docstrings

* rtd fix

* rtd fix

* fix rtd

Signed-off-by: rashidakanchwala <[email protected]>

* fix rtd

Signed-off-by: rashidakanchwala <[email protected]>

* fix rtd - pls"

Signed-off-by: rashidakanchwala <[email protected]>

* add nitpick ignore

Signed-off-by: rashidakanchwala <[email protected]>

* test again

Signed-off-by: rashidakanchwala <[email protected]>

* move tracking datasets to constant

Signed-off-by: rashidakanchwala <[email protected]>

* remove comma

Signed-off-by: rashidakanchwala <[email protected]>

* remove Newtype from json_dataset"

Signed-off-by: rashidakanchwala <[email protected]>

* pls work

Signed-off-by: rashidakanchwala <[email protected]>

* confirm rtd works locally

Signed-off-by: rashidakanchwala <[email protected]>

* juanlu's fix

Signed-off-by: rashidakanchwala <[email protected]>

* fix tests

Signed-off-by: rashidakanchwala <[email protected]>

* remove unnecessary stuff from conf.py

Signed-off-by: rashidakanchwala <[email protected]>

* fixes based on review

Signed-off-by: rashidakanchwala <[email protected]>

* changes based on review

Signed-off-by: rashidakanchwala <[email protected]>

* fix tests

Signed-off-by: rashidakanchwala <[email protected]>

* add suffix Preview

Signed-off-by: rashidakanchwala <[email protected]>

* change img return type to bytes

Signed-off-by: rashidakanchwala <[email protected]>

* fix tests

Signed-off-by: rashidakanchwala <[email protected]>

* update release note

* fix lint

---------

Signed-off-by: rashidakanchwala <[email protected]>
Co-authored-by: ravi-kumar-pilla <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Drop pyarrow constraint when using snowpark (#538)

* Free pyarrow req

Signed-off-by: Felipe Monroy <[email protected]>

* Free pyarrow req

Signed-off-by: Felipe Monroy <[email protected]>

---------

Signed-off-by: Felipe Monroy <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs: Update kedro-telemetry docs on which data is collected (#546)

* Update data being collected

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: Dmitry Sorokin <[email protected]>
Co-authored-by: Jo Stichbury <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(docker): Trying to fix e2e tests (#548)

* Pin psutil

Signed-off-by: Ankita Katiyar <[email protected]>

* Add no capture to test

Signed-off-by: Ankita Katiyar <[email protected]>

* Update pip version

Signed-off-by: Ankita Katiyar <[email protected]>

* Update call

Signed-off-by: Ankita Katiyar <[email protected]>

* Update pip

Signed-off-by: Ankita Katiyar <[email protected]>

* pip ruamel

Signed-off-by: Ankita Katiyar <[email protected]>

* change pip v

Signed-off-by: Ankita Katiyar <[email protected]>

* change pip v

Signed-off-by: Ankita Katiyar <[email protected]>

* show stdout

Signed-off-by: Ankita Katiyar <[email protected]>

* use no cache dir

Signed-off-by: Ankita Katiyar <[email protected]>

* revert extra changes

Signed-off-by: Ankita Katiyar <[email protected]>

* pin pip

Signed-off-by: Ankita Katiyar <[email protected]>

* gitpod

Signed-off-by: Ankita Katiyar <[email protected]>

* pip inside dockerfile

Signed-off-by: Ankita Katiyar <[email protected]>

* pip pip inside dockerfile

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: bump actions versions (#539)

* Unpin pip and bump actions versions

Signed-off-by: Ankita Katiyar <[email protected]>

* remove version

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert unpinning of pip

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(telemetry): Direct readers to Kedro documentation for further information on telemetry (#555)

* Direct readers to Kedro documentation for further information on telemetry

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Wording improvements

Co-authored-by: Jo Stichbury <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Amend README section

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Jo Stichbury <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: kedro-telemetry masking (#552)

* Fix masking

Signed-off-by: Dmitr…
Showing 10 changed files with 415 additions and 0 deletions.
6 changes: 6 additions & 0 deletions kedro-datasets/RELEASE.md
@@ -9,10 +9,13 @@
| `langchain.ChatCohereDataset` | A dataset for loading a ChatCohere langchain model. | `kedro_datasets_experimental.langchain` |
| `langchain.OpenAIEmbeddingsDataset` | A dataset for loading an OpenAIEmbeddings langchain model. | `kedro_datasets_experimental.langchain` |
| `langchain.ChatOpenAIDataset` | A dataset for loading a ChatOpenAI langchain model. | `kedro_datasets_experimental.langchain` |
| `rioxarray.GeoTIFFDataset` | A dataset for loading and saving GeoTIFF raster data. | `kedro_datasets_experimental.rioxarray` |
| `netcdf.NetCDFDataset` | A dataset for loading and saving "*.nc" files. | `kedro_datasets_experimental.netcdf` |

* `netcdf.NetCDFDataset` moved from `kedro_datasets` to `kedro_datasets_experimental`.

* Added the following new core datasets:

| Type | Description | Location |
|-------------------------------------|-----------------------------------------------------------|-----------------------------------------|
| `dask.CSVDataset` | A dataset for loading CSV files using `dask`. | `kedro_datasets.dask` |
@@ -22,6 +25,9 @@
## Community contributions

Many thanks to the following Kedroids for contributing PRs to this release:
* [Ian Whalen](https://github.com/ianwhale)
* [Charles Guan](https://github.com/charlesbmi)
* [Thomas Gölles](https://github.com/tgoelles)
* [Lukas Innig](https://github.com/derluke)
* [Michael Sexton](https://github.com/michaelsexton)

Expand Up @@ -16,3 +16,4 @@ kedro_datasets_experimental
kedro_datasets_experimental.langchain.ChatOpenAIDataset
kedro_datasets_experimental.langchain.OpenAIEmbeddingsDataset
kedro_datasets_experimental.netcdf.NetCDFDataset
kedro_datasets_experimental.rioxarray.GeoTIFFDataset
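
As a usage sketch, the new experimental dataset is declared in the Data Catalog like any other dataset; this hypothetical entry mirrors the YAML example in the `GeoTIFFDataset` class docstring (the entry name, file path, and `masked` option are illustrative assumptions, not tested configuration):

```yaml
# Hypothetical catalog entry for the experimental GeoTIFF dataset.
sentinel_data:
  type: rioxarray.GeoTIFFDataset
  filepath: data/01_raw/sentinel_data.tif
  load_args:
    masked: true  # assumed rioxarray.open_rasterio option; verify before use
```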
13 changes: 13 additions & 0 deletions kedro-datasets/kedro_datasets_experimental/rioxarray/__init__.py
@@ -0,0 +1,13 @@
"""``AbstractDataset`` implementation to load/save data from/to a geospatial raster files."""
from __future__ import annotations

from typing import Any

import lazy_loader as lazy

# https://github.com/pylint-dev/pylint/issues/4300#issuecomment-1043601901
GeoTIFFDataset: Any

__getattr__, __dir__, __all__ = lazy.attach(
__name__, submod_attrs={"geotiff_dataset": ["GeoTIFFDataset"]}
)
@@ -0,0 +1,209 @@
"""GeoTIFFDataset loads geospatial raster data and saves it to a local geoiff file. The
underlying functionality is supported by rioxarray and xarray. A read rasterdata file
returns a xarray.DataArray object.
"""
import logging
from copy import deepcopy
from pathlib import PurePosixPath
from typing import Any

import fsspec
import rasterio
import rioxarray as rxr
import xarray
from kedro.io import AbstractVersionedDataset, DatasetError
from kedro.io.core import Version, get_filepath_str, get_protocol_and_path
from rasterio.crs import CRS
from rasterio.transform import from_bounds

logger = logging.getLogger(__name__)

SUPPORTED_DIMS = [("band", "x", "y"), ("x", "y")]
DEFAULT_NO_DATA_VALUE = -9999
SUPPORTED_FILE_FORMATS = [".tif", ".tiff"]


class GeoTIFFDataset(AbstractVersionedDataset[xarray.DataArray, xarray.DataArray]):
"""``GeoTIFFDataset`` loads and saves rasterdata files and reads them as xarray
DataArrays. The underlying functionality is supported by rioxarray, rasterio and xarray.
Reading and writing of single and multiband GeoTIFFs data is supported. There are sanity checks to ensure that a coordinate reference system (CRS) is present.
Supported dimensions are ("band", "x", "y") and ("x", "y") and xarray.DataArray with other dimension can not be saved to a GeoTIFF file.
Have a look at netcdf if this is what you need.
.. code-block:: yaml
sentinel_data:
type: rioxarray.GeoTIFFDataset
filepath: sentinel_data.tif
Example usage for the
`Python API <https://kedro.readthedocs.io/en/stable/data/\
advanced_data_catalog_usage.html>`_:
.. code-block:: pycon
>>> from kedro_datasets.rioxarray import GeoTIFFDataset
>>> import xarray as xr
>>> import numpy as np
>>>
>>> data = xr.DataArray(
... np.random.randn(2, 3, 2),
... dims=("band", "y", "x"),
... coords={"band": [1, 2], "y": [0.5, 1.5, 2.5], "x": [0.5, 1.5]}
... )
>>> data_crs = data.rio.write_crs("epsg:4326")
>>> data_spatial_dims = data_crs.rio.set_spatial_dims("x", "y")
>>> dataset = GeoTIFFDataset(filepath="test.tif")
>>> dataset.save(data_spatial_dims)
>>> reloaded = dataset.load()
>>> xr.testing.assert_allclose(data_spatial_dims, reloaded, rtol=1e-5)
"""

    DEFAULT_LOAD_ARGS: dict[str, Any] = {}
    DEFAULT_SAVE_ARGS: dict[str, Any] = {}

    def __init__(  # noqa: PLR0913
        self,
        *,
        filepath: str,
        load_args: dict[str, Any] | None = None,
        save_args: dict[str, Any] | None = None,
        version: Version | None = None,
        metadata: dict[str, Any] | None = None,
    ):
        """Creates a new instance of ``GeoTIFFDataset`` pointing to a concrete
        geospatial raster data file.

        Args:
            filepath: Filepath in POSIX format to a raster data file.
                The prefix should be any protocol supported by ``fsspec``.
            load_args: rioxarray options for loading raster data files.
                All available arguments are listed here:
                https://corteva.github.io/rioxarray/html/rioxarray.html#rioxarray-open-rasterio
                All defaults are preserved.
            save_args: options for rioxarray for data without the band dimension,
                and for rasterio otherwise.
            version: If specified, should be an instance of
                ``kedro.io.core.Version``. If its ``load`` attribute is
                None, the latest version will be loaded. If its ``save``
                attribute is None, save version will be autogenerated.
            metadata: Any arbitrary metadata.
                This is ignored by Kedro, but may be consumed by users or external plugins.
        """
        protocol, path = get_protocol_and_path(filepath, version)
        self._protocol = protocol
        self._fs = fsspec.filesystem(self._protocol)
        self.metadata = metadata

        super().__init__(
            filepath=PurePosixPath(path),
            version=version,
            exists_function=self._fs.exists,
            glob_function=self._fs.glob,
        )

        # Handle default load and save arguments
        self._load_args = deepcopy(self.DEFAULT_LOAD_ARGS)
        if load_args is not None:
            self._load_args.update(load_args)
        self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)
        if save_args is not None:
            self._save_args.update(save_args)

    def _describe(self) -> dict[str, Any]:
        return {
            "filepath": self._filepath,
            "protocol": self._protocol,
            "load_args": self._load_args,
            "save_args": self._save_args,
            "version": self._version,
        }

    def _load(self) -> xarray.DataArray:
        load_path = self._get_load_path().as_posix()
        # Read the file-level tags with rasterio first, then load the array
        # with rioxarray and carry the tags over as attributes.
        with rasterio.open(load_path) as raster_file:
            tags = raster_file.tags()
        data = rxr.open_rasterio(load_path, **self._load_args)
        data.attrs.update(tags)
        self._sanity_check(data)
        logger.info("found coordinate reference system %s", data.rio.crs)
        return data

    def _save(self, data: xarray.DataArray) -> None:
        self._sanity_check(data)
        save_path = get_filepath_str(self._get_save_path(), self._protocol)
        if not save_path.endswith(tuple(SUPPORTED_FILE_FORMATS)):
            raise ValueError(
                f"Unsupported file format. Supported formats are: {SUPPORTED_FILE_FORMATS}"
            )
        if "band" in data.dims:
            self._save_multiband(data, save_path)
        else:
            data.rio.to_raster(save_path, **self._save_args)
        self._fs.invalidate_cache(save_path)

    def _exists(self) -> bool:
        try:
            load_path = get_filepath_str(self._get_load_path(), self._protocol)
        except DatasetError:
            return False

        return self._fs.exists(load_path)

    def _release(self) -> None:
        super()._release()
        self._invalidate_cache()

    def _invalidate_cache(self) -> None:
        """Invalidate underlying filesystem caches."""
        filepath = get_filepath_str(self._filepath, self._protocol)
        self._fs.invalidate_cache(filepath)

    def _save_multiband(self, data: xarray.DataArray, save_path: str) -> None:
        """Save multiband raster data to a GeoTIFF file."""
        bands_data = [data.sel(band=band) for band in data.band.values]
        transform = from_bounds(
            west=data.x.min(),
            south=data.y.min(),
            east=data.x.max(),
            north=data.y.max(),
            width=data[0].shape[1],
            height=data[0].shape[0],
        )

        nodata_value = (
            data.rio.nodata if data.rio.nodata is not None else DEFAULT_NO_DATA_VALUE
        )
        crs = data.rio.crs

        meta = {
            "driver": "GTiff",
            "height": bands_data[0].shape[0],
            "width": bands_data[0].shape[1],
            "count": len(bands_data),
            "dtype": str(bands_data[0].dtype),
            "crs": crs,
            "transform": transform,
            "nodata": nodata_value,
        }
        with rasterio.open(save_path, "w", **meta) as dst:
            for idx, band in enumerate(bands_data, start=1):
                dst.write(band.data, idx, **self._save_args)

    def _sanity_check(self, data: xarray.DataArray) -> None:
        """Perform sanity checks on the data to ensure it meets the requirements."""
        if not isinstance(data, xarray.DataArray):
            raise NotImplementedError(
                "Currently only supporting xarray.DataArray while saving raster data."
            )

        if not isinstance(data.rio.crs, CRS):
            raise ValueError("Dataset lacks a coordinate reference system.")

        if all(set(data.dims) != set(dims) for dims in SUPPORTED_DIMS):
            raise ValueError(
                f"Data has unsupported dimensions: {data.dims}. "
                f"Supported dimensions are: {SUPPORTED_DIMS}"
            )