
feat(datasets): Add rioxarray and RasterDataset #355

Merged: 238 commits into kedro-org:main on Jul 5, 2024

Conversation

tgoelles
Contributor

@tgoelles tgoelles commented Sep 28, 2023

Description

  • Added GeoTiffDataset to read and write GeoTIFF files with rioxarray and work with xarray in Kedro. This is also related to issue Xarray NetCDFDataSet #165

Development notes

Added kedro_datasets/xarray/geotiff_dataset.py and a very basic framework for testing in /kedro-datasets/tests/xarray/test_geotiff_dataset.py using a sample file from rioxarray. I could not run the test suite locally or on GitHub because of the complex environment, so more tests are certainly needed.
I think it would also be easy to add a preview method.

Checklist

  • Opened this PR as a 'Draft Pull Request' if it is work-in-progress
  • Updated the documentation to reflect the code changes
  • Added a description of this change in the relevant RELEASE.md file
  • Added tests to cover my changes

@astrojuanlu
Member

Thanks @tgoelles for this pull request 🚀

I could not run the test suit locally or on github due to the complex environment.

What instructions did you follow? Would love to understand how we can make the process easier. It's definitely a challenge to install all the dependencies of all datasets, but we could try to make running the tests of only one dataset easier.

@tgoelles
Contributor Author

tgoelles commented Oct 3, 2023

Thanks @tgoelles for this pull request 🚀

I could not run the test suit locally or on github due to the complex environment.

What instructions did you follow? Would love to understand how can we make the process easier. It's definitely a challenge to install all the dependencies of all datasets, but we could try to make running the tests of only 1 dataset easier.

I did not follow any instructions. Is there one? Yes, it would be great to be able to test just one dataset, and to skip Windows for now in the GitHub Actions. VS Code debugging does not work for some reason (discovery error). Also, if I run it in the terminal I get this:

"no tests ran", even when running the yaml tests, for example

@tgoelles
Contributor Author

tgoelles commented Oct 5, 2023

A basic test suite now works. I need help there, but the basic functionality works and I guess the rest can be added by someone who knows Kedro better.

Member

@merelcht merelcht left a comment

Thanks for this contribution @tgoelles! I've left some minor comments, but I gather you'll need some help to finish this in terms of tests?

It might take some time for the team to help out, but I'll make sure your PR is not forgotten.

kedro-datasets/docs/source/kedro_datasets.rst
kedro-datasets/kedro_datasets/xarray/geotiff_dataset.py
@merelcht merelcht added the Community Issue/PR opened by the open-source community label Oct 11, 2023
@merelcht merelcht changed the title xarray and GeoTiffDataset feat(datasets): Add xarray and GeoTiffDataset Oct 12, 2023
Member

@astrojuanlu astrojuanlu left a comment

Just one broad question: are there any parts of this dataset that are GeoTIFF-specific and that prevent it from working on generic n-dimensional labeled arrays from xarray?

@astrojuanlu
Member

I did not follow any instruction. Is there one? Yes It would be great to just test one dataset and not on windows at the moment with the github actions. VS code debugging does not work for some reason: discovery error. Also if I want to run it in the terminal I get this:

After you commented this I realized that this plugin didn't have a dedicated CONTRIBUTING.md (#364) and that's fixed now: https://github.com/kedro-org/kedro-plugins/blob/main/kedro-datasets/CONTRIBUTING.md#unit-tests-100-coverage-pytest-pytest-cov

This however runs all the tests. You can try pytest -k xarray to only run those related to xarray. Let me know if that helps.

@tgoelles
Contributor Author

Just one broad question: are there any parts of this dataset that are GeoTIFF-specific and that prevent it from working on generic n-dimensional labeled arrays from xarray?

It uses rioxarray to handle GeoTIFFs: https://corteva.github.io/rioxarray/stable/getting_started/getting_started.html
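For readers following along, a hypothetical catalog entry at this stage of the PR might look like the following (the dataset name, file path, and load_args are illustrative; `masked` is a rioxarray.open_rasterio keyword):

```yaml
# Hypothetical Data Catalog entry for the dataset under discussion.
elevation_scene:
  type: xarray.GeoTiffDataset
  filepath: data/01_raw/elevation.tif
  load_args:
    masked: true   # have rioxarray mask nodata values on load
```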

@@ -202,6 +205,7 @@ def _collect_requirements(requires):
"redis~=4.1",
"requests-mock~=1.6",
"requests~=2.20",
"rioxarray>=0.9.0",


Why is the rioxarray dependency not >=0.15.0 as above?

dataset = GeoTiffDataset(filepath=tmp_path.joinpath("test.tif").as_posix())
dataset.save(data)
reloaded = dataset.load()
xr.testing.assert_allclose(data, reloaded, rtol=1e-5)


Why is the rtol needed? Is it causing issues with the resolution or transform parameters? Pure curiosity.
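For context, a relative tolerance accepts values whose relative (not absolute) difference stays below the threshold, which is what `xr.testing.assert_allclose(..., rtol=1e-5)` checks elementwise. A minimal stdlib illustration (the numbers are made up for demonstration):

```python
import math

# rtol=1e-5 accepts values whose relative difference is below 1e-5,
# mirroring what assert_allclose does for each array element.
a, b = 100000.0, 100000.5  # relative difference is about 5e-6

print(math.isclose(a, b, rel_tol=1e-5))  # True: within tolerance
print(math.isclose(a, b, rel_tol=1e-6))  # False: tolerance too tight
```

Small floating-point drift in the resolution or affine transform after a save/load round trip would explain needing a nonzero rtol here.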

@nickofca-mck nickofca-mck left a comment

Looks good overall. Would be curious about the dependency on rioxarray 0.15.0: rioxarray 0.14.0+ dropped support for Python 3.8 (https://corteva.github.io/rioxarray/stable/history.html#id2), so it could constrain Python 3.7 & 3.8 users (which base Kedro supports for now).

@@ -84,6 +84,7 @@ def _collect_requirements(requires):
]
}
video_require = {"video.VideoDataSet": ["opencv-python~=4.5.5.64"]}
xarray_require = {"xarray.GeoTIFFDataSet": ["rioxarray>=0.15.0"]}


rioxarray 0.14.0+ dropped support for Python 3.8 (https://corteva.github.io/rioxarray/stable/history.html#id2)

Member

kedro 0.19.0 will drop support for Python 3.7. As for Python 3.8, per the current policy it will continue to be supported, but that's up for discussion in kedro-org/kedro#2815

Member

kedro-datasets will follow NEP 29, hence you can focus on Python 3.9+

@astrojuanlu
Member

Hello @tgoelles! We'd love to get this PR in. How can we help?

@tgoelles
Contributor Author

Hello @tgoelles! We'd love to get this PR in. How can we help?

I am currently super busy with other things, but I think it's ready; someone needs to verify that all the checks are OK. I might have time for this again in two weeks.

@merelcht
Member

merelcht commented Jan 4, 2024

I've resolved the merge conflicts for this PR and done some small cleanups, but there are still more things to do:

  1. The doctest is failing for the example provided in GeoTiffDataset (the Python example inside the docstring).
  2. Test coverage is not at 100% for GeoTiffDataset:
Name                                                         Stmts   Miss  Cover   Missing
------------------------------------------------------------------------------------------
kedro_datasets/xarray/geotiff_dataset.py     42      6    86%   100, 103, 117-118, 132-133

@tgoelles Can you finish this or do you want someone to take over?

@astrojuanlu
Member

cc @riley-brady FYI :)

Contributor

@riley-brady riley-brady left a comment

Just some initial comments from scanning through this!

We have a highly specialized implementation here: https://github.com/McK-Internal/CRA-data_pipelines/blob/dev/src/cra_data_pipelines/datasets/geotiff.py.

I would think about saving/loading from S3, because I think GeoTIFF might suffer from the same problem as NetCDF and not allow streaming from cloud buckets (unlike COGs). So you might need to do the temporary file trick that we did with the NetCDF data.

There's also some preprocessing that needs to be done for concatenating a set of globbed files.
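The "temporary file trick" mentioned above can be sketched as follows. This is a hedged, stdlib-only illustration: the helper name `fetch_to_local` is made up, and the "remote" side is simulated with a local file; in the real dataset the bytes would come from an fsspec filesystem (e.g. `lambda: fs.open(remote_path, "rb")` for S3).

```python
import shutil
import tempfile
from pathlib import Path


def fetch_to_local(open_remote, suffix: str = ".tif") -> Path:
    """Copy a remote byte stream to a local temporary file and return its
    path, so GDAL-backed readers that only accept local paths can open it.

    `open_remote` is any zero-argument callable returning a binary
    file-like object, e.g. ``lambda: fs.open(remote_path, "rb")``.
    """
    with open_remote() as src, tempfile.NamedTemporaryFile(
        suffix=suffix, delete=False
    ) as dst:
        shutil.copyfileobj(src, dst)
        return Path(dst.name)


# Simulate the remote side with a local file for demonstration only.
src_path = Path(tempfile.gettempdir()) / "fake_remote.tif"
src_path.write_bytes(b"II*\x00fake-tiff-bytes")
local = fetch_to_local(lambda: open(src_path, "rb"))
print(local.read_bytes() == src_path.read_bytes())  # True
```

A real implementation would also need to clean up the temporary file after loading, as the NetCDF dataset does.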

from kedro.io.core import Version, get_filepath_str, get_protocol_and_path


class GeoTiffDataset(AbstractVersionedDataset[xarray.DataArray, xarray.DataArray]):
Contributor

Wondering if this should be GeoTIFFDataset to follow conventional naming for GeoTIFF. For example, our NetCDF PR was NetCDFDataset.

Contributor

Wondering on organization here. We return all netcdf objects as an xarray dataset here: https://github.com/kedro-org/kedro-plugins/pull/360/files. But organize it under kedro_datasets/netcdf/.

NetCDFs don't have to be loaded as xarray, but it's the most convenient platform IMO. It could also be loaded in with the NetCDF package or iris as a cube object.

Should the current NetCDFDataset object be moved to an xarray folder for clarity? Or should this PR just be housed under kedro_datasets/geotiff?

return rxr.open_rasterio(load_path, **self._load_args)

def _save(self, data: xarray.DataArray) -> None:
save_path = get_filepath_str(self._get_save_path(), self._protocol)
Contributor

Might want to follow this commit: 210e4ed and replace these instances of get_filepath_str. It seemed to have caused an issue with tests breaking in the NetCDF PR.

self._save_args = deepcopy(self.DEFAULT_SAVE_ARGS)
if save_args is not None:
self._save_args.update(save_args)

Contributor

Might want to add a detector for wildcard globbing and set up an _is_multifile like in #360. It's pretty common to have a set of GeoTIFF layers that need to be concatenated on load.


def _load(self) -> xarray.DataArray:
load_path = self._get_load_path().as_posix()
return rxr.open_rasterio(load_path, **self._load_args)
Contributor

rioxarray for some reason ignores all tags inside a GeoTIFF and doesn't assign them appropriately. We deal with this by dual-loading with rasterio.

with rasterio.open(load_path) as data:
    # Pull in relevant tags.
    tags = data.tags()
data = rioxarray.open_rasterio(load_path, **self._load_args).squeeze()
data.attrs.update(tags)

@merelcht
Member

I've been doing some small bits of work fixing the docstring and adding some missing tests, but while doing this I also realised I'm probably not the right person to finish the implementation. I'm not familiar at all with GeoTIFF and its purpose. I noticed the dataset as it stands now doesn't have the credentials argument and is maybe missing some other bits when I compare it to e.g. NetCDFDataset, which was merged yesterday.

Either someone with more knowledge about this data type should finish this PR, or we can take this as our first "experimental" contribution (see #517) and get it in that way. WDYT @astrojuanlu?

@BielStela

Hello! Just a note: I think the name GeoTiffDataset is a bit misleading, because rioxarray wraps rasterio.open(), which can open any kind of raster dataset, i.e. any format supported by GDAL raster drivers. So something along the lines of RasterDataset would be stricter and avoid possible confusion.

@merelcht
Member

Hi @tgoelles, I'm aware that this PR has been open for quite a while and has had a lot of comments. I'm not sure the Kedro team can help at this point to get the dataset to a full ready state. However, we're about to launch our new experimental dataset contribution model, which basically means you can contribute datasets that are more experimental and don't have to have full test coverage etc here https://github.com/kedro-org/kedro-plugins/tree/main/kedro-datasets/kedro_datasets_experimental.

I think the GeoTiffDataset might be a good candidate to get contributed in that way. What do you think?

@tgoelles
Contributor Author

I think the GeoTiffDataset might be a good candidate to get contributed in that way. What do you think?

Yes, I think that would be a possible way. Also, as @BielStela pointed out, it is for geospatial raster data, so there is also an overlap with the already existing NetCDF format.

It could be like this: RasterDataset => rioxarray.
Then we could also think about enforcing that a coordinate reference system (CRS) is used, or at least make that an option. This also needs to work well together with other geospatial datasets like NetCDF and geopandas.

Yes, this needs more experimenting, but it would introduce a huge list of new file formats that can be used.

Will you move it to the experimental folder? I could then rework it with the new concept of dealing with raster data in general and not just GeoTIFF. I have also gained a lot of experience working with this kind of data recently.

@BielStela

I'm happy to give a hand with this if needed @tgoelles :)

@merelcht
Member

@tgoelles I've moved it! There's some issues with the docs, which I'll continue to fix, but you should be able to work on the code 🙂

Contributor

@riley-brady riley-brady May 17, 2024

In light of the recent comments, I would say this should be /raster/raster_dataset.py with the class named RasterDataset, given this is more than just xarray, which is just being used as a frontend to load in TIFF datasets.

Contributor

It would also be interesting to think about whether this supports COGs directly or needs an explicit kwarg to get those loaded in.


def _load(self) -> xarray.DataArray:
load_path = self._get_load_path().as_posix()
return rxr.open_rasterio(load_path, **self._load_args)
Contributor

Replicating a comment from earlier, but I've found that rioxarray can't handle loading in any tags associated with the raster layer. There might be a better solution, but we do:

import rasterio
import rioxarray

with rasterio.open(load_path) as data:
    # Pull in relevant tags like units and return period.
    tags = data.tags()
data = rioxarray.open_rasterio(load_path, **self._load_args).squeeze()
data.attrs.update(tags)

if save_args is not None:
self._save_args.update(save_args)

def _load(self) -> xarray.DataArray:
Contributor

@riley-brady riley-brady May 17, 2024

It's also nice to support multiple files if you want to concatenate many individual layers into a multi-dimensional xarray object. See how we do it for NetCDF here: https://github.com/kedro-org/kedro-plugins/blob/main/kedro-datasets/kedro_datasets/netcdf/netcdf_dataset.py.

self._is_multifile = "*" in str(PurePosixPath(self._filepath).stem)

if self._is_multifile:
    # Note that we require an explicit "concat_dim" entry in the dataset
    # catalog. There might be a better way to do this though.
    concat_dim = self._load_args.get("concat_dim", None)

    # `fs` is the fsspec filesystem and `coords` the coordinate(s) to
    # sort by; both are set up elsewhere in our implementation.
    remote_files = fs.glob(load_path)
    preprocess = None
    if concat_dim:
        preprocess = partial(
            self._preproc_multiple_geotiff, concat_dim=concat_dim
        )
    data = xr.open_mfdataset(
        remote_files,
        preprocess=preprocess,
        combine_attrs="drop_conflicts",
        engine="rasterio",
        **self._load_args,
    ).sortby(coords)

def _preproc_multiple_geotiff(self, x: xr.DataArray, concat_dim: str):
    """Helper function for open_mfdataset to set up coordinates for
    concatenated geotiffs.

    `concat_dim` is a required argument in the DataCatalog and should also
    be an attribute/tag on the geotiff for labeling the coordinate.

    Args:
        x: Individual tif DataArray being concatenated (handled
            automatically by `open_mfdataset`).
        concat_dim: Name of the dimension to concatenate over.

    Returns:
        Individual DataArray slice to be concatenated with proper
        formatting, coordinates, attributes, etc.
    """
    # Squeeze out `band_data`
    x = x.squeeze()

    # Pull the individual filepath so we can use `rasterio` to get the
    # tag attributes.
    path = x.encoding["source"]
    with rasterio.open(path) as data:
        # Pull in relevant tags like units and return period.
        tags = data.tags()

    # Note that this makes things tedious as well: `concat_dim` needs to
    # be in the catalog as well as a tag in the tif to label the axis. You
    # could drop this requirement and just leave the xarray coordinate
    # unlabeled.
    if concat_dim not in tags:
        raise ValueError(
            f"{concat_dim} needs to be a tag in geotiff files to help "
            "with coordinate axis labeling."
        )
    x.attrs.update(tags)

    x = x.expand_dims(concat_dim)
    x = x.assign_coords({concat_dim: tags[concat_dim]})
    return x

@tgoelles
Contributor Author

tgoelles commented Jun 7, 2024

I don't know why this pandas check failed; it is the same code as in the main branch. I tried to solve the DCO failure by following its instructions with git rebase HEAD~228 --signoff, but this then leads to a huge amount of conflicts. There must be an easier way out of this. Also, I am afraid of making mistakes there.

@astrojuanlu
Member

I tried to solve the DCO failure by following its instructions with git rebase HEAD~228 --signoff, but this then leads to a huge amount of conflicts. There must be an easier way out of this. Also, I am afraid of making mistakes there.

You're not the first one...

I see maintainers are allowed to push to this branch, so I'll try to get you some help from the team.

@merelcht merelcht mentioned this pull request Jun 17, 2024
Member

@astrojuanlu astrojuanlu left a comment

I did notice one thing that didn't work, but this PR has been open for very long already so I don't want to block it.

I'm approving it and leaving it to the team to decide what to do with it.

Thanks a lot @tgoelles and also to all the people who reviewed this 🙏🏼

Comment on lines +126 to +129
load_path = self._get_load_path().as_posix()
with rasterio.open(load_path) as data:
tags = data.tags()
data = rxr.open_rasterio(load_path, **self._load_args)
Member

I tried using the dataset with a remote path and it didn't work:

In [17]: ds = GeoTIFFDataset(filepath="https://download.osgeo.org/geotiff/samples/GeogToWGS84GeoKey/GeogToWGS84GeoKey5.tif")

In [18]: ds._load()
---------------------------------------------------------------------------
CPLE_OpenFailedError                      Traceback (most recent call last)
File rasterio/_base.pyx:310, in rasterio._base.DatasetBase.__init__()

File rasterio/_base.pyx:221, in rasterio._base.open_dataset()

File rasterio/_err.pyx:221, in rasterio._err.exc_wrap_pointer()

CPLE_OpenFailedError: download.osgeo.org/geotiff/samples/GeogToWGS84GeoKey/GeogToWGS84GeoKey5.tif: No such file or directory

During handling of the above exception, another exception occurred:

RasterioIOError                           Traceback (most recent call last)
Cell In[18], line 1
----> 1 ds._load()

File ~/Projects/QuantumBlackLabs/Kedro/kedro-plugins/kedro-datasets/kedro_datasets_experimental/rioxarray/geotiff_dataset.py:127, in GeoTIFFDataset._load(self)
    125 def _load(self) -> xarray.DataArray:
    126     load_path = self._get_load_path().as_posix()
--> 127     with rasterio.open(load_path) as data:
    128         tags = data.tags()
    129     data = rxr.open_rasterio(load_path, **self._load_args)

File ~/Projects/QuantumBlackLabs/Kedro/kedro/.venv/lib/python3.11/site-packages/rasterio/env.py:451, in ensure_env_with_credentials.<locals>.wrapper(*args, **kwds)
    448     session = DummySession()
    450 with env_ctor(session=session):
--> 451     return f(*args, **kwds)

File ~/Projects/QuantumBlackLabs/Kedro/kedro/.venv/lib/python3.11/site-packages/rasterio/__init__.py:304, in open(fp, mode, driver, width, height, count, crs, transform, dtype, nodata, sharing, **kwargs)
    301 path = _parse_path(raw_dataset_path)
    303 if mode == "r":
--> 304     dataset = DatasetReader(path, driver=driver, sharing=sharing, **kwargs)
    305 elif mode == "r+":
    306     dataset = get_writer_for_path(path, driver=driver)(
    307         path, mode, driver=driver, sharing=sharing, **kwargs
    308     )

File rasterio/_base.pyx:312, in rasterio._base.DatasetBase.__init__()

RasterioIOError: download.osgeo.org/geotiff/samples/GeogToWGS84GeoKey/GeogToWGS84GeoKey5.tif: No such file or directory

But rasterio knows how to deal with remote paths:

In [15]: with rasterio.open("https://download.osgeo.org/geotiff/samples/GeogToWGS84GeoKey/GeogToWGS84GeoKey5.tif") as data:
    ...:     tags = data.tags()
    ...: 

In [16]: tags
Out[16]: 
{'AREA_OR_POINT': 'Area',
 'TIFFTAG_ARTIST': '',
 'TIFFTAG_DATETIME': '2008:03:01 10:28:18',
 'TIFFTAG_RESOLUTIONUNIT': '2 (pixels/inch)',
 'TIFFTAG_SOFTWARE': 'Paint Shop Pro 8.0',
 'TIFFTAG_XRESOLUTION': '300',
 'TIFFTAG_YRESOLUTION': '300'}

so maybe there's something wrong in how the load path is handled here.
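As a hedged illustration of one way such paths can get mangled (not necessarily the exact failure here): pushing a URL through a POSIX path class normalizes the consecutive slashes, so the `//` after the scheme collapses and downstream code no longer sees a valid remote URL.

```python
from pathlib import PurePosixPath

# PurePosixPath normalizes consecutive slashes, so a URL-style path
# loses the "//" after the scheme; GDAL/rasterio then treats what's
# left as a local path and fails with "No such file or directory".
url = (
    "https://download.osgeo.org/geotiff/samples/"
    "GeogToWGS84GeoKey/GeogToWGS84GeoKey5.tif"
)
mangled = PurePosixPath(url).as_posix()
print(mangled)
# https:/download.osgeo.org/geotiff/samples/GeogToWGS84GeoKey/GeogToWGS84GeoKey5.tif
```

This is consistent with the suggestion below to use a helper that preserves the protocol instead of `as_posix()`.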

Contributor

@astrojuanlu Use get_filepath_str instead

The reason is that self._get_load_path doesn't handle protocols correctly. I suggest we merge this now and handle this separately, as I think it probably affects more than one dataset.

Member

I suggest we merge this now and handle this separately, as I think it probably affects more than one dataset

I agree!

noklam and others added 3 commits July 2, 2024 12:20
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
@astrojuanlu
Member

CI failure is unrelated #740 (comment)

@astrojuanlu astrojuanlu merged commit 5ab51af into kedro-org:main Jul 5, 2024
14 checks passed
@astrojuanlu
Member

This is merged :shipit: thanks again everyone for your patience!

merelcht added a commit to galenseilis/kedro-plugins that referenced this pull request Aug 27, 2024
* refactor(datasets): deprecate "DataSet" type names (#328)

  Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): deprecate "DataSet" type names (api)
* refactor(datasets): deprecate "DataSet" type names (biosequence)
* refactor(datasets): deprecate "DataSet" type names (dask)
* refactor(datasets): deprecate "DataSet" type names (databricks)
* refactor(datasets): deprecate "DataSet" type names (email)
* refactor(datasets): deprecate "DataSet" type names (geopandas)
* refactor(datasets): deprecate "DataSet" type names (holoviews)
* refactor(datasets): deprecate "DataSet" type names (json)
* refactor(datasets): deprecate "DataSet" type names (matplotlib)
* refactor(datasets): deprecate "DataSet" type names (networkx)
* refactor(datasets): deprecate "DataSet" type names (pandas)
* refactor(datasets): deprecate "DataSet" type names (pandas.csv_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.deltatable_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.excel_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.feather_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.gbq_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.generic_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.hdf_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.json_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.parquet_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.sql_dataset)
* refactor(datasets): deprecate "DataSet" type names (pandas.xml_dataset)
* refactor(datasets): deprecate "DataSet" type names (pickle)
* refactor(datasets): deprecate "DataSet" type names (pillow)
* refactor(datasets): deprecate "DataSet" type names (plotly)
* refactor(datasets): deprecate "DataSet" type names (polars)
* refactor(datasets): deprecate "DataSet" type names (redis)
* refactor(datasets): deprecate "DataSet" type names (snowflake)
* refactor(datasets): deprecate "DataSet" type names (spark)
* refactor(datasets): deprecate "DataSet" type names (svmlight)
* refactor(datasets): deprecate "DataSet" type names (tensorflow)
* refactor(datasets): deprecate "DataSet" type names (text)
* refactor(datasets): deprecate "DataSet" type names (tracking)
* refactor(datasets): deprecate "DataSet" type names (video)
* refactor(datasets): deprecate "DataSet" type names (yaml)
* chore(datasets): ignore TensorFlow coverage issues
* added basic code for geotiff
* renamed to xarray
* renamed to xarray
* added load and self args
* only local files
* added empty test
* added test data
* added rioxarray requirements
* reformat with black
* rioxarray 0.14
* rioxarray 0.15
* rioxarray 0.12
* rioxarray 0.9
* fixed dataset typo
* fixed docstring for sphinx
* run black
* sort imports
* class docstring
* black
* fixed pylint
* added release notes
* added yaml example
* improve testing WIP
* basic test success
* test reloaded
* test exists
* added version
* basic test suite
* run black
* added example and test it
* deleted duplications
* fixed position of example
* black
* style: Introduce `ruff` for linting in all plugins. (#354)
* feat(datasets): create custom `DeprecationWarning` (#356)
* feat(datasets): use the custom deprecation warning
* chore(datasets): show Kedro's deprecation warnings
* fix(datasets): remove unused imports in test files
* docs(datasets): add note about DataSet deprecation (#357)
* test(datasets): skip `tensorflow` tests on Windows (#363)
* ci: Pin `tables` version (#370)
* build(datasets): Release `1.7.1` (#378)
* docs: Update CONTRIBUTING.md and add one for `kedro-datasets` (#379)
* ci(datasets): Run tensorflow tests separately from other dataset tests (#377)
* feat: Kedro-Airflow convert all pipelines option (#335)
* docs(datasets): blacken code in rst literal blocks (#362)
* docs: cloudpickle is an interesting extension of the pickle functionality (#361)
* fix(datasets): Fix secret scan entropy error (#383)
* style: Rename mentions of `DataSet` to `Dataset` in `kedro-airflow` and `kedro-telemetry` (#384)
* feat(datasets): Migrated `PartitionedDataSet` and `IncrementalDataSet` from main repository to kedro-datasets (#253)
* fix: backwards compatibility for `kedro-airflow` (#381)
* added metadata
* after linting
* ignore ruff PLR0913

* fix(datasets): Don't warn for SparkDataset on Databricks when using s3 (#341)

Signed-off-by: Alistair McKelvie <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Hot fix for RTD due to bad pip version (#396)

fix RTD

Signed-off-by: Nok <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Pin pip version temporarily (#398)

* Pin pip version temporarily

Signed-off-by: Ankita Katiyar <[email protected]>

* Hive support failures

Signed-off-by: Ankita Katiyar <[email protected]>

* Also pin pip on lint

Signed-off-by: Ankita Katiyar <[email protected]>

* Temporary ignore databricks spark tests

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* perf(datasets): don't create connection until need (#281)

* perf(datasets): delay `Engine` creation until need

Signed-off-by: Deepyaman Datta <[email protected]>

* chore: don't check coverage in TYPE_CHECKING block

Signed-off-by: Deepyaman Datta <[email protected]>

* perf(datasets): don't connect in `__init__` method

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): fix tests to touch `create_engine`

Signed-off-by: Deepyaman Datta <[email protected]>

* perf(datasets): don't connect in `__init__` method

Signed-off-by: Deepyaman Datta <[email protected]>

* style(datasets): exec Ruff on sql_dataset.py :dog:

Signed-off-by: Deepyaman Datta <[email protected]>

* Undo changes to `engines` values type (for Sphinx)

Signed-off-by: Deepyaman Datta <[email protected]>

* Patch Sphinx build by removing `Engine` references

* perf(datasets): don't connect in `__init__` method

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): don't require coverage for import

* chore(datasets): del unused `TYPE_CHECKING` import

* docs(datasets): document lazy connection in README

* perf(datasets): remove create in `SQLQueryDataset`

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): do not return the created conn

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
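
The lazy-connection commits above can be illustrated with a small sketch. Names here are illustrative, not the actual kedro-datasets API (which caches SQLAlchemy engines); the point is that `__init__` only stores the connection string, and the connection is created on first use and cached at class level so datasets sharing a connection string reuse it.

```python
import sqlite3

class LazySQLDataset:
    # Class-level cache: connection string -> open connection.
    connections: dict = {}

    def __init__(self, con: str):
        self._con = con  # cheap: no connection is opened here

    @property
    def connection(self):
        # Connect lazily, on first access only.
        if self._con not in type(self).connections:
            type(self).connections[self._con] = sqlite3.connect(self._con)
        return type(self).connections[self._con]

ds = LazySQLDataset(":memory:")  # constructing never touches the database
result = ds.connection.execute("SELECT 1").fetchone()
```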

* chore: Drop Python 3.7 support for kedro-plugins (#392)

* Remove references to Python 3.7

Signed-off-by: lrcouto <[email protected]>

* Revert kedro-dataset changes

Signed-off-by: lrcouto <[email protected]>

* Revert kedro-dataset changes

Signed-off-by: lrcouto <[email protected]>

* Add information to release docs

Signed-off-by: lrcouto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): support Polars lazy evaluation  (#350)

* feat(datasets): add PolarsDataset to support the Polars lazy API

Signed-off-by: Matthias Roels <[email protected]>

* Fix(datasets): rename PolarsDataSet to PolarsDataset

Add PolarsDataSet as an alias for PolarsDataset with
deprecation warning.

Signed-off-by: Matthias Roels <[email protected]>

* Fix(datasets): apply ruff linting rules

Signed-off-by: Matthias Roels <[email protected]>

* Fix(datasets): Correct pattern matching when Raising exceptions

Corrected PolarsDataSet to PolarsDataset in the pattern to match
in test_load_missing_file

Signed-off-by: Matthias Roels <[email protected]>

* fix(datasets): clean up PolarsDataset related code

Remove reference to PolarsDataSet as this is not required for new
dataset implementations.

Signed-off-by: Matthias Roels <[email protected]>

* feat(datasets): Rename Polars Datasets to better describe their intent

Signed-off-by: Matthias Roels <[email protected]>

* feat(datasets): clean up LazyPolarsDataset

Signed-off-by: Matthias Roels <[email protected]>

* fix(datasets): increase test coverage for PolarsDataset classes

Signed-off-by: Matthias Roels <[email protected]>

* docs(datasets): add renamed Polars datasets to docs

Signed-off-by: Matthias Roels <[email protected]>

* docs(datasets): Add new polars datasets to release notes

Signed-off-by: Matthias Roels <[email protected]>

* fix(datasets): load_args not properly passed to LazyPolarsDataset.load

Signed-off-by: Matthias Roels <[email protected]>

* docs(datasets): fix spelling error in release notes

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Matthias Roels <[email protected]>

---------

Signed-off-by: Matthias Roels <[email protected]>
Signed-off-by: Matthias Roels <[email protected]>
Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Matthias Roels <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(datasets): Release `1.8.0` (#406)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(airflow): Release 0.7.0 (#407)

* bump version

Signed-off-by: Ankita Katiyar <[email protected]>

* Update release notes

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(telemetry): Release 0.3.0 (#408)

Bump version

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(docker): Release 0.4.0 (#409)

Bump version

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* style(airflow): blacken README.md of Kedro-Airflow (#418)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Fix missing jQuery (#414)

Fix missing jQuery

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Fix Lazy Polars dataset to use the new-style base class (#413)

* Fix Lazy Polars dataset to use the new-style base class

Fix gh-412

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Update release notes

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert "Update release notes"

This reverts commit 92ceea6d8fa412abf3d8abd28a2f0a22353867ed.

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Sajid Alam <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets):  lazily load `partitions` classes (#411)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): fix code blocks and `data_set` use (#417)

* chore(datasets):  lazily load `partitions` classes

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): run doctests to check examples run

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): keep running tests amidst failures

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): format ManagedTableDataset example

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): ignore breaking mods for doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* style(airflow): black code in Kedro-Airflow README

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): fix example syntax, and autoformat

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `>>> ` prefix for YAML code

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): replace `data_set`s with `dataset`s

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): undo changes for running doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* revert(datasets):  undo lazily load `partitions` classes

Refs: 3fdc5a8efa034fa9a18b7683a942415915b42fb5
Signed-off-by: Deepyaman Datta <[email protected]>

* revert(airflow): undo black code in Kedro-Airflow README

Refs: dc3476ea36bac98e2adcc0b52a11b0f90001e31d

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: TF model load failure when model is saved as a TensorFlow Saved Model format (#410)

* fixes TF model load failure when the model is saved in the TensorFlow SavedModel format

when a model is saved in the TensorFlow SavedModel format ("tf", the default option in TF 2.x) via the catalog, the subsequent loading of that model for further use in a later node fails. The issue is that the model files don't get copied into the temporary folder, presumably because the `_fs.get` function "thinks" that the provided path is a file and not a folder. Adding a trailing "/" to the path fixes the issue.

Signed-off-by: Edouard59 <[email protected]>
Signed-off-by: tgoelles <[email protected]>
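
The trailing-slash fix above can be demonstrated with fsspec's in-memory filesystem (paths here are illustrative, not the actual dataset code). A trailing "/" marks the remote path as a directory, so fsspec copies its contents recursively instead of treating the path as a single file.

```python
import os
import tempfile

import fsspec  # third-party; the filesystem layer used by kedro-datasets

# Stage a fake SavedModel directory on an in-memory filesystem.
fs = fsspec.filesystem("memory")
fs.pipe_file("/models/tf_model/saved_model.pb", b"stub")
fs.pipe_file("/models/tf_model/variables/variables.index", b"stub")

local_dir = tempfile.mkdtemp()
# Note the trailing "/": the source is treated as a directory and its
# contents are fetched recursively into local_dir.
fs.get("/models/tf_model/", local_dir + "/", recursive=True)

copied = [
    os.path.relpath(os.path.join(root, name), local_dir)
    for root, _, files in os.walk(local_dir)
    for name in files
]
```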

* chore: Drop support for Python 3.7 on kedro-datasets (#419)

* Drop support for Python 3.7 on kedro-datasets

Signed-off-by: lrcouto <[email protected]>

* Remove redundant 3.8 markers

Signed-off-by: lrcouto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>
Signed-off-by: Sajid Alam <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>

* test(datasets): run doctests to check examples run (#416)

* chore(datasets):  lazily load `partitions` classes

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): run doctests to check examples run

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): keep running tests amidst failures

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): format ManagedTableDataset example

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): ignore breaking mods for doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* style(airflow): black code in Kedro-Airflow README

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): fix example syntax, and autoformat

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `>>> ` prefix for YAML code

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): remove `kedro.extras.datasets` ref

Signed-off-by: Deepyaman Datta <[email protected]>

* docs(datasets): replace `data_set`s with `dataset`s

Signed-off-by: Deepyaman Datta <[email protected]>

* refactor(datasets): run doctests separately

Signed-off-by: Deepyaman Datta <[email protected]>

* separate dataset-doctests

Signed-off-by: Nok <[email protected]>

* chore(datasets): ignore non-passing tests to make CI pass

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): fix comment location

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): fix .py.py

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): don't measure coverage on doctest run

Signed-off-by: Deepyaman Datta <[email protected]>

* build(datasets): fix windows and snowflake stuff in Makefile

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: Nok <[email protected]>
Co-authored-by: Nok <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Add support for `databricks-connect>=13.0` (#352)

Signed-off-by: Miguel Rodriguez Gutierrez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(telemetry): remove double execution by moving to after catalog created hook (#422)

* remove double execution by moving to after catalog created hook

Signed-off-by: Florian Roessler <[email protected]>

* update release notes

Signed-off-by: Florian Roessler <[email protected]>

* fix tests

Signed-off-by: Florian Roessler <[email protected]>

* remove unused fixture

Signed-off-by: Florian Roessler <[email protected]>

---------

Signed-off-by: Florian Roessler <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs: Add python version support policy to plugin `README.md`s (#425)

* Add python version support policy to plugin readmes

Signed-off-by: Merel Theisen <[email protected]>

* Temporarily pin connexion

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(airflow): Use new docs link (#393)

Use new docs link

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Jo Stichbury <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* style: Add shared CSS and meganav to datasets docs (#400)

* Add shared CSS and meganav

Signed-off-by: Jo Stichbury <[email protected]>

* Add end of file

Signed-off-by: Jo Stichbury <[email protected]>

* Add new heap data source

Signed-off-by: Jo Stichbury <[email protected]>

* adjust heap parameter

Signed-off-by: Jo Stichbury <[email protected]>

* Remove nav_version next to Kedro logo in top left; add Kedro logo

* Revise project name and author name

Signed-off-by: Jo Stichbury <[email protected]>

* Use full kedro icon and type for logo

* Add close btn to mobile nav

Signed-off-by: vladimir-mck <[email protected]>

* Add css for mobile nav logo image

Signed-off-by: vladimir-mck <[email protected]>

* Update close button for mobile nav

Signed-off-by: vladimir-mck <[email protected]>

* Add open button to mobile nav

Signed-off-by: vladimir-mck <[email protected]>

* Delete kedro-datasets/docs/source/kedro-horizontal-color-on-light.svg

Signed-off-by: vladimir-mck <[email protected]>

* Update conf.py

Signed-off-by: vladimir-mck <[email protected]>

* Update layout.html

Add links to subprojects

Signed-off-by: Jo Stichbury <[email protected]>

* Remove svg from docs -- not needed??

Signed-off-by: Jo Stichbury <[email protected]>

* linter error fix

Signed-off-by: Jo Stichbury <[email protected]>

---------

Signed-off-by: Jo Stichbury <[email protected]>
Signed-off-by: vladimir-mck <[email protected]>
Co-authored-by: Tynan DeBold <[email protected]>
Co-authored-by: vladimir-mck <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Add Hugging Face datasets (#344)

* Add HuggingFace datasets

Co-authored-by: Danny Farah <[email protected]>
Co-authored-by: Kevin Koga <[email protected]>
Co-authored-by: Mate Scharnitzky <[email protected]>
Co-authored-by: Tomer Shor <[email protected]>
Co-authored-by: Pierre-Yves Mousset <[email protected]>
Co-authored-by: Bela Chupal <[email protected]>
Co-authored-by: Khangjrakpam Arjun <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Apply suggestions from code review

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

Co-authored-by: Joel <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>

* Typo

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Fix docstring

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add docstring for HFTransformerPipelineDataset

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Use intersphinx for cross references in Hugging Face docstrings

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add docstring for HFDataset

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add missing test dependencies

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add tests for huggingface datasets

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Fix HFDataset.save

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Add test for HFDataset.list_datasets

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Use new name

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Consolidate imports

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Danny Farah <[email protected]>
Co-authored-by: Kevin Koga <[email protected]>
Co-authored-by: Mate Scharnitzky <[email protected]>
Co-authored-by: Tomer Shor <[email protected]>
Co-authored-by: Pierre-Yves Mousset <[email protected]>
Co-authored-by: Bela Chupal <[email protected]>
Co-authored-by: Khangjrakpam Arjun <[email protected]>
Co-authored-by: Joel <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): fix `dask.ParquetDataset` doctests (#439)

* test(datasets): fix `dask.ParquetDataset` doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): use `tmp_path` fixture in doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): simplify by not passing the schema

Signed-off-by: Deepyaman Datta <[email protected]>

* test(datasets): ignore conftest for doctests cover

Signed-off-by: Deepyaman Datta <[email protected]>

* Create MANIFEST.in

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* refactor: Remove `DataSet` aliases and mentions (#440)

Signed-off-by: Merel Theisen <[email protected]>

* chore(datasets): replace "Pyspark" with "PySpark" (#423)

Consistently write "PySpark" rather than "Pyspark"

Also, fix list formatting

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): make `api.APIDataset` doctests run (#448)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix `pandas.GenericDataset` doctest (#445)

Fix pandas.GenericDataset doctest

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): make datasets arguments keywords only (#358)

* feat(datasets): make `APIDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `BioSequenceDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ParquetDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `EmailMessageDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GeoJSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `HoloviewsWriter.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `MatplotlibWriter.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GraphMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make NetworkX `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `PickleDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ImageDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make plotly `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `PlotlyDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make polars `CSVDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make polars `GenericDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make redis `PickleDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SnowparkTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SVMLightDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `TensorFlowModelDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `TextDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `YAMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ManagedTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `VideoDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `CSVDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `DeltaTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `ExcelDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `FeatherDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GBQTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `GenericDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make pandas `JSONDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make pandas `ParquetDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SQLTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `XMLDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `HDFDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `DeltaTableDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkHiveDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkJDBCDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `SparkStreamingDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `IncrementalDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* feat(datasets): make `LazyPolarsDataset.__init__` keyword only

Signed-off-by: Felix Scherz <[email protected]>

* docs(datasets): update doctests for HoloviewsWriter

Signed-off-by: Felix Scherz <[email protected]>

* Update release notes

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Felix Scherz <[email protected]>
Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Felix Scherz <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>
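
The keyword-only change applied across all the datasets above boils down to one signature feature, sketched here with a hypothetical dataset (not a real kedro-datasets class): a bare `*` in `__init__` forces callers to pass every argument by name, matching how datasets are configured by key in YAML catalogs.

```python
class ExampleDataset:
    # The bare "*" makes filepath and load_args keyword-only.
    def __init__(self, *, filepath: str, load_args: dict = None):
        self.filepath = filepath
        self.load_args = load_args or {}

ds = ExampleDataset(filepath="data.csv")  # OK: keyword argument

positional_rejected = False
try:
    ExampleDataset("data.csv")  # positional arguments no longer allowed
except TypeError:
    positional_rejected = True
```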

* chore: Drop support for python 3.8 on kedro-datasets (#442)

* Drop support for python 3.8 on kedro-datasets

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: Dmitry Sorokin <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): add outputs to matplotlib doctests (#449)

* test(datasets): add outputs to matplotlib doctests

Signed-off-by: Deepyaman Datta <[email protected]>

* Update Makefile

Signed-off-by: Deepyaman Datta <[email protected]>

* Reformat code example, line length is short enough

* Update kedro-datasets/kedro_datasets/matplotlib/matplotlib_writer.py

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix more doctest issues  (#451)

Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): fix failing doctests in Windows CI (#457)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): fix accidental reference to NumPy (#450)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): don't pollute dev env in doctests (#452)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat: Add tools to heap event (#430)

* Add add-on data to heap event

Signed-off-by: lrcouto <[email protected]>

* Move addons logic to _get_project_property

Signed-off-by: Ankita Katiyar <[email protected]>

* Add condition for pyproject.toml

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix tests

Signed-off-by: Ankita Katiyar <[email protected]>

* add tools to mock

Signed-off-by: lrcouto <[email protected]>

* lint

Signed-off-by: lrcouto <[email protected]>

* Update tools test

Signed-off-by: Ankita Katiyar <[email protected]>

* Add after_context_created tools test

Signed-off-by: lrcouto <[email protected]>

* Update rename to tools

Signed-off-by: Ankita Katiyar <[email protected]>

* Update kedro-telemetry/tests/test_plugin.py

Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): install deps in single `pip install` (#454)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(datasets): Bump s3fs (#463)

* Use mocking for AWS responses

Signed-off-by: Merel Theisen <[email protected]>

* Add change to release notes

Signed-off-by: Merel Theisen <[email protected]>

* Update release notes

Signed-off-by: Merel Theisen <[email protected]>

* Use pytest xfail instead of commenting out test

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* test(datasets): make SQL dataset examples runnable (#455)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): correct pandas-gbq as py311 dependency (#460)

* update pandas-gbq dependency declaration

Signed-off-by: Onur Kuru <[email protected]>

* fix fmt

Signed-off-by: Onur Kuru <[email protected]>

---------

Signed-off-by: Onur Kuru <[email protected]>
Co-authored-by: Ahdra Merali <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): Document `IncrementalDataset` (#468)

Document IncrementalDataset

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Update datasets to be arguments keyword only (#466)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Clean up code for old dataset syntax compatibility (#465)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: Update scikit-learn version (#469)

Update scikit-learn version

Signed-off-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): support versioning data partitions (#447)

* feat(datasets): support versioning data partitions

Signed-off-by: Deepyaman Datta <[email protected]>

* Remove unused import

Signed-off-by: Deepyaman Datta <[email protected]>

* chore(datasets): use keyword arguments when needed

Signed-off-by: Deepyaman Datta <[email protected]>

* Apply suggestions from code review

Signed-off-by: Deepyaman Datta <[email protected]>

* Update kedro-datasets/kedro_datasets/partitions/partitioned_dataset.py

Signed-off-by: Deepyaman Datta <[email protected]>

---------

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): Improve documentation index (#428)

Rework documentation index

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(datasets): update wrong docstring about `con` (#461)

Signed-off-by: Deepyaman Datta <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(datasets): Release `2.0.0`  (#472)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(telemetry): Pin `PyYAML` (#474)

Pin PyYaml

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(telemetry): Release 0.3.1 (#475)

Signed-off-by: tgoelles <[email protected]>

* docs(datasets): Fix broken links in README (#477)

Fix broken links in README

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): replace more "data_set" instances (#476)

Signed-off-by: Deepyaman Datta <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix doctests (#488)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Fix delta + incremental dataset docstrings (#489)

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(airflow): Post 0.19 cleanup (#478)

* bump version

Signed-off-by: Ankita Katiyar <[email protected]>

* Unbump version and clean test

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Split big test into smaller tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update conftest

Signed-off-by: Ankita Katiyar <[email protected]>

* Update conftest

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix coverage

Signed-off-by: Ankita Katiyar <[email protected]>

* Try unpin airflow

Signed-off-by: Ankita Katiyar <[email protected]>

* remove datacatalog step

Signed-off-by: Ankita Katiyar <[email protected]>

* Change node

Signed-off-by: Ankita Katiyar <[email protected]>

* update tasks test step

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert to older airflow and constraint pendulum

Signed-off-by: Ankita Katiyar <[email protected]>

* Update template

Signed-off-by: Ankita Katiyar <[email protected]>

* Update message in e2e step

Signed-off-by: Ankita Katiyar <[email protected]>

* Final cleanup

Signed-off-by: Ankita Katiyar <[email protected]>

* Update kedro-airflow/pyproject.toml

Signed-off-by: Nok Lam Chan <[email protected]>

* Pin apache-airflow again

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Nok Lam Chan <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build(airflow): Release 0.8.0 (#491)

Bump version

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: telemetry metadata (#495)

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: Update tests on kedro-docker for 0.5.0 release. (#496)

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* Lint

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* fix test path for e2e tests

Signed-off-by: lrcouto <[email protected]>

* fix requirements path on dockerfiles

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* Remove redundant test

Signed-off-by: lrcouto <[email protected]>

* Alter test for custom GID and UID

Signed-off-by: lrcouto <[email protected]>

* Update release notes

Signed-off-by: lrcouto <[email protected]>

* Revert version bump to put in in separate PR

Signed-off-by: lrcouto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* build: Release kedro-docker 0.5.0 (#497)

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* bump version to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* Lint

Signed-off-by: lrcouto <[email protected]>

* update e2e tests to use new starters

Signed-off-by: lrcouto <[email protected]>

* fix test path for e2e tests

Signed-off-by: lrcouto <[email protected]>

* fix requirements path on dockerfiles

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* update tests to fit with current log format

Signed-off-by: lrcouto <[email protected]>

* Remove redundant test

Signed-off-by: lrcouto <[email protected]>

* Alter test for custom GID and UID

Signed-off-by: lrcouto <[email protected]>

* Update release notes

Signed-off-by: lrcouto <[email protected]>

* Revert version bump to put in in separate PR

Signed-off-by: lrcouto <[email protected]>

* Bump kedro-docker to 0.5.0

Signed-off-by: lrcouto <[email protected]>

* Add release notes

Signed-off-by: lrcouto <[email protected]>

* Update kedro-docker/RELEASE.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>

---------

Signed-off-by: lrcouto <[email protected]>
Signed-off-by: L. R. Couto <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(datasets): Update partitioned dataset docstring (#502)

Update partitioned dataset docstring

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* Fix GeotiffDataset import + casing

Signed-off-by: Merel Theisen <[email protected]>

* Fix lint

Signed-off-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Relax pandas.HDFDataSet dependencies which are broken on Windows (#426)

* Relax pandas.HDFDataSet dependencies which are broken on Windows (#402)

Signed-off-by: Yolan Honoré-Rougé <[email protected]>

* Update RELEASE.md

Signed-off-by: Yolan Honoré-Rougé <[email protected]>

* Apply suggestions from code review

Signed-off-by: Merel Theisen <[email protected]>

* Update kedro-datasets/setup.py

Signed-off-by: Merel Theisen <[email protected]>

---------

Signed-off-by: Yolan Honoré-Rougé <[email protected]>
Signed-off-by: Merel Theisen <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: airflow metadata (#498)

* Add example pipeline entry to metadata declaration

Signed-off-by: Ahdra Merali <[email protected]>

* Fix entry

Signed-off-by: Ahdra Merali <[email protected]>

* Make entries consistent

Signed-off-by: Ahdra Merali <[email protected]>

* Add tools to config

Signed-off-by: Ahdra Merali <[email protected]>

* fix: telemetry metadata (#495)

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: Ahdra Merali <[email protected]>

* Revert "Add tools to config"

This reverts commit 14732d772a3c2f4787063071a68fdf1512c93488.

Signed-off-by: Ahdra Merali <[email protected]>

* Quick fix

Signed-off-by: Ahdra Merali <[email protected]>

* Lint

Signed-off-by: Ahdra Merali <[email protected]>

* Remove outdated config key

Signed-off-by: Ahdra Merali <[email protected]>

* Use kedro new instead of cookiecutter

Signed-off-by: Ahdra Merali <[email protected]>

---------

Signed-off-by: Ahdra Merali <[email protected]>
Signed-off-by: Dmitry Sorokin <[email protected]>
Co-authored-by: Dmitry Sorokin <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore(airflow): Bump `apache-airflow` version (#511)

* Bump apache airflow

Signed-off-by: Ankita Katiyar <[email protected]>

* Change starter

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e test steps

Signed-off-by: Ankita Katiyar <[email protected]>

* Update e2e test steps

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): Unpin dask (#522)

* Unpin dask

Signed-off-by: Ankita Katiyar <[email protected]>

* Update doctest

Signed-off-by: Ankita Katiyar <[email protected]>

* Update doctest

Signed-off-by: Ankita Katiyar <[email protected]>

* Update kedro-datasets/setup.py

Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Add `MatlabDataset` to `kedro-datasets` (#515)

* Refork and commit kedro matlab datasets

Signed-off-by: samuelleeshemen <[email protected]>

* Fix lint, add to docs

Signed-off-by: Ankita Katiyar <[email protected]>

* Try fixing docstring

Signed-off-by: Ankita Katiyar <[email protected]>

* Try fixing save

Signed-off-by: Ankita Katiyar <[email protected]>

* Try fix docstest

Signed-off-by: Ankita Katiyar <[email protected]>

* Fix unit tests

Signed-off-by: Ankita Katiyar <[email protected]>

* Update release notes:

Signed-off-by: Ankita Katiyar <[email protected]>

* Not hardcode load mode

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: samuelleeshemen <[email protected]>
Signed-off-by: Ankita Katiyar <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(airflow): Pin `Flask-Session` version (#521)

* Restrict pendulum version

Signed-off-by: Ankita Katiyar <[email protected]>

* Update airflow init step

Signed-off-by: Ankita Katiyar <[email protected]>

* Remove pendulum pin

Signed-off-by: Ankita Katiyar <[email protected]>

* Update create connections step

Signed-off-by: Ankita Katiyar <[email protected]>

* Pin flask session

Signed-off-by: Ankita Katiyar <[email protected]>

* Add comment

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat: `kedro-airflow` group in memory nodes (#241)

* feat: option to group in-memory nodes

Signed-off-by: Simon Brugman <[email protected]>

* fix: MemoryDataset

Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/README.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/README.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/README.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/RELEASE.md

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/plugin.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/tests/test_node_grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/tests/test_node_grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/grouping.py

Co-authored-by: Merel Theisen <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* Update kedro-airflow/kedro_airflow/grouping.py

Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>

* fix: tests

Signed-off-by: Simon Brugman <[email protected]>

* Bump minimum kedro version

Signed-off-by: Simon Brugman <[email protected]>

* fixes

Signed-off-by: Simon Brugman <[email protected]>

---------

Signed-off-by: Simon Brugman <[email protected]>
Signed-off-by: Simon Brugman <[email protected]>
Co-authored-by: Merel Theisen <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(datasets): Update pyproject.toml to pin Kedro 0.19 for kedro-datasets (#526)

Update pyproject.toml

Signed-off-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(airflow): include environment name in DAG filename (#492)

* feat: include environment name in DAG file

Signed-off-by: Simon Brugman <[email protected]>

* doc: add update to release notes

Signed-off-by: Simon Brugman <[email protected]>

---------

Signed-off-by: Simon Brugman <[email protected]>
Co-authored-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Enable search-as-you type on Kedro-datasets docs (#532)

* done

Signed-off-by: rashidakanchwala <[email protected]>

* fix lint

Signed-off-by: rashidakanchwala <[email protected]>

---------

Signed-off-by: rashidakanchwala <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Debug and fix `kedro-datasets` nightly build failures (#541)

* pin deltalake

* Update kedro-datasets/setup.py

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Update setup.py

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* sort order and compare

* Update setup.py

* lint

* pin deltalake

* add comment to pin

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* feat(datasets): Dataset Preview Refactor  (#504)

* test

* done

* change from _preview to preview

* fix lint and tests

* added docstrings

* rtd fix

* rtd fix

* fix rtd

Signed-off-by: rashidakanchwala <[email protected]>

* fix rtd

Signed-off-by: rashidakanchwala <[email protected]>

* fix rtd - pls"

Signed-off-by: rashidakanchwala <[email protected]>

* add nitpick ignore

Signed-off-by: rashidakanchwala <[email protected]>

* test again

Signed-off-by: rashidakanchwala <[email protected]>

* move tracking datasets to constant

Signed-off-by: rashidakanchwala <[email protected]>

* remove comma

Signed-off-by: rashidakanchwala <[email protected]>

* remove Newtype from json_dataset"

Signed-off-by: rashidakanchwala <[email protected]>

* pls work

Signed-off-by: rashidakanchwala <[email protected]>

* confirm rtd works locally

Signed-off-by: rashidakanchwala <[email protected]>

* juanlu's fix

Signed-off-by: rashidakanchwala <[email protected]>

* fix tests

Signed-off-by: rashidakanchwala <[email protected]>

* remove unnecessary stuff from conf.py

Signed-off-by: rashidakanchwala <[email protected]>

* fixes based on review

Signed-off-by: rashidakanchwala <[email protected]>

* changes based on review

Signed-off-by: rashidakanchwala <[email protected]>

* fix tests

Signed-off-by: rashidakanchwala <[email protected]>

* add suffix Preview

Signed-off-by: rashidakanchwala <[email protected]>

* change img return type to bytes

Signed-off-by: rashidakanchwala <[email protected]>

* fix tests

Signed-off-by: rashidakanchwala <[email protected]>

* update release note

* fix lint

---------

Signed-off-by: rashidakanchwala <[email protected]>
Co-authored-by: ravi-kumar-pilla <[email protected]>
Co-authored-by: Sajid Alam <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix(datasets): Drop pyarrow constraint when using snowpark (#538)

* Free pyarrow req

Signed-off-by: Felipe Monroy <[email protected]>

* Free pyarrow req

Signed-off-by: Felipe Monroy <[email protected]>

---------

Signed-off-by: Felipe Monroy <[email protected]>
Co-authored-by: Nok Lam Chan <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs: Update kedro-telemetry docs on which data is collected (#546)

* Update data being collected

---------

Signed-off-by: Dmitry Sorokin <[email protected]>
Signed-off-by: Dmitry Sorokin <[email protected]>
Co-authored-by: Jo Stichbury <[email protected]>
Co-authored-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* ci(docker): Trying to fix e2e tests (#548)

* Pin psutil

Signed-off-by: Ankita Katiyar <[email protected]>

* Add no capture to test

Signed-off-by: Ankita Katiyar <[email protected]>

* Update pip version

Signed-off-by: Ankita Katiyar <[email protected]>

* Update call

Signed-off-by: Ankita Katiyar <[email protected]>

* Update pip

Signed-off-by: Ankita Katiyar <[email protected]>

* pip ruamel

Signed-off-by: Ankita Katiyar <[email protected]>

* change pip v

Signed-off-by: Ankita Katiyar <[email protected]>

* change pip v

Signed-off-by: Ankita Katiyar <[email protected]>

* show stdout

Signed-off-by: Ankita Katiyar <[email protected]>

* use no cache dir

Signed-off-by: Ankita Katiyar <[email protected]>

* revert extra changes

Signed-off-by: Ankita Katiyar <[email protected]>

* pin pip

Signed-off-by: Ankita Katiyar <[email protected]>

* gitpod

Signed-off-by: Ankita Katiyar <[email protected]>

* pip inside dockerfile

Signed-off-by: Ankita Katiyar <[email protected]>

* pip pip inside dockerfile

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* chore: bump actions versions (#539)

* Unpin pip and bump actions versions

Signed-off-by: Ankita Katiyar <[email protected]>

* remove version

Signed-off-by: Ankita Katiyar <[email protected]>

* Revert unpinning of pip

Signed-off-by: Ankita Katiyar <[email protected]>

---------

Signed-off-by: Ankita Katiyar <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* docs(telemetry): Direct readers to Kedro documentation for further information on telemetry (#555)

* Direct readers to Kedro documentation for further information on telemetry

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Wording improvements

Co-authored-by: Jo Stichbury <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

* Amend README section

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>

---------

Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Signed-off-by: Juan Luis Cano Rodríguez <[email protected]>
Co-authored-by: Jo Stichbury <[email protected]>
Signed-off-by: tgoelles <[email protected]>

* fix: kedro-telemetry masking (#552)

* Fix masking

Signed-off-by: Dmitr…