[FEAT]: huggingface integration (#2701)
added a few public sample files to my personal huggingface account

```py
df = daft.read_csv("hf://datasets/universalmind303/daft-docs/iris.csv")
```
1 parent a72321c · commit 7e9208e

Showing 25 changed files with 879 additions and 70 deletions.
@@ -11,3 +11,4 @@ Integrations
 integrations/microsoft-azure
 integrations/aws
 integrations/sql
+integrations/huggingface
@@ -0,0 +1,64 @@
Huggingface Datasets
====================
Daft is able to read datasets directly from Huggingface via the ``hf://`` protocol.
Since Huggingface `automatically converts <https://huggingface.co/docs/dataset-viewer/en/parquet>`_ all public datasets to Parquet format,
we can read these datasets using the ``read_parquet`` method.
.. NOTE::
    This is limited to either public datasets or PRO/ENTERPRISE datasets.
For other file formats, you will need to manually specify the path or glob pattern to the files you want to read, similar to how you would read from a local file system.
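For example, a minimal sketch reading one of the public sample CSV files by its explicit path (the path below is the sample file referenced in the commit message):

.. code:: python

    import daft

    # Read a single CSV file by its full path within the dataset repository.
    df = daft.read_csv("hf://datasets/universalmind303/daft-docs/iris.csv")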
Reading Public Datasets
-----------------------
.. code:: python

    import daft

    df = daft.read_parquet("hf://datasets/username/dataset_name")

This will read the entire dataset into a daft DataFrame.
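Daft builds its query plans lazily, so the data is only fetched once you materialize results; a quick sanity check might look like this:

.. code:: python

    import daft

    df = daft.read_parquet("hf://datasets/username/dataset_name")

    # Execute the plan and display a preview of the rows.
    df.show()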
Not only can you read entire datasets, but you can also read individual files from a dataset.
.. code:: python

    import daft

    df = daft.read_parquet("hf://datasets/username/dataset_name/file_name.parquet")
    # or a csv file
    df = daft.read_csv("hf://datasets/username/dataset_name/file_name.csv")
    # or a glob pattern
    df = daft.read_parquet("hf://datasets/username/dataset_name/**/*.parquet")
Authorization
-------------

For authenticated datasets:
.. code:: python

    from daft.io import IOConfig, HTTPConfig

    io_config = IOConfig(http=HTTPConfig(bearer_token="your_token"))
    df = daft.read_parquet("hf://datasets/username/dataset_name", io_config=io_config)

It's important to note that this will not work with standard-tier private datasets.
Huggingface does not auto-convert private datasets to Parquet format, so you will need to specify the path to the files you want to read.
.. code:: python

    df = daft.read_parquet("hf://datasets/username/my_private_dataset", io_config=io_config)  # Errors

To get around this, you can read all files using a glob pattern *(assuming they are in parquet format)*:
.. code:: python

    df = daft.read_parquet("hf://datasets/username/my_private_dataset/**/*.parquet", io_config=io_config)  # Works
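Rather than hardcoding the token, it can also be pulled from the environment; a minimal sketch, assuming the token is exported in a hypothetical ``HF_TOKEN`` variable:

.. code:: python

    import os

    from daft.io import IOConfig, HTTPConfig

    # "HF_TOKEN" is an assumed environment variable name for this sketch,
    # not a Daft convention.
    io_config = IOConfig(http=HTTPConfig(bearer_token=os.environ["HF_TOKEN"]))
    df = daft.read_parquet(
        "hf://datasets/username/my_private_dataset/**/*.parquet",
        io_config=io_config,
    )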
@@ -0,0 +1,58 @@
use std::str::FromStr;

use common_error::{DaftError, DaftResult};
use common_py_serde::impl_bincode_py_state_serialization;
#[cfg(feature = "python")]
use pyo3::prelude::*;
use serde::{Deserialize, Serialize};

/// Format of a file, e.g. Parquet, CSV, JSON.
#[derive(Clone, Debug, PartialEq, Eq, Hash, Serialize, Deserialize, Copy)]
#[cfg_attr(feature = "python", pyclass(module = "daft.daft"))]
pub enum FileFormat {
    Parquet,
    Csv,
    Json,
    Database,
    Python,
}

#[cfg(feature = "python")]
#[pymethods]
impl FileFormat {
    // Canonical file extension for each format.
    fn ext(&self) -> &'static str {
        match self {
            Self::Parquet => "parquet",
            Self::Csv => "csv",
            Self::Json => "json",
            Self::Database => "db",
            Self::Python => "py",
        }
    }
}

impl FromStr for FileFormat {
    type Err = DaftError;

    // Parses a format name case-insensitively, ignoring surrounding whitespace.
    fn from_str(file_format: &str) -> DaftResult<Self> {
        use FileFormat::*;

        if file_format.trim().eq_ignore_ascii_case("parquet") {
            Ok(Parquet)
        } else if file_format.trim().eq_ignore_ascii_case("csv") {
            Ok(Csv)
        } else if file_format.trim().eq_ignore_ascii_case("json") {
            Ok(Json)
        } else if file_format.trim().eq_ignore_ascii_case("database") {
            Ok(Database)
        } else {
            Err(DaftError::TypeError(format!(
                "FileFormat {} not supported!",
                file_format
            )))
        }
    }
}

impl_bincode_py_state_serialization!(FileFormat);