Polars for Data Science

Documentation | User Guide | Want to Contribute?
pip install polars-ds

The Project

The goal of the project is to reduce dependencies, improve code organization, simplify data pipelines and, overall, facilitate analysis of the various kinds of tabular data that a data scientist may encounter. It is a package built around your favorite Polars dataframe. Here are the current namespaces (Polars extensions) provided by the package (a quick usage sketch follows the list):

  1. A numerical extension (num), which focuses on numerical quantities common in many fields of data analysis (credit modelling, time series, and other well-known quantities), such as rfft, entropies, k-nearest-neighbors queries, Population Stability Index, Information Value, etc.

  2. A metrics extension (metric), which contains many common error/loss functions and model evaluation metrics. This module is mostly designed to generate model performance monitoring data.

  3. A string extension (str2), which focuses on string distances/similarities and other commonly used string manipulation procedures.

  4. A stats extension (stats), which provides common statistical tests (t-test, chi-squared test, F-test, etc.) and random sampling from distributions.

  5. A complex extension (c), which treats complex numbers as a column of arrays of size 2. Complex numbers are sometimes needed for processing FFT outputs.

  6. A graph extension (graph) for very simple graph queries, such as shortest path queries and eigenvector centrality computations. More will be added. (Usable but limited. Will be refactored/redesigned.)
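
To get a quick feel for how these namespaces attach to ordinary Polars expressions, here is a minimal sketch. The column names and the target string are made up for illustration; the functions shown are the same ones used in the examples further down.

import polars as pl
import polars_ds  # importing the package registers the extra namespaces on Polars expressions

df.select(
    # stats: two-sample t-test between two numeric columns
    pl.col("group1").stats.ttest_ind(pl.col("group2"), equal_var=True),
    # metric: ROC AUC of a prediction column against binary labels
    pl.col("actual").metric.roc_auc(pl.col("predicted")),
    # str2: Levenshtein similarity against a fixed string
    pl.col("word").str2.levenshtein("polars", return_sim=True),
)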

But why? Why not use Sklearn? SciPy? NumPy?

The goal of the package is to facilitate data processing and analysis that go beyond standard SQL queries, and to reduce the number of dependencies in your project. It incorporates parts of SciPy, NumPy, Scikit-learn, NLTK, etc., and exposes them as Polars queries so that they can run in parallel and in group_by contexts, all for almost no extra engineering effort.

Let's see an example. Say we want to generate a model performance report. In our data, we have segments. We are not only interested in the ROC AUC of our model on the entire dataset, but we are also interested in the model's performance on different segments.

import numpy as np
import polars as pl
import polars_ds as pds  # registers the .metric / .stats / .str2 namespaces used below

size = 100_000
df = pl.DataFrame({
    "a": np.random.random(size = size)
    , "b": np.random.random(size = size)
    , "x1" : range(size)
    , "x2" : range(size, size + size)
    , "y": range(-size, 0)
    , "actual": np.round(np.random.random(size=size)).astype(np.int32)
    , "predicted": np.random.random(size=size)
    , "segments":["a"] * (size//2 + 100) + ["b"] * (size//2 - 100) 
})
print(df.head())

shape: (5, 8)
┌──────────┬──────────┬─────┬────────┬─────────┬────────┬───────────┬──────────┐
│ a        ┆ b        ┆ x1  ┆ x2     ┆ y       ┆ actual ┆ predicted ┆ segments │
│ ---      ┆ ---      ┆ --- ┆ ---    ┆ ---     ┆ ---    ┆ ---       ┆ ---      │
│ f64      ┆ f64      ┆ i64 ┆ i64    ┆ i64     ┆ i32    ┆ f64       ┆ str      │
╞══════════╪══════════╪═════╪════════╪═════════╪════════╪═══════════╪══════════╡
│ 0.19483  ┆ 0.457516 ┆ 0   ┆ 100000 ┆ -100000 ┆ 0      ┆ 0.929007  ┆ a        │
│ 0.396265 ┆ 0.833535 ┆ 1   ┆ 100001 ┆ -99999  ┆ 1      ┆ 0.103915  ┆ a        │
│ 0.800558 ┆ 0.030437 ┆ 2   ┆ 100002 ┆ -99998  ┆ 1      ┆ 0.558918  ┆ a        │
│ 0.608023 ┆ 0.411389 ┆ 3   ┆ 100003 ┆ -99997  ┆ 1      ┆ 0.883684  ┆ a        │
│ 0.847527 ┆ 0.506504 ┆ 4   ┆ 100004 ┆ -99996  ┆ 1      ┆ 0.070269  ┆ a        │
└──────────┴──────────┴─────┴────────┴─────────┴────────┴───────────┴──────────┘

Traditionally, using the Pandas + Sklearn stack, we would do:

import pandas as pd
from sklearn.metrics import roc_auc_score

df_pd = df.to_pandas()

segments = []
rocaucs = []

for (segment, subdf) in df_pd.groupby("segments"):
    segments.append(segment)
    rocaucs.append(
        roc_auc_score(subdf["actual"], subdf["predicted"])
    )

report = pd.DataFrame({
    "segments": segments,
    "roc_auc": rocaucs
})
print(report)

  segments   roc_auc
0        a  0.497745
1        b  0.498801

This works, but it is not great: (1) we are running a for loop in Python, which tends to be slow; (2) we are writing more Python code, which leaves more room for error in bigger projects; (3) the code is not very intuitive for beginners. Using Polars + polars_ds, one can do the following:

df.lazy().group_by("segments").agg(
    pl.col("actual").metric.roc_auc(pl.col("predicted")).alias("roc_auc"),
    pl.col("actual").metric.log_loss(pl.col("predicted")).alias("log_loss"),
).collect()

shape: (2, 3)
┌──────────┬──────────┬──────────┐
│ segments ┆ roc_auc  ┆ log_loss │
│ ---      ┆ ---      ┆ ---      │
│ str      ┆ f64      ┆ f64      │
╞══════════╪══════════╪══════════╡
│ a        ┆ 0.497745 ┆ 1.006438 │
│ b        ┆ 0.498801 ┆ 0.997226 │
└──────────┴──────────┴──────────┘

Notice a few things: (1) Computing ROC AUC on different segments is equivalent to an aggregation on segments! It is a concept everyone who knows SQL (aka everybody who works with data) will be familiar with! (2) There is no Python code. The extension is written in pure Rust and all complexities are hidden away from the end user. (3) Because Polars provides parallel execution for free, we can compute ROC AUC and log loss simultaneously on each segment! (In Pandas, one can do something similar with a custom aggregation, but it is much harder to write and more confusing to reason about; a rough sketch follows.)
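
For comparison, here is one rough sketch of what computing both metrics per segment could look like in Pandas, continuing the df_pd example above. This is just an illustration of the extra ceremony involved, not the only way to write it.

import pandas as pd
from sklearn.metrics import roc_auc_score, log_loss

# One custom apply per group; both metrics are computed serially per segment,
# and nothing is parallelized unless you add that machinery yourself.
report = (
    df_pd.groupby("segments")
    .apply(lambda g: pd.Series({
        "roc_auc": roc_auc_score(g["actual"], g["predicted"]),
        "log_loss": log_loss(g["actual"], g["predicted"]),
    }))
    .reset_index()
)
print(report)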

The end result is simpler, more intuitive code that is also easier to reason about, and faster execution time. Because of Polars's extension (plugin) system, we are now blessed with both:

Performance and elegance - something that is quite rare in the Python world.

Getting Started

import polars_ds as pds

This single import is all you need to access the namespaces provided by the package.

Examples

In-dataframe statistical testing

df.select(
    pl.col("group1").stats.ttest_ind(pl.col("group2"), equal_var = True).alias("t-test"),
    pl.col("category_1").stats.chi2(pl.col("category_2")).alias("chi2-test"),
    pl.col("category_1").stats.f_test(pl.col("group1")).alias("f-test")
)

shape: (1, 3)
┌───────────────────┬──────────────────────┬────────────────────┐
│ t-test            ┆ chi2-test            ┆ f-test             │
│ ---               ┆ ---                  ┆ ---                │
│ struct[2]         ┆ struct[2]            ┆ struct[2]          │
╞═══════════════════╪══════════════════════╪════════════════════╡
│ {-0.004,0.996809} ┆ {37.823816,0.386001} ┆ {1.354524,0.24719} │
└───────────────────┴──────────────────────┴────────────────────┘

Generating random numbers according to reference column

df.with_columns(
    # Sample from a normal distribution, using reference column "a"'s mean and std
    pl.col("a").stats.sample_normal().alias("test1")
    # Sample from a uniform distribution with low = 0 and high = "a"'s max, respecting the nulls in "a"
    , pl.col("a").stats.sample_uniform(low = 0., high = None, respect_null=True).alias("test2")
).head()

shape: (5, 3)
┌───────────┬───────────┬──────────┐
│ a         ┆ test1     ┆ test2    │
│ ---       ┆ ---       ┆ ---      │
│ f64       ┆ f64       ┆ f64      │
╞═══════════╪═══════════╪══════════╡
│ null      ┆ 0.459357  ┆ null     │
│ null      ┆ 0.038007  ┆ null     │
│ -0.826518 ┆ 0.241963  ┆ 0.968385 │
│ 0.737955  ┆ -0.819475 ┆ 2.429615 │
│ 1.10397   ┆ -0.684289 ┆ 2.483368 │
└───────────┴───────────┴──────────┘

Blazingly fast string similarity comparisons. (Thanks to RapidFuzz)

df.select(
    pl.col("word").str2.levenshtein("asasasa", return_sim=True).alias("asasasa"),
    pl.col("word").str2.levenshtein("sasaaasss", return_sim=True).alias("sasaaasss"),
    pl.col("word").str2.levenshtein("asdasadadfa", return_sim=True).alias("asdasadadfa"),
    pl.col("word").str2.fuzz("apples").alias("LCS based Fuzz match - apples"),
    pl.col("word").str2.osa("apples", return_sim = True).alias("Optimal String Alignment - apples"),
    pl.col("word").str2.jw("apples").alias("Jaro-Winkler - apples"),
)
shape: (5, 6)
┌──────────┬───────────┬─────────────┬────────────────┬───────────────────────────┬────────────────┐
│ asasasa  ┆ sasaaasss ┆ asdasadadfa ┆ LCS based Fuzz ┆ Optimal String Alignment  ┆ Jaro-Winkler - │
│ ---      ┆ ---       ┆ ---         ┆ match - apples ┆ - apple…                  ┆ apples         │
│ f64      ┆ f64       ┆ f64         ┆ ---            ┆ ---                       ┆ ---            │
│          ┆           ┆             ┆ f64            ┆ f64                       ┆ f64            │
╞══════════╪═══════════╪═════════════╪════════════════╪═══════════════════════════╪════════════════╡
│ 0.142857 ┆ 0.111111  ┆ 0.090909    ┆ 0.833333       ┆ 0.833333                  ┆ 0.966667       │
│ 0.428571 ┆ 0.333333  ┆ 0.272727    ┆ 0.166667       ┆ 0.0                       ┆ 0.444444       │
│ 0.111111 ┆ 0.111111  ┆ 0.090909    ┆ 0.555556       ┆ 0.444444                  ┆ 0.5            │
│ 0.875    ┆ 0.666667  ┆ 0.545455    ┆ 0.25           ┆ 0.25                      ┆ 0.527778       │
│ 0.75     ┆ 0.777778  ┆ 0.454545    ┆ 0.25           ┆ 0.25                      ┆ 0.527778       │
└──────────┴───────────┴─────────────┴────────────────┴───────────────────────────┴────────────────┘

Even in-dataframe nearest neighbors queries! 😲

df.select(
    pl.col("row_num"),
    pds.query_radius_ptwise(
        pl.col("feature_1"), pl.col("feature_2"), pl.col("feature_3"), # Columns used as the coordinates in n-d space
        index = pl.col("row_num"),
        r = 0.1, 
        dist = "l2", # actually this is squared l2
        parallel = True
    ).alias("best friends"),
).with_columns( # -1 to remove the point itself
    (pl.col("best friends").list.len() - 1).alias("best friends count")
).head()

shape: (5, 3)
┌─────────┬─────────────────────┬────────────────────┐
│ row_num ┆ best friends        ┆ best friends count │
│ ---     ┆ ---                 ┆ ---                │
│ u64     ┆ list[u64]           ┆ u32                │
╞═════════╪═════════════════════╪════════════════════╡
│ 0       ┆ [0, 1291, … 1810]   ┆ 1297               │
│ 1       ┆ [1, 12076, … 14844] ┆ 682                │
│ 2       ┆ [2, 6843, … 6221]   ┆ 1050               │
│ 3       ┆ [3, 4104, … 14867]  ┆ 1428               │
│ 4       ┆ [4, 9075, … 8890]   ┆ 1872               │
└─────────┴─────────────────────┴────────────────────┘

Disclaimer

Currently in Beta. Feel free to submit feature requests in the issues section of the repo. This library only depends on Python Polars and will try to be as stable as possible for polars>=0.20.6. Exceptions will be made when a Polars update forces changes in the plugins.

This package is not tested with Polars streaming mode and is not designed to work with data so big that it has to be streamed.

The recommended use case is datasets of roughly 1k to 2-3 million rows, though actual performance will vary with the dataset and hardware. Performance will only be a priority for datasets that fit in memory. KNN performance is known to suffer greatly with a large k. String-knn and graph queries are only suitable for smaller data, roughly 1-5k rows on common hardware.

Credits

  1. Rust Snowball Stemmer is taken from Tsoding's Seroost project (MIT). See here
  2. Some statistics functions are taken from Statrs (MIT) and internalized. See here
  3. Graph functionalities are powered by the petgraph crate. See here
  4. Linear algebra routines are powered partly by faer

Other related Projects

  1. Take a look at our friendly neighbor functime
  2. String similarity metrics are soooo fast and easy to use because of RapidFuzz
