
Collaborative & Open-Source Quality Assurance for all AI models 🧑‍🔧⚡️



Quality Assurance for AI models

Eliminate AI bias in production. Deliver ML products better and faster.


Documentation • Blog • Website • Discord Community • Advisors


About Giskard

Giskard creates interfaces for humans to inspect & test AI models. It is open-source and self-hosted.

Giskard lets you instantly see your model's prediction for a given set of feature values. You can set the values directly in Giskard and see the prediction change.

See anything strange? Leave feedback directly within Giskard so that your team can explore the query that produced the faulty result. Designed for both tech and business users, Giskard is intuitive to use! 👌

And of course, Giskard works with any model and any environment, and integrates seamlessly with your favorite tools.


Why use Giskard

Inspect the model's performance

  • instantly see the model's prediction for a given set of feature values
  • change the values directly in Giskard and see the prediction change
  • works with any type of model, dataset, and environment
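The inspection idea above can be sketched outside Giskard with plain scikit-learn: pick a set of feature values, read the model's prediction, change one value, and watch the prediction respond. This is an illustration of the concept only, not Giskard's API.

```python
# Sketch of interactive model inspection: predict for a set of feature
# values, change one value, and compare. (Plain scikit-learn; this is
# not Giskard's API.)
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

sample = X[0].copy()                 # one concrete set of feature values
before = model.predict([sample])[0]  # prediction for the original values

sample[2] = 5.0                      # change a single feature (petal length)
after = model.predict([sample])[0]   # see how the prediction moves

print("before:", before, "after:", after)
```

Giskard's interface does the equivalent of the two `predict` calls for you, live, as you edit the feature values.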

Collaborate with business stakeholders

  • leave notes and tag teammates directly within the Giskard interface
  • use discussion threads to have all information centralized for easier follow-up and decision making
  • enjoy Giskard's super-intuitive design, made with both tech and business users in mind

Automatically test & monitor

  • turn collected feedback into executable tests for safe deployment; Giskard's test presets let you design and execute tests in no time
  • receive actionable alerts on AI model bugs in production
  • protect your ML models against the risk of regressions, drift and bias

πŸ„πŸ½β€β™‚οΈ Workflow

  1. Explore your ML model: Easily upload any Python model: PyTorch, TensorFlow, 🤗 Transformers, Scikit-learn, etc. Play with the model to test its performance.

  2. Discuss and analyze feedback: Enter feedback directly within Giskard and discuss it with your team.

  3. Turn feedback into tests: Use Giskard test presets to design and execute your tests in no time.
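Step 3 boils down to converting a human observation into an assertion that runs on every new model version. A minimal generic sketch of that conversion, using scikit-learn and an invented accuracy-floor rule (this is not Giskard's API):

```python
# Sketch of "feedback -> executable test": a reviewer's observation
# becomes a regression test run before each deployment.
# (Generic illustration; the 90% floor is a made-up example rule.)
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def test_accuracy_floor():
    # Feedback-derived rule: the model must keep at least 90% accuracy
    # on the reference dataset before it ships.
    assert model.score(X, y) >= 0.90

test_accuracy_floor()
print("test passed")
```

In Giskard this pattern is packaged as test presets, so the assertion is configured rather than hand-written.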

Giskard lets you automate tests in any of the categories below:

  • Metamorphic testing: test whether your model's outputs behave as expected before and after an input perturbation
  • Statistical testing: test whether your model's output respects some business rules
  • Performance testing: test whether your model's performance is sufficiently high within particular data slices
  • Data drift testing: test that your features don't drift between the reference and actual datasets
  • Prediction drift testing: test for the absence of concept drift inside your model
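As a concrete taste of the first category, a metamorphic test checks that small input perturbations leave predictions essentially unchanged. A minimal numpy/scikit-learn sketch of the idea (not Giskard's API):

```python
# Metamorphic-test sketch: perturb every input slightly and check that
# the model's predictions stay stable for the vast majority of rows.
# (Generic illustration; the 95% threshold is a made-up example value.)
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
X_perturbed = X + rng.normal(scale=0.01, size=X.shape)  # tiny perturbation

# Fraction of rows whose prediction survives the perturbation.
invariance = np.mean(model.predict(X) == model.predict(X_perturbed))
print(f"prediction invariance under perturbation: {invariance:.2%}")

assert invariance >= 0.95  # metamorphic property: outputs barely change
```

Giskard's presets cover this and the other categories without requiring you to write the harness yourself.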

Interactive demo

Play with Giskard before installing! Click the image below to start the demo:


Are you a developer? Check out our developer's README.

Getting Started with Giskard

🚀 Installation

Requirements: git, docker and docker-compose

```shell
git clone https://github.com/Giskard-AI/giskard.git
cd giskard
docker-compose up -d
```

That's it! Access Giskard at http://localhost:19000 with login/password: admin/admin.

🥋 Guides: Jump right in

Follow our handy guides to get started on the basics as quickly as possible:

  1. Installation
  2. Configuration
  3. Upload your ML model & data
  4. Evaluate your ML model
  5. Test your ML model

How to contribute

We welcome contributions from the Machine Learning community!

Read this guide to get started.


Like what we're doing?

🌟 Leave us a star! It helps the project get discovered by others and keeps us motivated to build awesome open-source tools! 🌟
