
Explainable AI Integration for Time Series Model Interpretability #82

Open
sharayuanuse opened this issue Oct 8, 2024 · 2 comments · May be fixed by #90

Comments

@sharayuanuse
Contributor

Description

This feature aims to enhance time series analysis by integrating the Explainable AI (XAI) package into the workflow. Users will be able to visualize temporal data, understand the factors influencing model predictions, and generate comprehensive reports. The integration includes applying stationarity tests, estimating model parameters using ACF and PACF plots, and utilizing the Explainable AI package to interpret model outputs. All visualizations and explanations will be saved as images within the same Colab notebook and compiled into a PDF report.
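
As a rough illustration of the preprocessing steps described above, the sketch below runs an Augmented Dickey-Fuller stationarity test and saves ACF/PACF plots for order selection. The `data.csv` path and `value` column are placeholders, not part of this proposal.

```python
# Minimal sketch of the proposed preprocessing: ADF stationarity test
# plus ACF/PACF plots for ARIMA/SARIMA order selection.
import pandas as pd
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import adfuller
from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

# Placeholder input: a univariate series indexed by date.
series = pd.read_csv("data.csv", index_col=0, parse_dates=True)["value"].dropna()

# Augmented Dickey-Fuller test: a p-value below 0.05 suggests stationarity.
adf_stat, p_value, *_ = adfuller(series)
print(f"ADF statistic: {adf_stat:.3f}, p-value: {p_value:.3f}")

# ACF/PACF plots guide the choice of the AR (p) and MA (q) orders.
fig, axes = plt.subplots(2, 1, figsize=(8, 6))
plot_acf(series, ax=axes[0])
plot_pacf(series, ax=axes[1])
fig.savefig("acf_pacf_plot.png")  # saved as an image for the PDF report
```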

Problem it Solves

  • Lack of Interpretability in Time Series Models: Traditional time series models like ARIMA or SARIMA often act as "black boxes," making it difficult for users to understand how input features affect predictions.
  • Difficulty in Visualizing Temporal Feature Importance: Users struggle to identify which time-based features (e.g., trend, seasonality) are most influential in their models.
  • Inefficient Reporting Process: Manually generating and compiling visualizations and explanations into reports is time-consuming and prone to errors.

Proposed Solution

  • Explainable AI (SHAP) Integration: Leverage the SHAP (SHapley Additive exPlanations) library to explain the feature importance of time series models. SHAP values help users understand the individual contribution of features, even for complex models like SARIMA, ARIMA, or any tree-based regression model used in time series forecasting.
  • Temporal Feature Importance: Implement time-specific explanations where SHAP values can highlight how certain time periods or lags contribute to the predictions.
  • Model Explainability Visualization: Generate SHAP summary plots and force plots, illustrating how different features impact model outcomes across time. This will be crucial for both forecasting and general time series modeling tasks.
  • Report Generation: All visualizations, including SHAP plots and time series plots, will be automatically compiled into a PDF report, giving users a structured summary of both the temporal patterns and the explainability analysis of the model (see the sketch after this list).
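
A minimal sketch of what the SHAP and report steps could look like, assuming lag features feed a tree-based forecaster (a RandomForest stands in for the model here) and plots are collected with matplotlib's PdfPages; the lag count, file names, and model choice are illustrative assumptions:

```python
# Sketch: explain a lag-feature forecaster with SHAP and compile the
# plot into a PDF report.
import pandas as pd
import shap
import matplotlib.pyplot as plt
from matplotlib.backends.backend_pdf import PdfPages
from sklearn.ensemble import RandomForestRegressor

def make_lag_features(series: pd.Series, n_lags: int = 12) -> pd.DataFrame:
    """Turn a univariate series into a supervised table of lagged values."""
    df = pd.DataFrame({f"lag_{k}": series.shift(k) for k in range(1, n_lags + 1)})
    df["target"] = series
    return df.dropna()

# Same placeholder series as the preprocessing sketch above.
series = pd.read_csv("data.csv", index_col=0, parse_dates=True)["value"].dropna()
data = make_lag_features(series)
X, y = data.drop(columns="target"), data["target"]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer decomposes each prediction into per-lag contributions,
# which is the "temporal feature importance" described above.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

with PdfPages("xai_report.pdf") as pdf:
    shap.summary_plot(shap_values, X, show=False)
    pdf.savefig(plt.gcf(), bbox_inches="tight")
    plt.close()
```

The same PdfPages handle can collect the forecast, decomposition, and ACF/PACF figures, so the whole report is generated in one pass.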

Alternatives Considered

  • LIME: Another explainability method, LIME (Local Interpretable Model-agnostic Explanations), was considered, but SHAP provides more reliable global explanations, especially for tree-based models and regression models, making it a better fit for time series.
  • Manual Feature Analysis: Instead of using SHAP, manual feature importance analysis using correlation metrics or simple feature elimination could be implemented, but these would be less sophisticated and not as visually informative as SHAP.

Additional Context

Attached figures: sarima_forecast, seasonal_decomposition, shap_summary, acf_pacf_plot.

@sharayuanuse added the enhancement (New feature or request) label on Oct 8, 2024

github-actions bot commented Oct 8, 2024

👋 Thank you for raising an issue! We appreciate your effort in helping us improve. Our team will review it shortly. Stay tuned!

@ombhojane
Owner

Yes @sharayuanuse, go ahead!
Feel free to reach out.
Update the core explainableai code so that users can run time series models efficiently with explainableai, as it's currently only compatible with scikit-learn models.
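
One possible direction, purely as a sketch: wrap a statsmodels SARIMAX model in a scikit-learn-style estimator so the existing scikit-learn pathway in explainableai could consume it unchanged. The class name, orders, and forecast-based predict below are assumptions for illustration, not the package's actual API:

```python
# Hypothetical adapter: expose SARIMAX through a scikit-learn-style
# fit/predict interface. Not part of explainableai today.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin
from statsmodels.tsa.statespace.sarimax import SARIMAX

class SARIMAXRegressor(BaseEstimator, RegressorMixin):
    """Treats each row of X as exogenous regressors for one time step,
    so model-agnostic explainers can perturb features as usual."""

    def __init__(self, order=(1, 1, 1), seasonal_order=(0, 0, 0, 0)):
        self.order = order
        self.seasonal_order = seasonal_order

    def fit(self, X, y):
        self.result_ = SARIMAX(
            np.asarray(y), exog=np.asarray(X),
            order=self.order, seasonal_order=self.seasonal_order,
        ).fit(disp=False)
        return self

    def predict(self, X):
        # Forecast one step per row of X, with X as exogenous input,
        # so perturbing X actually changes the output an explainer sees.
        return np.asarray(self.result_.forecast(steps=len(X), exog=np.asarray(X)))
```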
