
Releases: cnellington/Contextualized

v0.2.8 -- Documentation Overhaul, New Statistical Tests (JOSS)

07 May 20:45
9065efe

Re-tagged v0.2.8 release for the Zenodo archive for JOSS

v0.2.8 -- Documentation Overhaul, New Statistical Tests

07 May 14:49
b734627

Major Updates

Minor Updates

  • Fixed a bug in easy correlation network prediction when individual_preds=False
  • Increased overall test coverage to 87%, leaving out only visualization utilities
  • Added correctness tests to all analysis tools: models must identify the significance of known heterogeneous and homogeneous effects to pass integration tests

Auto-generated release notes

Full Changelog: v0.2.7...v0.2.8

v0.2.7 -- Linear Encoders, API Reference, and Developer Resources

30 Dec 00:16

Major Updates

  • Contextualize all models with linear encoders for simplicity and interpretability (set encoder_type='linear')
  • Added stable version bounds to all requirements, upgrading to torch >=2.1.0, lightning >=2.0.0, and Python >=3.8
  • Added an API reference for contextualized.easy and contextualized.analysis, plus search, to the online documentation
  • Added the Benefits of Contextualized ML demo notebook to the website
  • Silenced most PyTorch logging so important messages stand out

Minor Updates

  • Fixed a bug in easy Bayesian network error computation
  • Fixed distributed training on GPUs for the dags TensorDataset
  • Deprecated the dev branch; now using PyPI for stable versioning and the main branch for active development
  • Added correctness tests: models must recover known data-generating mechanisms to pass (not just converge on random data)
  • All code is now black-formatted, and type hints and Google-style docstrings are standard practice
  • Added resources and scripts to standardize development workflows and requirements (dev_requirements.txt, make_docs.sh, format_style.sh)

v0.2.6 -- Consistency Tests, P-values

13 May 22:39
40fe501

Contextualization adds new flexibility to model inference. In this update, we add tests to assess when contextualization captures meaningful heterogeneous effects, and determine when these effects should be ignored.

Use analysis.pvals to estimate the significance of effects captured by contextualized modeling

  • calc_homogeneous_context_effects_pvals tests if the direct effect of context on outcomes is significant
  • calc_homogeneous_predictor_effects_pvals tests if the effect of context-invariant models on outcomes is significant
  • calc_heterogeneous_predictor_effects_pvals tests if the effect of context-dependent models on outcomes is significant

Use analysis.bootstraps with select_good_bootstraps to select only training runs that converged for follow-up analysis.

v0.2.5 -- DAG losses, factor graphs, neighborhood selection, and sklearn-style graph baselines

05 May 16:57
d00cd16
  • Major NOTMAD upgrades: Infer Contextualized DAGs with NOTMAD using the NOTEARS, Poly, or DAGMA losses, and use factor graphs to infer high dimensional DAGs!
  • More Contextualized Networks: Use ContextualizedNeighborhoodSelection in the regression module to do contextualized neighborhood selection and infer graphs using lasso regression with minimal assumptions.
  • Network Baselines: Use contextualized.baselines to infer traditional correlation networks and Bayesian networks with a simple sklearn-style interface (fit, predict, measure_mses). Create "grouped" versions of these models from any grouping or discrete context (e.g. clusters, feature splits, age groups, cell types) with the GroupedNetworks class, then follow up with ContextualizedCorrelationNetworks or ContextualizedBayesianNetworks in the easy module to see how much more accurate contextualized models can be!
  • Set metamodel_type='Naive' in the regression and dags lightning_modules to remove archetypes and estimate models directly from a neural network (not yet in the sklearn-style easy models).
  • Set fit_intercept=False in the regression lightning_modules to remove contextualized intercepts, and only infer models with context-varying coefficients (not yet in the sklearn-style easy models).
  • Various bugfixes, including a sign-flip in easy correlation measure_mses.
  • Disabled dynamic trainer output to improve readability of stdout

v0.2.4 -- BCELoss and Easy Network MSE

28 Nov 21:02
30cb09c
  • Contextualized Classifier now uses binary cross-entropy loss, fixing NaN values in backpropagation
  • Bugfix in Contextualized Easy Networks measure_mse

v0.2.3 -- Easy Early Stopping + Bugfixes

20 Nov 19:46
00dc129
  • Added kwargs for controlling early stopping and checkpointing in the easy modules
  • Bugfixes for predicting bootstrapped vs. averaged params in easy modules

v0.2.2 -- Analysis Utils and PyPI

07 Nov 18:58
d821522
  • Added analysis and plotting utilities for contextualized models under contextualized.analysis
  • Install the latest stable release of Contextualized from PyPI with pip install contextualized-ml

v0.2.0 -- Contextualized Graphical Models

07 Oct 23:40
da56222
  • Estimate contextualized graphical models under a cohesive PyTorch Lightning framework, with simple multi-gpu acceleration.
    • Bayesian networks: contextualized.dags.torch_notmad.torch_notmad.NOTMAD_model
    • Correlation networks: contextualized.regression.lightning_modules.ContextualizedCorrelation
    • Markov networks: contextualized.regression.lightning_modules.ContextualizedMarkovGraph

v0.1.2 -- Saving, loading, and base_predictors

20 Jun 15:15
e1b1a06
  • Include a base_predictor in your contextualized.regression or contextualized.easy object to improve learning and restrict the contextualized model to estimating only non-base effects (differences in parameters/outcomes from the base model)
  • Easily share your results and transfer your experiments between machines with contextualized.save and contextualized.load