
[ADD] Documentation update in base_trainer.py #447

Closed
theodorju wants to merge 73 commits from the reg_cocktails-406 branch

Conversation

theodorju (Collaborator)

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)

Checklist:

  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • Have you checked to ensure there aren't other open Pull Requests for the same update/change?
  • Have you added an explanation of what your changes do and why you'd like us to include them?
  • Have you written new tests for your core changes, as applicable?
  • Have you successfully run tests with your changes locally?

Description

I have updated the documentation of base_trainer.py for #406 in the places where adding documentation seemed reasonable.

Motivation and Context

These changes fix #406.

How has this been tested?

Since the changes are only in the documentation, I have not executed any tests for this issue.

nabenabe0928 and others added 30 commits December 21, 2021 16:16
… (automl#334)

* [feat] Support statistics print by adding results manager object

* [refactor] Make SearchResults extract run_history at __init__

Since the search results should not be kept around indefinitely,
I made this class take run_history in __init__ so that
the extraction is called implicitly inside.
With this change, calling the extraction from outside is not recommended.
However, it can still be called from outside; to prevent the environment
from getting mixed up, self.clear() will be called in that case.
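
A minimal sketch of the design described in this commit, with hypothetical names rather than the actual AutoPyTorch class: the results object receives run_history at construction and performs the extraction internally, and an external extraction call first clears any previously extracted state.

```python
from typing import Any, Dict, List


class SearchResultsSketch:
    """Sketch only: extraction is triggered implicitly from __init__."""

    def __init__(self, run_history: Any) -> None:
        self._costs: List[float] = []
        self._additional_infos: List[Dict[str, Any]] = []
        # Callers never need to trigger the extraction themselves.
        self._extract_results_from_run_history(run_history)

    def clear(self) -> None:
        """Drop previously extracted state to avoid mixing environments."""
        self._costs.clear()
        self._additional_infos.clear()

    def _extract_results_from_run_history(self, run_history: Any) -> None:
        # Calling this from outside is discouraged; if it happens anyway,
        # start from a clean slate first.
        self.clear()
        for run_value in getattr(run_history, "data", {}).values():
            self._costs.append(float(run_value.cost))
            self._additional_infos.append(dict(run_value.additional_info or {}))
```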

* [fix] Separate those changes into PR#336

* [fix] Fix so that test_loss includes all the metrics

* [enhance] Strengthen the test for sprint and SearchResults

* [fix] Fix an issue in documentation

* [enhance] Increase the coverage

* [refactor] Separate the test for results_manager to organize the structure

* [test] Add the test for get_incumbent_Result

* [test] Remove the previous test_get_incumbent and see the coverage

* [fix] [test] Fix reversion of metric and strengthen the test cases

* [fix] Fix flake8 issues and increase coverage

* [fix] Address Ravin's comments

* [enhance] Increase the coverage

* [fix] Fix a flake8 issue
* [doc] Add the workflow of AutoPyTorch

* [doc] Address Ravin's comment
* [feat] Add an object that realizes the perf over time viz

* [fix] Modify TODOs and add comments to avoid complications

* [refactor] [feat] Format visualizer API and integrate this feature into BaseTask

* [refactor] Separate a shared raise error process as a function

* [refactor] Gather params in Dataclass to look smarter

* [refactor] Merge extraction from history to the result manager

Since this feature was added in a previous PR, we now rely on it
to extract the history.
To handle the ordering-by-start-time issue, I added a sort-by-end-time
feature.

* [feat] Merge the viz in the latest version

* [fix] Fix nan --> worst val so that we can always handle by number

* [fix] Fix mypy issues

* [test] Add test for get_start_time

* [test] Add test for order by end time

* [test] Add tests for ensemble results

* [test] Add tests for merging ensemble results and run history

* [test] Add the tests in the case of ensemble_results is None

* [fix] Alternate datetime to timestamp in tests to pass universally

Since the mapping between timestamp and datetime varies across machines,
the tests failed in the previous version.
In this version, we changed the datetimes in the tests to fixed
timestamps so that the tests pass universally.
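
As background (not the project's test code), a tiny sketch of why fixed timestamps are machine-independent while naive datetimes are not:

```python
from datetime import datetime, timezone

# Converting a naive datetime to a POSIX timestamp uses the machine's local
# timezone, so the resulting float differs between, e.g., CI and a laptop.
machine_dependent = datetime(2021, 12, 21, 16, 16).timestamp()

# A fixed float timestamp is identical everywhere and can be turned back into
# an unambiguous datetime by pinning the timezone explicitly.
FIXED_TIMESTAMP = 1640000000.0  # arbitrary fixture value, purely illustrative
reproducible = datetime.fromtimestamp(FIXED_TIMESTAMP, tz=timezone.utc)

print(machine_dependent, reproducible.isoformat())
```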

* [fix] Fix status_msg --> status_type because it does not need to be str

* [fix] Change the name for homogeneity

* [fix] Fix based on the file name change

* [test] Add tests for set_plot_args

* [test] Add tests for plot_perf_over_time in BaseTask

* [refactor] Replace redundant lines by pytest parametrization

* [test] Add tests for _get_perf_and_time

* [fix] Remove viz attribute based on Ravin's comment

* [fix] Fix doc-string based on Ravin's comments

* [refactor] Hide color label settings extraction in dataclass

Since this process made the method in BaseTask redundant, as Ravin
pointed out, I made it a method of the dataclass so that we can easily
fetch this information.
Note that since the color and label information always depends on the
optimization results, we always need to pass the metric results to ensure
we only get the related keys.
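
A sketch of that idea with hypothetical names (not the real PlotSettingParams API): the dataclass method filters its colour/label dictionaries down to the keys that actually appear in the metric results.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass(frozen=True)
class PlotSettingsSketch:
    colors: Dict[str, Optional[str]] = field(default_factory=dict)
    labels: Dict[str, Optional[str]] = field(default_factory=dict)

    def extract_color_label_settings(
        self, metric_results: Dict[str, List[float]]
    ) -> Dict[str, Dict[str, Optional[str]]]:
        # Only keys present in the optimization results are returned, so the
        # colours/labels always match what will actually be plotted.
        keys = metric_results.keys()
        return {
            "colors": {k: self.colors.get(k) for k in keys},
            "labels": {k: self.labels.get(k) for k in keys},
        }


settings = PlotSettingsSketch(colors={"train::accuracy": "red"})
print(settings.extract_color_label_settings({"train::accuracy": [0.9, 0.95]}))
```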

* [test] Add tests for color label dicts extraction

* [test] Add tests for checking if plt.show is called or not

* [refactor] Address Ravin's comments and add TODO for the refactoring

* [refactor] Change KeyError in EnsembleResults to empty

Since it is inconvenient not to be able to instantiate EnsembleResults
when we do not have any histories,
I changed the behavior so that it can still be instantiated even when
the results are empty.
In this case, we get empty arrays, which also matches developer
intuition.
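
A sketch of the behaviour change (hypothetical fields, not the project's class): with no history, the object is still constructed and simply holds empty arrays instead of raising a KeyError.

```python
from typing import Dict, List, Optional

import numpy as np


class EnsembleResultsSketch:
    def __init__(self, history: Optional[List[Dict[str, float]]] = None) -> None:
        history = history or []  # an empty history is allowed rather than an error
        self.train_scores = np.array([h.get("train_score", np.nan) for h in history])
        self.end_times = np.array([h.get("Timestamp", np.nan) for h in history])

    @property
    def empty(self) -> bool:
        return self.train_scores.size == 0


print(EnsembleResultsSketch().empty)  # True, instead of a KeyError
```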

* [refactor] Prohibit external updates to make objects more robust

* [fix] Remove a member variable _opt_scores since it is confusing

Since opt_scores were taken from the cost in run_history while metric_dict
was taken from additional_info, it was confusing which one to refer to
for what. By removing this, we always refer to additional_info
when fetching information, and metrics are always available as raw
values. Although a lot of code changed, the functionality did not, and
it is now easier to add further functionality.

* [example] Add an example how to plot performance over time

* [fix] Fix unexpected train loss when using cross validation

* [fix] Remove __main__ from example based on the Ravin's comment

* [fix] Move results_xxx to utils from API

* [enhance] Change example for the plot over time to save fig

Since plt.show() does not work in some environments,
I changed the example so that everyone can run at least this example.
* cleanup of simple_imputer

* Fixed doc and typo

* Fixed docs

* Made changes, added test

* Fixed init statement

* Fixed docs

* Flake'd
…#351)

* [feat] Add the option to save a figure in plot setting params

Since non-GUI environments need to avoid matplotlib's show method,
I added a savefig option so that users can complete the whole
operation inside AutoPyTorch.

* [doc] Add a comment for non-GUI based computer in plot_perf_over_time method

* [test] Add a test to check the priority of show and savefig

Since plt.savefig and plt.show do not work at the same time by
matplotlib's design, we need to check that show is not called when a
figname is specified. We could raise an error instead, but plotting is
usually called at the end of an optimization, so I wanted to avoid
raising an error and stuck to checking this in tests.
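
A sketch of the priority described above (not the actual plot_perf_over_time implementation): when a figname is given, the figure is saved and plt.show() is skipped, because calling both does not work reliably in matplotlib.

```python
from typing import Optional

import matplotlib.pyplot as plt


def finish_plot(figname: Optional[str] = None, show: bool = True) -> None:
    """Save the current figure if a file name is given; only otherwise show it."""
    if figname is not None:
        # savefig must happen before (and instead of) show: once plt.show()
        # has returned, the current figure is typically empty.
        plt.savefig(figname)
    elif show:
        plt.show()


# usage sketch
plt.plot([0, 1], [1, 0])
finish_plot(figname="perf_over_time.png")  # show() is not called here
```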
* update workflow files

* Remove double quotes

* Exclude python 3.10

* Fix mypy compliance check

* Added PEP 561 compliance

* Add py.typed to MANIFEST for dist

* Update .github/workflows/dist.yml

Co-authored-by: Ravin Kohli <[email protected]>

Co-authored-by: Ravin Kohli <[email protected]>
* Add fit pipeline with tests

* Add documentation for get dataset

* update documentation

* fix tests

* remove permutation importance from visualisation example

* change disable_file_output

* add

* fix flake

* fix test and examples

* change type of disable_file_output

* Address comments from eddie

* fix docstring in api

* fix tests for base api

* fix tests for base api

* fix tests after rebase

* reduce dataset size in example

* remove optional from  doc string

* Handle unsuccessful fitting of pipeline better

* fix flake in tests

* change to default configuration for documentation

* add warning for no ensemble created when y_optimization in disable_file_output

* reduce budget for single configuration

* address comments from eddie

* address comments from shuhei

* Add autoPyTorchEnum

* fix flake in tests

* address comments from shuhei

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* fix flake

* use **dataset_kwargs

* fix flake

* change to enforce keyword args

Co-authored-by: nabenabe0928 <[email protected]>
* Add workflow for publishing docker image to github packages and dockerhub

* add docker installation to docs

* add workflow dispatch
* check if N==0, and handle this case

* change position of comment

* Address comments from shuhei
* add test evaluator

* add no resampling and other changes for test evaluator

* finalise changes for test_evaluator, TODO: tests

* add tests for new functionality

* fix flake and mypy

* add documentation for the evaluator

* add NoResampling to fit_pipeline

* raise error when trying to construct ensemble with noresampling

* fix tests

* reduce fit_pipeline accuracy check

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* address comments from shuhei

* fix bug in base data loader

* fix bug in data loader for val set

* fix bugs introduced in suggestions

* fix flake

* fix bug in test preprocessing

* fix bug in test data loader

* merge tests for evaluators and change listcomp in get_best_epoch

* rename resampling strategies

* add test for get dataset

Co-authored-by: nabenabe0928 <[email protected]>
* [fix] Fix the no-training-issue when using simple intensifier

* [test] Add a test for the modification

* [fix] Modify the default budget so that the budget is compatible

Since the previous version did not consider the provided budget_type
when determining the default budget, I modified this part so that
the default budgets for epochs and runtime are no longer mixed up.
Note that since the default pipeline config defines epochs as the
default budget, I followed the same rule when taking the default value.
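
A hedged sketch of the default-budget logic described above (the values are illustrative, not necessarily AutoPyTorch's defaults): the default is picked per budget_type instead of mixing the epoch and runtime defaults.

```python
from typing import Dict

# Illustrative defaults only; the real pipeline config may use different numbers.
DEFAULT_BUDGETS: Dict[str, float] = {
    "epochs": 50,        # default budget counted in training epochs
    "runtime": 3600.0,   # default budget counted in seconds
}


def default_budget(budget_type: str) -> float:
    """Return a default budget that matches the requested budget_type."""
    if budget_type not in DEFAULT_BUDGETS:
        raise ValueError(f"Unknown budget_type: {budget_type!r}")
    return DEFAULT_BUDGETS[budget_type]


print(default_budget("epochs"), default_budget("runtime"))
```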

* [fix] Fix a mypy error

* [fix] Change the total runtime for single config in the example

Since the training sometimes does not finish in time,
I increased the total runtime so that the training can be accommodated
within the given amount of time.

* [fix] [refactor] Fix the SMAC requirement and refactor some conditions
* add variance thresholding

* fix flake and mypy

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

Co-authored-by: nabenabe0928 <[email protected]>
* Add new scalers

* fix flake and mypy

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* add robust scaler

* fix documentation

* remove power transformer from feature preprocessing

* fix tests

* check for default in include and exclude

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

Co-authored-by: nabenabe0928 <[email protected]>
* remove categorical strategy from simple imputer

* fix tests

* address comments from eddie

* fix flake and mypy error

* fix test cases for imputation
* [fix] Add check dataset in transform as well for test dataset, which does not require fit
* [test] Migrate tests from Francisco's PR without modifications
* [fix] Modify so that tests pass
* [test] Increase the coverage
* Fix: keyword arguments to submit

* Fix: Missing param for implementing AbstractTA

* Fix: Typing of multi_objectives

* Add: multi_objectives to each ExecuteTaFuncWithQueue
* remove datamanager instances from evaluation and smbo

* fix flake

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* fix flake

Co-authored-by: nabenabe0928 <[email protected]>
* [fix] Fix the task inference issue mentioned in automl#352

Since sklearn's task inference regards integer targets as
a classification task, I modified target_validator so that we always
cast regression targets to float.
This workaround is mentioned in the reference below:
scikit-learn/scikit-learn#8952

* [fix] [test] Add a small number to label for regression and add tests

Since target labels are required to be float and sklearn requires a
non-zero fractional part to infer regression, I added a workaround that
adds (almost) the smallest possible fraction to the array so that we
avoid sklearn mis-inferring the task type.
In addition, I added tests to check that we get the expected results for
extreme cases.
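
A self-contained illustration of the issue and the workaround described in these two commits (the exact nudge used in AutoPyTorch is not reproduced here):

```python
import numpy as np
from sklearn.utils.multiclass import type_of_target

y = np.array([1, 2, 3, 4])          # regression targets that happen to be integers
print(type_of_target(y))             # 'multiclass' -> sklearn would infer classification

y_float = y.astype(np.float64)
print(type_of_target(y_float))       # still 'multiclass': the values have no fractional part

# Workaround sketch: nudge each value to the next representable float so that a
# tiny fractional part exists. The concrete epsilon used in AutoPyTorch may differ.
y_nudged = np.nextafter(y_float, y_float + 1)
print(type_of_target(y_nudged))      # 'continuous'
```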

* [fix] [test] Adapt the modification of targets to scipy.sparse.xxx_matrix

* [fix] Address Ravin's comments and loosen the small number choice
* Initial implementation without tests

* add tests and make necessary changes

* improve documentation

* fix tests

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* undo change in  as it causes tests to fail

* change name from InputValidator to input_validator

* extract statements to methods

* refactor code

* check if mapping is the same as expected

* update precision reduction for dataframes and tests

* fix flake

Co-authored-by: nabenabe0928 <[email protected]>
* in progress

* add remaining preprocessors

* fix flake and mypy after rebase

* Fix tests and add documentation

* fix tests bug

* fix bug in tests

* fix bug where search space updates were not honoured

* handle check for score func in feature preprocessors

* address comments from shuhei

* apply suggestions from code review

* add documentation for feature preprocessors with percent to int value range

* fix tests

* fix tests

* address comments from shuhei

* fix tests which fail due to scaler
* initial implementation

* fix issue with missing classes

* finalise implementation, add documentation

* fix tests

* add tests from ask

* fix issues from feature preprocessing PR

* address comments from shuhei

* address comments from code review

* address comments from shuhei
Fix mypy

Initial implementation of adversarial training

Modifying the code to have activation controlled batch normalization

Adding activation controlled weight decay, updating the style for code style check

Commit for passing style check

Style check try 2

Bug fix

Adding unit test for adversarial trainer

Adding code for activation-controlled skip connections, with shake-shake and shake-drop offered as additional hyperparameter choices for the multi-branch networks

Bug fix for the failing tests

Adding better conditions

Try at a fix

Temporary fix for the failing test

Failing code check

Failing code check v2

Add new update to fix break

Flake8 coding style fix

Removing duplicate unit test

In progress
swa working, se in progress

Fixed bug in update model with swa model, add predict with snapshot ensemble; todo: add tests for both
Add pytest_mock to test dependencies

lookahead in progress

add lookahead hyperparameters
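
For context on the SWA-related commits above, a minimal PyTorch sketch (independent of this PR's trainer code) of keeping an averaged model via torch.optim.swa_utils and predicting with it; a snapshot ensemble would instead average the predictions of several saved snapshots.

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel, SWALR

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
swa_model = AveragedModel(model)              # running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.05)

x, y = torch.randn(32, 10), torch.randn(32, 1)
for epoch in range(5):
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
    # After each epoch the averaged model is updated with the current weights;
    # predictions then come from swa_model rather than the raw model.
    swa_model.update_parameters(model)
    swa_scheduler.step()

with torch.no_grad():
    preds = swa_model(x)
print(preds.shape)
```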
ArlindKadra and others added 15 commits March 9, 2022 18:09
* Fixes for the development branch and regularization cocktails

* Update implementation

* Fix unit tests temporarily

* Implementation update and bug fixes

* Removing unnecessary code

* Addressing Ravin's comments

[refactor] Address Shuhei's comments

[refactor] Address Shuhei's comments

[refactor] Address Shuhei's comments

[refactor] Address Shuhei's comments
[fix] Fix Flake8 issues

[refactor] Address Shuhei's comment

[refactor] Address Shuhei's comments

[refactor] Address Shuhei's comments

[refactor] Address Shuhei's comments
* Update implementation

* Coding style fixes

* Implementation update

* Style fix

* Turn weighted loss into a constant again, implementation update

* Cocktail branch inconsistencies (automl#275)

* To nemo

* Revert change in T_curr as results conclusively prove it should be 0

* Revert cutmix change after data from run

* Final conclusion after results

* FIX bug in shake alpha beta

* Updated if is_training condition for shake drop

* Remove temp fix in row cutmix

* Cocktail fixes time debug (automl#286)

* preprocess inside data validator

* add time debug statements

* Add fixes for categorical data

* add fit_ensemble

* add arlind fix for swa and se

* fix bug in trainer choice fit

* fix ensemble bug

* Correct bug in cleanup

* Cleanup for removing time debug statements

* ablation for adversarial

* shuffle false in dataloader

* drop last false in dataloader

* fix bug for validation set, and cutout and cutmix

* shuffle = False

* Shake Shake updates (automl#287)

* To test locally

* fix bug in trainer choice fit

* fix ensemble bug

* Correct bug in cleanup

* To test locally

* Cleanup for removing time debug statements

* ablation for adversarial

* shuffle false in dataloader

* drop last false in dataloader

* fix bug for validation set, and cutout and cutmix

* To test locally

* shuffle = False

* To test locally

* updates to search space

* updates to search space

* update branch with search space

* undo search space update

* fix bug in shake shake flag

* limit to shake-even

* restrict to even even

* Add even even and others for shake-drop also

* fix bug in passing alpha beta method

* restrict to only even even

* fix silly bug:

* remove imputer and ordinal encoder for categorical transformer in feature validator

* Address comments from shuhei

* fix issues with ensemble fitting post hoc

* Address comments on the PR

* Fix flake and mypy errors

* Address comments from PR automl#286

* fix bug in embedding

* Update autoPyTorch/api/tabular_classification.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/datasets/base_dataset.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/datasets/base_dataset.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/pipeline/components/training/trainer/base_trainer.py

Co-authored-by: nabenabe0928 <[email protected]>

* Address comments from shuhei

* address comments from shuhei

* fix flake and mypy

* Update autoPyTorch/pipeline/components/training/trainer/RowCutMixTrainer.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/pipeline/tabular_classification.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/pipeline/components/setup/network_backbone/utils.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/pipeline/components/setup/network_backbone/utils.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/pipeline/components/setup/network_backbone/utils.py

Co-authored-by: nabenabe0928 <[email protected]>

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* increase threads_per_worker

* fix bug in rowcutmix

* Enhancement for the tabular validator. (automl#291)

* Initial try at an enhancement for the tabular validator

* Adding a few type annotations

* Fixing bugs in implementation

* Adding wrongly deleted code part during rebase

* Fix bug in _get_args

* Fix bug in _get_args

* Addressing Shuhei's comments

* Address Shuhei's comments

* Refactoring code

* Refactoring code

* Typos fix and additional comments

* Replace nan in categoricals with simple imputer

* Remove unused function

* add comment

* Update autoPyTorch/data/tabular_feature_validator.py

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/data/tabular_feature_validator.py

Co-authored-by: nabenabe0928 <[email protected]>

* Adding unit test for only-NaN columns in the tabular feature categorical evaluator

* fix bug in remove all nan columns

* Bug fix for making tests run by arlind

* fix flake errors in feature validator

* made typing code uniform

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* address comments from shuhei

* address comments from shuhei (2)

Co-authored-by: Ravin Kohli <[email protected]>
Co-authored-by: Ravin Kohli <[email protected]>
Co-authored-by: nabenabe0928 <[email protected]>

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* resolve code issues with new versions

* Address comments from shuhei

* make run_traditional_ml function

* implement suggestion from shuhei and fix bug in rowcutmixtrainer

* fix return type docstring

* add better documentation and fix bug in shake_drop_get_bl

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* add test for comparator and other improvements based on PR comments

* fix bug in test

* [fix] Fix the condition in the raising error of all_nan_columns

* [refactor] Unify naming conventions for numpy arrays and pandas dataframes

* [doc] Add the description about the tabular feature transformation

* [doc] Add the description of the tabular feature transformation

* address comments from arlind

* address comments from arlind

* change to as_tensor and address comments from arlind

* correct description for functions in data module

Co-authored-by: nabenabe0928 <[email protected]>
Co-authored-by: Arlind Kadra <[email protected]>
Co-authored-by: nabenabe0928 <[email protected]>

* Addressing Shuhei's comments

* flake8 problems fix

* Update autoPyTorch/api/base_task.py

Add indent.

Co-authored-by: Ravin Kohli <[email protected]>

* Update autoPyTorch/api/base_task.py

Add indent.

Co-authored-by: Ravin Kohli <[email protected]>

* Update autoPyTorch/data/tabular_feature_validator.py

Add indentation.

Co-authored-by: Ravin Kohli <[email protected]>

* Update autoPyTorch/pipeline/components/setup/network_backbone/utils.py

Add line indentation.

Co-authored-by: Ravin Kohli <[email protected]>

* Update autoPyTorch/data/tabular_feature_validator.py

Validate if there is a column transformer since for sparse matrices we will not have one.

Co-authored-by: Ravin Kohli <[email protected]>

* Update autoPyTorch/utils/implementations.py

Delete uncommented line.

Co-authored-by: Ravin Kohli <[email protected]>

* Allow the number of threads to be given by the user

* Removing unnecessary argument and refactoring the attribute.

* Addressing Ravin's comments

* Update autoPyTorch/pipeline/components/setup/network_backbone/utils.py

Updating the function documentation according to the agreed style.

Co-authored-by: Ravin Kohli <[email protected]>

* Update autoPyTorch/pipeline/components/setup/network_backbone/utils.py

Providing information when a wrong method is provided for shake-shake regularization.

Co-authored-by: nabenabe0928 <[email protected]>

* add todo for backend and accept changes from shuhei

* Addressing Shuhei's and Ravin's comments

* Addressing Shuhei's and Ravin's comments, bug fix

* Update autoPyTorch/pipeline/components/setup/network_backbone/ResNetBackbone.py

Improving code readability.

Co-authored-by: nabenabe0928 <[email protected]>

* Update autoPyTorch/pipeline/components/setup/network_backbone/ResNetBackbone.py

Improving consistency.

Co-authored-by: nabenabe0928 <[email protected]>

* bug fix

Co-authored-by: Ravin Kohli <[email protected]>
Co-authored-by: nabenabe0928 <[email protected]>
Co-authored-by: nabenabe0928 <[email protected]>
Co-authored-by: Ravin Kohli <[email protected]>
* Initial fix for all tests passing locally py=3.8

* fix bug in tests

* fix bug in test for data

* debugging error in dummy forward pass

* debug try -2

* catch runtime error in ci

* catch runtime error in ci

* add better debug test setup

* debug some more

* run this test only

* remove sum backward

* remove inplace in inception block

* undo silly change

* Enable all tests

* fix flake

* fix bug in test setup

* remove anomaly detection

* minor changes to comments

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

* Address comments from Shuhei

* revert change leading to bug

* fix flake

* change comment position in feature validator

* Add documentation for _is_datasets_consistent

* address comments from arlind

* case when all nans in test

Co-authored-by: nabenabe0928 <[email protected]>
* update requirements

* update requirements

* resolve remaining conflicts and fix flake and mypy

* Fix remaining tests and examples

* fix failing checks

* fix flake
* enable preprocessing and remove is_small_preprocess

* address comments from shuhei and fix precommit checks

* fix tests

* fix precommit checks

* add suggestions from shuhei for astype use

* address speed issue when using object_dtype_mapping

* make code more readable

* improve documentation for base network embedding
* Enable learned embeddings, fix bug with non cyclic schedulers

* add forbidden condition cyclic lr

* refactor base_pipeline forbidden conditions

* Apply suggestions from code review

Co-authored-by: nabenabe0928 <[email protected]>

Co-authored-by: nabenabe0928 <[email protected]>
theodorju linked an issue on Jul 16, 2022 that may be closed by this pull request.

Returns:
True if the current epoch is larger than the maximum epochs, False otherwise.
Additionally, returns False if the run is without this constrain.
Reviewer comment (Contributor):

Suggested change
Additionally, returns False if the run is without this constrain.
Additionally, returns False if the run is without this constraint.

runtime is reached.
Returns:
True if the maximum runtime is reached, False otherwise.
Additionally, returns False if the run is without this constrain.
Reviewer comment (Contributor):

Suggested change
Additionally, returns False if the run is without this constrain.
Additionally, returns False if the run is without this constraint.

@@ -44,14 +44,28 @@ def __init__(self,

It also allows to define a 'epoch_or_time' budget type, which means,
the first of them both which is exhausted, is honored
Args:
budget_type (str): Type of budget to be used when fitting the pipeline.
Reviewer comment (Contributor):

Could you add a new line and indentation to the description of budget_type? You can check the base_task.py file for how we format the docstrings.

Args:
budget_type (str): Type of budget to be used when fitting the pipeline.
Possible values are 'epochs', 'runtime', or 'epoch_or_time'
max_epochs (Optional[int], default=None): Maximum number of epochs to train the pipeline for
Reviewer comment (Contributor):

Could you also do this for all the other parameter descriptions?
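
An illustrative sketch of the requested layout, assuming the Google-style docstring format used in base_task.py (the class name here is a placeholder): each argument name sits on its own line with its description indented underneath.

```python
from typing import Optional


class BudgetTrackerSketch:  # placeholder name for the object documented in base_trainer.py
    def __init__(self,
                 budget_type: str,
                 max_epochs: Optional[int] = None):
        """
        Args:
            budget_type (str):
                Type of budget to be used when fitting the pipeline.
                Possible values are 'epochs', 'runtime', or 'epoch_or_time'.
            max_epochs (Optional[int], default=None):
                Maximum number of epochs to train the pipeline for.
        """
        self.budget_type = budget_type
        self.max_epochs = max_epochs
```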

Args:
epoch (int): the current epoch
start_time (float): timestamp at the beginning of current epoch
end_time (float): timestamp when gathering the information
Reviewer comment (Contributor):

Suggested change
end_time (float): timestamp when gathering the information
end_time (float): timestamp when gathering the information after the current epoch.

ravinkohli (Contributor) left a comment:

Hey, thanks for the pull request. I have left some minor comments after which I think we can merge this PR.

theodorju (Collaborator, Author) replied:

Thanks for the comments. I have applied all the suggested changes. Additionally:

  • I have kept the 120-character maximum line width and also modified some pre-existing docstrings accordingly
  • Added more detailed descriptions in the Returns sections where applicable (see the sketch below)
  • Updated some docstrings that I had missed in the previous commit (e.g. criterion_preparation)
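
For instance, a Returns section in the more detailed style described above could look as follows (an illustrative sketch with a placeholder function name, not the exact text merged in this PR):

```python
from typing import Optional


def is_max_epoch_reached_sketch(epoch: int, max_epochs: Optional[int]) -> bool:
    """
    Check whether the epoch budget is exhausted.

    Args:
        epoch (int):
            The current epoch.
        max_epochs (Optional[int]):
            Maximum number of epochs to train the pipeline for, or None if the run
            is without this constraint.

    Returns:
        bool:
            True if the current epoch is larger than the maximum number of epochs,
            False otherwise. Additionally, returns False if the run is without this
            constraint.
    """
    return max_epochs is not None and epoch > max_epochs
```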

ravinkohli changed the title from "Documentation update in base_trainer.py" to "[ADD] Documentation update in base_trainer.py" on Aug 9, 2022.
theodorju closed this on Aug 9, 2022.
theodorju deleted the reg_cocktails-406 branch on August 9, 2022 at 21:18.
Successfully merging this pull request may close these issues: Documentation update.

6 participants