From c1144681138e97302d376f05fdd30e3d3833d745 Mon Sep 17 00:00:00 2001 From: Songki Choi Date: Fri, 17 Mar 2023 16:28:48 +0900 Subject: [PATCH 01/34] Bump up version to 1.1.0rc1 Signed-off-by: Songki Choi --- CHANGELOG.md | 26 +++++++++++++++++++ README.md | 14 +++++----- otx/__init__.py | 2 +- .../exportable_code/demo/requirements.txt | 2 +- tox.ini | 4 +-- 5 files changed, 37 insertions(+), 11 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 7c22002aaa1..d61f76bd738 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,32 @@ All notable changes to this project will be documented in this file. +## \[v1.1.0\] + +### New features + +- Add FP16 IR export support (#1683) +- Add in-memory caching in dataloader (#1694) +- Add MoViNet template for action classification (#1742) +- Add Semi-SL multilabel classification algorithm (#1805) +- Integrate multi-gpu training for semi-supervised learning and self-supervised learning (#1534) +- Add train-type parameter to otx train (#1874) +- Add embedding of inference configuration to IR for classification (#1842) +- Enable VOC dataset in OTX (#1862) + +### Enhancements + +- Parametrize saliency maps dumping in export (#1888) +- Bring mmdeploy to action recognition model export & Test optimization of action tasks (#1848) +- Update backbone lists (#1835) + +### Bug fixes + +- Handle unpickable update_progress_callback (#1892) +- Dataset Adapter: Avoid duplicated annotation and permit empty image (#1873) +- Arrange scale between bbox preds and bbox targets in ATSS (#1880) +- Fix label mismatch of evaluation and validation with large dataset in semantic segmentation (#1851) + ## \[v1.0.1\] ### Enhancements diff --git a/README.md b/README.md index a5f78ca9fdb..948528fcca8 100644 --- a/README.md +++ b/README.md @@ -5,8 +5,8 @@ --- [Key Features](#key-features) • -[Quick Start](https://openvinotoolkit.github.io/training_extensions/latest/guide/get_started/quick_start_guide/index.html) • -[Documentation](https://openvinotoolkit.github.io/training_extensions/latest/index.html) • +[Quick Start](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/guide/get_started/quick_start_guide/index.html) • +[Documentation](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/index.html) • [License](#license) [![PyPI](https://img.shields.io/pypi/v/otx)](https://pypi.org/project/otx) @@ -49,7 +49,7 @@ OpenVINO™ Training Extensions supports the following computer vision tasks: - **Action recognition** including action classification and detection - **Anomaly recognition** tasks including anomaly classification, detection and segmentation -OpenVINO™ Training Extensions supports the [following learning methods](https://openvinotoolkit.github.io/training_extensions/latest/guide/explanation/algorithms/index.html): +OpenVINO™ Training Extensions supports the [following learning methods](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/guide/explanation/algorithms/index.html): - **Supervised**, incremental training, which includes class incremental scenario and contrastive learning for classification and semantic segmentation tasks - **Semi-supervised learning** @@ -59,9 +59,9 @@ OpenVINO™ Training Extensions will provide the following features in coming re - **Distributed training** to accelerate the training process when you have multiple GPUs - **Half-precision training** to save GPUs memory and use larger batch sizes -- Integrated, efficient [hyper-parameter optimization module 
(HPO)](https://openvinotoolkit.github.io/training_extensions/latest/guide/explanation/additional_features/hpo.html). Through dataset proxy and built-in hyper-parameter optimizer, you can get much faster hyper-parameter optimization compared to other off-the-shelf tools. The hyperparameter optimization is dynamically scheduled based on your resource budget. +- Integrated, efficient [hyper-parameter optimization module (HPO)](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/guide/explanation/additional_features/hpo.html). Through dataset proxy and built-in hyper-parameter optimizer, you can get much faster hyper-parameter optimization compared to other off-the-shelf tools. The hyperparameter optimization is dynamically scheduled based on your resource budget. - OpenVINO™ Training Extensions uses [Datumaro](https://openvinotoolkit.github.io/datumaro/docs/) as the backend to hadle datasets. Thanks to that, OpenVINO™ Training Extensions supports the most common academic field dataset formats for each task. We constantly working to extend supported formats to give more freedom of datasets format choice. -- [Auto-configuration functionality](https://openvinotoolkit.github.io/training_extensions/latest/guide/explanation/additional_features/auto_configuration.html). OpenVINO™ Training Extensions analyzes provided dataset and selects the proper task and model template to provide the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if there is no validation set provided. +- [Auto-configuration functionality](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/guide/explanation/additional_features/auto_configuration.html). OpenVINO™ Training Extensions analyzes provided dataset and selects the proper task and model template to provide the best accuracy/speed trade-off. It will also make a random auto-split of your dataset if there is no validation set provided. --- @@ -69,7 +69,7 @@ OpenVINO™ Training Extensions will provide the following features in coming re ### Installation -Please refer to the [installation guide](https://openvinotoolkit.github.io/training_extensions/latest/guide/get_started/quick_start_guide/installation.html). +Please refer to the [installation guide](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/guide/get_started/quick_start_guide/installation.html). ### OpenVINO™ Training Extensions CLI Commands @@ -83,7 +83,7 @@ Please refer to the [installation guide](https://openvinotoolkit.github.io/train - `otx demo` allows one to apply a trained model on the custom data or the online footage from a web camera and see how it will work in a real-life scenario. - `otx explain` runs explain algorithm on the provided data and outputs images with the saliency maps to show how your model makes predictions. -You can find more details with examples in the [CLI command intro](https://openvinotoolkit.github.io/training_extensions/latest/guide/get_started/quick_start_guide/cli_commands.html). +You can find more details with examples in the [CLI command intro](https://openvinotoolkit.github.io/training_extensions/releases/1.1.0/guide/get_started/quick_start_guide/cli_commands.html). 
--- diff --git a/otx/__init__.py b/otx/__init__.py index 5b1ea9b4f72..71b1d0995c8 100644 --- a/otx/__init__.py +++ b/otx/__init__.py @@ -3,7 +3,7 @@ # Copyright (C) 2021-2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 -__version__ = "1.1.0rc0" +__version__ = "1.1.0rc1" # NOTE: Sync w/ otx/api/usecases/exportable_code/demo/requirements.txt on release MMCLS_AVAILABLE = True diff --git a/otx/api/usecases/exportable_code/demo/requirements.txt b/otx/api/usecases/exportable_code/demo/requirements.txt index 2d13ad05327..5b1bd43c42a 100644 --- a/otx/api/usecases/exportable_code/demo/requirements.txt +++ b/otx/api/usecases/exportable_code/demo/requirements.txt @@ -1,3 +1,3 @@ openmodelzoo-modelapi==2022.3.0 -otx @ git+https://github.com/openvinotoolkit/training_extensions/@3faaa782718d8d02e6303fba004c9123ee37d76a#egg=otx +otx @ git+https://github.com/openvinotoolkit/training_extensions/@128154fd7d58d6ef996c46b58cb8432f7110e0ca#egg=otx numpy>=1.21.0,<=1.23.5 # np.bool was removed in 1.24.0 which was used in openvino runtime diff --git a/tox.ini b/tox.ini index 558965b1394..b63089fb958 100644 --- a/tox.ini +++ b/tox.ini @@ -176,8 +176,8 @@ allowlist_externals = commands = rm -rf ./dist python -m build --sdist - python -m pip install dist/otx-1.0.0.tar.gz[full] - # python -m pip install otx[full]==1.0.0 + python -m pip install dist/otx-1.1.0rc1.tar.gz[full] + # python -m pip install otx[full]==1.1.0rc1 pytest {posargs:tests/unit tests/integration/cli} From e3aa4335cb441be683e81f68901937ef0da2ffc1 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Fri, 17 Mar 2023 17:47:01 +0900 Subject: [PATCH 02/34] Updated daily workflow (#1905) - remove if statement to allow running on any branch by manually --- .github/workflows/daily.yml | 1 - 1 file changed, 1 deletion(-) diff --git a/.github/workflows/daily.yml b/.github/workflows/daily.yml index 4ea8cecd687..67a2cce0a62 100644 --- a/.github/workflows/daily.yml +++ b/.github/workflows/daily.yml @@ -10,7 +10,6 @@ jobs: Daily-Tests: runs-on: [self-hosted, linux, x64, dev] timeout-minutes: 1440 - if: github.ref == 'refs/heads/develop' steps: - name: Checkout repository uses: actions/checkout@v3 From e456f02abd6854e2e0b264bb1ab34d91dd016a62 Mon Sep 17 00:00:00 2001 From: Harim Kang Date: Mon, 20 Mar 2023 14:43:36 +0900 Subject: [PATCH 03/34] [FIX] Wrong test temp directory path (#1902) * Fix wrong test temp directory * Update tests/unit/algorithms/action/adapters/mmaction/utils/test_action_config_utils.py Co-authored-by: Eunwoo Shin * Update test_action_config_utils.py --------- Co-authored-by: Eunwoo Shin --- .../action/adapters/mmaction/utils/test_action_config_utils.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/tests/unit/algorithms/action/adapters/mmaction/utils/test_action_config_utils.py b/tests/unit/algorithms/action/adapters/mmaction/utils/test_action_config_utils.py index 202cc129767..33033b9c47c 100644 --- a/tests/unit/algorithms/action/adapters/mmaction/utils/test_action_config_utils.py +++ b/tests/unit/algorithms/action/adapters/mmaction/utils/test_action_config_utils.py @@ -4,6 +4,7 @@ # SPDX-License-Identifier: Apache-2.0 # +import tempfile from collections import defaultdict import pytest @@ -42,7 +43,7 @@ def test_patch_config() -> None: """ cls_datapipeline_path = "otx/algorithms/action/configs/classification/x3d/data_pipeline.py" - work_dir = "OTX-tempdir9104" + work_dir = str(tempfile.TemporaryDirectory("OTX-tempdir9104")) with pytest.raises(NotImplementedError): patch_config(CLS_CONFIG, cls_datapipeline_path, 
work_dir, TaskType.CLASSIFICATION) From 79581fa728735f3ebbc7d9ba78e4240da3f46ca7 Mon Sep 17 00:00:00 2001 From: Emily Chun Date: Tue, 21 Mar 2023 09:31:45 +0900 Subject: [PATCH 04/34] Update PR template (#1914) Co-authored-by: emily.chun --- .github/pull_request_template.md | 13 +++++++------ 1 file changed, 7 insertions(+), 6 deletions(-) diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 39e05772c3a..4d3c5c7ea4d 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -22,17 +22,18 @@ not fully covered by unit tests or manual testing can be complicated. --> -- [ ] I submit my changes into the `develop` branch -- [ ] I have added description of my changes into [CHANGELOG](https://github.com/openvinotoolkit/training_extensions/blob/develop/CHANGELOG.md) -- [ ] I have updated the [documentation](https://github.com/openvinotoolkit/training_extensions/tree/develop/docs) accordingly -- [ ] I have added tests to cover my changes -- [ ] I have [linked related issues](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword) +- [ ] I have added unit tests to cover my changes.​ +- [ ] I have added integration tests to cover my changes.​ +- [ ] I have added e2e tests for validation. +- [ ] I have added the description of my changes into CHANGELOG in my target branch. (e.g., [CHANGELOG](https://github.com/openvinotoolkit/training_extensions/blob/develop/CHANGELOG.md) in develop)​ +- [ ] I have updated the documentation in my target branch accordingly. (e.g., [documentation](https://github.com/openvinotoolkit/training_extensions/tree/develop/docs) in develop) +- [ ] I have [linked related issues](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword). ### License - [ ] I submit _my code changes_ under the same [MIT License](https://github.com/openvinotoolkit/training_extensions/blob/develop/LICENSE) that covers the project. Feel free to contact the maintainers if that's a concern. -- [ ] I have updated the license header for each file (see an example below) +- [ ] I have updated the license header for each file. (see an example below) ```python # Copyright (C) 2023 Intel Corporation From d7a4539368d60952c8b56a37388672845edaa8a8 Mon Sep 17 00:00:00 2001 From: Emily Chun Date: Tue, 21 Mar 2023 09:36:34 +0900 Subject: [PATCH 05/34] Update . location. --- .github/pull_request_template.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 4d3c5c7ea4d..94e1806c381 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -25,15 +25,15 @@ not fully covered by unit tests or manual testing can be complicated. --> - [ ] I have added unit tests to cover my changes.​ - [ ] I have added integration tests to cover my changes.​ - [ ] I have added e2e tests for validation. -- [ ] I have added the description of my changes into CHANGELOG in my target branch. (e.g., [CHANGELOG](https://github.com/openvinotoolkit/training_extensions/blob/develop/CHANGELOG.md) in develop)​ -- [ ] I have updated the documentation in my target branch accordingly. 
(e.g., [documentation](https://github.com/openvinotoolkit/training_extensions/tree/develop/docs) in develop) +- [ ] I have added the description of my changes into CHANGELOG in my target branch (e.g., [CHANGELOG](https://github.com/openvinotoolkit/training_extensions/blob/develop/CHANGELOG.md) in develop).​ +- [ ] I have updated the documentation in my target branch accordingly (e.g., [documentation](https://github.com/openvinotoolkit/training_extensions/tree/develop/docs) in develop). - [ ] I have [linked related issues](https://help.github.com/en/github/managing-your-work-on-github/linking-a-pull-request-to-an-issue#linking-a-pull-request-to-an-issue-using-a-keyword). ### License - [ ] I submit _my code changes_ under the same [MIT License](https://github.com/openvinotoolkit/training_extensions/blob/develop/LICENSE) that covers the project. Feel free to contact the maintainers if that's a concern. -- [ ] I have updated the license header for each file. (see an example below) +- [ ] I have updated the license header for each file (see an example below). ```python # Copyright (C) 2023 Intel Corporation From f08e9cb9a4cda7b7f14421a774dd3d7e0a17c937 Mon Sep 17 00:00:00 2001 From: Songki Choi Date: Tue, 21 Mar 2023 09:45:17 +0900 Subject: [PATCH 06/34] Fix OTX1.1 -> Geti1.4 integration issues (#1910) * Add dill to requirements/api.txt * Return None instead of raising NotImplementedError in IMedia2DEntity.path --------- Signed-off-by: Songki Choi --- otx/api/entities/media.py | 2 +- requirements/api.txt | 1 + 2 files changed, 2 insertions(+), 1 deletion(-) diff --git a/otx/api/entities/media.py b/otx/api/entities/media.py index ac69461ff4a..b32deb8f831 100644 --- a/otx/api/entities/media.py +++ b/otx/api/entities/media.py @@ -54,4 +54,4 @@ def width(self) -> int: @property def path(self) -> Optional[str]: """Returns the path of the 2D Media object.""" - raise NotImplementedError + return None diff --git a/requirements/api.txt b/requirements/api.txt index 6baa41a1243..4db8862a507 100644 --- a/requirements/api.txt +++ b/requirements/api.txt @@ -9,3 +9,4 @@ pymongo==3.12.0 scikit-learn==0.24.* Shapely>=1.7.1,<=1.8.0 imagesize==1.4.1 +dill>=0.3.6 From f7c2f4577c92694650e7a07d38b0665816c60578 Mon Sep 17 00:00:00 2001 From: Eunwoo Shin Date: Tue, 21 Mar 2023 09:53:18 +0900 Subject: [PATCH 07/34] Move utils HPO uses into HPO directory (#1912) move utils into hpo dir --- otx/algorithms/common/utils/utils.py | 76 +---------------------- otx/hpo/hpo_base.py | 2 +- otx/hpo/hyperband.py | 4 +- otx/hpo/resource_manager.py | 2 +- otx/hpo/search_space.py | 2 +- otx/hpo/utils.py | 91 ++++++++++++++++++++++++++++ 6 files changed, 97 insertions(+), 80 deletions(-) create mode 100644 otx/hpo/utils.py diff --git a/otx/algorithms/common/utils/utils.py b/otx/algorithms/common/utils/utils.py index 4f13334b57b..ea3935d97f0 100644 --- a/otx/algorithms/common/utils/utils.py +++ b/otx/algorithms/common/utils/utils.py @@ -17,7 +17,7 @@ import importlib import inspect from collections import defaultdict -from typing import Callable, Literal, Optional, Tuple +from typing import Callable, Optional, Tuple import yaml @@ -94,77 +94,3 @@ def get_arg_spec( # noqa: C901 # pylint: disable=too-many-branches if spec.varkw is None and spec.varargs is None: break return tuple(args) - - -def left_vlaue_is_better(val1, val2, mode: Literal["max", "min"]) -> bool: - """Check left value is better than right value. - - Whether check it's greather or lesser is changed depending on 'model'. 
- - Args: - val1 : value to check that it's bigger than other value. - val2 : value to check that it's bigger than other value. - mode (Literal['max', 'min']): value to decide whether better means greater or lesser. - - Returns: - bool: whether val1 is better than val2. - """ - check_mode_input(mode) - if mode == "max": - return val1 > val2 - return val1 < val2 - - -def check_positive(value, variable_name: Optional[str] = None, error_message: Optional[str] = None): - """Validate that value is positivle. - - Args: - value (Any): value to validate. - variable_name (Optional[str], optional): name of value. It's used for error message. Defaults to None. - error_message (Optional[str], optional): Error message to use when type is different. Defaults to None. - - Raises: - ValueError: If value isn't positive, the error is raised. - """ - if value <= 0: - if error_message is not None: - message = error_message - elif variable_name: - message = f"{variable_name} should be positive.\n" f"your value : {value}" - else: - raise ValueError - raise ValueError(message) - - -def check_not_negative(value, variable_name: Optional[str] = None, error_message: Optional[str] = None): - """Validate that value isn't negative. - - Args: - value (Any): value to validate. - variable_name (Optional[str], optional): name of value. It's used for error message. Defaults to None. - error_message (Optional[str], optional): Error message to use when type is different. Defaults to None. - - Raises: - ValueError: If value is negative, the error is raised. - """ - if value < 0: - if error_message is not None: - message = error_message - elif variable_name: - message = f"{variable_name} should be positive.\n" f"your value : {value}" - else: - raise ValueError - raise ValueError(message) - - -def check_mode_input(mode: str): - """Validate that mode is 'max' or 'min'. - - Args: - mode (str): string to validate. - - Raises: - ValueError: If 'mode' is not both 'max' and 'min', the error is raised. 
- """ - if mode not in ["max", "min"]: - raise ValueError("mode should be max or min.\n" f"Your value : {mode}") diff --git a/otx/hpo/hpo_base.py b/otx/hpo/hpo_base.py index 2192bf85dcc..f41e8a20caa 100644 --- a/otx/hpo/hpo_base.py +++ b/otx/hpo/hpo_base.py @@ -21,8 +21,8 @@ from enum import IntEnum from typing import Any, Dict, List, Optional, Union -from otx.algorithms.common.utils.utils import check_mode_input, check_positive from otx.hpo.search_space import SearchSpace +from otx.hpo.utils import check_mode_input, check_positive logger = logging.getLogger(__name__) diff --git a/otx/hpo/hyperband.py b/otx/hpo/hyperband.py index 5f16dbfedf4..030a5391cb3 100644 --- a/otx/hpo/hyperband.py +++ b/otx/hpo/hyperband.py @@ -23,13 +23,13 @@ from scipy.stats.qmc import LatinHypercube -from otx.algorithms.common.utils.utils import ( +from otx.hpo.hpo_base import HpoBase, Trial, TrialStatus +from otx.hpo.utils import ( check_mode_input, check_not_negative, check_positive, left_vlaue_is_better, ) -from otx.hpo.hpo_base import HpoBase, Trial, TrialStatus logger = logging.getLogger(__name__) diff --git a/otx/hpo/resource_manager.py b/otx/hpo/resource_manager.py index 0342fb6987b..c514577ab9d 100644 --- a/otx/hpo/resource_manager.py +++ b/otx/hpo/resource_manager.py @@ -21,7 +21,7 @@ import torch -from otx.algorithms.common.utils.utils import check_positive +from otx.hpo.utils import check_positive logger = logging.getLogger(__name__) diff --git a/otx/hpo/search_space.py b/otx/hpo/search_space.py index c64842be7c1..81698b56578 100644 --- a/otx/hpo/search_space.py +++ b/otx/hpo/search_space.py @@ -20,7 +20,7 @@ import typing from typing import Any, Dict, List, Optional, Tuple, Union -from otx.algorithms.common.utils.utils import check_positive +from otx.hpo.utils import check_positive logger = logging.getLogger(__name__) diff --git a/otx/hpo/utils.py b/otx/hpo/utils.py new file mode 100644 index 00000000000..886cfb44853 --- /dev/null +++ b/otx/hpo/utils.py @@ -0,0 +1,91 @@ +"""Collections of Utils for HPO.""" + +# Copyright (C) 2022 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. + +from typing import Literal, Optional + + +def left_vlaue_is_better(val1, val2, mode: Literal["max", "min"]) -> bool: + """Check left value is better than right value. + + Whether check it's greather or lesser is changed depending on 'model'. + + Args: + val1 : value to check that it's bigger than other value. + val2 : value to check that it's bigger than other value. + mode (Literal['max', 'min']): value to decide whether better means greater or lesser. + + Returns: + bool: whether val1 is better than val2. + """ + check_mode_input(mode) + if mode == "max": + return val1 > val2 + return val1 < val2 + + +def check_positive(value, variable_name: Optional[str] = None, error_message: Optional[str] = None): + """Validate that value is positivle. + + Args: + value (Any): value to validate. + variable_name (Optional[str], optional): name of value. It's used for error message. Defaults to None. 
+ error_message (Optional[str], optional): Error message to use when type is different. Defaults to None. + + Raises: + ValueError: If value isn't positive, the error is raised. + """ + if value <= 0: + if error_message is not None: + message = error_message + elif variable_name: + message = f"{variable_name} should be positive.\n" f"your value : {value}" + else: + raise ValueError + raise ValueError(message) + + +def check_not_negative(value, variable_name: Optional[str] = None, error_message: Optional[str] = None): + """Validate that value isn't negative. + + Args: + value (Any): value to validate. + variable_name (Optional[str], optional): name of value. It's used for error message. Defaults to None. + error_message (Optional[str], optional): Error message to use when type is different. Defaults to None. + + Raises: + ValueError: If value is negative, the error is raised. + """ + if value < 0: + if error_message is not None: + message = error_message + elif variable_name: + message = f"{variable_name} should be positive.\n" f"your value : {value}" + else: + raise ValueError + raise ValueError(message) + + +def check_mode_input(mode: str): + """Validate that mode is 'max' or 'min'. + + Args: + mode (str): string to validate. + + Raises: + ValueError: If 'mode' is not both 'max' and 'min', the error is raised. + """ + if mode not in ["max", "min"]: + raise ValueError("mode should be max or min.\n" f"Your value : {mode}") From 7088095fc07b36959568c17509ed7618d26e7fcc Mon Sep 17 00:00:00 2001 From: Sungman Cho Date: Tue, 21 Mar 2023 10:34:42 +0900 Subject: [PATCH 08/34] [FIX] Add stability to explain detection (#1901) Add stability to explain detection --- .../mmdet/models/heads/custom_atss_head.py | 14 ++++++-------- otx/mpa/det/explainer.py | 4 +++- 2 files changed, 9 insertions(+), 9 deletions(-) diff --git a/otx/algorithms/detection/adapters/mmdet/models/heads/custom_atss_head.py b/otx/algorithms/detection/adapters/mmdet/models/heads/custom_atss_head.py index deb7abfee95..0f57b49f907 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/heads/custom_atss_head.py +++ b/otx/algorithms/detection/adapters/mmdet/models/heads/custom_atss_head.py @@ -172,18 +172,16 @@ def loss_single( pos_centerness = centerness[pos_inds] centerness_targets = self.centerness_target(pos_anchors, pos_bbox_targets) - pos_decode_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) - pos_decode_bbox_targets = self.bbox_coder.decode(pos_anchors, pos_bbox_targets) + if self.reg_decoded_bbox: + pos_bbox_pred = self.bbox_coder.decode(pos_anchors, pos_bbox_pred) if self.use_qfl: - quality[pos_inds] = bbox_overlaps( - pos_decode_bbox_pred.detach(), pos_decode_bbox_targets, is_aligned=True - ).clamp(min=1e-6) + quality[pos_inds] = bbox_overlaps(pos_bbox_pred.detach(), pos_bbox_targets, is_aligned=True).clamp( + min=1e-6 + ) # regression loss - loss_bbox = self.loss_bbox( - pos_decode_bbox_pred, pos_decode_bbox_targets, weight=centerness_targets, avg_factor=1.0 - ) + loss_bbox = self.loss_bbox(pos_bbox_pred, pos_bbox_targets, weight=centerness_targets, avg_factor=1.0) # centerness loss loss_centerness = self.loss_centerness(pos_centerness, centerness_targets, avg_factor=num_total_samples) diff --git a/otx/mpa/det/explainer.py b/otx/mpa/det/explainer.py index 0ddf52f00a4..237e3f46b57 100644 --- a/otx/mpa/det/explainer.py +++ b/otx/mpa/det/explainer.py @@ -2,6 +2,7 @@ # SPDX-License-Identifier: Apache-2.0 # +import torch from mmcv.utils import Config, ConfigDict from mmdet.datasets import build_dataloader as 
mmdet_build_dataloader from mmdet.datasets import build_dataset as mmdet_build_dataset @@ -153,7 +154,8 @@ def explain(self, cfg, model_builder=None): eval_predictions = [] with self.explainer_hook(feature_model) as saliency_hook: for data in test_dataloader: - result = model(return_loss=False, rescale=True, **data) + with torch.no_grad(): + result = model(return_loss=False, rescale=True, **data) eval_predictions.extend(result) saliency_maps = saliency_hook.records From e7325d58c4e87ad4e704dc9d2498de64b7c006a9 Mon Sep 17 00:00:00 2001 From: Sungman Cho Date: Tue, 21 Mar 2023 11:19:05 +0900 Subject: [PATCH 09/34] [Fix] e2e tests FQ references (#1918) Fix FQ references --- .../compressed_model.yml | 4 ++-- .../compressed_model.yml | 4 ++-- .../compressed_model.yml | 4 ++-- .../compressed_model.yml | 4 ++-- .../compressed_model.yml | 4 ++-- .../Custom_Object_Detection_Gen3_ATSS/compressed_model.yml | 4 ++-- .../Custom_Object_Detection_Gen3_SSD/compressed_model.yml | 4 ++-- .../Custom_Object_Detection_YOLOX/compressed_model.yml | 4 ++-- 8 files changed, 16 insertions(+), 16 deletions(-) diff --git a/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficientNet-V2-S/compressed_model.yml b/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficientNet-V2-S/compressed_model.yml index 154bb9b4307..9e0c559a076 100644 --- a/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficientNet-V2-S/compressed_model.yml +++ b/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficientNet-V2-S/compressed_model.yml @@ -1,11 +1,11 @@ TestToolsMultiClassClassification: pot: - number_of_fakequantizers: 216 + number_of_fakequantizers: 208 nncf: number_of_fakequantizers: 267 TestToolsMultilabelClassification: pot: - number_of_fakequantizers: 220 + number_of_fakequantizers: 210 nncf: number_of_fakequantizers: 269 TestToolsHierarchicalClassification: diff --git a/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficinetNet-B0/compressed_model.yml b/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficinetNet-B0/compressed_model.yml index a2bea89bd63..2ea456f8f38 100644 --- a/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficinetNet-B0/compressed_model.yml +++ b/tests/e2e/cli/classification/reference/Custom_Image_Classification_EfficinetNet-B0/compressed_model.yml @@ -1,11 +1,11 @@ TestToolsMultiClassClassification: pot: - number_of_fakequantizers: 100 + number_of_fakequantizers: 92 nncf: number_of_fakequantizers: 124 TestToolsMultilabelClassification: pot: - number_of_fakequantizers: 104 + number_of_fakequantizers: 94 nncf: number_of_fakequantizers: 126 TestToolsHierarchicalClassification: diff --git a/tests/e2e/cli/classification/reference/Custom_Image_Classification_MobileNet-V3-large-1x/compressed_model.yml b/tests/e2e/cli/classification/reference/Custom_Image_Classification_MobileNet-V3-large-1x/compressed_model.yml index 35a8eea185e..757ec370d16 100644 --- a/tests/e2e/cli/classification/reference/Custom_Image_Classification_MobileNet-V3-large-1x/compressed_model.yml +++ b/tests/e2e/cli/classification/reference/Custom_Image_Classification_MobileNet-V3-large-1x/compressed_model.yml @@ -1,11 +1,11 @@ TestToolsMultiClassClassification: pot: - number_of_fakequantizers: 146 + number_of_fakequantizers: 135 nncf: number_of_fakequantizers: 91 TestToolsMultilabelClassification: pot: - number_of_fakequantizers: 146 + number_of_fakequantizers: 135 nncf: number_of_fakequantizers: 93 
TestToolsHierarchicalClassification: diff --git a/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_EfficientNetB2B/compressed_model.yml b/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_EfficientNetB2B/compressed_model.yml index 957a7a0dd90..2b1f14454be 100644 --- a/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_EfficientNetB2B/compressed_model.yml +++ b/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_EfficientNetB2B/compressed_model.yml @@ -1,10 +1,10 @@ TestToolsMPAInstanceSegmentation: pot: - number_of_fakequantizers: 143 + number_of_fakequantizers: 137 nncf: number_of_fakequantizers: 204 TestToolsTilingInstanceSegmentation: pot: - number_of_fakequantizers: 143 + number_of_fakequantizers: 137 nncf: number_of_fakequantizers: -1 diff --git a/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_ResNet50/compressed_model.yml b/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_ResNet50/compressed_model.yml index 780930d6dfc..7d25949404a 100644 --- a/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_ResNet50/compressed_model.yml +++ b/tests/e2e/cli/detection/reference/Custom_Counting_Instance_Segmentation_MaskRCNN_ResNet50/compressed_model.yml @@ -1,10 +1,10 @@ TestToolsMPAInstanceSegmentation: pot: - number_of_fakequantizers: 82 + number_of_fakequantizers: 76 nncf: number_of_fakequantizers: 92 TestToolsTilingInstanceSegmentation: pot: - number_of_fakequantizers: 82 + number_of_fakequantizers: 76 nncf: number_of_fakequantizers: -1 diff --git a/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_ATSS/compressed_model.yml b/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_ATSS/compressed_model.yml index d181b6c65be..aae68242725 100644 --- a/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_ATSS/compressed_model.yml +++ b/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_ATSS/compressed_model.yml @@ -1,10 +1,10 @@ TestToolsMPADetection: pot: - number_of_fakequantizers: 212 + number_of_fakequantizers: 196 nncf: number_of_fakequantizers: 155 TestToolsTilingDetection: pot: - number_of_fakequantizers: 212 + number_of_fakequantizers: 196 nncf: number_of_fakequantizers: -1 diff --git a/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_SSD/compressed_model.yml b/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_SSD/compressed_model.yml index 8cd26e145f8..6063c5cdaf9 100644 --- a/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_SSD/compressed_model.yml +++ b/tests/e2e/cli/detection/reference/Custom_Object_Detection_Gen3_SSD/compressed_model.yml @@ -1,10 +1,10 @@ TestToolsMPADetection: pot: - number_of_fakequantizers: 77 + number_of_fakequantizers: 67 nncf: number_of_fakequantizers: 67 TestToolsTilingDetection: pot: - number_of_fakequantizers: 77 + number_of_fakequantizers: 67 nncf: number_of_fakequantizers: -1 diff --git a/tests/e2e/cli/detection/reference/Custom_Object_Detection_YOLOX/compressed_model.yml b/tests/e2e/cli/detection/reference/Custom_Object_Detection_YOLOX/compressed_model.yml index 851297fd424..0ada6aed686 100644 --- a/tests/e2e/cli/detection/reference/Custom_Object_Detection_YOLOX/compressed_model.yml +++ b/tests/e2e/cli/detection/reference/Custom_Object_Detection_YOLOX/compressed_model.yml @@ -1,10 +1,10 @@ TestToolsMPADetection: pot: - number_of_fakequantizers: 97 + 
number_of_fakequantizers: 85 nncf: number_of_fakequantizers: 84 TestToolsTilingDetection: pot: - number_of_fakequantizers: 97 + number_of_fakequantizers: 85 nncf: number_of_fakequantizers: -1 From abe6aae7ff932c410ec82cadf459f274984bc49d Mon Sep 17 00:00:00 2001 From: Harim Kang Date: Tue, 21 Mar 2023 13:14:39 +0900 Subject: [PATCH 10/34] Move mpa.deploy to otx.algorithms.common (#1903) * Move deploy modules to otx * test: fix mmdeploy api replacement error * Fix pre-commit issues * Update otx/mpa/det/exporter.py Co-authored-by: Jaeguk Hyun * Update otx/mpa/exporter_mixin.py Co-authored-by: Jaeguk Hyun * Update otx/mpa/seg/exporter.py Co-authored-by: Jaeguk Hyun * Update otx/mpa/cls/exporter.py Co-authored-by: Jaeguk Hyun * Update otx/algorithms/common/adapters/mmdeploy/utils/mmdeploy.py Co-authored-by: Jaeguk Hyun * Update otx/algorithms/common/adapters/mmdeploy/utils/operations_domain.py Co-authored-by: Jihwan Eom --------- Co-authored-by: Inhyuk Andy Cho Co-authored-by: Jaeguk Hyun Co-authored-by: Jihwan Eom --- docs/source/guide/reference/mpa/deploy.rst | 34 ------------------ docs/source/guide/reference/mpa/index.rst | 1 - .../common/adapters/mmdeploy/__init__.py | 10 ++++++ .../common/adapters/mmdeploy}/apis.py | 35 ++++++++++++------- .../adapters/mmdeploy}/utils/__init__.py | 1 + .../adapters/mmdeploy}/utils/mmdeploy.py | 20 +++++++++-- .../common/adapters/mmdeploy}/utils/onnx.py | 3 ++ .../mmdeploy}/utils/operations_domain.py | 2 ++ .../common/adapters/mmdeploy}/utils/utils.py | 3 ++ .../models/detectors/custom_atss_detector.py | 2 +- .../detectors/custom_maskrcnn_detector.py | 2 +- .../detectors/custom_single_stage_detector.py | 2 +- .../models/detectors/custom_yolox_detector.py | 2 +- .../detection/adapters/mmdet/nncf/patches.py | 2 +- otx/mpa/cls/exporter.py | 4 +-- otx/mpa/deploy/__init__.py | 9 ----- otx/mpa/det/exporter.py | 4 +-- otx/mpa/exporter_mixin.py | 2 +- .../models/classifiers/sam_classifier.py | 2 +- .../models/segmentors/otx_encoder_decoder.py | 2 +- otx/mpa/seg/exporter.py | 4 +-- .../common/adapters/mmdeploy/__init__.py | 3 ++ .../adapters/mmdeploy}/test_deploy_apis.py | 15 +++++--- .../common/adapters/mmdeploy}/test_helpers.py | 0 .../utils/test_deploy_utils_mmdeploy.py | 7 ++-- .../mmdeploy}/utils/test_deploy_utils_onnx.py | 9 +++-- .../test_deploy_utils_operations_domain.py | 5 ++- .../utils/test_deploy_utils_utils.py | 5 ++- tests/unit/mpa/cls/test_cls_exporter.py | 2 +- tests/unit/mpa/deploy/__init__.py | 12 +------ tests/unit/mpa/det/test_det_exporter.py | 2 +- tests/unit/mpa/seg/test_seg_exporter.py | 2 +- tests/unit/mpa/test_export_mixin.py | 2 +- 33 files changed, 110 insertions(+), 100 deletions(-) delete mode 100644 docs/source/guide/reference/mpa/deploy.rst create mode 100644 otx/algorithms/common/adapters/mmdeploy/__init__.py rename otx/{mpa/deploy => algorithms/common/adapters/mmdeploy}/apis.py (91%) rename otx/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/__init__.py (80%) rename otx/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/mmdeploy.py (71%) rename otx/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/onnx.py (87%) rename otx/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/operations_domain.py (73%) rename otx/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/utils.py (90%) delete mode 100644 otx/mpa/deploy/__init__.py create mode 100644 tests/unit/algorithms/common/adapters/mmdeploy/__init__.py rename tests/unit/{mpa/deploy => algorithms/common/adapters/mmdeploy}/test_deploy_apis.py (93%) 
rename tests/unit/{mpa/deploy => algorithms/common/adapters/mmdeploy}/test_helpers.py (100%) rename tests/unit/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/test_deploy_utils_mmdeploy.py (83%) rename tests/unit/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/test_deploy_utils_onnx.py (85%) rename tests/unit/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/test_deploy_utils_operations_domain.py (67%) rename tests/unit/{mpa/deploy => algorithms/common/adapters/mmdeploy}/utils/test_deploy_utils_utils.py (95%) diff --git a/docs/source/guide/reference/mpa/deploy.rst b/docs/source/guide/reference/mpa/deploy.rst deleted file mode 100644 index 7a4d154095a..00000000000 --- a/docs/source/guide/reference/mpa/deploy.rst +++ /dev/null @@ -1,34 +0,0 @@ -Deploy -^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.deploy - :members: - :undoc-members: - -.. automodule:: otx.mpa.deploy.apis - :members: - :undoc-members: - -.. automodule:: otx.mpa.deploy.utils - :members: - :undoc-members: - -.. automodule:: otx.mpa.deploy.utils.mmdeploy - :members: - :undoc-members: - -.. automodule:: otx.mpa.deploy.utils.onnx - :members: - :undoc-members: - -.. automodule:: otx.mpa.deploy.utils.operations_domain - :members: - :undoc-members: - -.. automodule:: otx.mpa.deploy.utils.utils - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/index.rst b/docs/source/guide/reference/mpa/index.rst index d1e88cd2ae8..5a17c54b1e5 100644 --- a/docs/source/guide/reference/mpa/index.rst +++ b/docs/source/guide/reference/mpa/index.rst @@ -8,5 +8,4 @@ Model Preparation Algorithm classification detection segmentation - deploy utils diff --git a/otx/algorithms/common/adapters/mmdeploy/__init__.py b/otx/algorithms/common/adapters/mmdeploy/__init__.py new file mode 100644 index 00000000000..01687800428 --- /dev/null +++ b/otx/algorithms/common/adapters/mmdeploy/__init__.py @@ -0,0 +1,10 @@ +"""Adapters for mmdeploy.""" +# Copyright (C) 2023 Intel Corporation +# +# SPDX-License-Identifier: MIT + +from .utils.mmdeploy import is_mmdeploy_enabled + +__all__ = [ + "is_mmdeploy_enabled", +] diff --git a/otx/mpa/deploy/apis.py b/otx/algorithms/common/adapters/mmdeploy/apis.py similarity index 91% rename from otx/mpa/deploy/apis.py rename to otx/algorithms/common/adapters/mmdeploy/apis.py index 4e71e18d8bc..4ba61772449 100644 --- a/otx/mpa/deploy/apis.py +++ b/otx/algorithms/common/adapters/mmdeploy/apis.py @@ -1,3 +1,4 @@ +"""API of otx.algorithms.common.adapters.mmdeploy.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -12,20 +13,23 @@ import mmcv import numpy as np -import onnx import torch from mmcv.parallel import collate, scatter -from .utils import numpy_2_list from .utils.mmdeploy import ( is_mmdeploy_enabled, mmdeploy_init_model_helper, update_deploy_cfg, ) from .utils.onnx import prepare_onnx_for_openvino +from .utils.utils import numpy_2_list + +# pylint: disable=too-many-locals class NaiveExporter: + """NaiveExporter for non-mmdeploy export.""" + @staticmethod def export2openvino( output_dir: str, @@ -38,13 +42,15 @@ def export2openvino( input_names: Optional[List[str]] = None, output_names: Optional[List[str]] = None, opset_version: int = 11, - dynamic_axes: Dict[Any, Any] = {}, + dynamic_axes: Optional[Dict[Any, Any]] = None, mo_transforms: str = "", ): + """Function for exporting to openvino.""" input_data = scatter(collate([input_data], samples_per_gpu=1), [-1])[0] model = 
model_builder(cfg) model = model.cpu().eval() + dynamic_axes = dynamic_axes if dynamic_axes else dict() onnx_path = NaiveExporter.torch2onnx( output_dir, @@ -108,10 +114,11 @@ def torch2onnx( input_names: Optional[List[str]] = None, output_names: Optional[List[str]] = None, opset_version: int = 11, - dynamic_axes: Dict[Any, Any] = {}, + dynamic_axes: Optional[Dict[Any, Any]] = None, verbose: bool = False, **onnx_options, ) -> str: + """Function for torch to onnx exporting.""" img_metas = input_data.get("img_metas") numpy_2_list(img_metas) @@ -119,6 +126,7 @@ def torch2onnx( model.forward = partial(model.forward, img_metas=img_metas, return_loss=False) onnx_file_name = model_name + ".onnx" + dynamic_axes = dynamic_axes if dynamic_axes else dict() torch.onnx.export( model, imgs, @@ -143,6 +151,7 @@ def onnx2openvino( model_name: str = "model", **openvino_options, ) -> Tuple[str, str]: + """Function for onnx to openvino exporting.""" from otx.mpa.utils import mo_wrapper mo_args = { @@ -163,17 +172,15 @@ def onnx2openvino( if is_mmdeploy_enabled(): import mmdeploy.apis.openvino as openvino_api - from mmdeploy.apis import ( - build_task_processor, - extract_model, - get_predefined_partition_cfg, - torch2onnx, - ) + from mmdeploy.apis import build_task_processor, extract_model, torch2onnx from mmdeploy.apis.openvino import get_input_info_from_cfg, get_mo_options_from_cfg - from mmdeploy.core import FUNCTION_REWRITER - from mmdeploy.utils import get_backend_config, get_ir_config, get_partition_config + + # from mmdeploy.core import FUNCTION_REWRITER + from mmdeploy.utils import get_ir_config, get_partition_config class MMdeployExporter: + """MMdeployExporter for mmdeploy exporting.""" + @staticmethod def export2openvino( output_dir: str, @@ -183,6 +190,7 @@ def export2openvino( *, model_name: str = "model", ): + """Function for exporting to openvino.""" task_processor = build_task_processor(cfg, deploy_cfg, "cpu") @@ -248,6 +256,7 @@ def torch2onnx( *, model_name: str = "model", ) -> str: + """Function for torch to onnx exporting.""" onnx_file_name = model_name + ".onnx" torch2onnx( input_data, @@ -266,6 +275,7 @@ def partition_onnx( onnx_path: str, partition_cfgs: Union[mmcv.ConfigDict, List[mmcv.ConfigDict]], ) -> Tuple[str, ...]: + """Function for parition onnx.""" partitioned_paths = [] if not isinstance(partition_cfgs, list): @@ -290,6 +300,7 @@ def onnx2openvino( *, model_name: Optional[str] = None, ) -> Tuple[str, str]: + """Function for onnx to openvino exporting.""" input_info = get_input_info_from_cfg(deploy_cfg) output_names = get_ir_config(deploy_cfg).output_names diff --git a/otx/mpa/deploy/utils/__init__.py b/otx/algorithms/common/adapters/mmdeploy/utils/__init__.py similarity index 80% rename from otx/mpa/deploy/utils/__init__.py rename to otx/algorithms/common/adapters/mmdeploy/utils/__init__.py index d4800169f5e..5c1f7760edd 100644 --- a/otx/mpa/deploy/utils/__init__.py +++ b/otx/algorithms/common/adapters/mmdeploy/utils/__init__.py @@ -1,3 +1,4 @@ +"""Init file for otx.algorithms.common.adapters.mmdeploy.utils.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/mpa/deploy/utils/mmdeploy.py b/otx/algorithms/common/adapters/mmdeploy/utils/mmdeploy.py similarity index 71% rename from otx/mpa/deploy/utils/mmdeploy.py rename to otx/algorithms/common/adapters/mmdeploy/utils/mmdeploy.py index eba7c531b2d..ee33bbd3705 100644 --- a/otx/mpa/deploy/utils/mmdeploy.py +++ b/otx/algorithms/common/adapters/mmdeploy/utils/mmdeploy.py @@ -1,3 +1,4 
@@ +"""Functions for mmdeploy adapters.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -5,14 +6,24 @@ import importlib import onnx -from mmcv.utils import ConfigDict def is_mmdeploy_enabled(): + """Checks if the 'mmdeploy' Python module is installed and available for use. + + Returns: + bool: True if 'mmdeploy' is installed, False otherwise. + + Example: + >>> is_mmdeploy_enabled() + True + """ return importlib.util.find_spec("mmdeploy") is not None def mmdeploy_init_model_helper(ctx, model_checkpoint=None, cfg_options=None, **kwargs): + """Helper function for initializing a model for inference using the 'mmdeploy' library.""" + model_builder = kwargs.pop("model_builder") model = model_builder( ctx.model_cfg, @@ -31,12 +42,14 @@ def mmdeploy_init_model_helper(ctx, model_checkpoint=None, cfg_options=None, **k return model -def update_deploy_cfg(onnx_path, deploy_cfg, mo_options={}): +def update_deploy_cfg(onnx_path, deploy_cfg, mo_options=None): + """Update the 'deploy_cfg' configuration file based on the ONNX model specified by 'onnx_path'.""" + from mmdeploy.utils import get_backend_config, get_ir_config onnx_model = onnx.load(onnx_path) ir_config = get_ir_config(deploy_cfg) - backend_config = get_backend_config(deploy_cfg) + get_backend_config(deploy_cfg) # update input input_names = [i.name for i in onnx_model.graph.input] @@ -47,6 +60,7 @@ def update_deploy_cfg(onnx_path, deploy_cfg, mo_options={}): ir_config["output_names"] = output_names # update mo options + mo_options = mo_options if mo_options else dict() deploy_cfg.merge_from_dict({"backend_config": {"mo_options": mo_options}}) diff --git a/otx/mpa/deploy/utils/onnx.py b/otx/algorithms/common/adapters/mmdeploy/utils/onnx.py similarity index 87% rename from otx/mpa/deploy/utils/onnx.py rename to otx/algorithms/common/adapters/mmdeploy/utils/onnx.py index 9ec80579904..b44812324e2 100644 --- a/otx/mpa/deploy/utils/onnx.py +++ b/otx/algorithms/common/adapters/mmdeploy/utils/onnx.py @@ -1,3 +1,4 @@ +"""Functions for onnx adapters.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,6 +7,7 @@ def remove_nodes_by_op_type(onnx_model, op_type): + """Remove all nodes of a specified op type from the ONNX model.""" # TODO: support more nodes supported_op_types = ["Mark", "Conv", "Gemm"] @@ -42,6 +44,7 @@ def remove_nodes_by_op_type(onnx_model, op_type): def prepare_onnx_for_openvino(in_path, out_path): + """Modify the specified ONNX model to be compatible with OpenVINO by removing 'Mark' op nodes.""" onnx_model = onnx.load(in_path) onnx_model = remove_nodes_by_op_type(onnx_model, "Mark") onnx.checker.check_model(onnx_model) diff --git a/otx/mpa/deploy/utils/operations_domain.py b/otx/algorithms/common/adapters/mmdeploy/utils/operations_domain.py similarity index 73% rename from otx/mpa/deploy/utils/operations_domain.py rename to otx/algorithms/common/adapters/mmdeploy/utils/operations_domain.py index 11ffdcc48f8..e54af8bf1ab 100644 --- a/otx/mpa/deploy/utils/operations_domain.py +++ b/otx/algorithms/common/adapters/mmdeploy/utils/operations_domain.py @@ -1,3 +1,4 @@ +"""Add domain function.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,4 +7,5 @@ def add_domain(name_operator: str) -> str: + """Function for adding to DOMAIN_CUSTOM_OPS_NAME.""" return DOMAIN_CUSTOM_OPS_NAME + "::" + name_operator diff --git a/otx/mpa/deploy/utils/utils.py b/otx/algorithms/common/adapters/mmdeploy/utils/utils.py similarity index 90% rename from 
otx/mpa/deploy/utils/utils.py rename to otx/algorithms/common/adapters/mmdeploy/utils/utils.py index 9c6148b7062..a0025bc8364 100644 --- a/otx/mpa/deploy/utils/utils.py +++ b/otx/algorithms/common/adapters/mmdeploy/utils/utils.py @@ -1,3 +1,4 @@ +"""Util functions of otx.algorithms.common.adapters.mmdeploy.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -9,6 +10,7 @@ def sync_batchnorm_2_batchnorm(module, dim=2): + """Syncs the BatchNorm layers in a model to use regular BatchNorm layers.""" if dim == 1: bn = torch.nn.BatchNorm1d elif dim == 2: @@ -48,6 +50,7 @@ def sync_batchnorm_2_batchnorm(module, dim=2): def numpy_2_list(data): + """Converts NumPy arrays to Python lists.""" if isinstance(data, np.ndarray): return data.tolist() diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py index c27feb43515..04eba1011f9 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py @@ -9,10 +9,10 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.atss import ATSS +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.deploy.utils import is_mmdeploy_enabled from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py index ed96bf83e51..56b8da87164 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py @@ -9,7 +9,7 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.mask_rcnn import MaskRCNN -from otx.mpa.deploy.utils import is_mmdeploy_enabled +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.mpa.modules.hooks.recording_forward_hooks import ( ActivationMapHook, FeatureVectorHook, diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py index 32f6d9ffe00..1ad6a744819 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py @@ -9,10 +9,10 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.single_stage import SingleStageDetector +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.deploy.utils import is_mmdeploy_enabled from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py 
b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py index 26b4e2712eb..20432c8fb0b 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py @@ -9,10 +9,10 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.yolox import YOLOX +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.deploy.utils import is_mmdeploy_enabled from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/detection/adapters/mmdet/nncf/patches.py b/otx/algorithms/detection/adapters/mmdet/nncf/patches.py index 5e36c254654..da640d248c9 100644 --- a/otx/algorithms/detection/adapters/mmdet/nncf/patches.py +++ b/otx/algorithms/detection/adapters/mmdet/nncf/patches.py @@ -19,6 +19,7 @@ from mmdet.models.roi_heads.bbox_heads.sabl_head import SABLHead from mmdet.models.roi_heads.mask_heads.fcn_mask_head import FCNMaskHead +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.common.adapters.nncf import ( NNCF_PATCHER, is_in_nncf_tracing, @@ -26,7 +27,6 @@ no_nncf_trace_wrapper, ) from otx.algorithms.common.adapters.nncf.patches import nncf_trace_context -from otx.mpa.deploy.utils import is_mmdeploy_enabled HEADS_TARGETS = dict( classes=( diff --git a/otx/mpa/cls/exporter.py b/otx/mpa/cls/exporter.py index 23734d2e055..c49a39f43f6 100644 --- a/otx/mpa/cls/exporter.py +++ b/otx/mpa/cls/exporter.py @@ -5,7 +5,7 @@ import numpy as np from mmcv.runner import wrap_fp16_model -from otx.mpa.deploy.utils import sync_batchnorm_2_batchnorm +from otx.algorithms.common.adapters.mmdeploy.utils import sync_batchnorm_2_batchnorm from otx.mpa.exporter_mixin import ExporterMixin from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger @@ -47,7 +47,7 @@ def model_builder_helper(*args, **kwargs): def naive_export(output_dir, model_builder, precision, cfg, model_name="model"): from mmcls.datasets.pipelines import Compose - from ..deploy.apis import NaiveExporter + from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter def get_fake_data(cfg, orig_img_shape=(128, 128, 3)): pipeline = cfg.data.test.pipeline diff --git a/otx/mpa/deploy/__init__.py b/otx/mpa/deploy/__init__.py deleted file mode 100644 index cf2e118cd0e..00000000000 --- a/otx/mpa/deploy/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from .utils import is_mmdeploy_enabled - -__all__ = [ - "is_mmdeploy_enabled", -] diff --git a/otx/mpa/det/exporter.py b/otx/mpa/det/exporter.py index 6af24b6801a..b3e08149ff9 100644 --- a/otx/mpa/det/exporter.py +++ b/otx/mpa/det/exporter.py @@ -5,7 +5,7 @@ import numpy as np from mmcv.runner import wrap_fp16_model -from otx.mpa.deploy.utils import sync_batchnorm_2_batchnorm +from otx.algorithms.common.adapters.mmdeploy.utils import sync_batchnorm_2_batchnorm from otx.mpa.exporter_mixin import ExporterMixin from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger @@ -46,7 +46,7 @@ def naive_export(output_dir, model_builder, precision, cfg, model_name="model"): from mmdet.apis.inference import LoadImage from 
mmdet.datasets.pipelines import Compose - from ..deploy.apis import NaiveExporter + from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter def get_fake_data(cfg, orig_img_shape=(128, 128, 3)): pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] diff --git a/otx/mpa/exporter_mixin.py b/otx/mpa/exporter_mixin.py index 9597ec3d25f..905708041eb 100644 --- a/otx/mpa/exporter_mixin.py +++ b/otx/mpa/exporter_mixin.py @@ -91,7 +91,7 @@ def mmdeploy_export( deploy_cfg, model_name="model", ): - from .deploy.apis import MMdeployExporter + from otx.algorithms.common.adapters.mmdeploy.apis import MMdeployExporter if precision == "FP16": deploy_cfg.backend_config.mo_options.flags.append("--compress_to_fp16") diff --git a/otx/mpa/modules/models/classifiers/sam_classifier.py b/otx/mpa/modules/models/classifiers/sam_classifier.py index 8edbffda75e..86bb09edd68 100644 --- a/otx/mpa/modules/models/classifiers/sam_classifier.py +++ b/otx/mpa/modules/models/classifiers/sam_classifier.py @@ -9,7 +9,7 @@ from mmcls.models.classifiers.base import BaseClassifier from mmcls.models.classifiers.image import ImageClassifier -from otx.mpa.deploy.utils import is_mmdeploy_enabled +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/mpa/modules/models/segmentors/otx_encoder_decoder.py b/otx/mpa/modules/models/segmentors/otx_encoder_decoder.py index 4b6085340f8..a5812775259 100644 --- a/otx/mpa/modules/models/segmentors/otx_encoder_decoder.py +++ b/otx/mpa/modules/models/segmentors/otx_encoder_decoder.py @@ -6,7 +6,7 @@ from mmseg.models import SEGMENTORS from mmseg.models.segmentors.encoder_decoder import EncoderDecoder -from otx.mpa.deploy.utils import is_mmdeploy_enabled +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled @SEGMENTORS.register_module() diff --git a/otx/mpa/seg/exporter.py b/otx/mpa/seg/exporter.py index de42b7f870b..c64065a666f 100644 --- a/otx/mpa/seg/exporter.py +++ b/otx/mpa/seg/exporter.py @@ -5,7 +5,7 @@ import numpy as np from mmcv.runner import wrap_fp16_model -from otx.mpa.deploy.utils import sync_batchnorm_2_batchnorm +from otx.algorithms.common.adapters.mmdeploy.utils import sync_batchnorm_2_batchnorm from otx.mpa.exporter_mixin import ExporterMixin from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger @@ -46,7 +46,7 @@ def naive_export(output_dir, model_builder, precision, cfg, model_name="model"): from mmseg.apis.inference import LoadImage from mmseg.datasets.pipelines import Compose - from ..deploy.apis import NaiveExporter + from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter def get_fake_data(cfg, orig_img_shape=(128, 128, 3)): pipeline = [LoadImage()] + cfg.data.test.pipeline[1:] diff --git a/tests/unit/algorithms/common/adapters/mmdeploy/__init__.py b/tests/unit/algorithms/common/adapters/mmdeploy/__init__.py new file mode 100644 index 00000000000..ff847f01203 --- /dev/null +++ b/tests/unit/algorithms/common/adapters/mmdeploy/__init__.py @@ -0,0 +1,3 @@ +# Copyright (C) 2023 Intel Corporation +# +# SPDX-License-Identifier: MIT diff --git a/tests/unit/mpa/deploy/test_deploy_apis.py b/tests/unit/algorithms/common/adapters/mmdeploy/test_deploy_apis.py similarity index 93% rename from tests/unit/mpa/deploy/test_deploy_apis.py rename to tests/unit/algorithms/common/adapters/mmdeploy/test_deploy_apis.py index 3b72c29fd0b..fb4bfd33b52 100644 --- 
a/tests/unit/mpa/deploy/test_deploy_apis.py +++ b/tests/unit/algorithms/common/adapters/mmdeploy/test_deploy_apis.py @@ -9,10 +9,13 @@ import torch from mmcv.utils import Config -from otx.mpa.deploy.apis import NaiveExporter -from otx.mpa.deploy.utils import is_mmdeploy_enabled +from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from tests.test_suite.e2e_test_system import e2e_pytest_unit -from tests.unit.mpa.deploy.test_helpers import create_config, create_model +from tests.unit.algorithms.common.adapters.mmdeploy.test_helpers import ( + create_config, + create_model, +) class TestNaiveExporter: @@ -58,7 +61,7 @@ def test_export2openvino(self): if is_mmdeploy_enabled(): from mmdeploy.core import FUNCTION_REWRITER, mark - from otx.mpa.deploy.apis import MMdeployExporter + from otx.algorithms.common.adapters.mmdeploy.apis import MMdeployExporter class TestMMdeployExporter: @e2e_pytest_unit @@ -197,7 +200,9 @@ def test_partition(self): ) create_model("mmcls") - @FUNCTION_REWRITER.register_rewriter("tests.unit.mpa.deploy.test_helpers.MockModel.forward") + @FUNCTION_REWRITER.register_rewriter( + "tests.unit.algorithms.common.adapters.mmdeploy.test_helpers.MockModel.forward" + ) @mark("test", inputs=["input"], outputs=["output"]) def forward(ctx, self, *args, **kwargs): return ctx.origin_func(self, *args, **kwargs) diff --git a/tests/unit/mpa/deploy/test_helpers.py b/tests/unit/algorithms/common/adapters/mmdeploy/test_helpers.py similarity index 100% rename from tests/unit/mpa/deploy/test_helpers.py rename to tests/unit/algorithms/common/adapters/mmdeploy/test_helpers.py diff --git a/tests/unit/mpa/deploy/utils/test_deploy_utils_mmdeploy.py b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_mmdeploy.py similarity index 83% rename from tests/unit/mpa/deploy/utils/test_deploy_utils_mmdeploy.py rename to tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_mmdeploy.py index 88b86c9a092..b22c8178350 100644 --- a/tests/unit/mpa/deploy/utils/test_deploy_utils_mmdeploy.py +++ b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_mmdeploy.py @@ -6,12 +6,15 @@ from mmcv.utils import Config -from otx.mpa.deploy.utils.mmdeploy import ( +from otx.algorithms.common.adapters.mmdeploy.utils.mmdeploy import ( is_mmdeploy_enabled, mmdeploy_init_model_helper, ) from tests.test_suite.e2e_test_system import e2e_pytest_unit -from tests.unit.mpa.deploy.test_helpers import create_config, create_model +from tests.unit.algorithms.common.adapters.mmdeploy.test_helpers import ( + create_config, + create_model, +) @e2e_pytest_unit diff --git a/tests/unit/mpa/deploy/utils/test_deploy_utils_onnx.py b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_onnx.py similarity index 85% rename from tests/unit/mpa/deploy/utils/test_deploy_utils_onnx.py rename to tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_onnx.py index 376d9c16664..d6ea85a004c 100644 --- a/tests/unit/mpa/deploy/utils/test_deploy_utils_onnx.py +++ b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_onnx.py @@ -8,10 +8,13 @@ import onnx import torch -from otx.mpa.deploy.apis import NaiveExporter -from otx.mpa.deploy.utils.onnx import prepare_onnx_for_openvino, remove_nodes_by_op_type +from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter +from otx.algorithms.common.adapters.mmdeploy.utils.onnx import ( + 
prepare_onnx_for_openvino, + remove_nodes_by_op_type, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit -from tests.unit.mpa.deploy.test_helpers import create_model +from tests.unit.algorithms.common.adapters.mmdeploy.test_helpers import create_model @e2e_pytest_unit diff --git a/tests/unit/mpa/deploy/utils/test_deploy_utils_operations_domain.py b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_operations_domain.py similarity index 67% rename from tests/unit/mpa/deploy/utils/test_deploy_utils_operations_domain.py rename to tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_operations_domain.py index d6573075619..6a74f2aed8c 100644 --- a/tests/unit/mpa/deploy/utils/test_deploy_utils_operations_domain.py +++ b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_operations_domain.py @@ -2,7 +2,10 @@ # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.deploy.utils.operations_domain import DOMAIN_CUSTOM_OPS_NAME, add_domain +from otx.algorithms.common.adapters.mmdeploy.utils.operations_domain import ( + DOMAIN_CUSTOM_OPS_NAME, + add_domain, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/deploy/utils/test_deploy_utils_utils.py b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_utils.py similarity index 95% rename from tests/unit/mpa/deploy/utils/test_deploy_utils_utils.py rename to tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_utils.py index 6328cc6bbbd..b4e1dd41ee6 100644 --- a/tests/unit/mpa/deploy/utils/test_deploy_utils_utils.py +++ b/tests/unit/algorithms/common/adapters/mmdeploy/utils/test_deploy_utils_utils.py @@ -5,7 +5,10 @@ import numpy as np import torch -from otx.mpa.deploy.utils.utils import numpy_2_list, sync_batchnorm_2_batchnorm +from otx.algorithms.common.adapters.mmdeploy.utils.utils import ( + numpy_2_list, + sync_batchnorm_2_batchnorm, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/cls/test_cls_exporter.py b/tests/unit/mpa/cls/test_cls_exporter.py index 427c58281e0..48c6afe3f39 100644 --- a/tests/unit/mpa/cls/test_cls_exporter.py +++ b/tests/unit/mpa/cls/test_cls_exporter.py @@ -1,8 +1,8 @@ import pytest from otx.algorithms.classification.adapters.mmcls.utils.builder import build_classifier +from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter from otx.mpa.cls.exporter import ClsExporter -from otx.mpa.deploy.apis import NaiveExporter from otx.mpa.exporter_mixin import ExporterMixin from tests.test_suite.e2e_test_system import e2e_pytest_unit from tests.unit.algorithms.classification.test_helper import setup_mpa_task_parameters diff --git a/tests/unit/mpa/deploy/__init__.py b/tests/unit/mpa/deploy/__init__.py index 2faffbe2b1f..ff847f01203 100644 --- a/tests/unit/mpa/deploy/__init__.py +++ b/tests/unit/mpa/deploy/__init__.py @@ -1,13 +1,3 @@ # Copyright (C) 2023 Intel Corporation # -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions -# and limitations under the License. 
+# SPDX-License-Identifier: MIT diff --git a/tests/unit/mpa/det/test_det_exporter.py b/tests/unit/mpa/det/test_det_exporter.py index 0cc89684254..7e0797799d2 100644 --- a/tests/unit/mpa/det/test_det_exporter.py +++ b/tests/unit/mpa/det/test_det_exporter.py @@ -2,8 +2,8 @@ import pytest +from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter from otx.algorithms.detection.adapters.mmdet.utils.builder import build_detector -from otx.mpa.deploy.apis import NaiveExporter from otx.mpa.det.exporter import DetectionExporter from otx.mpa.exporter_mixin import ExporterMixin from otx.mpa.utils.config_utils import MPAConfig diff --git a/tests/unit/mpa/seg/test_seg_exporter.py b/tests/unit/mpa/seg/test_seg_exporter.py index ccfc595be4f..28d35c2f455 100644 --- a/tests/unit/mpa/seg/test_seg_exporter.py +++ b/tests/unit/mpa/seg/test_seg_exporter.py @@ -2,8 +2,8 @@ import pytest +from otx.algorithms.common.adapters.mmdeploy.apis import NaiveExporter from otx.algorithms.segmentation.adapters.mmseg.utils.builder import build_segmentor -from otx.mpa.deploy.apis import NaiveExporter from otx.mpa.exporter_mixin import ExporterMixin from otx.mpa.seg.exporter import SegExporter from otx.mpa.utils.config_utils import MPAConfig diff --git a/tests/unit/mpa/test_export_mixin.py b/tests/unit/mpa/test_export_mixin.py index 6a3d045ca0d..ce5284532ba 100644 --- a/tests/unit/mpa/test_export_mixin.py +++ b/tests/unit/mpa/test_export_mixin.py @@ -62,7 +62,7 @@ def mock_mmdeploy_export(output_dir, model_builder, precision, cfg, deploy_cfg, @e2e_pytest_unit def test_mmdeploy_export(self, mocker): - from otx.mpa.deploy.apis import MMdeployExporter + from otx.algorithms.common.adapters.mmdeploy.apis import MMdeployExporter mock_export_openvino = mocker.patch.object(MMdeployExporter, "export2openvino") From f7e4799c4383712e34e931a062f8a1816683cd10 Mon Sep 17 00:00:00 2001 From: Harim Kang Date: Tue, 21 Mar 2023 13:25:50 +0900 Subject: [PATCH 11/34] Add mmcls.VisionTransformer backbone support (#1908) * Add mmcls transformer backbones * Fix VisionTransformeroutput check * Add changes * Disable recording forward hooks in inferrer * Remove unused import --- otx/algorithms/__init__.py | 2 ++ .../classification/configs/configuration.yaml | 2 +- otx/algorithms/common/configs/training_base.py | 2 +- otx/cli/builder/builder.py | 11 +++++++++-- otx/cli/builder/supported_backbone/mmcls.json | 6 +++--- otx/mpa/cls/inferrer.py | 5 +++++ otx/mpa/cls/stage.py | 8 ++++++++ 7 files changed, 29 insertions(+), 7 deletions(-) diff --git a/otx/algorithms/__init__.py b/otx/algorithms/__init__.py index 3d087f538e4..daf814e52b2 100644 --- a/otx/algorithms/__init__.py +++ b/otx/algorithms/__init__.py @@ -2,3 +2,5 @@ # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 + +TRANSFORMER_BACKBONES = ["VisionTransformer", "T2T_ViT", "Conformer"] diff --git a/otx/algorithms/classification/configs/configuration.yaml b/otx/algorithms/classification/configs/configuration.yaml index 897c3f7e13f..dd2a93c51a0 100644 --- a/otx/algorithms/classification/configs/configuration.yaml +++ b/otx/algorithms/classification/configs/configuration.yaml @@ -10,7 +10,7 @@ learning_parameters: stable. A larger batch size has higher memory requirements. 
editable: true header: Batch size - max_value: 512 + max_value: 2048 min_value: 1 type: INTEGER ui_rules: diff --git a/otx/algorithms/common/configs/training_base.py b/otx/algorithms/common/configs/training_base.py index 1e99f5048ee..4c397554c54 100644 --- a/otx/algorithms/common/configs/training_base.py +++ b/otx/algorithms/common/configs/training_base.py @@ -65,7 +65,7 @@ class BaseLearningParameters(ParameterGroup): batch_size = configurable_integer( default_value=5, min_value=1, - max_value=512, + max_value=2048, header="Batch size", description="The number of training samples seen in each iteration of training. Increasing thisvalue " "improves training time and may make the training more stable. A larger batch size has higher " diff --git a/otx/cli/builder/builder.py b/otx/cli/builder/builder.py index 5adfe235a96..aabff5429ab 100644 --- a/otx/cli/builder/builder.py +++ b/otx/cli/builder/builder.py @@ -28,6 +28,7 @@ from mmcv.utils import Registry, build_from_cfg from torch import nn +from otx.algorithms import TRANSFORMER_BACKBONES from otx.api.entities.model_template import TaskType from otx.cli.utils.importing import ( get_backbone_list, @@ -101,8 +102,8 @@ def update_backbone_args(backbone_config: dict, registry: Registry, backend: str def update_channels(model_config: MPAConfig, out_channels: Any): """Update in_channel of head or neck.""" - if hasattr(model_config.model, "neck"): - if model_config.model.neck.type == "GlobalAveragePooling": + if hasattr(model_config.model, "neck") and model_config.model.neck: + if model_config.model.neck.get("type", None) == "GlobalAveragePooling": model_config.model.neck.pop("in_channels", None) else: print(f"\tUpdate model.neck.in_channels: {out_channels}") @@ -212,6 +213,12 @@ def merge_backbone( out_channels = -1 if hasattr(model_config.model, "head"): model_config.model.head.in_channels = -1 + # TODO: This is a hard coded part of the Transformer backbone and needs to be refactored. 
+ if backend == "mmcls" and backbone_class in TRANSFORMER_BACKBONES: + if hasattr(model_config.model, "neck"): + model_config.model.neck = None + if hasattr(model_config.model, "head"): + model_config.model.head["type"] = "VisionTransformerClsHead" else: # Need to update in/out channel configuration here out_channels = get_backbone_out_channels(backbone) diff --git a/otx/cli/builder/supported_backbone/mmcls.json b/otx/cli/builder/supported_backbone/mmcls.json index 6b5f1343a2e..71f10692aa5 100644 --- a/otx/cli/builder/supported_backbone/mmcls.json +++ b/otx/cli/builder/supported_backbone/mmcls.json @@ -11,7 +11,7 @@ "options": { "arch": ["tiny", "small", "base"] }, - "available": [] + "available": ["CLASSIFICATION"] }, "mmcls.ConvMixer": { "required": ["arch"], @@ -287,7 +287,7 @@ "mmcls.T2T_ViT": { "required": [], "options": {}, - "available": [] + "available": ["CLASSIFICATION"] }, "mmcls.TIMMBackbone": { "required": ["model_name"], @@ -341,7 +341,7 @@ "deit-base" ] }, - "available": [] + "available": ["CLASSIFICATION"] } } } diff --git a/otx/mpa/cls/inferrer.py b/otx/mpa/cls/inferrer.py index 17336bf7cd4..9c7e5770219 100644 --- a/otx/mpa/cls/inferrer.py +++ b/otx/mpa/cls/inferrer.py @@ -11,6 +11,7 @@ from mmcls.datasets import build_dataset as mmcls_build_dataset from mmcv import Config, ConfigDict +from otx.algorithms import TRANSFORMER_BACKBONES from otx.algorithms.common.adapters.mmcv.utils import ( build_data_parallel, build_dataloader, @@ -53,6 +54,10 @@ def run(self, model_cfg, model_ckpt, data_cfg, **kwargs): model_builder = kwargs.get("model_builder", None) dump_features = kwargs.get("dump_features", False) dump_saliency_map = kwargs.get("dump_saliency_map", False) + # TODO: It looks like we need to modify that code in an appropriate way. + if model_cfg.model.head.get("type", None) == "VisionTransformerClsHead": + dump_features = False + dump_saliency_map = False eval = kwargs.get("eval", False) outputs = self.infer( cfg, diff --git a/otx/mpa/cls/stage.py b/otx/mpa/cls/stage.py index d24abbe12fd..dd78acbfffa 100644 --- a/otx/mpa/cls/stage.py +++ b/otx/mpa/cls/stage.py @@ -8,6 +8,7 @@ import torch from mmcv import ConfigDict, build_from_cfg +from otx.algorithms import TRANSFORMER_BACKBONES from otx.algorithms.classification.adapters.mmcls.utils.builder import build_classifier from otx.mpa.stage import Stage from otx.mpa.utils.config_utils import recursively_update_cfg, update_or_add_custom_hook @@ -89,6 +90,13 @@ def configure_in_channel(cfg): output = layer(torch.rand([1] + list(input_shape))) if isinstance(output, (tuple, list)): output = output[-1] + + if layer.__class__.__name__ in TRANSFORMER_BACKBONES and isinstance(output, (tuple, list)): + # mmcls.VisionTransformer outputs Tuple[List[...]] and the last index of List is the final logit. 
+ _, output = output + if cfg.model.head.type != "VisionTransformerClsHead": + raise ValueError(f"{layer.__class__.__name__ } needs VisionTransformerClsHead as head") + in_channels = output.shape[1] if cfg.model.get("neck") is not None: if cfg.model.neck.get("in_channels") is not None: From f8e1ced6ee47260e00ec3575c3794e1dbc17c812 Mon Sep 17 00:00:00 2001 From: Soobee Lee Date: Tue, 21 Mar 2023 15:12:13 +0900 Subject: [PATCH 12/34] Move semantic-segmentation related codes to otx adapters (#1911) --- .../mpa/modules/models/backbones.rst | 14 - .../reference/mpa/modules/models/heads.rst | 21 -- .../reference/mpa/modules/models/index.rst | 3 - .../reference/mpa/modules/models/losses.rst | 17 - .../mpa/modules/models/scalar_schedulers.rst | 10 - .../mpa/modules/models/segmentors.rst | 30 -- .../guide/reference/mpa/modules/ov/models.rst | 4 - .../guide/reference/mpa/modules/utils.rst | 4 - .../segmentation/adapters/__init__.py | 14 +- .../segmentation/adapters/mmseg/__init__.py | 54 ++- .../mmseg/{data => datasets}/__init__.py | 12 +- .../mmseg/{data => datasets}/dataset.py | 2 +- .../mmseg/datasets/pipelines/__init__.py | 21 ++ .../mmseg}/datasets/pipelines/compose.py | 21 +- .../mmseg/datasets/pipelines/loads.py | 57 ++++ .../pipelines/transforms.py} | 160 +++++++-- .../adapters/mmseg/models/__init__.py | 36 +- .../mmseg/models/backbones/__init__.py | 24 ++ .../mmseg}/models/backbones/litehrnet.py | 87 +++-- .../mmseg/models}/backbones/mmov_backbone.py | 14 +- .../adapters/mmseg/models/heads/__init__.py | 21 ++ .../mmseg}/models/heads/custom_fcn_head.py | 18 +- .../adapters/mmseg/models/heads/mixin.py} | 145 +++++++- .../mmseg/models/heads}/mmov_decode_head.py | 26 +- .../adapters/mmseg/models/losses/__init__.py | 5 +- .../mmseg}/models/losses/base_pixel_loss.py | 21 +- .../models/losses/base_weighted_loss.py | 21 +- .../losses/cross_entropy_loss_with_ignore.py | 10 +- .../mmseg/models/losses/detcon_loss.py | 2 +- .../mmseg/models/losses/otx_pixel_base.py} | 13 +- .../adapters/mmseg/models/necks/__init__.py | 2 +- .../adapters/mmseg/models/necks/selfsl_mlp.py | 2 +- .../mmseg/models/schedulers/__init__.py | 25 ++ .../adapters/mmseg/models/schedulers}/base.py | 7 +- .../mmseg/models/schedulers}/constant.py | 9 +- .../adapters/mmseg/models/schedulers}/poly.py | 9 +- .../adapters/mmseg/models/schedulers}/step.py | 8 +- .../mmseg/models/segmentors/__init__.py | 6 +- .../segmentors/class_incr_encoder_decoder.py | 109 ++++++ .../mmseg/models/segmentors/detcon.py | 15 +- .../segmentors/mean_teacher_segmentor.py | 64 ++-- .../mmseg/models/segmentors/mixin.py} | 17 +- .../models/segmentors/otx_encoder_decoder.py | 25 +- .../adapters/mmseg/nncf/__init__.py | 17 +- .../adapters/mmseg/nncf/builder.py | 2 +- .../segmentation/adapters/mmseg/nncf/hooks.py | 2 +- .../adapters/mmseg/nncf/patches.py | 2 +- .../adapters/mmseg/utils/__init__.py | 21 +- .../adapters/mmseg/utils/builder.py | 19 +- .../adapters/mmseg/utils/data_utils.py | 12 + .../transforms/seg_custom_pipelines.py | 122 ------- otx/mpa/modules/models/builder.py | 20 -- .../modules/models/heads/aggregator_mixin.py | 63 ---- .../modules/models/heads/mix_loss_mixin.py | 40 --- .../models/heads/segment_out_norm_mixin.py | 32 -- .../models/scalar_schedulers/__init__.py | 13 - otx/mpa/modules/models/segmentors/__init__.py | 6 - .../segmentors/class_incr_encoder_decoder.py | 67 ---- .../models/segmentors/mix_loss_mixin.py | 40 --- .../ov/models/mmseg/decode_heads/__init__.py | 6 - otx/mpa/modules/utils/seg_utils.py | 15 - otx/mpa/seg/__init__.py | 12 +- 
.../adapters/mmseg/data/__init__.py | 4 - .../adapters/mmseg/data/test_pipelines.py | 151 --------- .../adapters/mmseg/datasets}/__init__.py | 3 +- .../mmseg/datasets/pipelines}/__init__.py | 4 +- .../mmseg}/datasets/pipelines/test_compose.py | 2 +- .../mmseg/datasets/pipelines/test_loads.py | 53 +++ .../test_pipelines_params_validation.py | 2 +- .../datasets/pipelines/test_transforms.py | 313 ++++++++++++++++++ .../mmseg/{ => datasets}/test_dataset.py | 2 +- .../test_dataset_params_validation.py | 2 +- .../mmseg/models/backbones}/__init__.py | 2 +- .../mmseg}/models/backbones/test_litehrnet.py | 2 +- .../backbones/test_mmseg_mmov_backbone.py} | 2 +- .../adapters/mmseg/models/heads}/__init__.py | 2 +- .../heads/test_mmseg_mmov_decode_head.py} | 2 +- .../models/scalar_schedulers/__init__.py | 4 + .../scalar_schedulers/test_schedulers.py | 2 +- .../adapters/mmseg/test_pipelines.py | 122 ------- .../adapters/mmseg/utils}/__init__.py | 4 +- .../mmseg/{ => utils}/test_config_utils.py | 0 .../test_config_utils_params_validation.py | 0 .../mmseg/{ => utils}/test_data_utils.py | 0 .../test_data_utils_params_validation.py | 0 .../transforms/test_seg_custom_pipelines.py | 103 ------ 86 files changed, 1344 insertions(+), 1133 deletions(-) delete mode 100644 docs/source/guide/reference/mpa/modules/models/backbones.rst delete mode 100644 docs/source/guide/reference/mpa/modules/models/scalar_schedulers.rst delete mode 100644 docs/source/guide/reference/mpa/modules/models/segmentors.rst rename otx/algorithms/segmentation/adapters/mmseg/{data => datasets}/__init__.py (89%) rename otx/algorithms/segmentation/adapters/mmseg/{data => datasets}/dataset.py (99%) create mode 100644 otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/datasets/pipelines/compose.py (87%) create mode 100644 otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/loads.py rename otx/algorithms/segmentation/adapters/mmseg/{data/pipelines.py => datasets/pipelines/transforms.py} (65%) create mode 100644 otx/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/backbones/litehrnet.py (96%) rename otx/{mpa/modules/ov/models/mmseg => algorithms/segmentation/adapters/mmseg/models}/backbones/mmov_backbone.py (64%) create mode 100644 otx/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/heads/custom_fcn_head.py (62%) rename otx/{mpa/modules/models/heads/pixel_weights_mixin.py => algorithms/segmentation/adapters/mmseg/models/heads/mixin.py} (59%) rename otx/{mpa/modules/ov/models/mmseg/decode_heads => algorithms/segmentation/adapters/mmseg/models/heads}/mmov_decode_head.py (78%) rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/losses/base_pixel_loss.py (89%) rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/losses/base_weighted_loss.py (83%) rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/losses/cross_entropy_loss_with_ignore.py (88%) rename otx/{mpa/modules/models/losses/mpa_pixel_base.py => algorithms/segmentation/adapters/mmseg/models/losses/otx_pixel_base.py} (88%) create mode 100644 otx/algorithms/segmentation/adapters/mmseg/models/schedulers/__init__.py rename otx/{mpa/modules/models/scalar_schedulers => algorithms/segmentation/adapters/mmseg/models/schedulers}/base.py (70%) rename 
otx/{mpa/modules/models/scalar_schedulers => algorithms/segmentation/adapters/mmseg/models/schedulers}/constant.py (73%) rename otx/{mpa/modules/models/scalar_schedulers => algorithms/segmentation/adapters/mmseg/models/schedulers}/poly.py (90%) rename otx/{mpa/modules/models/scalar_schedulers => algorithms/segmentation/adapters/mmseg/models/schedulers}/step.py (89%) create mode 100644 otx/algorithms/segmentation/adapters/mmseg/models/segmentors/class_incr_encoder_decoder.py rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/segmentors/mean_teacher_segmentor.py (66%) rename otx/{mpa/modules/models/segmentors/pixel_weights_mixin.py => algorithms/segmentation/adapters/mmseg/models/segmentors/mixin.py} (94%) rename otx/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/segmentors/otx_encoder_decoder.py (72%) delete mode 100644 otx/mpa/modules/datasets/pipelines/transforms/seg_custom_pipelines.py delete mode 100644 otx/mpa/modules/models/builder.py delete mode 100644 otx/mpa/modules/models/heads/aggregator_mixin.py delete mode 100644 otx/mpa/modules/models/heads/mix_loss_mixin.py delete mode 100644 otx/mpa/modules/models/heads/segment_out_norm_mixin.py delete mode 100644 otx/mpa/modules/models/scalar_schedulers/__init__.py delete mode 100644 otx/mpa/modules/models/segmentors/__init__.py delete mode 100644 otx/mpa/modules/models/segmentors/class_incr_encoder_decoder.py delete mode 100644 otx/mpa/modules/models/segmentors/mix_loss_mixin.py delete mode 100644 otx/mpa/modules/ov/models/mmseg/decode_heads/__init__.py delete mode 100644 otx/mpa/modules/utils/seg_utils.py delete mode 100644 tests/unit/algorithms/segmentation/adapters/mmseg/data/__init__.py delete mode 100644 tests/unit/algorithms/segmentation/adapters/mmseg/data/test_pipelines.py rename {otx/mpa/modules/models/backbones => tests/unit/algorithms/segmentation/adapters/mmseg/datasets}/__init__.py (54%) rename {otx/mpa/modules/ov/models/mmseg/backbones => tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines}/__init__.py (50%) rename tests/unit/{mpa/modules => algorithms/segmentation/adapters/mmseg}/datasets/pipelines/test_compose.py (98%) create mode 100644 tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_loads.py rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => datasets/pipelines}/test_pipelines_params_validation.py (97%) create mode 100644 tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_transforms.py rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => datasets}/test_dataset.py (97%) rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => datasets}/test_dataset_params_validation.py (99%) rename tests/unit/{mpa/modules/models/scalar_schedulers => algorithms/segmentation/adapters/mmseg/models/backbones}/__init__.py (50%) rename tests/unit/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/backbones/test_litehrnet.py (97%) rename tests/unit/{mpa/modules/ov/models/mmseg/backbones/test_ov_mmseg_mmov_backbone.py => algorithms/segmentation/adapters/mmseg/models/backbones/test_mmseg_mmov_backbone.py} (94%) rename tests/unit/{mpa/modules/models/backbones => algorithms/segmentation/adapters/mmseg/models/heads}/__init__.py (52%) rename tests/unit/{mpa/modules/ov/models/mmseg/decode_heads/test_ov_mmseg_mmov_decode_head.py => algorithms/segmentation/adapters/mmseg/models/heads/test_mmseg_mmov_decode_head.py} (95%) create mode 100644 
tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/__init__.py rename tests/unit/{mpa/modules => algorithms/segmentation/adapters/mmseg}/models/scalar_schedulers/test_schedulers.py (98%) delete mode 100644 tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines.py rename {otx/mpa/modules/ov/models/mmseg => tests/unit/algorithms/segmentation/adapters/mmseg/utils}/__init__.py (55%) rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => utils}/test_config_utils.py (100%) rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => utils}/test_config_utils_params_validation.py (100%) rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => utils}/test_data_utils.py (100%) rename tests/unit/algorithms/segmentation/adapters/mmseg/{ => utils}/test_data_utils_params_validation.py (100%) delete mode 100644 tests/unit/mpa/modules/datasets/pipelines/transforms/test_seg_custom_pipelines.py diff --git a/docs/source/guide/reference/mpa/modules/models/backbones.rst b/docs/source/guide/reference/mpa/modules/models/backbones.rst deleted file mode 100644 index 249f934ebb5..00000000000 --- a/docs/source/guide/reference/mpa/modules/models/backbones.rst +++ /dev/null @@ -1,14 +0,0 @@ -Backbones -^^^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.models.backbones - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.backbones.litehrnet - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/heads.rst b/docs/source/guide/reference/mpa/modules/models/heads.rst index 5b5a5de1c83..8fbf6601254 100644 --- a/docs/source/guide/reference/mpa/modules/models/heads.rst +++ b/docs/source/guide/reference/mpa/modules/models/heads.rst @@ -9,19 +9,10 @@ Heads :members: :undoc-members: -.. automodule:: otx.mpa.modules.models.heads.aggregator_mixin - :members: - :undoc-members: - - .. automodule:: otx.mpa.modules.models.heads.custom_cls_head :members: :undoc-members: -.. automodule:: otx.mpa.modules.models.heads.custom_fcn_head - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.models.heads.custom_hierarchical_linear_cls_head :members: :undoc-members: @@ -38,22 +29,10 @@ Heads :members: :undoc-members: -.. automodule:: otx.mpa.modules.models.heads.mix_loss_mixin - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.models.heads.non_linear_cls_head :members: :undoc-members: -.. automodule:: otx.mpa.modules.models.heads.pixel_weights_mixin - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.segment_out_norm_mixin - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.models.heads.semisl_cls_head :members: :undoc-members: diff --git a/docs/source/guide/reference/mpa/modules/models/index.rst b/docs/source/guide/reference/mpa/modules/models/index.rst index 45d93070e47..65621b35cdc 100644 --- a/docs/source/guide/reference/mpa/modules/models/index.rst +++ b/docs/source/guide/reference/mpa/modules/models/index.rst @@ -4,10 +4,7 @@ Models .. toctree:: :maxdepth: 1 - backbones classifiers heads losses - scalar_schedulers - segmentors utils \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/losses.rst b/docs/source/guide/reference/mpa/modules/models/losses.rst index d084ee3a9ab..f4c9301cb99 100644 --- a/docs/source/guide/reference/mpa/modules/models/losses.rst +++ b/docs/source/guide/reference/mpa/modules/models/losses.rst @@ -21,18 +21,6 @@ Losses :members: :undoc-members: -.. 
automodule:: otx.mpa.modules.models.losses.base_pixel_loss - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.base_weighted_loss - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.cross_entropy_loss_with_ignore - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.models.losses.cross_entropy_loss :members: :undoc-members: @@ -41,11 +29,6 @@ Losses :members: :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.mpa_pixel_base - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.models.losses.utils :members: :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/scalar_schedulers.rst b/docs/source/guide/reference/mpa/modules/models/scalar_schedulers.rst deleted file mode 100644 index 869c52a169b..00000000000 --- a/docs/source/guide/reference/mpa/modules/models/scalar_schedulers.rst +++ /dev/null @@ -1,10 +0,0 @@ -Scalar Schedulers -^^^^^^^^^^^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.models.scalar_schedulers - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/segmentors.rst b/docs/source/guide/reference/mpa/modules/models/segmentors.rst deleted file mode 100644 index 48184f2e04c..00000000000 --- a/docs/source/guide/reference/mpa/modules/models/segmentors.rst +++ /dev/null @@ -1,30 +0,0 @@ -Segmentors -^^^^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.models.segmentors - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.segmentors.class_incr_encoder_decoder - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.segmentors.mean_teacher_segmentor - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.segmentors.mix_loss_mixin - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.segmentors.otx_encoder_decoder - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.segmentors.pixel_weights_mixin - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/ov/models.rst b/docs/source/guide/reference/mpa/modules/ov/models.rst index 362d788172d..15ed83acb06 100644 --- a/docs/source/guide/reference/mpa/modules/ov/models.rst +++ b/docs/source/guide/reference/mpa/modules/ov/models.rst @@ -24,7 +24,3 @@ Models .. automodule:: otx.mpa.modules.ov.models.mmcls :members: :undoc-members: - -.. automodule:: otx.mpa.modules.ov.models.mmseg - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/utils.rst b/docs/source/guide/reference/mpa/modules/utils.rst index a22e69d158c..340be22339b 100644 --- a/docs/source/guide/reference/mpa/modules/utils.rst +++ b/docs/source/guide/reference/mpa/modules/utils.rst @@ -13,10 +13,6 @@ Utils :members: :undoc-members: -.. automodule:: otx.mpa.modules.utils.seg_utils - :members: - :undoc-members: - .. 
automodule:: otx.mpa.modules.utils.task_adapt :members: :undoc-members: \ No newline at end of file diff --git a/otx/algorithms/segmentation/adapters/__init__.py b/otx/algorithms/segmentation/adapters/__init__.py index 8830b8e5239..53d34a44210 100644 --- a/otx/algorithms/segmentation/adapters/__init__.py +++ b/otx/algorithms/segmentation/adapters/__init__.py @@ -1,4 +1,16 @@ """Adapters for Segmentation.""" + # Copyright (C) 2023 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. diff --git a/otx/algorithms/segmentation/adapters/mmseg/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/__init__.py index d8754290882..4651aac0f2b 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/__init__.py @@ -1,10 +1,38 @@ """OTX Adapters - mmseg.""" -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -from .data import MPASegDataset -from .models import DetConB, DetConLoss, SelfSLMLP, SupConDetConB +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
+ + +from .datasets import MPASegDataset +from .models import ( + ClassIncrEncoderDecoder, + ConstantScalarScheduler, + CrossEntropyLossWithIgnore, + CustomFCNHead, + DetConB, + DetConLoss, + LiteHRNet, + MeanTeacherSegmentor, + MMOVBackbone, + MMOVDecodeHead, + PolyScalarScheduler, + SelfSLMLP, + StepScalarScheduler, + SupConDetConB, +) # fmt: off # isort: off @@ -16,4 +44,20 @@ # fmt: off # isort: on -__all__ = ["MPASegDataset", "DetConLoss", "SelfSLMLP", "DetConB", "SupConDetConB"] +__all__ = [ + "MPASegDataset", + "LiteHRNet", + "MMOVBackbone", + "CustomFCNHead", + "MMOVDecodeHead", + "DetConLoss", + "SelfSLMLP", + "ConstantScalarScheduler", + "PolyScalarScheduler", + "StepScalarScheduler", + "DetConB", + "CrossEntropyLossWithIgnore", + "SupConDetConB", + "ClassIncrEncoderDecoder", + "MeanTeacherSegmentor", +] diff --git a/otx/algorithms/segmentation/adapters/mmseg/data/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/datasets/__init__.py similarity index 89% rename from otx/algorithms/segmentation/adapters/mmseg/data/__init__.py rename to otx/algorithms/segmentation/adapters/mmseg/datasets/__init__.py index f62eeed6289..6072fc61b45 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/data/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/datasets/__init__.py @@ -1,6 +1,6 @@ """OTX Algorithms - Segmentation Dataset.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -18,13 +18,17 @@ from .pipelines import ( LoadAnnotationFromOTXDataset, LoadImageFromOTXDataset, + MaskCompose, + ProbCompose, TwoCropTransform, ) __all__ = [ - "get_annotation_mmseg_format", - "LoadImageFromOTXDataset", "LoadAnnotationFromOTXDataset", - "MPASegDataset", + "LoadImageFromOTXDataset", + "MaskCompose", + "ProbCompose", "TwoCropTransform", + "get_annotation_mmseg_format", + "MPASegDataset", ] diff --git a/otx/algorithms/segmentation/adapters/mmseg/data/dataset.py b/otx/algorithms/segmentation/adapters/mmseg/datasets/dataset.py similarity index 99% rename from otx/algorithms/segmentation/adapters/mmseg/data/dataset.py rename to otx/algorithms/segmentation/adapters/mmseg/datasets/dataset.py index 9e506a04cc2..eb267d4fe12 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/data/dataset.py +++ b/otx/algorithms/segmentation/adapters/mmseg/datasets/dataset.py @@ -1,6 +1,6 @@ """Base MMDataset for Segmentation Task.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py new file mode 100644 index 00000000000..ec2878f7abc --- /dev/null +++ b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py @@ -0,0 +1,21 @@ +"""OTX Algorithms - Segmentation pipelines.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. 
+# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. + +from .compose import MaskCompose, ProbCompose +from .loads import LoadAnnotationFromOTXDataset, LoadImageFromOTXDataset +from .transforms import TwoCropTransform + +__all__ = ["MaskCompose", "ProbCompose", "LoadImageFromOTXDataset", "LoadAnnotationFromOTXDataset", "TwoCropTransform"] diff --git a/otx/mpa/modules/datasets/pipelines/compose.py b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/compose.py similarity index 87% rename from otx/mpa/modules/datasets/pipelines/compose.py rename to otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/compose.py index f8ea7953a62..99ecfc4bf36 100644 --- a/otx/mpa/modules/datasets/pipelines/compose.py +++ b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/compose.py @@ -1,4 +1,6 @@ -# Copyright (C) 2022 Intel Corporation +"""Collection of compose pipelines for segmentation task.""" + +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -11,8 +13,11 @@ from scipy.ndimage import gaussian_filter +# pylint: disable=consider-using-f-string @PIPELINES.register_module() -class ProbCompose(object): +class ProbCompose: + """Compose pipelines in a list and enable or disable them with the probability.""" + def __init__(self, transforms, probs): assert isinstance(transforms, Sequence) assert isinstance(probs, Sequence) @@ -35,6 +40,7 @@ def __init__(self, transforms, probs): raise TypeError(f"transform must be callable or a dict, but got {type(transform)}") def __call__(self, data): + """Callback function of ProbCompose.""" rand_value = np.random.rand() transform_id = np.max(np.where(rand_value > self.limits)[0]) @@ -44,17 +50,20 @@ def __call__(self, data): return data def __repr__(self): + """Repr.""" format_string = self.__class__.__name__ + "(" for t in self.transforms: format_string += "\n" - format_string += " {0}".format(t) + format_string += f" {t}" format_string += "\n)" return format_string @PIPELINES.register_module() -class MaskCompose(object): +class MaskCompose: + """Compose mask-related pipelines in a list and enable or disable them with the probability.""" + def __init__(self, transforms, prob, lambda_limits=(4, 16), keep_original=False): self.keep_original = keep_original self.prob = prob @@ -102,6 +111,7 @@ def _mix_img(main_img, aux_img, mask): return np.where(np.expand_dims(mask, axis=2), main_img, aux_img) def __call__(self, data): + """Callback function of MaskCompose.""" main_data = self._apply_transforms(deepcopy(data), self.transforms) assert main_data is not None if not self.keep_original and np.random.rand() > self.prob: @@ -123,10 +133,11 @@ def __call__(self, data): return main_data def __repr__(self): + """Repr.""" format_string = self.__class__.__name__ + "(" for t in self.transforms: format_string += "\n" - format_string += " {0}".format(t) + format_string += f" {t}" format_string += "\n)" return format_string diff --git a/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/loads.py b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/loads.py new file mode 100644 index 00000000000..e35c589f492 --- /dev/null +++ 
b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/loads.py @@ -0,0 +1,57 @@ +"""Collection of load pipelines for segmentation task.""" + +# Copyright (C) 2021 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. +from typing import Any, Dict + +from mmseg.datasets.builder import PIPELINES + +import otx.core.data.pipelines.load_image_from_otx_dataset as load_image_base +from otx.algorithms.segmentation.adapters.mmseg.datasets.dataset import ( + get_annotation_mmseg_format, +) +from otx.api.utils.argument_checks import check_input_parameters_type + + +# pylint: disable=too-many-instance-attributes, too-many-arguments +@PIPELINES.register_module() +class LoadImageFromOTXDataset(load_image_base.LoadImageFromOTXDataset): + """Pipeline element that loads an image from a OTX Dataset on the fly.""" + + +@PIPELINES.register_module() +class LoadAnnotationFromOTXDataset: + """Pipeline element that loads an annotation from a OTX Dataset on the fly. + + Expected entries in the 'results' dict that should be passed to this pipeline element are: + results['dataset_item']: dataset_item from which to load the annotation + results['ann_info']['label_list']: list of all labels in the project + + """ + + def __init__(self): + pass + + @check_input_parameters_type() + def __call__(self, results: Dict[str, Any]): + """Callback function of LoadAnnotationFromOTXDataset.""" + dataset_item = results["dataset_item"] + labels = results["ann_info"]["labels"] + + ann_info = get_annotation_mmseg_format(dataset_item, labels) + + results["gt_semantic_seg"] = ann_info["gt_semantic_seg"] + results["seg_fields"].append("gt_semantic_seg") + + return results diff --git a/otx/algorithms/segmentation/adapters/mmseg/data/pipelines.py b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/transforms.py similarity index 65% rename from otx/algorithms/segmentation/adapters/mmseg/data/pipelines.py rename to otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/transforms.py index 9d0f0278954..2ba3ada9e27 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/data/pipelines.py +++ b/otx/algorithms/segmentation/adapters/mmseg/datasets/pipelines/transforms.py @@ -1,66 +1,156 @@ -"""Collection Pipeline for segmentation task.""" -# Copyright (C) 2021 Intel Corporation -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 +"""Collection of transfrom pipelines for segmentation task.""" + +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 # -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions -# and limitations under the License. 
+ from copy import deepcopy from typing import Any, Dict, List +import mmcv import numpy as np +from mmcv.parallel import DataContainer as DC from mmcv.utils import build_from_cfg from mmseg.datasets.builder import PIPELINES from mmseg.datasets.pipelines import Compose +from mmseg.datasets.pipelines.formatting import to_tensor from PIL import Image from torchvision import transforms as T from torchvision.transforms import functional as F -import otx.core.data.pipelines.load_image_from_otx_dataset as load_image_base from otx.api.utils.argument_checks import check_input_parameters_type -from .dataset import get_annotation_mmseg_format +@PIPELINES.register_module(force=True) +class Normalize: + """Normalize the image. -# pylint: disable=too-many-instance-attributes, too-many-arguments -@PIPELINES.register_module() -class LoadImageFromOTXDataset(load_image_base.LoadImageFromOTXDataset): - """Pipeline element that loads an image from a OTX Dataset on the fly.""" + Added key is "img_norm_cfg". + + Args: + mean (sequence): Mean values of 3 channels. + std (sequence): Std values of 3 channels. + to_rgb (bool): Whether to convert the image from BGR to RGB, + default is true. + """ + def __init__(self, mean, std, to_rgb=True): + self.mean = np.array(mean, dtype=np.float32) + self.std = np.array(std, dtype=np.float32) + self.to_rgb = to_rgb -@PIPELINES.register_module() -class LoadAnnotationFromOTXDataset: - """Pipeline element that loads an annotation from a OTX Dataset on the fly. + def __call__(self, results): + """Call function to normalize images. + + Args: + results (dict): Result dict from loading pipeline. + + Returns: + dict: Normalized results, 'img_norm_cfg' key is added into + result dict. + """ - Expected entries in the 'results' dict that should be passed to this pipeline element are: - results['dataset_item']: dataset_item from which to load the annotation - results['ann_info']['label_list']: list of all labels in the project + for target in ["img", "ul_w_img", "aux_img"]: + if target in results: + results[target] = mmcv.imnormalize(results[target], self.mean, self.std, self.to_rgb) + results["img_norm_cfg"] = dict(mean=self.mean, std=self.std, to_rgb=self.to_rgb) + return results + + def __repr__(self): + """Repr.""" + repr_str = self.__class__.__name__ + repr_str += f"(mean={self.mean}, std={self.std}, to_rgb=" f"{self.to_rgb})" + return repr_str + + +@PIPELINES.register_module(force=True) +class DefaultFormatBundle: + """Default formatting bundle. + + It simplifies the pipeline of formatting common fields, including "img" + and "gt_semantic_seg". These fields are formatted as follows. + + - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) + - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, + (3)to DataContainer (stack=True) """ - def __init__(self): - pass + def __call__(self, results): + """Call function to transform and format common fields in results. - @check_input_parameters_type() - def __call__(self, results: Dict[str, Any]): - """Callback function of LoadAnnotationFromOTXDataset.""" - dataset_item = results["dataset_item"] - labels = results["ann_info"]["labels"] + Args: + results (dict): Result dict contains the data to convert. - ann_info = get_annotation_mmseg_format(dataset_item, labels) + Returns: + dict: The result dict contains the data that is formatted with + default bundle. 
+ """ + for target in ["img", "ul_w_img", "aux_img"]: + if target not in results: + continue + + img = results[target] + if len(img.shape) < 3: + img = np.expand_dims(img, -1) + + if len(img.shape) == 3: + img = np.ascontiguousarray(img.transpose(2, 0, 1)).astype(np.float32) + elif len(img.shape) == 4: + # for selfsl or supcon + img = np.ascontiguousarray(img.transpose(0, 3, 1, 2)).astype(np.float32) + else: + raise ValueError(f"img.shape={img.shape} is not supported.") + + results[target] = DC(to_tensor(img), stack=True) + + for trg_name in ["gt_semantic_seg", "gt_class_borders", "pixel_weights"]: + if trg_name not in results: + continue + + out_type = np.float32 if trg_name == "pixel_weights" else np.int64 + results[trg_name] = DC(to_tensor(results[trg_name][None, ...].astype(out_type)), stack=True) + + return results + + def __repr__(self): + """Repr.""" + return self.__class__.__name__ + + +@PIPELINES.register_module() +class BranchImage: + """Branch images by copying with name of key. - results["gt_semantic_seg"] = ann_info["gt_semantic_seg"] - results["seg_fields"].append("gt_semantic_seg") + Args: + key_map (dict): keys to name each image. + """ + + def __init__(self, key_map): + self.key_map = key_map + + def __call__(self, results): + """Call function to branch images in img_fields in results. + Args: + results (dict): Result dict contains the image data to branch. + + Returns: + dict: The result dict contains the original image data and copied image data. + """ + for key1, key2 in self.key_map.items(): + if key1 in results: + results[key2] = results[key1] + if key1 in results["img_fields"]: + results["img_fields"].append(key2) return results + def __repr__(self): + """Repr.""" + + repr_str = self.__class__.__name__ + return repr_str + @PIPELINES.register_module() class TwoCropTransform: diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/__init__.py index d4ef2e9c4ef..fa66af700d4 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/__init__.py @@ -1,6 +1,6 @@ """Adapters for OTX Common Algorithm. - mmseg.model.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,9 +14,35 @@ # See the License for the specific language governing permissions # and limitations under the License. 
- -from .losses import DetConLoss +from .backbones import LiteHRNet, MMOVBackbone +from .heads import CustomFCNHead, MMOVDecodeHead +from .losses import CrossEntropyLossWithIgnore, DetConLoss from .necks import SelfSLMLP -from .segmentors import DetConB, SupConDetConB +from .schedulers import ( + ConstantScalarScheduler, + PolyScalarScheduler, + StepScalarScheduler, +) +from .segmentors import ( + ClassIncrEncoderDecoder, + DetConB, + MeanTeacherSegmentor, + SupConDetConB, +) -__all__ = ["DetConLoss", "SelfSLMLP", "DetConB", "SupConDetConB"] +__all__ = [ + "LiteHRNet", + "MMOVBackbone", + "CustomFCNHead", + "MMOVDecodeHead", + "DetConLoss", + "SelfSLMLP", + "ConstantScalarScheduler", + "PolyScalarScheduler", + "StepScalarScheduler", + "DetConB", + "CrossEntropyLossWithIgnore", + "SupConDetConB", + "ClassIncrEncoderDecoder", + "MeanTeacherSegmentor", +] diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py new file mode 100644 index 00000000000..a241bbb48f8 --- /dev/null +++ b/otx/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py @@ -0,0 +1,24 @@ +"""Backbones for semantic segmentation.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. + + +from .litehrnet import LiteHRNet +from .mmov_backbone import MMOVBackbone + +__all__ = [ + "LiteHRNet", + "MMOVBackbone", +] diff --git a/otx/mpa/modules/models/backbones/litehrnet.py b/otx/algorithms/segmentation/adapters/mmseg/models/backbones/litehrnet.py similarity index 96% rename from otx/mpa/modules/models/backbones/litehrnet.py rename to otx/algorithms/segmentation/adapters/mmseg/models/backbones/litehrnet.py index 85f7df0ac31..acbe46316af 100644 --- a/otx/mpa/modules/models/backbones/litehrnet.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/backbones/litehrnet.py @@ -1,19 +1,22 @@ +"""HRNet network modules for base backbone. + +Modified from: +- https://github.com/HRNet/Lite-HRNet +""" + # Copyright (c) 2018-2020 Open-MMLab. 
# SPDX-License-Identifier: Apache-2.0 # # Copyright (c) 2021 DeLightCMU # SPDX-License-Identifier: Apache-2.0 # -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -"""Modified from: https://github.com/HRNet/Lite-HRNet""" - import mmcv import torch -import torch.nn as nn import torch.nn.functional as F import torch.utils.checkpoint as cp from mmcv.cnn import ( @@ -28,8 +31,9 @@ from mmseg.models.backbones.resnet import BasicBlock, Bottleneck from mmseg.models.builder import BACKBONES from mmseg.utils import get_root_logger +from torch import nn -from ..utils import ( +from otx.mpa.modules.models.utils import ( AsymmetricPositionAttentionModule, IterativeAggregator, LocalAttentionModule, @@ -37,7 +41,11 @@ ) +# pylint: disable=invalid-name, too-many-lines, too-many-instance-attributes, too-many-locals, too-many-arguments +# pylint: disable=unused-argument, consider-using-enumerate class NeighbourSupport(nn.Module): + """Neighbour support module.""" + def __init__(self, channels, kernel_size=3, key_ratio=8, value_ratio=8, conv_cfg=None, norm_cfg=None): super().__init__() @@ -100,6 +108,7 @@ def __init__(self, channels, kernel_size=3, key_ratio=8, value_ratio=8, conv_cfg ) def forward(self, x): + """Forward.""" h, w = [int(_) for _ in x.size()[-2:]] key = self.key(x).view(-1, 1, self.kernel_size**2, h, w) @@ -115,6 +124,8 @@ def forward(self, x): class CrossResolutionWeighting(nn.Module): + """Cross resolution weighting.""" + def __init__( self, channels, ratio=16, conv_cfg=None, norm_cfg=None, act_cfg=(dict(type="ReLU"), dict(type="Sigmoid")) ): @@ -148,6 +159,7 @@ def __init__( ) def forward(self, x): + """Forward.""" min_size = [int(_) for _ in x[-1].size()[-2:]] out = [F.adaptive_avg_pool2d(s, min_size) for s in x[:-1]] + [x[-1]] @@ -161,6 +173,8 @@ def forward(self, x): class SpatialWeighting(nn.Module): + """Spatial weighting.""" + def __init__(self, channels, ratio=16, conv_cfg=None, act_cfg=(dict(type="ReLU"), dict(type="Sigmoid")), **kwargs): super().__init__() @@ -188,6 +202,7 @@ def __init__(self, channels, ratio=16, conv_cfg=None, act_cfg=(dict(type="ReLU") ) def forward(self, x): + """Forward.""" out = self.global_avgpool(x) out = self.conv1(out) out = self.conv2(out) @@ -196,7 +211,7 @@ def forward(self, x): class SpatialWeightingV2(nn.Module): - """The original repo: https://github.com/DeLightCMU/PSA""" + """The original repo: https://github.com/DeLightCMU/PSA.""" def __init__(self, channels, ratio=16, conv_cfg=None, norm_cfg=None, enable_norm=False, **kwargs): super().__init__() @@ -294,6 +309,7 @@ def _spatial_weighting(self, x): return out def forward(self, x): + """Forward.""" y_channel = self._channel_weighting(x) y_spatial = self._spatial_weighting(x) out = y_channel + y_spatial @@ -302,13 +318,15 @@ def forward(self, x): class ConditionalChannelWeighting(nn.Module): + """Conditional channel weighting module.""" + def __init__( self, in_channels, stride, reduce_ratio, conv_cfg=None, - norm_cfg=dict(type="BN"), + norm_cfg=None, with_cp=False, dropout=None, weighting_module_version="v1", @@ -317,6 +335,9 @@ def __init__( ): super().__init__() + if norm_cfg is None: + norm_cfg = dict(type="BN") + self.with_cp = with_cp self.stride = stride assert stride in [1, 2] @@ -389,6 +410,7 @@ def _inner_forward(self, x): return out def forward(self, x): + """Forward.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self._inner_forward, x) else: @@ -398,6 +420,8 @@ def forward(self, x): class 
Stem(nn.Module): + """Stem.""" + def __init__( self, in_channels, @@ -405,7 +429,7 @@ def __init__( out_channels, expand_ratio, conv_cfg=None, - norm_cfg=dict(type="BN"), + norm_cfg=None, with_cp=False, strides=(2, 2), extra_stride=False, @@ -413,6 +437,9 @@ def __init__( ): super().__init__() + if norm_cfg is None: + norm_cfg = dict(type="BN") + assert isinstance(strides, (tuple, list)) assert len(strides) == 2 @@ -535,6 +562,7 @@ def _inner_forward(self, x): return out def forward(self, x): + """Forward.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self._inner_forward, x) else: @@ -544,6 +572,8 @@ def forward(self, x): class StemV2(nn.Module): + """StemV2.""" + def __init__( self, in_channels, @@ -551,7 +581,7 @@ def __init__( out_channels, expand_ratio, conv_cfg=None, - norm_cfg=dict(type="BN"), + norm_cfg=None, with_cp=False, num_stages=1, strides=(2, 2), @@ -560,6 +590,9 @@ def __init__( ): super().__init__() + if norm_cfg is None: + norm_cfg = dict(type="BN") + assert num_stages > 0 assert isinstance(strides, (tuple, list)) assert len(strides) == 1 + num_stages @@ -689,6 +722,7 @@ def _inner_forward(self, x): return out_list def forward(self, x): + """Forward.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self._inner_forward, x) else: @@ -720,11 +754,17 @@ def __init__( out_channels, stride=1, conv_cfg=None, - norm_cfg=dict(type="BN"), - act_cfg=dict(type="ReLU"), + norm_cfg=None, + act_cfg=None, with_cp=False, ): super().__init__() + + if norm_cfg is None: + norm_cfg = dict(type="BN") + if act_cfg is None: + act_cfg = dict(type="ReLU") + self.stride = stride self.with_cp = with_cp @@ -812,6 +852,7 @@ def _inner_forward(self, x): return out def forward(self, x): + """Forward.""" if self.with_cp and x.requires_grad: out = cp.checkpoint(self._inner_forward, x) else: @@ -821,6 +862,8 @@ def forward(self, x): class LiteHRModule(nn.Module): + """LiteHR module.""" + def __init__( self, num_branches, @@ -831,13 +874,16 @@ def __init__( multiscale_output=False, with_fuse=True, conv_cfg=None, - norm_cfg=dict(type="BN"), + norm_cfg=None, with_cp=False, dropout=None, weighting_module_version="v1", neighbour_weighting=False, ): super().__init__() + + if norm_cfg is None: + norm_cfg = dict(type="BN") self._check_branches(num_branches, in_channels) self.in_channels = in_channels @@ -871,7 +917,7 @@ def _check_branches(num_branches, in_channels): def _make_weighting_blocks(self, num_blocks, reduce_ratio, stride=1, dropout=None): layers = [] - for i in range(num_blocks): + for _ in range(num_blocks): layers.append( ConditionalChannelWeighting( self.in_channels, @@ -902,7 +948,7 @@ def _make_one_branch(self, branch_index, num_blocks, stride=1): with_cp=self.with_cp, ) ] - for i in range(1, num_blocks): + for _ in range(1, num_blocks): layers.append( ShuffleUnit( self.in_channels[branch_index], @@ -1081,7 +1127,7 @@ def __init__( extra, in_channels=3, conv_cfg=None, - norm_cfg=dict(type="BN"), + norm_cfg=None, norm_eval=False, with_cp=False, zero_init_residual=False, @@ -1090,6 +1136,9 @@ def __init__( ): super().__init__(init_cfg=init_cfg) + if norm_cfg is None: + norm_cfg = dict(type="BN") + self.extra = extra self.conv_cfg = conv_cfg self.norm_cfg = norm_cfg @@ -1123,12 +1172,12 @@ def __init__( num_channels = self.stages_spec["num_channels"][i] num_channels = [num_channels[i] for i in range(len(num_channels))] - setattr(self, "transition{}".format(i), self._make_transition_layer(num_channels_last, num_channels)) + setattr(self, f"transition{i}", 
self._make_transition_layer(num_channels_last, num_channels)) stage, num_channels_last = self._make_stage( self.stages_spec, i, num_channels, multiscale_output=True, dropout=dropout ) - setattr(self, "stage{}".format(i), stage) + setattr(self, f"stage{i}", stage) self.out_modules = None if self.extra.get("out_modules") is not None: @@ -1356,7 +1405,7 @@ def forward(self, x): y_list = [y] for i in range(self.num_stages): - transition_modules = getattr(self, "transition{}".format(i)) + transition_modules = getattr(self, f"transition{i}") stage_inputs = [] for j in range(self.stages_spec["num_branches"][i]): @@ -1368,7 +1417,7 @@ def forward(self, x): else: stage_inputs.append(y_list[j]) - stage_module = getattr(self, "stage{}".format(i)) + stage_module = getattr(self, f"stage{i}") y_list = stage_module(stage_inputs) if self.out_modules is not None: diff --git a/otx/mpa/modules/ov/models/mmseg/backbones/mmov_backbone.py b/otx/algorithms/segmentation/adapters/mmseg/models/backbones/mmov_backbone.py similarity index 64% rename from otx/mpa/modules/ov/models/mmseg/backbones/mmov_backbone.py rename to otx/algorithms/segmentation/adapters/mmseg/models/backbones/mmov_backbone.py index 31e4aaaf218..0b19fc7eb12 100644 --- a/otx/mpa/modules/ov/models/mmseg/backbones/mmov_backbone.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/backbones/mmov_backbone.py @@ -1,18 +1,25 @@ -# Copyright (C) 2022 Intel Corporation +"""Backbone used for openvino export.""" + +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # from mmseg.models.builder import BACKBONES -from ...mmov_model import MMOVModel +from otx.mpa.modules.ov.models.mmov_model import MMOVModel + +# pylint: disable=unused-argument @BACKBONES.register_module() class MMOVBackbone(MMOVModel): + """MMOVBackbone.""" + def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) def forward(self, *args, **kwargs): + """Forward.""" outputs = super().forward(*args, **kwargs) if not isinstance(outputs, tuple): outputs = (outputs,) @@ -20,5 +27,6 @@ def forward(self, *args, **kwargs): return outputs def init_weights(self, pretrained=None): + """Initialize the weights.""" # TODO - pass + return diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py new file mode 100644 index 00000000000..8ea771a0803 --- /dev/null +++ b/otx/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py @@ -0,0 +1,21 @@ +"""Semantic segmentation heads.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
+ + +from .custom_fcn_head import CustomFCNHead +from .mmov_decode_head import MMOVDecodeHead + +__all__ = ["MMOVDecodeHead", "CustomFCNHead"] diff --git a/otx/mpa/modules/models/heads/custom_fcn_head.py b/otx/algorithms/segmentation/adapters/mmseg/models/heads/custom_fcn_head.py similarity index 62% rename from otx/mpa/modules/models/heads/custom_fcn_head.py rename to otx/algorithms/segmentation/adapters/mmseg/models/heads/custom_fcn_head.py index 00efd56cec0..20b3fb2039b 100644 --- a/otx/mpa/modules/models/heads/custom_fcn_head.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/heads/custom_fcn_head.py @@ -1,18 +1,24 @@ -# Copyright (C) 2022 Intel Corporation +"""Custom FCN head.""" + +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # from mmseg.models.builder import HEADS from mmseg.models.decode_heads.fcn_head import FCNHead -from .aggregator_mixin import AggregatorMixin -from .mix_loss_mixin import MixLossMixin -from .pixel_weights_mixin import PixelWeightsMixin2 -from .segment_out_norm_mixin import SegmentOutNormMixin +from .mixin import ( + AggregatorMixin, + MixLossMixin, + PixelWeightsMixin2, + SegmentOutNormMixin, +) @HEADS.register_module() -class CustomFCNHead(SegmentOutNormMixin, AggregatorMixin, MixLossMixin, PixelWeightsMixin2, FCNHead): +class CustomFCNHead( + SegmentOutNormMixin, AggregatorMixin, MixLossMixin, PixelWeightsMixin2, FCNHead +): # pylint: disable=too-many-ancestors """Custom Fully Convolution Networks for Semantic Segmentation.""" def __init__(self, *args, **kwargs): diff --git a/otx/mpa/modules/models/heads/pixel_weights_mixin.py b/otx/algorithms/segmentation/adapters/mmseg/models/heads/mixin.py similarity index 59% rename from otx/mpa/modules/models/heads/pixel_weights_mixin.py rename to otx/algorithms/segmentation/adapters/mmseg/models/heads/mixin.py index 6ca8821b3ac..ecb47213a8d 100644 --- a/otx/mpa/modules/models/heads/pixel_weights_mixin.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/heads/mixin.py @@ -1,26 +1,144 @@ -# Copyright (C) 2022 Intel Corporation +"""Modules for aggregator and loss mix.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # - -import torch.nn as nn +import torch +import torch.nn.functional as F from mmcv.runner import force_fp32 from mmseg.core import add_prefix from mmseg.models.losses import accuracy from mmseg.ops import resize +from torch import nn -from otx.mpa.modules.utils.seg_utils import get_valid_label_mask_per_batch +from otx.algorithms.segmentation.adapters.mmseg.utils import ( + get_valid_label_mask_per_batch, +) +from otx.mpa.modules.models.losses.utils import LossEqualizer +from otx.mpa.modules.models.utils import AngularPWConv, IterativeAggregator, normalize -from ..losses.utils import LossEqualizer +# pylint: disable=abstract-method, unused-argument, keyword-arg-before-vararg -class PixelWeightsMixin(nn.Module): +class SegmentOutNormMixin(nn.Module): + """SegmentOutNormMixin.""" + + def __init__(self, *args, enable_out_seg=True, enable_out_norm=False, **kwargs): + super().__init__(*args, **kwargs) + + self.enable_out_seg = enable_out_seg + self.enable_out_norm = enable_out_norm + + if enable_out_seg: + if enable_out_norm: + self.conv_seg = AngularPWConv(self.channels, self.out_channels, clip_output=True) + else: + self.conv_seg = None + + def cls_seg(self, feat): + """Classify each pixel.""" + if self.dropout is not None: + feat = self.dropout(feat) + if self.enable_out_norm: + feat = normalize(feat, dim=1, p=2) + if self.conv_seg is 
not None: + return self.conv_seg(feat) + return feat + + +class AggregatorMixin(nn.Module): + """A class for creating an aggregator.""" + def __init__( self, - enable_loss_equalizer=False, - loss_target="gt_semantic_seg", *args, + enable_aggregator=False, + aggregator_min_channels=None, + aggregator_merge_norm=None, + aggregator_use_concat=False, **kwargs, ): + + in_channels = kwargs.get("in_channels") + in_index = kwargs.get("in_index") + norm_cfg = kwargs.get("norm_cfg") + conv_cfg = kwargs.get("conv_cfg") + input_transform = kwargs.get("input_transform") + + aggregator = None + if enable_aggregator: + assert isinstance(in_channels, (tuple, list)) + assert len(in_channels) > 1 + + aggregator = IterativeAggregator( + in_channels=in_channels, + min_channels=aggregator_min_channels, + conv_cfg=conv_cfg, + norm_cfg=norm_cfg, + merge_norm=aggregator_merge_norm, + use_concat=aggregator_use_concat, + ) + + aggregator_min_channels = aggregator_min_channels if aggregator_min_channels is not None else 0 + # change arguments temporarily + kwargs["in_channels"] = max(in_channels[0], aggregator_min_channels) + kwargs["input_transform"] = None + if in_index is not None: + kwargs["in_index"] = in_index[0] + + super().__init__(*args, **kwargs) + + self.aggregator = aggregator + # re-define variables + self.in_channels = in_channels + self.input_transform = input_transform + self.in_index = in_index + + def _transform_inputs(self, inputs): + inputs = super()._transform_inputs(inputs) + if self.aggregator is not None: + inputs = self.aggregator(inputs)[0] + return inputs + + +class MixLossMixin(nn.Module): + """Loss mixing module.""" + + @staticmethod + def _mix_loss(logits, target, ignore_index=255): + num_samples = logits.size(0) + assert num_samples % 2 == 0 + + with torch.no_grad(): + probs = F.softmax(logits, dim=1) + probs_a, probs_b = torch.split(probs, num_samples // 2) + mean_probs = 0.5 * (probs_a + probs_b) + trg_probs = torch.cat([mean_probs, mean_probs], dim=0) + + log_probs = torch.log_softmax(logits, dim=1) + losses = torch.sum(trg_probs * log_probs, dim=1).neg() + + valid_mask = target != ignore_index + valid_losses = torch.where(valid_mask, losses, torch.zeros_like(losses)) + + return valid_losses.mean() + + @force_fp32(apply_to=("seg_logit",)) + def losses(self, seg_logit, seg_label, train_cfg, *args, **kwargs): + """Loss computing.""" + loss = super().losses(seg_logit, seg_label, train_cfg, *args, **kwargs) + if train_cfg.get("mix_loss", None) and train_cfg.mix_loss.get("enable", False): + mix_loss = self._mix_loss(seg_logit, seg_label, ignore_index=self.ignore_index) + + mix_loss_weight = train_cfg.mix_loss.get("weight", 1.0) + loss["loss_mix"] = mix_loss_weight * mix_loss + + return loss + + +class PixelWeightsMixin(nn.Module): + """PixelWeightsMixin.""" + + def __init__(self, enable_loss_equalizer=False, loss_target="gt_semantic_seg", *args, **kwargs): super().__init__(*args, **kwargs) self.enable_loss_equalizer = enable_loss_equalizer @@ -34,10 +152,12 @@ def __init__( @property def loss_target_name(self): + """Return loss target name.""" return self.loss_target @property def last_scale(self): + """Return the last scale.""" if not isinstance(self.loss_decode, nn.ModuleList): losses_decode = [self.loss_decode] else: @@ -54,6 +174,7 @@ def last_scale(self): return loss_module.last_scale def set_step_params(self, init_iter, epoch_size): + """Set step parameters.""" if not isinstance(self.loss_decode, nn.ModuleList): losses_decode = [self.loss_decode] else: @@ -73,6 +194,7 @@ def 
forward_train( return_logits=False, ): """Forward function for training. + Args: inputs (list[Tensor]): List of multi-level img features. img_metas (list[dict]): List of image info dict where each dict @@ -138,6 +260,8 @@ def losses(self, seg_logit, seg_label, train_cfg, pixel_weights=None): class PixelWeightsMixin2(PixelWeightsMixin): + """Pixel weight mixin class.""" + def forward_train( self, inputs, @@ -148,6 +272,7 @@ def forward_train( return_logits=False, ): """Forward function for training. + Args: inputs (list[Tensor]): List of multi-level img features. img_metas (list[dict]): List of image info dict where each dict @@ -176,7 +301,9 @@ def forward_train( return losses @force_fp32(apply_to=("seg_logit",)) - def losses(self, seg_logit, seg_label, train_cfg, valid_label_mask, pixel_weights=None): + def losses( + self, seg_logit, seg_label, train_cfg, valid_label_mask, pixel_weights=None + ): # pylint: disable=arguments-renamed """Compute segmentation loss.""" loss = dict() diff --git a/otx/mpa/modules/ov/models/mmseg/decode_heads/mmov_decode_head.py b/otx/algorithms/segmentation/adapters/mmseg/models/heads/mmov_decode_head.py similarity index 78% rename from otx/mpa/modules/ov/models/mmseg/decode_heads/mmov_decode_head.py rename to otx/algorithms/segmentation/adapters/mmseg/models/heads/mmov_decode_head.py index 375fb51b17b..75fc3083919 100644 --- a/otx/mpa/modules/ov/models/mmseg/decode_heads/mmov_decode_head.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/heads/mmov_decode_head.py @@ -1,4 +1,6 @@ -# Copyright (C) 2022 Intel Corporation +"""Decode-head used for openvino export.""" + +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,25 +8,31 @@ from typing import Dict, List, Optional, Union import openvino.runtime as ov -from mmseg.models.builder import HEADS from mmseg.models.decode_heads.decode_head import BaseDecodeHead -from ...mmov_model import MMOVModel +from otx.mpa.modules.ov.models.mmov_model import MMOVModel + +# pylint: disable=too-many-instance-attributes, keyword-arg-before-vararg -@HEADS.register_module() class MMOVDecodeHead(BaseDecodeHead): + """MMOVDecodeHead.""" + def __init__( self, model_path_or_model: Union[str, ov.Model] = None, weight_path: Optional[str] = None, - inputs: Dict[str, Union[str, List[str]]] = {}, - outputs: Dict[str, Union[str, List[str]]] = {}, + inputs: Optional[Dict[str, Union[str, List[str]]]] = None, + outputs: Optional[Dict[str, Union[str, List[str]]]] = None, init_weight: bool = False, verify_shape: bool = True, *args, - **kwargs, + **kwargs ): + if inputs is None: + inputs = {} + if outputs is None: + outputs = {} self._model_path_or_model = model_path_or_model self._weight_path = weight_path self._inputs = deepcopy(inputs) @@ -68,10 +76,12 @@ def __init__( ) def init_weights(self): + """Init weights.""" # TODO - pass + return def forward(self, inputs): + """Forward.""" outputs = self._transform_inputs(inputs) if getattr(self, "extractor"): outputs = self.extractor(outputs) diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/losses/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/losses/__init__.py index 7455c2529c3..f6ba9b2e32b 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/losses/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/losses/__init__.py @@ -1,6 +1,6 @@ """Segmentation losses.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the 
"License"); # you may not use this file except in compliance with the License. @@ -14,6 +14,7 @@ # See the License for the specific language governing permissions # and limitations under the License. +from .cross_entropy_loss_with_ignore import CrossEntropyLossWithIgnore from .detcon_loss import DetConLoss -__all__ = ["DetConLoss"] +__all__ = ["DetConLoss", "CrossEntropyLossWithIgnore"] diff --git a/otx/mpa/modules/models/losses/base_pixel_loss.py b/otx/algorithms/segmentation/adapters/mmseg/models/losses/base_pixel_loss.py similarity index 89% rename from otx/mpa/modules/models/losses/base_pixel_loss.py rename to otx/algorithms/segmentation/adapters/mmseg/models/losses/base_pixel_loss.py index 423cf744fea..1d0d69e27c8 100644 --- a/otx/mpa/modules/models/losses/base_pixel_loss.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/losses/base_pixel_loss.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Base pixel loss.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,18 +9,23 @@ import torch.nn.functional as F from mmseg.models.losses.utils import weight_reduce_loss -from otx.mpa.modules.models.builder import build_scalar_scheduler +from otx.algorithms.segmentation.adapters.mmseg.utils.builder import ( + build_scalar_scheduler, +) from .base_weighted_loss import BaseWeightedLoss def entropy(p, dim=1, keepdim=False): + """Calculates the entropy.""" return -torch.where(p > 0.0, p * p.log(), torch.zeros_like(p)).sum(dim=dim, keepdim=keepdim) class BasePixelLoss(BaseWeightedLoss): + """Base pixel loss.""" + def __init__(self, scale_cfg=None, pr_product=False, conf_penalty_weight=None, border_reweighting=False, **kwargs): - super(BasePixelLoss, self).__init__(**kwargs) + super().__init__(**kwargs) self._enable_pr_product = pr_product self._border_reweighting = border_reweighting @@ -32,22 +38,27 @@ def __init__(self, scale_cfg=None, pr_product=False, conf_penalty_weight=None, b @property def last_scale(self): + """Return last_scale.""" return self._last_scale @property def last_reg_weight(self): + """Return last_reg_weight.""" return self._last_reg_weight @property def with_regularization(self): + """Check regularization use.""" return self._reg_weight_scheduler is not None @property def with_pr_product(self): + """Check pr_product.""" return self._enable_pr_product @property def with_border_reweighting(self): + """Check border reweighting.""" return self._border_reweighting @staticmethod @@ -99,7 +110,9 @@ def _pred_stat(output, labels, valid_mask, window_size=5, min_group_ratio=0.6): return out_ratio.item() - def _forward(self, output, labels, avg_factor=None, pixel_weights=None, reduction_override=None): + def _forward( + self, output, labels, avg_factor=None, pixel_weights=None, reduction_override=None + ): # pylint: disable=too-many-locals assert reduction_override in (None, "none", "mean", "sum") reduction = reduction_override if reduction_override else self.reduction diff --git a/otx/mpa/modules/models/losses/base_weighted_loss.py b/otx/algorithms/segmentation/adapters/mmseg/models/losses/base_weighted_loss.py similarity index 83% rename from otx/mpa/modules/models/losses/base_weighted_loss.py rename to otx/algorithms/segmentation/adapters/mmseg/models/losses/base_weighted_loss.py index e74b5233e97..2487eed40fe 100644 --- a/otx/mpa/modules/models/losses/base_weighted_loss.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/losses/base_weighted_loss.py @@ -1,17 +1,21 @@ -# Copyright (C) 2022 Intel Corporation +"""Base 
weighted loss function for semantic segmentation.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # from abc import ABCMeta, abstractmethod import torch -import torch.nn as nn from mmseg.core import build_pixel_sampler -from scipy.special import erfinv +from scipy.special import erfinv # pylint: disable=no-name-in-module +from torch import nn -from otx.mpa.modules.models.builder import build_scalar_scheduler +from otx.algorithms.segmentation.adapters.mmseg.utils.builder import ( + build_scalar_scheduler, +) +# pylint: disable=too-many-instance-attributes, unused-argument class BaseWeightedLoss(nn.Module, metaclass=ABCMeta): """Base class for loss. @@ -57,6 +61,7 @@ def __init__( self._epoch_size = 1 def set_step_params(self, init_iter, epoch_size): + """Set step parameters.""" assert init_iter >= 0 assert epoch_size > 0 @@ -65,18 +70,22 @@ def set_step_params(self, init_iter, epoch_size): @property def with_loss_jitter(self): + """Check loss jitter.""" return self._jitter_sigma_factor is not None @property def iter(self): + """Return iteration.""" return self._iter @property def epoch_size(self): + """Return epoch size.""" return self._epoch_size @property def last_loss_weight(self): + """Return last loss weight.""" return self._last_loss_weight @abstractmethod @@ -99,8 +108,8 @@ def forward(self, *args, **kwargs): loss, meta = self._forward(*args, **kwargs) # make sure meta data are tensor as well for aggregation # when parsing loss in sgementator - for k, v in meta.items(): - meta[k] = torch.tensor(v, dtype=loss.dtype, device=loss.device) + for key, val in meta.items(): + meta[key] = torch.tensor(val, dtype=loss.dtype, device=loss.device) if self.with_loss_jitter and loss.numel() == 1: if self._smooth_loss is None: diff --git a/otx/mpa/modules/models/losses/cross_entropy_loss_with_ignore.py b/otx/algorithms/segmentation/adapters/mmseg/models/losses/cross_entropy_loss_with_ignore.py similarity index 88% rename from otx/mpa/modules/models/losses/cross_entropy_loss_with_ignore.py rename to otx/algorithms/segmentation/adapters/mmseg/models/losses/cross_entropy_loss_with_ignore.py index cdabc189798..57e9c24c268 100644 --- a/otx/mpa/modules/models/losses/cross_entropy_loss_with_ignore.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/losses/cross_entropy_loss_with_ignore.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Cross entropy loss for ignored mode in class-incremental learning.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -7,11 +8,11 @@ from mmseg.models.builder import LOSSES from mmseg.models.losses.utils import get_class_weight -from .mpa_pixel_base import MPABasePixelLoss +from .otx_pixel_base import OTXBasePixelLoss @LOSSES.register_module() -class CrossEntropyLossWithIgnore(MPABasePixelLoss): +class CrossEntropyLossWithIgnore(OTXBasePixelLoss): """CrossEntropyLossWithIgnore with Ignore Mode Support for Class Incremental Learning. 
Args: @@ -24,13 +25,14 @@ class CrossEntropyLossWithIgnore(MPABasePixelLoss): """ def __init__(self, reduction="mean", loss_weight=None, **kwargs): - super(CrossEntropyLossWithIgnore, self).__init__(**kwargs) + super().__init__(**kwargs) self.reduction = reduction self.class_weight = get_class_weight(loss_weight) @property def name(self): + """name.""" return "ce_with_ignore" def _calculate(self, cls_score, label, valid_label_mask, scale): diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/losses/detcon_loss.py b/otx/algorithms/segmentation/adapters/mmseg/models/losses/detcon_loss.py index 8d93688cbc1..19140effac1 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/losses/detcon_loss.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/losses/detcon_loss.py @@ -1,6 +1,6 @@ """DetCon loss.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # # pylint: disable=no-name-in-module, not-callable diff --git a/otx/mpa/modules/models/losses/mpa_pixel_base.py b/otx/algorithms/segmentation/adapters/mmseg/models/losses/otx_pixel_base.py similarity index 88% rename from otx/mpa/modules/models/losses/mpa_pixel_base.py rename to otx/algorithms/segmentation/adapters/mmseg/models/losses/otx_pixel_base.py index e7a7bacdd96..b9f68a5ffc4 100644 --- a/otx/mpa/modules/models/losses/mpa_pixel_base.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/losses/otx_pixel_base.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""OTX pixel loss.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -7,10 +8,14 @@ from .base_pixel_loss import BasePixelLoss +# pylint: disable=too-many-function-args, too-many-locals + + +class OTXBasePixelLoss(BasePixelLoss): # pylint: disable=abstract-method + """OTXBasePixelLoss.""" -class MPABasePixelLoss(BasePixelLoss): def __init__(self, **kwargs): - super(MPABasePixelLoss, self).__init__(**kwargs) + super().__init__(**kwargs) def _forward( self, @@ -20,7 +25,7 @@ def _forward( avg_factor=None, pixel_weights=None, reduction_override=None, - ): + ): # pylint: disable=arguments-renamed assert reduction_override in (None, "none", "mean", "sum") reduction = reduction_override if reduction_override else self.reduction diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/necks/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/necks/__init__.py index 841d7bf50d4..cf76dc5c172 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/necks/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/necks/__init__.py @@ -1,6 +1,6 @@ """OTX Algorithms - Segmentation Necks.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/necks/selfsl_mlp.py b/otx/algorithms/segmentation/adapters/mmseg/models/necks/selfsl_mlp.py index 563c89b4df4..e7656efd150 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/necks/selfsl_mlp.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/necks/selfsl_mlp.py @@ -3,7 +3,7 @@ This MLP consists of fc (conv) - norm - relu - fc (conv). 
""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # # pylint: disable=dangerous-default-value diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/__init__.py new file mode 100644 index 00000000000..47c10da0113 --- /dev/null +++ b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/__init__.py @@ -0,0 +1,25 @@ +"""Scaler schedulers for semantic segmentation.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. + +from .constant import ConstantScalarScheduler +from .poly import PolyScalarScheduler +from .step import StepScalarScheduler + +__all__ = [ + "ConstantScalarScheduler", + "PolyScalarScheduler", + "StepScalarScheduler", +] diff --git a/otx/mpa/modules/models/scalar_schedulers/base.py b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/base.py similarity index 70% rename from otx/mpa/modules/models/scalar_schedulers/base.py rename to otx/algorithms/segmentation/adapters/mmseg/models/schedulers/base.py index e3000f0a21b..600309d8f1d 100644 --- a/otx/mpa/modules/models/scalar_schedulers/base.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/base.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Base scalar scheduler.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,10 +7,10 @@ class BaseScalarScheduler(metaclass=ABCMeta): - def __init__(self): - super(BaseScalarScheduler, self).__init__() + """Base scalar scheduler.""" def __call__(self, step, epoch_size) -> float: + """Callback function of BaseScalarScheduler.""" return self._get_value(step, epoch_size) @abstractmethod diff --git a/otx/mpa/modules/models/scalar_schedulers/constant.py b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/constant.py similarity index 73% rename from otx/mpa/modules/models/scalar_schedulers/constant.py rename to otx/algorithms/segmentation/adapters/mmseg/models/schedulers/constant.py index 96536dd8994..f7819d18ce5 100644 --- a/otx/mpa/modules/models/scalar_schedulers/constant.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/constant.py @@ -1,8 +1,10 @@ -# Copyright (C) 2022 Intel Corporation +"""Constant scheduler.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from ..builder import SCALAR_SCHEDULERS +from otx.algorithms.segmentation.adapters.mmseg.utils.builder import SCALAR_SCHEDULERS + from .base import BaseScalarScheduler @@ -11,12 +13,13 @@ class ConstantScalarScheduler(BaseScalarScheduler): """The learning rate remains constant over time. The learning rate equals the scale. + Args: scale (float): The learning rate scale. 
""" def __init__(self, scale: float = 30.0): - super(ConstantScalarScheduler, self).__init__() + super().__init__() self._end_s = scale assert self._end_s > 0.0 diff --git a/otx/mpa/modules/models/scalar_schedulers/poly.py b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/poly.py similarity index 90% rename from otx/mpa/modules/models/scalar_schedulers/poly.py rename to otx/algorithms/segmentation/adapters/mmseg/models/schedulers/poly.py index 6b70fe9423b..f173b62f374 100644 --- a/otx/mpa/modules/models/scalar_schedulers/poly.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/poly.py @@ -1,10 +1,13 @@ -# Copyright (C) 2022 Intel Corporation +"""Polynomial scheduler.""" + +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import numpy as np -from ..builder import SCALAR_SCHEDULERS +from otx.algorithms.segmentation.adapters.mmseg.utils.builder import SCALAR_SCHEDULERS + from .base import BaseScalarScheduler @@ -23,7 +26,7 @@ class PolyScalarScheduler(BaseScalarScheduler): def __init__( self, start_scale: float, end_scale: float, num_iters: int, power: float = 1.2, by_epoch: bool = False ): - super(PolyScalarScheduler, self).__init__() + super().__init__() self._start_s = start_scale assert self._start_s >= 0.0 diff --git a/otx/mpa/modules/models/scalar_schedulers/step.py b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/step.py similarity index 89% rename from otx/mpa/modules/models/scalar_schedulers/step.py rename to otx/algorithms/segmentation/adapters/mmseg/models/schedulers/step.py index 3646f148960..19c4f81563d 100644 --- a/otx/mpa/modules/models/scalar_schedulers/step.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/schedulers/step.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Step scheduler.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,7 +7,8 @@ import numpy as np -from ..builder import SCALAR_SCHEDULERS +from otx.algorithms.segmentation.adapters.mmseg.utils.builder import SCALAR_SCHEDULERS + from .base import BaseScalarScheduler @@ -26,7 +28,7 @@ class StepScalarScheduler(BaseScalarScheduler): """ def __init__(self, scales: List[float], num_iters: List[int], by_epoch: bool = False): - super(StepScalarScheduler, self).__init__() + super().__init__() self.by_epoch = by_epoch diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/__init__.py index cf76332e0d2..d953b628f81 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/__init__.py @@ -1,6 +1,6 @@ """OTX Algorithms - Segmentation Segmentors.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,6 +14,8 @@ # See the License for the specific language governing permissions # and limitations under the License. 
+from .class_incr_encoder_decoder import ClassIncrEncoderDecoder from .detcon import DetConB, SupConDetConB +from .mean_teacher_segmentor import MeanTeacherSegmentor -__all__ = ["DetConB", "SupConDetConB"] +__all__ = ["DetConB", "SupConDetConB", "ClassIncrEncoderDecoder", "MeanTeacherSegmentor"] diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/class_incr_encoder_decoder.py b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/class_incr_encoder_decoder.py new file mode 100644 index 00000000000..f18de3a5193 --- /dev/null +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/class_incr_encoder_decoder.py @@ -0,0 +1,109 @@ +"""Encoder-decoder for incremental learning.""" + +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# + +import functools + +import torch +from mmseg.models import SEGMENTORS +from mmseg.utils import get_root_logger + +from otx.mpa.modules.utils.task_adapt import map_class_names + +from .mixin import PixelWeightsMixin +from .otx_encoder_decoder import OTXEncoderDecoder + + +@SEGMENTORS.register_module() +class ClassIncrEncoderDecoder(PixelWeightsMixin, OTXEncoderDecoder): + """Encoder-decoder for incremental learning.""" + + def __init__(self, *args, task_adapt=None, **kwargs): + super().__init__(*args, **kwargs) + + # Hook for class-sensitive weight loading + assert task_adapt is not None, "When using task_adapt, task_adapt must be set." + + self._register_load_state_dict_pre_hook( + functools.partial( + self.load_state_dict_pre_hook, + self, # model + task_adapt["dst_classes"], # model_classes + task_adapt["src_classes"], # chkpt_classes + ) + ) + + def forward_train( + self, + img, + img_metas, + gt_semantic_seg, + aux_img=None, + **kwargs, + ): # pylint: disable=arguments-renamed + """Forward function for training. + + Args: + img (Tensor): Input images. + img_metas (list[dict]): List of image info dict where each dict + has: 'img_shape', 'scale_factor', 'flip', and may also contain + 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. + For details on the values of these keys see + `mmseg/datasets/pipelines/formatting.py:Collect`. + gt_semantic_seg (Tensor): Semantic segmentation masks + used if the architecture supports semantic segmentation task. + aux_img (Tensor): Auxiliary images. 
+ + Returns: + dict[str, Tensor]: a dictionary of loss components + """ + if aux_img is not None: + mix_loss_enabled = False + mix_loss_cfg = self.train_cfg.get("mix_loss", None) + if mix_loss_cfg is not None: + mix_loss_enabled = mix_loss_cfg.get("enable", False) + if mix_loss_enabled: + self.train_cfg.mix_loss.enable = mix_loss_enabled + + if self.train_cfg.mix_loss.enable: + img = torch.cat([img, aux_img], dim=0) + gt_semantic_seg = torch.cat([gt_semantic_seg, gt_semantic_seg], dim=0) + + return super().forward_train(img, img_metas, gt_semantic_seg, **kwargs) + + @staticmethod + def load_state_dict_pre_hook( + model, model_classes, chkpt_classes, chkpt_dict, prefix, *args, **kwargs + ): # pylint: disable=too-many-locals, unused-argument + """Modify input state_dict according to class name matching before weight loading.""" + logger = get_root_logger("INFO") + logger.info(f"----------------- ClassIncrEncoderDecoder.load_state_dict_pre_hook() called w/ prefix: {prefix}") + + # Dst to src mapping index + model_classes = list(model_classes) + chkpt_classes = list(chkpt_classes) + model2chkpt = map_class_names(model_classes, chkpt_classes) + logger.info(f"{chkpt_classes} -> {model_classes} ({model2chkpt})") + + model_dict = model.state_dict() + param_names = [ + "decode_head.conv_seg.weight", + "decode_head.conv_seg.bias", + ] + for model_name in param_names: + chkpt_name = prefix + model_name + if model_name not in model_dict or chkpt_name not in chkpt_dict: + logger.info(f"Skipping weight copy: {chkpt_name}") + continue + + # Mix weights + model_param = model_dict[model_name].clone() + chkpt_param = chkpt_dict[chkpt_name] + for model_key, c in enumerate(model2chkpt): + if c >= 0: + model_param[model_key].copy_(chkpt_param[c]) + + # Replace checkpoint weight by mixed weights + chkpt_dict[chkpt_name] = model_param diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/detcon.py b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/detcon.py index 399b2384290..af4ca52d56d 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/detcon.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/detcon.py @@ -4,7 +4,7 @@ - 'Efficient Visual Pretraining with Contrastive Detection', https://arxiv.org/abs/2103.10957 """ -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # # pylint: disable=unused-argument, invalid-name, unnecessary-pass, not-callable @@ -24,11 +24,10 @@ from mmseg.ops import resize from torch import nn -from otx.mpa.modules.models.segmentors.class_incr_encoder_decoder import ( - ClassIncrEncoderDecoder, -) from otx.mpa.utils.logger import get_logger +from .class_incr_encoder_decoder import ClassIncrEncoderDecoder + logger = get_logger() @@ -517,10 +516,10 @@ def __init__( # pylint: disable=arguments-renamed def forward_train( self, - img: torch.Tensor, - img_metas: List[Dict], - gt_semantic_seg: torch.Tensor, - pixel_weights: Optional[torch.Tensor] = None, + img, + img_metas, + gt_semantic_seg, + pixel_weights=None, **kwargs, ): """Forward function for training. 
diff --git a/otx/mpa/modules/models/segmentors/mean_teacher_segmentor.py b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/mean_teacher_segmentor.py similarity index 66% rename from otx/mpa/modules/models/segmentors/mean_teacher_segmentor.py rename to otx/algorithms/segmentation/adapters/mmseg/models/segmentors/mean_teacher_segmentor.py index cf3eb65c8d3..d12418a9670 100644 --- a/otx/mpa/modules/models/segmentors/mean_teacher_segmentor.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/mean_teacher_segmentor.py @@ -1,5 +1,6 @@ +"""Mean teacher segmentor for semi-supervised learning.""" + import functools -from collections import OrderedDict import torch from mmseg.models import SEGMENTORS, build_segmentor @@ -10,11 +11,18 @@ logger = get_logger() +# pylint: disable=too-many-locals, protected-access + @SEGMENTORS.register_module() class MeanTeacherSegmentor(BaseSegmentor): + """Mean teacher segmentor for semi-supervised learning. + + It creates two models and ema from one to the other for consistency loss. + """ + def __init__(self, orig_type=None, unsup_weight=0.1, semisl_start_iter=30, **kwargs): - super(MeanTeacherSegmentor, self).__init__() + super().__init__() self.test_cfg = kwargs["test_cfg"] self.semisl_start_iter = semisl_start_iter self.count_iter = 0 @@ -31,21 +39,27 @@ def __init__(self, orig_type=None, unsup_weight=0.1, semisl_start_iter=30, **kwa self._register_load_state_dict_pre_hook(functools.partial(self.load_state_dict_pre_hook, self)) def encode_decode(self, img, img_metas): + """Encode and decode images.""" return self.model_s.encode_decode(img, img_metas) def extract_feat(self, imgs): + """Extract feature.""" return self.model_s.extract_feat(imgs) - def simple_test(self, img, img_metas, **kwargs): - return self.model_s.simple_test(img, img_metas, **kwargs) + def simple_test(self, img, img_meta, **kwargs): + """Simple test.""" + return self.model_s.simple_test(img, img_meta, **kwargs) def aug_test(self, imgs, img_metas, **kwargs): + """Aug test.""" return self.model_s.aug_test(imgs, img_metas, **kwargs) def forward_dummy(self, img, **kwargs): + """Forward dummy.""" return self.model_s.forward_dummy(img, **kwargs) def forward_train(self, img, img_metas, gt_semantic_seg, **kwargs): + """Forward train.""" self.count_iter += 1 if self.semisl_start_iter > self.count_iter or "extra_0" not in kwargs: x = self.model_s.extract_feat(img) @@ -63,7 +77,7 @@ def forward_train(self, img, img_metas, gt_semantic_seg, **kwargs): teacher_logit = resize( input=teacher_logit, size=ul_w_img.shape[2:], mode="bilinear", align_corners=self.align_corners ) - conf_from_teacher, pl_from_teacher = torch.max(torch.softmax(teacher_logit, axis=1), axis=1, keepdim=True) + _, pl_from_teacher = torch.max(torch.softmax(teacher_logit, axis=1), axis=1, keepdim=True) losses = dict() @@ -72,34 +86,34 @@ def forward_train(self, img, img_metas, gt_semantic_seg, **kwargs): loss_decode = self.model_s._decode_head_forward_train(x, img_metas, gt_semantic_seg=gt_semantic_seg) loss_decode_u = self.model_s._decode_head_forward_train(x_u, ul_img_metas, gt_semantic_seg=pl_from_teacher) - for (k, v) in loss_decode_u.items(): - if v is None: + for (key, value) in loss_decode_u.items(): + if value is None: continue - losses[k] = loss_decode[k] + loss_decode_u[k] * self.unsup_weight + losses[key] = loss_decode[key] + loss_decode_u[key] * self.unsup_weight return losses @staticmethod - def state_dict_hook(module, state_dict, prefix, *args, **kwargs): - """Redirect student model as output 
state_dict (teacher as auxilliary)""" + def state_dict_hook(module, state_dict, prefix, *args, **kwargs): # pylint: disable=unused-argument + """Redirect student model as output state_dict (teacher as auxilliary).""" logger.info("----------------- MeanTeacherSegmentor.state_dict_hook() called") - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("model_s."): - k = k.replace("model_s.", "", 1) - elif k.startswith("model_t."): + for key in list(state_dict.keys()): + value = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("model_s."): + key = key.replace("model_s.", "", 1) + elif key.startswith("model_t."): continue - k = prefix + k - state_dict[k] = v + key = prefix + key + state_dict[key] = value return state_dict @staticmethod - def load_state_dict_pre_hook(module, state_dict, *args, **kwargs): - """Redirect input state_dict to teacher model""" + def load_state_dict_pre_hook(module, state_dict, *args, **kwargs): # pylint: disable=unused-argument + """Redirect input state_dict to teacher model.""" logger.info("----------------- MeanTeacherSegmentor.load_state_dict_pre_hook() called") - for k in list(state_dict.keys()): - v = state_dict.pop(k) - state_dict["model_s." + k] = v - state_dict["model_t." + k] = v + for key in list(state_dict.keys()): + value = state_dict.pop(key) + state_dict["model_s." + key] = value + state_dict["model_t." + key] = value diff --git a/otx/mpa/modules/models/segmentors/pixel_weights_mixin.py b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/mixin.py similarity index 94% rename from otx/mpa/modules/models/segmentors/pixel_weights_mixin.py rename to otx/algorithms/segmentation/adapters/mmseg/models/segmentors/mixin.py index b5ed556856f..cd075524755 100644 --- a/otx/mpa/modules/models/segmentors/pixel_weights_mixin.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/mixin.py @@ -1,16 +1,20 @@ -# Copyright (C) 2022 Intel Corporation +"""Modules for decode and loss reweighting/mix.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # - -import torch.nn as nn from mmseg.core import add_prefix from mmseg.models.builder import build_loss from mmseg.ops import resize +from torch import nn + +from otx.mpa.modules.models.losses.utils import LossEqualizer -from ..losses.utils import LossEqualizer +# pylint: disable=too-many-locals -class PixelWeightsMixin(object): +class PixelWeightsMixin: + """PixelWeightsMixin.""" + def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) self._init_train_components(self.train_cfg) @@ -45,6 +49,7 @@ def _get_argument_by_name(trg_name, arguments): return arguments[trg_name] def set_step_params(self, init_iter, epoch_size): + """Sets the step params for the current object's decode head.""" self.decode_head.set_step_params(init_iter, epoch_size) if getattr(self, "auxiliary_head", None) is not None: @@ -55,7 +60,7 @@ def set_step_params(self, init_iter, epoch_size): self.auxiliary_head.set_step_params(init_iter, epoch_size) def _decode_head_forward_train(self, x, img_metas, pixel_weights=None, **kwargs): - + """Run forward train in decode head.""" trg_map = self._get_argument_by_name(self.decode_head.loss_target_name, kwargs) loss_decode, logits_decode = self.decode_head.forward_train( x, diff --git a/otx/mpa/modules/models/segmentors/otx_encoder_decoder.py 
b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py similarity index 72% rename from otx/mpa/modules/models/segmentors/otx_encoder_decoder.py rename to otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py index a5812775259..6bacc0d59cb 100644 --- a/otx/mpa/modules/models/segmentors/otx_encoder_decoder.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""OTX encoder decoder for semantic segmentation.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -9,8 +10,11 @@ from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled +# pylint: disable=unused-argument, line-too-long @SEGMENTORS.register_module() class OTXEncoderDecoder(EncoderDecoder): + """OTX encoder decoder.""" + def simple_test(self, img, img_meta, rescale=True, output_logits=False): """Simple test with single image.""" seg_logit = self.inference(img, img_meta, rescale) @@ -34,26 +38,27 @@ def simple_test(self, img, img_meta, rescale=True, output_logits=False): if is_mmdeploy_enabled(): from mmdeploy.core import FUNCTION_REWRITER - from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook - - @FUNCTION_REWRITER.register_rewriter( - "otx.mpa.modules.models.segmentors.otx_encoder_decoder.OTXEncoderDecoder.extract_feat" + from otx.mpa.modules.hooks.recording_forward_hooks import ( # pylint: disable=ungrouped-imports + FeatureVectorHook, ) + + BASE_CLASS = "otx.algorithms.segmentation.adapters.mmseg.models.segmentors.otx_encoder_decoder.OTXEncoderDecoder" + + @FUNCTION_REWRITER.register_rewriter(f"{BASE_CLASS}.extract_feat") def single_stage_detector__extract_feat(ctx, self, img): + """Extract feature.""" feat = self.backbone(img) self.feature_map = feat if self.with_neck: feat = self.neck(feat) return feat - @FUNCTION_REWRITER.register_rewriter( - "otx.mpa.modules.models.segmentors.otx_encoder_decoder.OTXEncoderDecoder.simple_test" - ) + @FUNCTION_REWRITER.register_rewriter(f"{BASE_CLASS}.simple_test") def single_stage_detector__simple_test(ctx, self, img, img_metas, **kwargs): + """Test.""" # with output activation seg_logit = self.inference(img, img_metas, True) if ctx.cfg["dump_features"]: feature_vector = FeatureVectorHook.func(self.feature_map) return seg_logit, feature_vector - else: - return seg_logit + return seg_logit diff --git a/otx/algorithms/segmentation/adapters/mmseg/nncf/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/nncf/__init__.py index 23b312c33a5..1463ba35711 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/nncf/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/nncf/__init__.py @@ -1,9 +1,18 @@ """NNCF utils for mmseg.""" -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# -# flake8: noqa +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
from .builder import build_nncf_segmentor from .hooks import CustomstepLrUpdaterHook diff --git a/otx/algorithms/segmentation/adapters/mmseg/nncf/builder.py b/otx/algorithms/segmentation/adapters/mmseg/nncf/builder.py index cb8f9c54cb4..d09e1c8b157 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/nncf/builder.py +++ b/otx/algorithms/segmentation/adapters/mmseg/nncf/builder.py @@ -1,5 +1,5 @@ """NNCF wrapped mmcls models builder.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/segmentation/adapters/mmseg/nncf/hooks.py b/otx/algorithms/segmentation/adapters/mmseg/nncf/hooks.py index 7c572870719..b7270e60354 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/nncf/hooks.py +++ b/otx/algorithms/segmentation/adapters/mmseg/nncf/hooks.py @@ -1,5 +1,5 @@ """NNCF task related hooks.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/segmentation/adapters/mmseg/nncf/patches.py b/otx/algorithms/segmentation/adapters/mmseg/nncf/patches.py index 4acf96f333a..6955d15c02f 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/nncf/patches.py +++ b/otx/algorithms/segmentation/adapters/mmseg/nncf/patches.py @@ -1,5 +1,5 @@ """Patch mmseg library.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/segmentation/adapters/mmseg/utils/__init__.py b/otx/algorithms/segmentation/adapters/mmseg/utils/__init__.py index 8d8bd75c07e..690936587bd 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/utils/__init__.py +++ b/otx/algorithms/segmentation/adapters/mmseg/utils/__init__.py @@ -1,9 +1,20 @@ """OTX Adapters - mmseg.utils.""" -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
-from .builder import build_segmentor +from .builder import build_scalar_scheduler, build_segmentor from .config_utils import ( patch_config, patch_datasets, @@ -11,7 +22,7 @@ prepare_for_training, set_hyperparams, ) -from .data_utils import load_dataset_items +from .data_utils import get_valid_label_mask_per_batch, load_dataset_items __all__ = [ "patch_config", @@ -20,5 +31,7 @@ "prepare_for_training", "set_hyperparams", "load_dataset_items", + "build_scalar_scheduler", "build_segmentor", + "get_valid_label_mask_per_batch", ] diff --git a/otx/algorithms/segmentation/adapters/mmseg/utils/builder.py b/otx/algorithms/segmentation/adapters/mmseg/utils/builder.py index df0c8777b80..6c8d5512851 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/utils/builder.py +++ b/otx/algorithms/segmentation/adapters/mmseg/utils/builder.py @@ -1,5 +1,5 @@ """MMseg model builder.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -9,6 +9,23 @@ import torch from mmcv.runner import load_checkpoint from mmcv.utils import Config, ConfigDict +from mmseg.models.builder import MODELS + +SCALAR_SCHEDULERS = MODELS + + +def build_scalar_scheduler(cfg, default_value=None): + """Build scalar scheduler.""" + if cfg is None: + if default_value is not None: + assert isinstance(default_value, (int, float)) + cfg = dict(type="ConstantScalarScheduler", scale=float(default_value)) + else: + return None + elif isinstance(cfg, (int, float)): + cfg = dict(type="ConstantScalarScheduler", scale=float(cfg)) + + return SCALAR_SCHEDULERS.build(cfg) def build_segmentor( diff --git a/otx/algorithms/segmentation/adapters/mmseg/utils/data_utils.py b/otx/algorithms/segmentation/adapters/mmseg/utils/data_utils.py index bfcddf65e77..eaa38e49f13 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/utils/data_utils.py +++ b/otx/algorithms/segmentation/adapters/mmseg/utils/data_utils.py @@ -20,6 +20,7 @@ import cv2 import numpy as np +import torch import tqdm from mmseg.datasets.custom import CustomDataset from skimage.segmentation import felzenszwalb @@ -156,6 +157,17 @@ def get_extended_label_names(labels: List[LabelEntity]): return all_labels +def get_valid_label_mask_per_batch(img_metas, num_classes): + """Get valid label mask removing ignored classes to zero mask in a batch.""" + valid_label_mask_per_batch = [] + for _, meta in enumerate(img_metas): + valid_label_mask = torch.Tensor([1 for _ in range(num_classes)]) + if "ignored_labels" in meta and meta["ignored_labels"]: + valid_label_mask[meta["ignored_labels"]] = 0 + valid_label_mask_per_batch.append(valid_label_mask) + return valid_label_mask_per_batch + + @check_input_parameters_type() def create_pseudo_masks(ann_file_path: str, data_root_dir: str, mode="FH"): """Create pseudo masks for Self-SL using DetCon.""" diff --git a/otx/mpa/modules/datasets/pipelines/transforms/seg_custom_pipelines.py b/otx/mpa/modules/datasets/pipelines/transforms/seg_custom_pipelines.py deleted file mode 100644 index 51829425725..00000000000 --- a/otx/mpa/modules/datasets/pipelines/transforms/seg_custom_pipelines.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import mmcv -import numpy as np -from mmcv.parallel import DataContainer as DC -from mmseg.datasets import PIPELINES -from mmseg.datasets.pipelines.formatting import to_tensor - - -@PIPELINES.register_module(force=True) -class Normalize(object): - """Normalize the image. 
- - Added key is "img_norm_cfg". - - Args: - mean (sequence): Mean values of 3 channels. - std (sequence): Std values of 3 channels. - to_rgb (bool): Whether to convert the image from BGR to RGB, - default is true. - """ - - def __init__(self, mean, std, to_rgb=True): - self.mean = np.array(mean, dtype=np.float32) - self.std = np.array(std, dtype=np.float32) - self.to_rgb = to_rgb - - def __call__(self, results): - """Call function to normalize images. - - Args: - results (dict): Result dict from loading pipeline. - - Returns: - dict: Normalized results, 'img_norm_cfg' key is added into - result dict. - """ - - for target in ["img", "ul_w_img", "aux_img"]: - if target in results: - results[target] = mmcv.imnormalize(results[target], self.mean, self.std, self.to_rgb) - results["img_norm_cfg"] = dict(mean=self.mean, std=self.std, to_rgb=self.to_rgb) - - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f"(mean={self.mean}, std={self.std}, to_rgb=" f"{self.to_rgb})" - return repr_str - - -@PIPELINES.register_module(force=True) -class DefaultFormatBundle(object): - """Default formatting bundle. - - It simplifies the pipeline of formatting common fields, including "img" - and "gt_semantic_seg". These fields are formatted as follows. - - - img: (1)transpose, (2)to tensor, (3)to DataContainer (stack=True) - - gt_semantic_seg: (1)unsqueeze dim-0 (2)to tensor, - (3)to DataContainer (stack=True) - """ - - def __call__(self, results): - """Call function to transform and format common fields in results. - - Args: - results (dict): Result dict contains the data to convert. - - Returns: - dict: The result dict contains the data that is formatted with - default bundle. - """ - for target in ["img", "ul_w_img", "aux_img"]: - if target not in results: - continue - - img = results[target] - if len(img.shape) < 3: - img = np.expand_dims(img, -1) - - if len(img.shape) == 3: - img = np.ascontiguousarray(img.transpose(2, 0, 1)).astype(np.float32) - elif len(img.shape) == 4: - # for selfsl or supcon - img = np.ascontiguousarray(img.transpose(0, 3, 1, 2)).astype(np.float32) - else: - raise ValueError(f"img.shape={img.shape} is not supported.") - - results[target] = DC(to_tensor(img), stack=True) - - for trg_name in ["gt_semantic_seg", "gt_class_borders", "pixel_weights"]: - if trg_name not in results: - continue - - out_type = np.float32 if trg_name == "pixel_weights" else np.int64 - results[trg_name] = DC(to_tensor(results[trg_name][None, ...].astype(out_type)), stack=True) - - return results - - def __repr__(self): - return self.__class__.__name__ - - -@PIPELINES.register_module() -class BranchImage(object): - def __init__(self, key_map={}): - self.key_map = key_map - - def __call__(self, results): - for k1, k2 in self.key_map.items(): - if k1 in results: - results[k2] = results[k1] - if k1 in results["img_fields"]: - results["img_fields"].append(k2) - return results - - def __repr__(self): - repr_str = self.__class__.__name__ - return repr_str diff --git a/otx/mpa/modules/models/builder.py b/otx/mpa/modules/models/builder.py deleted file mode 100644 index f064c6321d1..00000000000 --- a/otx/mpa/modules/models/builder.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from mmseg.models.builder import MODELS - -SCALAR_SCHEDULERS = MODELS - - -def build_scalar_scheduler(cfg, default_value=None): - if cfg is None: - if default_value is not None: - assert isinstance(default_value, (int, float)) - cfg = 
dict(type="ConstantScalarScheduler", scale=float(default_value)) - else: - return None - elif isinstance(cfg, (int, float)): - cfg = dict(type="ConstantScalarScheduler", scale=float(cfg)) - - return SCALAR_SCHEDULERS.build(cfg) diff --git a/otx/mpa/modules/models/heads/aggregator_mixin.py b/otx/mpa/modules/models/heads/aggregator_mixin.py deleted file mode 100644 index 4f320c7bb9d..00000000000 --- a/otx/mpa/modules/models/heads/aggregator_mixin.py +++ /dev/null @@ -1,63 +0,0 @@ -# Copyright (c) 2020-2021 The MMSegmentation Authors -# SPDX-License-Identifier: Apache-2.0 -# -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import torch.nn as nn - -from ..utils import IterativeAggregator, IterativeConcatAggregator - - -class AggregatorMixin(nn.Module): - def __init__( - self, - *args, - enable_aggregator=False, - aggregator_min_channels=None, - aggregator_merge_norm=None, - aggregator_use_concat=False, - **kwargs - ): - - in_channels = kwargs.get("in_channels") - in_index = kwargs.get("in_index") - norm_cfg = kwargs.get("norm_cfg") - conv_cfg = kwargs.get("conv_cfg") - input_transform = kwargs.get("input_transform") - - aggregator = None - if enable_aggregator: - assert isinstance(in_channels, (tuple, list)) - assert len(in_channels) > 1 - - aggregator = IterativeAggregator( - in_channels=in_channels, - min_channels=aggregator_min_channels, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - merge_norm=aggregator_merge_norm, - use_concat=aggregator_use_concat, - ) - - aggregator_min_channels = aggregator_min_channels if aggregator_min_channels is not None else 0 - # change arguments temporarily - kwargs["in_channels"] = max(in_channels[0], aggregator_min_channels) - kwargs["input_transform"] = None - if in_index is not None: - kwargs["in_index"] = in_index[0] - - super(AggregatorMixin, self).__init__(*args, **kwargs) - - self.aggregator = aggregator - # re-define variables - self.in_channels = in_channels - self.input_transform = input_transform - self.in_index = in_index - - def _transform_inputs(self, inputs): - inputs = super()._transform_inputs(inputs) - if self.aggregator is not None: - inputs = self.aggregator(inputs)[0] - return inputs diff --git a/otx/mpa/modules/models/heads/mix_loss_mixin.py b/otx/mpa/modules/models/heads/mix_loss_mixin.py deleted file mode 100644 index 7dccb520748..00000000000 --- a/otx/mpa/modules/models/heads/mix_loss_mixin.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.runner import force_fp32 - - -class MixLossMixin(nn.Module): - @staticmethod - def _mix_loss(logits, target, ignore_index=255): - num_samples = logits.size(0) - assert num_samples % 2 == 0 - - with torch.no_grad(): - probs = F.softmax(logits, dim=1) - probs_a, probs_b = torch.split(probs, num_samples // 2) - mean_probs = 0.5 * (probs_a + probs_b) - trg_probs = torch.cat([mean_probs, mean_probs], dim=0) - - log_probs = torch.log_softmax(logits, dim=1) - losses = torch.sum(trg_probs * log_probs, dim=1).neg() - - valid_mask = target != ignore_index - valid_losses = torch.where(valid_mask, losses, torch.zeros_like(losses)) - - return valid_losses.mean() - - @force_fp32(apply_to=("seg_logit",)) - def losses(self, seg_logit, seg_label, train_cfg, *args, **kwargs): - loss = super().losses(seg_logit, seg_label, train_cfg, *args, **kwargs) - if train_cfg.get("mix_loss", None) and train_cfg.mix_loss.get("enable", False): - mix_loss 
= self._mix_loss(seg_logit, seg_label, ignore_index=self.ignore_index) - - mix_loss_weight = train_cfg.mix_loss.get("weight", 1.0) - loss["loss_mix"] = mix_loss_weight * mix_loss - - return loss diff --git a/otx/mpa/modules/models/heads/segment_out_norm_mixin.py b/otx/mpa/modules/models/heads/segment_out_norm_mixin.py deleted file mode 100644 index c82af88c817..00000000000 --- a/otx/mpa/modules/models/heads/segment_out_norm_mixin.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import torch.nn as nn - -from ..utils import AngularPWConv, normalize - - -class SegmentOutNormMixin(nn.Module): - def __init__(self, *args, enable_out_seg=True, enable_out_norm=False, **kwargs): - super().__init__(*args, **kwargs) - - self.enable_out_seg = enable_out_seg - self.enable_out_norm = enable_out_norm - - if enable_out_seg: - if enable_out_norm: - self.conv_seg = AngularPWConv(self.channels, self.out_channels, clip_output=True) - else: - self.conv_seg = None - - def cls_seg(self, feat): - """Classify each pixel.""" - if self.dropout is not None: - feat = self.dropout(feat) - if self.enable_out_norm: - feat = normalize(feat, dim=1, p=2) - if self.conv_seg is not None: - return self.conv_seg(feat) - else: - return feat diff --git a/otx/mpa/modules/models/scalar_schedulers/__init__.py b/otx/mpa/modules/models/scalar_schedulers/__init__.py deleted file mode 100644 index f79e183f50f..00000000000 --- a/otx/mpa/modules/models/scalar_schedulers/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from .constant import ConstantScalarScheduler -from .poly import PolyScalarScheduler -from .step import StepScalarScheduler - -__all__ = [ - "ConstantScalarScheduler", - "PolyScalarScheduler", - "StepScalarScheduler", -] diff --git a/otx/mpa/modules/models/segmentors/__init__.py b/otx/mpa/modules/models/segmentors/__init__.py deleted file mode 100644 index d7b6a1934c7..00000000000 --- a/otx/mpa/modules/models/segmentors/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from . import class_incr_encoder_decoder, mean_teacher_segmentor, otx_encoder_decoder diff --git a/otx/mpa/modules/models/segmentors/class_incr_encoder_decoder.py b/otx/mpa/modules/models/segmentors/class_incr_encoder_decoder.py deleted file mode 100644 index 6cb17955106..00000000000 --- a/otx/mpa/modules/models/segmentors/class_incr_encoder_decoder.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import functools - -from mmseg.models import SEGMENTORS -from mmseg.utils import get_root_logger - -from otx.mpa.modules.utils.task_adapt import map_class_names - -from .mix_loss_mixin import MixLossMixin -from .otx_encoder_decoder import OTXEncoderDecoder -from .pixel_weights_mixin import PixelWeightsMixin - - -@SEGMENTORS.register_module() -class ClassIncrEncoderDecoder(MixLossMixin, PixelWeightsMixin, OTXEncoderDecoder): - """ """ - - def __init__(self, *args, task_adapt=None, **kwargs): - super().__init__(*args, **kwargs) - - # Hook for class-sensitive weight loading - assert task_adapt is not None, "When using task_adapt, task_adapt must be set." 
- - self._register_load_state_dict_pre_hook( - functools.partial( - self.load_state_dict_pre_hook, - self, # model - task_adapt["dst_classes"], # model_classes - task_adapt["src_classes"], # chkpt_classes - ) - ) - - @staticmethod - def load_state_dict_pre_hook(model, model_classes, chkpt_classes, chkpt_dict, prefix, *args, **kwargs): - """Modify input state_dict according to class name matching before weight loading""" - logger = get_root_logger("INFO") - logger.info(f"----------------- ClassIncrEncoderDecoder.load_state_dict_pre_hook() called w/ prefix: {prefix}") - - # Dst to src mapping index - model_classes = list(model_classes) - chkpt_classes = list(chkpt_classes) - model2chkpt = map_class_names(model_classes, chkpt_classes) - logger.info(f"{chkpt_classes} -> {model_classes} ({model2chkpt})") - - model_dict = model.state_dict() - param_names = [ - "decode_head.conv_seg.weight", - "decode_head.conv_seg.bias", - ] - for model_name in param_names: - chkpt_name = prefix + model_name - if model_name not in model_dict or chkpt_name not in chkpt_dict: - logger.info(f"Skipping weight copy: {chkpt_name}") - continue - - # Mix weights - model_param = model_dict[model_name].clone() - chkpt_param = chkpt_dict[chkpt_name] - for m, c in enumerate(model2chkpt): - if c >= 0: - model_param[m].copy_(chkpt_param[c]) - - # Replace checkpoint weight by mixed weights - chkpt_dict[chkpt_name] = model_param diff --git a/otx/mpa/modules/models/segmentors/mix_loss_mixin.py b/otx/mpa/modules/models/segmentors/mix_loss_mixin.py deleted file mode 100644 index 63a820d612a..00000000000 --- a/otx/mpa/modules/models/segmentors/mix_loss_mixin.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import torch -import torch.nn as nn - - -class MixLossMixin(object): - def forward_train(self, img, img_metas, gt_semantic_seg, aux_img=None, **kwargs): - """Forward function for training. - - Args: - img (Tensor): Input images. - img_metas (list[dict]): List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - `mmseg/datasets/pipelines/formatting.py:Collect`. - gt_semantic_seg (Tensor): Semantic segmentation masks - used if the architecture supports semantic segmentation task. - aux_img (Tensor): Auxiliary images. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - - if aux_img is not None: - mix_loss_enabled = False - mix_loss_cfg = self.train_cfg.get("mix_loss", None) - if mix_loss_cfg is not None: - mix_loss_enabled = mix_loss_cfg.get("enable", False) - if mix_loss_enabled: - self.train_cfg.mix_loss.enable = mix_loss_enabled - - if self.train_cfg.mix_loss.enable: - img = torch.cat([img, aux_img], dim=0) - gt_semantic_seg = torch.cat([gt_semantic_seg, gt_semantic_seg], dim=0) - - return super().forward_train(img, img_metas, gt_semantic_seg, **kwargs) diff --git a/otx/mpa/modules/ov/models/mmseg/decode_heads/__init__.py b/otx/mpa/modules/ov/models/mmseg/decode_heads/__init__.py deleted file mode 100644 index 26515f29058..00000000000 --- a/otx/mpa/modules/ov/models/mmseg/decode_heads/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from .mmov_decode_head import MMOVDecodeHead diff --git a/otx/mpa/modules/utils/seg_utils.py b/otx/mpa/modules/utils/seg_utils.py deleted file mode 100644 index a293c4fd378..00000000000 --- a/otx/mpa/modules/utils/seg_utils.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import torch - - -def get_valid_label_mask_per_batch(img_metas, num_classes): - valid_label_mask_per_batch = [] - for _, meta in enumerate(img_metas): - valid_label_mask = torch.Tensor([1 for _ in range(num_classes)]) - if "ignored_labels" in meta and meta["ignored_labels"]: - valid_label_mask[meta["ignored_labels"]] = 0 - valid_label_mask_per_batch.append(valid_label_mask) - return valid_label_mask_per_batch diff --git a/otx/mpa/seg/__init__.py b/otx/mpa/seg/__init__.py index 8c091f1f20c..b7b2c4ffb47 100644 --- a/otx/mpa/seg/__init__.py +++ b/otx/mpa/seg/__init__.py @@ -2,16 +2,10 @@ # SPDX-License-Identifier: Apache-2.0 # -import otx.mpa.modules.datasets.pipelines.compose -import otx.mpa.modules.datasets.pipelines.transforms.seg_custom_pipelines +import otx.algorithms.segmentation.adapters.mmseg +import otx.algorithms.segmentation.adapters.mmseg.models +import otx.algorithms.segmentation.adapters.mmseg.models.schedulers import otx.mpa.modules.hooks -import otx.mpa.modules.models.backbones.litehrnet -import otx.mpa.modules.models.heads.custom_fcn_head -import otx.mpa.modules.models.losses.cross_entropy_loss_with_ignore -import otx.mpa.modules.models.scalar_schedulers.constant -import otx.mpa.modules.models.scalar_schedulers.poly -import otx.mpa.modules.models.scalar_schedulers.step -import otx.mpa.modules.models.segmentors from otx.mpa.seg.incremental import IncrSegInferrer, IncrSegTrainer from otx.mpa.seg.semisl import SemiSLSegExporter, SemiSLSegInferrer, SemiSLSegTrainer diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/data/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/data/__init__.py deleted file mode 100644 index 3bdbe22ef68..00000000000 --- a/tests/unit/algorithms/segmentation/adapters/mmseg/data/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -"""Test for otx.algorithms.segmentation.adapters.mmseg.data""" -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/data/test_pipelines.py b/tests/unit/algorithms/segmentation/adapters/mmseg/data/test_pipelines.py deleted file mode 100644 index 2ff48d5c195..00000000000 --- a/tests/unit/algorithms/segmentation/adapters/mmseg/data/test_pipelines.py +++ 
/dev/null @@ -1,151 +0,0 @@ -from typing import Any, Dict - -import numpy as np -import pytest -from PIL import Image - -from otx.algorithms.segmentation.adapters.mmseg.data.pipelines import ( - NDArrayToPILImage, - PILImageToNDArray, - RandomColorJitter, - RandomGaussianBlur, - RandomGrayscale, - RandomResizedCrop, - RandomSolarization, - TwoCropTransform, -) -from tests.test_suite.e2e_test_system import e2e_pytest_unit - - -@pytest.fixture(scope="module") -def inputs_np(): - return { - "img": np.random.randint(0, 10, (16, 16, 3), dtype=np.uint8), - "gt_semantic_seg": np.random.rand(16, 16), - "flip": True, - } - - -@pytest.fixture(scope="module") -def inputs_PIL(): - return { - "img": Image.fromarray(np.random.randint(0, 10, (16, 16, 3), dtype=np.uint8)), - "gt_semantic_seg": np.random.randint(0, 5, (16, 16), dtype=np.uint8), - "seg_fields": ["gt_semantic_seg"], - "ori_shape": (16, 16, 3), - } - - -class TestTwoCropTransform: - @pytest.fixture(autouse=True) - def setup(self, mocker) -> None: - mocker.patch( - "otx.algorithms.segmentation.adapters.mmseg.data.pipelines.build_from_cfg", return_value=lambda x: x - ) - self.two_crop_transform = TwoCropTransform(view0=[], view1=[]) - - @e2e_pytest_unit - def test_call(self, mocker, inputs_np: Dict[str, Any]) -> None: - """Test __call__.""" - results = self.two_crop_transform(inputs_np) - - assert isinstance(results, dict) - assert "img" in results and results["img"].ndim == 4 - assert "gt_semantic_seg" in results and results["gt_semantic_seg"].ndim == 3 - assert "flip" in results and isinstance(results["flip"], list) - - @e2e_pytest_unit - def test_call_with_single_pipeline(self, mocker, inputs_np: Dict[str, Any]) -> None: - """Test __call__ with single pipeline.""" - self.two_crop_transform.is_both = False - - results = self.two_crop_transform(inputs_np) - - assert isinstance(results, dict) - assert "img" in results and results["img"].ndim == 3 - assert "gt_semantic_seg" in results and results["gt_semantic_seg"].ndim == 2 - assert "flip" in results and isinstance(results["flip"], bool) - - -@e2e_pytest_unit -def test_random_resized_crop(inputs_PIL: Dict[str, Any]) -> None: - """Test RandomResizedCrop.""" - random_resized_crop = RandomResizedCrop(size=(8, 8)) - - results = random_resized_crop(inputs_PIL) - - assert isinstance(results, dict) - assert "img" in results and results["img"].size == (8, 8) - assert "gt_semantic_seg" in results and results["gt_semantic_seg"].shape == (8, 8) - assert "img_shape" in results - assert "ori_shape" in results - assert "scale_factor" in results - - -@e2e_pytest_unit -def test_random_color_jitter(inputs_PIL: Dict[str, Any]) -> None: - """Test RandomColorJitter.""" - random_color_jitter = RandomColorJitter(p=1.0) - - results = random_color_jitter(inputs_PIL) - - assert isinstance(results, dict) - assert "img" in results - - -@e2e_pytest_unit -def test_random_grayscale(inputs_PIL: Dict[str, Any]) -> None: - """Test RandomGrayscale.""" - random_grayscale = RandomGrayscale() - - results = random_grayscale(inputs_PIL) - - assert isinstance(results, dict) - assert "img" in results - - -@e2e_pytest_unit -def test_random_gaussian_blur(inputs_PIL: Dict[str, Any]) -> None: - """Test RandomGaussianBlur.""" - random_gaussian_blur = RandomGaussianBlur(p=1.0, kernel_size=3) - - results = random_gaussian_blur(inputs_PIL) - - assert isinstance(results, dict) - assert "img" in results - - -@e2e_pytest_unit -def test_random_solarization(inputs_np: Dict[str, Any]) -> None: - """Test RandomSolarization.""" - random_solarization 
= RandomSolarization(p=1.0) - - results = random_solarization(inputs_np) - - assert isinstance(results, dict) - assert "img" in results - assert repr(random_solarization) == "RandomSolarization" - - -@e2e_pytest_unit -def test_nd_array_to_pil_image(inputs_np: Dict[str, Any]) -> None: - """Test NDArrayToPILImage.""" - nd_array_to_pil_image = NDArrayToPILImage(keys=["img"]) - - results = nd_array_to_pil_image(inputs_np) - - assert "img" in results - assert isinstance(results["img"], Image.Image) - assert repr(nd_array_to_pil_image) == "NDArrayToPILImage" - - -@e2e_pytest_unit -def test_pil_image_to_nd_array(inputs_PIL: Dict[str, Any]) -> None: - """Test PILImageToNDArray.""" - pil_image_to_nd_array = PILImageToNDArray(keys=["img"]) - - results = pil_image_to_nd_array(inputs_PIL) - - assert "img" in results - assert isinstance(results["img"], np.ndarray) - assert repr(pil_image_to_nd_array) == "PILImageToNDArray" diff --git a/otx/mpa/modules/models/backbones/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/__init__.py similarity index 54% rename from otx/mpa/modules/models/backbones/__init__.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/datasets/__init__.py index 4e1701262e2..d671e6bb59c 100644 --- a/otx/mpa/modules/models/backbones/__init__.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/__init__.py @@ -1,5 +1,4 @@ +"""Test for otx.algorithms.segmentation.adapters.mmseg.datasets""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # - -# flake8: noqa diff --git a/otx/mpa/modules/ov/models/mmseg/backbones/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py similarity index 50% rename from otx/mpa/modules/ov/models/mmseg/backbones/__init__.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py index 1ad81562177..e2b1bd6ce7b 100644 --- a/otx/mpa/modules/ov/models/mmseg/backbones/__init__.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/__init__.py @@ -1,6 +1,4 @@ +"""Test for otx.algorithms.segmentation.adapters.mmseg.datasets.pipelines""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # - -# flake8: noqa -from .mmov_backbone import MMOVBackbone diff --git a/tests/unit/mpa/modules/datasets/pipelines/test_compose.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_compose.py similarity index 98% rename from tests/unit/mpa/modules/datasets/pipelines/test_compose.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_compose.py index 96b9602fd8b..4c777d83a3f 100644 --- a/tests/unit/mpa/modules/datasets/pipelines/test_compose.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_compose.py @@ -12,7 +12,7 @@ from mmseg.datasets.builder import PIPELINES from mmseg.datasets.pipelines import RandomCrop -from otx.mpa.modules.datasets.pipelines.compose import MaskCompose, ProbCompose +from otx.algorithms.segmentation.adapters.mmseg.datasets import MaskCompose, ProbCompose class TestProbCompose: diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_loads.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_loads.py new file mode 100644 index 00000000000..37d5275fb2e --- /dev/null +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_loads.py @@ -0,0 +1,53 @@ +# Copyright (C) 2023 Intel Corporation +# 
SPDX-License-Identifier: Apache-2.0 +# + +import numpy as np +import pytest + +from otx.algorithms.segmentation.adapters.mmseg.datasets.pipelines.loads import ( + LoadAnnotationFromOTXDataset, +) +from otx.api.entities.annotation import ( + Annotation, + AnnotationSceneEntity, + AnnotationSceneKind, +) +from otx.api.entities.dataset_item import DatasetItemEntity +from otx.api.entities.image import Image +from otx.api.entities.label import Domain, LabelEntity +from otx.api.entities.scored_label import ScoredLabel +from otx.api.entities.shapes.rectangle import Rectangle +from tests.test_suite.e2e_test_system import e2e_pytest_unit + + +def label_entity(name="test label") -> LabelEntity: + return LabelEntity(name=name, domain=Domain.SEGMENTATION) + + +def dataset_item() -> DatasetItemEntity: + image: Image = Image(data=np.random.randint(low=0, high=255, size=(10, 16, 3))) + annotation: Annotation = Annotation(shape=Rectangle.generate_full_box(), labels=[ScoredLabel(label_entity())]) + annotation_scene: AnnotationSceneEntity = AnnotationSceneEntity( + annotations=[annotation], kind=AnnotationSceneKind.ANNOTATION + ) + return DatasetItemEntity(media=image, annotation_scene=annotation_scene) + + +class TestLoadAnnotationFromOTXDataset: + @pytest.fixture(autouse=True) + def setUp(self) -> None: + + self.dataset_item: DatasetItemEntity = dataset_item() + self.results: dict = { + "dataset_item": self.dataset_item, + "ann_info": {"labels": [label_entity("class_1")]}, + "seg_fields": [], + } + self.pipeline: LoadAnnotationFromOTXDataset = LoadAnnotationFromOTXDataset() + + @e2e_pytest_unit + def test_call(self) -> None: + loaded_annotations: dict = self.pipeline(self.results) + assert "gt_semantic_seg" in loaded_annotations + assert loaded_annotations["dataset_item"] == self.dataset_item diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines_params_validation.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_pipelines_params_validation.py similarity index 97% rename from tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines_params_validation.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_pipelines_params_validation.py index 41373df066c..db88ce2a8ad 100644 --- a/tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines_params_validation.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_pipelines_params_validation.py @@ -4,7 +4,7 @@ import pytest -from otx.algorithms.segmentation.adapters.mmseg.data.pipelines import ( +from otx.algorithms.segmentation.adapters.mmseg.datasets.pipelines import ( LoadAnnotationFromOTXDataset, LoadImageFromOTXDataset, ) diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_transforms.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_transforms.py new file mode 100644 index 00000000000..facded59996 --- /dev/null +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/pipelines/test_transforms.py @@ -0,0 +1,313 @@ +from typing import Any, Dict + +import numpy as np +import pytest +import torch +from mmcv.parallel import DataContainer +from PIL import Image + +from otx.algorithms.segmentation.adapters.mmseg.datasets.pipelines.transforms import ( + BranchImage, + DefaultFormatBundle, + NDArrayToPILImage, + Normalize, + PILImageToNDArray, + RandomColorJitter, + RandomGaussianBlur, + RandomGrayscale, + RandomResizedCrop, + RandomSolarization, + TwoCropTransform, +) 
+from tests.test_suite.e2e_test_system import e2e_pytest_unit + + +@pytest.fixture(scope="module") +def inputs_np(): + return { + "img": np.random.randint(0, 10, (16, 16, 3), dtype=np.uint8), + "gt_semantic_seg": np.random.rand(16, 16), + "flip": True, + } + + +@pytest.fixture(scope="module") +def inputs_PIL(): + return { + "img": Image.fromarray(np.random.randint(0, 10, (16, 16, 3), dtype=np.uint8)), + "gt_semantic_seg": np.random.randint(0, 5, (16, 16), dtype=np.uint8), + "seg_fields": ["gt_semantic_seg"], + "ori_shape": (16, 16, 3), + } + + +class TestNDArrayToPILImage: + @pytest.fixture(autouse=True) + def setUp(self) -> None: + self.results: dict = {"img": np.random.randint(0, 255, (3, 3, 3), dtype=np.uint8)} + self.nd_array_to_pil_image: NDArrayToPILImage = NDArrayToPILImage(keys=["img"]) + + @e2e_pytest_unit + def test_call(self) -> None: + converted_img: dict = self.nd_array_to_pil_image(self.results) + assert "img" in converted_img + assert isinstance(converted_img["img"], Image.Image) + + @e2e_pytest_unit + def test_repr(self) -> None: + assert str(self.nd_array_to_pil_image) == "NDArrayToPILImage" + + +class TestPILImageToNDArray: + @pytest.fixture(autouse=True) + def setUp(self) -> None: + self.results: dict = {"img": Image.new("RGB", (3, 3))} + self.pil_image_to_nd_array: PILImageToNDArray = PILImageToNDArray(keys=["img"]) + + @e2e_pytest_unit + def test_call(self) -> None: + converted_array: dict = self.pil_image_to_nd_array(self.results) + assert "img" in converted_array + assert isinstance(converted_array["img"], np.ndarray) + + @e2e_pytest_unit + def test_repr(self) -> None: + assert str(self.pil_image_to_nd_array) == "PILImageToNDArray" + + +class TestRandomResizedCrop: + @pytest.fixture(autouse=True) + def setUp(self) -> None: + self.results: dict = {"img": Image.new("RGB", (10, 16)), "img_shape": (10, 16), "ori_shape": (10, 16)} + self.random_resized_crop: RandomResizedCrop = RandomResizedCrop((5, 5), (0.5, 1.0)) + + @e2e_pytest_unit + def test_call(self) -> None: + cropped_img: dict = self.random_resized_crop(self.results) + assert cropped_img["img_shape"] == (5, 5) + assert cropped_img["ori_shape"] == (10, 16) + + +class TestRandomSolarization: + @pytest.fixture(autouse=True) + def setUp(self) -> None: + self.results: dict = {"img": np.random.randint(0, 255, (3, 3, 3), dtype=np.uint8)} + self.random_solarization: RandomSolarization = RandomSolarization(p=1.0) + + @e2e_pytest_unit + def test_call(self) -> None: + solarized: dict = self.random_solarization(self.results) + assert "img" in solarized + assert isinstance(solarized["img"], np.ndarray) + + @e2e_pytest_unit + def test_repr(self) -> None: + assert str(self.random_solarization) == "RandomSolarization" + + +class TestNormalize: + @e2e_pytest_unit + @pytest.mark.parametrize( + "mean,std,to_rgb,expected", + [ + (1.0, 1.0, True, np.array([[[1.0, 0.0, 0.0]]], dtype=np.float32)), + (1.0, 1.0, False, np.array([[[-1.0, 0.0, 0.0]]], dtype=np.float32)), + ], + ) + def test_call(self, mean: float, std: float, to_rgb: bool, expected: np.array) -> None: + """Test __call__.""" + normalize = Normalize(mean=mean, std=std, to_rgb=to_rgb) + inputs = dict(img=np.arange(3).reshape(1, 1, 3)) + + results = normalize(inputs.copy()) + + assert "img" in results + assert "img_norm_cfg" in results + assert np.all(results["img"] == expected) + + @e2e_pytest_unit + @pytest.mark.parametrize("mean,std,to_rgb", [(1.0, 1.0, True)]) + def test_repr(self, mean: float, std: float, to_rgb: bool) -> None: + """Test __repr__.""" + normalize = 
Normalize(mean=mean, std=std, to_rgb=to_rgb) + + assert repr(normalize) == normalize.__class__.__name__ + f"(mean={mean}, std={std}, to_rgb=" f"{to_rgb})" + + +class TestDefaultFormatBundle: + @pytest.fixture(autouse=True) + def setup(self) -> None: + self.default_format_bundle = DefaultFormatBundle() + + @e2e_pytest_unit + @pytest.mark.parametrize("img", [np.ones((1, 1)), np.ones((1, 1, 1)), np.ones((1, 1, 1, 1))]) + @pytest.mark.parametrize("gt_semantic_seg,pixel_weights", [(np.ones((1, 1)), np.ones((1, 1)))]) + def test_call(self, img: np.array, gt_semantic_seg: np.array, pixel_weights: np.array) -> None: + """Test __call__.""" + inputs = dict(img=img, gt_semantic_seg=gt_semantic_seg, pixel_weights=pixel_weights) + + results = self.default_format_bundle(inputs.copy()) + + assert isinstance(results, dict) + assert "img" in results + assert isinstance(results["img"], DataContainer) + assert len(results["img"].data.shape) >= 3 + assert results["img"].data.dtype == torch.float32 + assert "gt_semantic_seg" in results + assert len(results["gt_semantic_seg"].data.shape) == len(inputs["gt_semantic_seg"].shape) + 1 + assert results["gt_semantic_seg"].data.dtype == torch.int64 + assert "pixel_weights" in results + assert len(results["pixel_weights"].data.shape) == len(inputs["pixel_weights"].shape) + 1 + assert results["pixel_weights"].data.dtype == torch.float32 + + @e2e_pytest_unit + @pytest.mark.parametrize("img", [np.ones((1,))]) + def test_call_invalid_shape(self, img: np.array): + inputs = dict(img=img) + + with pytest.raises(ValueError): + self.default_format_bundle(inputs.copy()) + + @e2e_pytest_unit + def test_repr(self) -> None: + """Test __repr__.""" + assert repr(self.default_format_bundle) == self.default_format_bundle.__class__.__name__ + + +class TestBranchImage: + @pytest.fixture(autouse=True) + def setup(self) -> None: + self.branch_image = BranchImage(key_map={"key1": "key2"}) + + @e2e_pytest_unit + def test_call(self) -> None: + """Test __call__.""" + inputs = dict(key1="key1", img_fields=["key1"]) + + results = self.branch_image(inputs.copy()) + + assert isinstance(results, dict) + assert "key2" in results + assert results["key1"] == results["key2"] + assert "key2" in results["img_fields"] + + @e2e_pytest_unit + def test_repr(self) -> None: + """Test __repr__.""" + assert repr(self.branch_image) == self.branch_image.__class__.__name__ + + +class TestTwoCropTransform: + @pytest.fixture(autouse=True) + def setup(self, mocker) -> None: + mocker.patch( + "otx.algorithms.segmentation.adapters.mmseg.datasets.pipelines.transforms.build_from_cfg", + return_value=lambda x: x, + ) + self.two_crop_transform = TwoCropTransform(view0=[], view1=[]) + + @e2e_pytest_unit + def test_call(self, mocker, inputs_np: Dict[str, Any]) -> None: + """Test __call__.""" + results = self.two_crop_transform(inputs_np) + + assert isinstance(results, dict) + assert "img" in results and results["img"].ndim == 4 + assert "gt_semantic_seg" in results and results["gt_semantic_seg"].ndim == 3 + assert "flip" in results and isinstance(results["flip"], list) + + @e2e_pytest_unit + def test_call_with_single_pipeline(self, mocker, inputs_np: Dict[str, Any]) -> None: + """Test __call__ with single pipeline.""" + self.two_crop_transform.is_both = False + + results = self.two_crop_transform(inputs_np) + + assert isinstance(results, dict) + assert "img" in results and results["img"].ndim == 3 + assert "gt_semantic_seg" in results and results["gt_semantic_seg"].ndim == 2 + assert "flip" in results and 
isinstance(results["flip"], bool) + + +@e2e_pytest_unit +def test_random_resized_crop(inputs_PIL: Dict[str, Any]) -> None: + """Test RandomResizedCrop.""" + random_resized_crop = RandomResizedCrop(size=(8, 8)) + + results = random_resized_crop(inputs_PIL) + + assert isinstance(results, dict) + assert "img" in results and results["img"].size == (8, 8) + assert "gt_semantic_seg" in results and results["gt_semantic_seg"].shape == (8, 8) + assert "img_shape" in results + assert "ori_shape" in results + assert "scale_factor" in results + + +@e2e_pytest_unit +def test_random_color_jitter(inputs_PIL: Dict[str, Any]) -> None: + """Test RandomColorJitter.""" + random_color_jitter = RandomColorJitter(p=1.0) + + results = random_color_jitter(inputs_PIL) + + assert isinstance(results, dict) + assert "img" in results + + +@e2e_pytest_unit +def test_random_grayscale(inputs_PIL: Dict[str, Any]) -> None: + """Test RandomGrayscale.""" + random_grayscale = RandomGrayscale() + + results = random_grayscale(inputs_PIL) + + assert isinstance(results, dict) + assert "img" in results + + +@e2e_pytest_unit +def test_random_gaussian_blur(inputs_PIL: Dict[str, Any]) -> None: + """Test RandomGaussianBlur.""" + random_gaussian_blur = RandomGaussianBlur(p=1.0, kernel_size=3) + + results = random_gaussian_blur(inputs_PIL) + + assert isinstance(results, dict) + assert "img" in results + + +@e2e_pytest_unit +def test_random_solarization(inputs_np: Dict[str, Any]) -> None: + """Test RandomSolarization.""" + random_solarization = RandomSolarization(p=1.0) + + results = random_solarization(inputs_np) + + assert isinstance(results, dict) + assert "img" in results + assert repr(random_solarization) == "RandomSolarization" + + +@e2e_pytest_unit +def test_nd_array_to_pil_image(inputs_np: Dict[str, Any]) -> None: + """Test NDArrayToPILImage.""" + nd_array_to_pil_image = NDArrayToPILImage(keys=["img"]) + + results = nd_array_to_pil_image(inputs_np) + + assert "img" in results + assert isinstance(results["img"], Image.Image) + assert repr(nd_array_to_pil_image) == "NDArrayToPILImage" + + +@e2e_pytest_unit +def test_pil_image_to_nd_array(inputs_PIL: Dict[str, Any]) -> None: + """Test PILImageToNDArray.""" + pil_image_to_nd_array = PILImageToNDArray(keys=["img"]) + + results = pil_image_to_nd_array(inputs_PIL) + + assert "img" in results + assert isinstance(results["img"], np.ndarray) + assert repr(pil_image_to_nd_array) == "PILImageToNDArray" diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_dataset.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/test_dataset.py similarity index 97% rename from tests/unit/algorithms/segmentation/adapters/mmseg/test_dataset.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/datasets/test_dataset.py index 5b797baed65..7327c1f3b35 100644 --- a/tests/unit/algorithms/segmentation/adapters/mmseg/test_dataset.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/test_dataset.py @@ -5,7 +5,7 @@ import numpy as np import pytest -from otx.algorithms.segmentation.adapters.mmseg.data.dataset import MPASegDataset +from otx.algorithms.segmentation.adapters.mmseg.datasets import MPASegDataset from otx.api.entities.annotation import ( Annotation, AnnotationSceneEntity, diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_dataset_params_validation.py b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/test_dataset_params_validation.py similarity index 99% rename from 
tests/unit/algorithms/segmentation/adapters/mmseg/test_dataset_params_validation.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/datasets/test_dataset_params_validation.py index ca629d38734..7ad55560b65 100644 --- a/tests/unit/algorithms/segmentation/adapters/mmseg/test_dataset_params_validation.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/datasets/test_dataset_params_validation.py @@ -4,7 +4,7 @@ import numpy as np import pytest -from otx.algorithms.segmentation.adapters.mmseg.data.dataset import ( +from otx.algorithms.segmentation.adapters.mmseg.datasets.dataset import ( OTXSegDataset, get_annotation_mmseg_format, ) diff --git a/tests/unit/mpa/modules/models/scalar_schedulers/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py similarity index 50% rename from tests/unit/mpa/modules/models/scalar_schedulers/__init__.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py index 57e225a8be4..dc3be0d0648 100644 --- a/tests/unit/mpa/modules/models/scalar_schedulers/__init__.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/__init__.py @@ -1,4 +1,4 @@ -"""Test for otx.mpa.modules.models.scheculers""" +"""Test for otx.algorithms.segmentation.adapters.mmseg.models.backbones.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 diff --git a/tests/unit/mpa/modules/models/backbones/test_litehrnet.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/test_litehrnet.py similarity index 97% rename from tests/unit/mpa/modules/models/backbones/test_litehrnet.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/test_litehrnet.py index f59878fb309..361e1527f1d 100644 --- a/tests/unit/mpa/modules/models/backbones/test_litehrnet.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/test_litehrnet.py @@ -6,7 +6,7 @@ from otx.algorithms.common.adapters.mmcv.configs.backbones.lite_hrnet_18 import ( model as model_cfg, ) -from otx.mpa.modules.models.backbones.litehrnet import ( +from otx.algorithms.segmentation.adapters.mmseg.models.backbones.litehrnet import ( LiteHRNet, NeighbourSupport, SpatialWeightingV2, diff --git a/tests/unit/mpa/modules/ov/models/mmseg/backbones/test_ov_mmseg_mmov_backbone.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/test_mmseg_mmov_backbone.py similarity index 94% rename from tests/unit/mpa/modules/ov/models/mmseg/backbones/test_ov_mmseg_mmov_backbone.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/test_mmseg_mmov_backbone.py index 8796f89ef2a..24aebfcbe81 100644 --- a/tests/unit/mpa/modules/ov/models/mmseg/backbones/test_ov_mmseg_mmov_backbone.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/backbones/test_mmseg_mmov_backbone.py @@ -7,7 +7,7 @@ import pytest import torch -from otx.mpa.modules.ov.models.mmseg.backbones.mmov_backbone import MMOVBackbone +from otx.algorithms.segmentation.adapters.mmseg.models.backbones import MMOVBackbone from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/models/backbones/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py similarity index 52% rename from tests/unit/mpa/modules/models/backbones/__init__.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py index 24b1785922f..60b6f78ebef 100644 --- 
a/tests/unit/mpa/modules/models/backbones/__init__.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/heads/__init__.py @@ -1,4 +1,4 @@ -"""Test for otx.mpa.modules.models.backbones.""" +"""Test for otx.algorithms.segmentation.adapters.mmseg.models.heads.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 diff --git a/tests/unit/mpa/modules/ov/models/mmseg/decode_heads/test_ov_mmseg_mmov_decode_head.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/heads/test_mmseg_mmov_decode_head.py similarity index 95% rename from tests/unit/mpa/modules/ov/models/mmseg/decode_heads/test_ov_mmseg_mmov_decode_head.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/models/heads/test_mmseg_mmov_decode_head.py index c4fccf7ab9b..07f46d0e8ae 100644 --- a/tests/unit/mpa/modules/ov/models/mmseg/decode_heads/test_ov_mmseg_mmov_decode_head.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/heads/test_mmseg_mmov_decode_head.py @@ -7,7 +7,7 @@ import pytest import torch -from otx.mpa.modules.ov.models.mmseg.decode_heads.mmov_decode_head import MMOVDecodeHead +from otx.algorithms.segmentation.adapters.mmseg.models import MMOVDecodeHead from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/__init__.py new file mode 100644 index 00000000000..c1b25fc5a37 --- /dev/null +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/__init__.py @@ -0,0 +1,4 @@ +"""Test for otx.algorithms.segmentation.adapters.mmseg.models.scheculers""" + +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 diff --git a/tests/unit/mpa/modules/models/scalar_schedulers/test_schedulers.py b/tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/test_schedulers.py similarity index 98% rename from tests/unit/mpa/modules/models/scalar_schedulers/test_schedulers.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/test_schedulers.py index a79cbdc67e6..1295b8d37eb 100644 --- a/tests/unit/mpa/modules/models/scalar_schedulers/test_schedulers.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/models/scalar_schedulers/test_schedulers.py @@ -6,7 +6,7 @@ import pytest -from otx.mpa.modules.models.scalar_schedulers import ( +from otx.algorithms.segmentation.adapters.mmseg.models.schedulers import ( ConstantScalarScheduler, PolyScalarScheduler, StepScalarScheduler, diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines.py b/tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines.py deleted file mode 100644 index d59987c3820..00000000000 --- a/tests/unit/algorithms/segmentation/adapters/mmseg/test_pipelines.py +++ /dev/null @@ -1,122 +0,0 @@ -# Copyright (C) 2023 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -import numpy as np -import PIL.Image -import pytest - -from otx.algorithms.segmentation.adapters.mmseg.data.pipelines import ( - LoadAnnotationFromOTXDataset, - NDArrayToPILImage, - PILImageToNDArray, - RandomResizedCrop, - RandomSolarization, -) -from otx.api.entities.annotation import ( - Annotation, - AnnotationSceneEntity, - AnnotationSceneKind, -) -from otx.api.entities.dataset_item import DatasetItemEntity -from otx.api.entities.image import Image -from otx.api.entities.label import Domain, LabelEntity -from 
otx.api.entities.scored_label import ScoredLabel -from otx.api.entities.shapes.rectangle import Rectangle -from tests.test_suite.e2e_test_system import e2e_pytest_unit - - -def label_entity(name="test label") -> LabelEntity: - return LabelEntity(name=name, domain=Domain.SEGMENTATION) - - -def dataset_item() -> DatasetItemEntity: - image: Image = Image(data=np.random.randint(low=0, high=255, size=(10, 16, 3))) - annotation: Annotation = Annotation(shape=Rectangle.generate_full_box(), labels=[ScoredLabel(label_entity())]) - annotation_scene: AnnotationSceneEntity = AnnotationSceneEntity( - annotations=[annotation], kind=AnnotationSceneKind.ANNOTATION - ) - return DatasetItemEntity(media=image, annotation_scene=annotation_scene) - - -class TestLoadAnnotationFromOTXDataset: - @pytest.fixture(autouse=True) - def setUp(self) -> None: - - self.dataset_item: DatasetItemEntity = dataset_item() - self.results: dict = { - "dataset_item": self.dataset_item, - "ann_info": {"labels": [label_entity("class_1")]}, - "seg_fields": [], - } - self.pipeline: LoadAnnotationFromOTXDataset = LoadAnnotationFromOTXDataset() - - @e2e_pytest_unit - def test_call(self) -> None: - loaded_annotations: dict = self.pipeline(self.results) - assert "gt_semantic_seg" in loaded_annotations - assert loaded_annotations["dataset_item"] == self.dataset_item - - -class TestNDArrayToPILImage: - @pytest.fixture(autouse=True) - def setUp(self) -> None: - self.results: dict = {"img": np.random.randint(0, 255, (3, 3, 3), dtype=np.uint8)} - self.nd_array_to_pil_image: NDArrayToPILImage = NDArrayToPILImage(keys=["img"]) - - @e2e_pytest_unit - def test_call(self) -> None: - converted_img: dict = self.nd_array_to_pil_image(self.results) - assert "img" in converted_img - assert isinstance(converted_img["img"], PIL.Image.Image) - - @e2e_pytest_unit - def test_repr(self) -> None: - assert str(self.nd_array_to_pil_image) == "NDArrayToPILImage" - - -class TestPILImageToNDArray: - @pytest.fixture(autouse=True) - def setUp(self) -> None: - self.results: dict = {"img": PIL.Image.new("RGB", (3, 3))} - self.pil_image_to_nd_array: PILImageToNDArray = PILImageToNDArray(keys=["img"]) - - @e2e_pytest_unit - def test_call(self) -> None: - converted_array: dict = self.pil_image_to_nd_array(self.results) - assert "img" in converted_array - assert isinstance(converted_array["img"], np.ndarray) - - @e2e_pytest_unit - def test_repr(self) -> None: - assert str(self.pil_image_to_nd_array) == "PILImageToNDArray" - - -class TestRandomResizedCrop: - @pytest.fixture(autouse=True) - def setUp(self) -> None: - self.results: dict = {"img": PIL.Image.new("RGB", (10, 16)), "img_shape": (10, 16), "ori_shape": (10, 16)} - self.random_resized_crop: RandomResizedCrop = RandomResizedCrop((5, 5), (0.5, 1.0)) - - @e2e_pytest_unit - def test_call(self) -> None: - cropped_img: dict = self.random_resized_crop(self.results) - assert cropped_img["img_shape"] == (5, 5) - assert cropped_img["ori_shape"] == (10, 16) - - -class TestRandomSolarization: - @pytest.fixture(autouse=True) - def setUp(self) -> None: - self.results: dict = {"img": np.random.randint(0, 255, (3, 3, 3), dtype=np.uint8)} - self.random_solarization: RandomSolarization = RandomSolarization(p=1.0) - - @e2e_pytest_unit - def test_call(self) -> None: - solarized: dict = self.random_solarization(self.results) - assert "img" in solarized - assert isinstance(solarized["img"], np.ndarray) - - @e2e_pytest_unit - def test_repr(self) -> None: - assert str(self.random_solarization) == "RandomSolarization" diff --git 
a/otx/mpa/modules/ov/models/mmseg/__init__.py b/tests/unit/algorithms/segmentation/adapters/mmseg/utils/__init__.py similarity index 55% rename from otx/mpa/modules/ov/models/mmseg/__init__.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/utils/__init__.py index f5601bf53ac..2e7d4985d06 100644 --- a/otx/mpa/modules/ov/models/mmseg/__init__.py +++ b/tests/unit/algorithms/segmentation/adapters/mmseg/utils/__init__.py @@ -1,6 +1,4 @@ +"""Test for otx.algorithms.segmentation.adapters.mmseg.utils.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # - -# flake8: noqa -from . import backbones, decode_heads diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_config_utils.py b/tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_config_utils.py similarity index 100% rename from tests/unit/algorithms/segmentation/adapters/mmseg/test_config_utils.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_config_utils.py diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_config_utils_params_validation.py b/tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_config_utils_params_validation.py similarity index 100% rename from tests/unit/algorithms/segmentation/adapters/mmseg/test_config_utils_params_validation.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_config_utils_params_validation.py diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_data_utils.py b/tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_data_utils.py similarity index 100% rename from tests/unit/algorithms/segmentation/adapters/mmseg/test_data_utils.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_data_utils.py diff --git a/tests/unit/algorithms/segmentation/adapters/mmseg/test_data_utils_params_validation.py b/tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_data_utils_params_validation.py similarity index 100% rename from tests/unit/algorithms/segmentation/adapters/mmseg/test_data_utils_params_validation.py rename to tests/unit/algorithms/segmentation/adapters/mmseg/utils/test_data_utils_params_validation.py diff --git a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_seg_custom_pipelines.py b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_seg_custom_pipelines.py deleted file mode 100644 index c966164a992..00000000000 --- a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_seg_custom_pipelines.py +++ /dev/null @@ -1,103 +0,0 @@ -import numpy as np -import pytest -import torch -from mmcv.parallel import DataContainer - -from otx.mpa.modules.datasets.pipelines.transforms.seg_custom_pipelines import ( - BranchImage, - DefaultFormatBundle, - Normalize, -) -from tests.test_suite.e2e_test_system import e2e_pytest_unit - - -class TestNormalize: - @e2e_pytest_unit - @pytest.mark.parametrize( - "mean,std,to_rgb,expected", - [ - (1.0, 1.0, True, np.array([[[1.0, 0.0, 0.0]]], dtype=np.float32)), - (1.0, 1.0, False, np.array([[[-1.0, 0.0, 0.0]]], dtype=np.float32)), - ], - ) - def test_call(self, mean: float, std: float, to_rgb: bool, expected: np.array) -> None: - """Test __call__.""" - normalize = Normalize(mean=mean, std=std, to_rgb=to_rgb) - inputs = dict(img=np.arange(3).reshape(1, 1, 3)) - - results = normalize(inputs.copy()) - - assert "img" in results - assert "img_norm_cfg" in results - assert np.all(results["img"] == expected) - - @e2e_pytest_unit - @pytest.mark.parametrize("mean,std,to_rgb", [(1.0, 1.0, 
True)]) - def test_repr(self, mean: float, std: float, to_rgb: bool) -> None: - """Test __repr__.""" - normalize = Normalize(mean=mean, std=std, to_rgb=to_rgb) - - assert repr(normalize) == normalize.__class__.__name__ + f"(mean={mean}, std={std}, to_rgb=" f"{to_rgb})" - - -class TestDefaultFormatBundle: - @pytest.fixture(autouse=True) - def setup(self) -> None: - self.default_format_bundle = DefaultFormatBundle() - - @e2e_pytest_unit - @pytest.mark.parametrize("img", [np.ones((1, 1)), np.ones((1, 1, 1)), np.ones((1, 1, 1, 1))]) - @pytest.mark.parametrize("gt_semantic_seg,pixel_weights", [(np.ones((1, 1)), np.ones((1, 1)))]) - def test_call(self, img: np.array, gt_semantic_seg: np.array, pixel_weights: np.array) -> None: - """Test __call__.""" - inputs = dict(img=img, gt_semantic_seg=gt_semantic_seg, pixel_weights=pixel_weights) - - results = self.default_format_bundle(inputs.copy()) - - assert isinstance(results, dict) - assert "img" in results - assert isinstance(results["img"], DataContainer) - assert len(results["img"].data.shape) >= 3 - assert results["img"].data.dtype == torch.float32 - assert "gt_semantic_seg" in results - assert len(results["gt_semantic_seg"].data.shape) == len(inputs["gt_semantic_seg"].shape) + 1 - assert results["gt_semantic_seg"].data.dtype == torch.int64 - assert "pixel_weights" in results - assert len(results["pixel_weights"].data.shape) == len(inputs["pixel_weights"].shape) + 1 - assert results["pixel_weights"].data.dtype == torch.float32 - - @e2e_pytest_unit - @pytest.mark.parametrize("img", [np.ones((1,))]) - def test_call_invalid_shape(self, img: np.array): - inputs = dict(img=img) - - with pytest.raises(ValueError): - self.default_format_bundle(inputs.copy()) - - @e2e_pytest_unit - def test_repr(self) -> None: - """Test __repr__.""" - assert repr(self.default_format_bundle) == self.default_format_bundle.__class__.__name__ - - -class TestBranchImage: - @pytest.fixture(autouse=True) - def setup(self) -> None: - self.branch_image = BranchImage(key_map={"key1": "key2"}) - - @e2e_pytest_unit - def test_call(self) -> None: - """Test __call__.""" - inputs = dict(key1="key1", img_fields=["key1"]) - - results = self.branch_image(inputs.copy()) - - assert isinstance(results, dict) - assert "key2" in results - assert results["key1"] == results["key2"] - assert "key2" in results["img_fields"] - - @e2e_pytest_unit - def test_repr(self) -> None: - """Test __repr__.""" - assert repr(self.branch_image) == self.branch_image.__class__.__name__ From 9f76d878a526ed371d2b348a2dc04df1a585d556 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 21 Mar 2023 16:52:02 +0900 Subject: [PATCH 13/34] Added security.md Add the security related notification --- security.md | 5 +++++ 1 file changed, 5 insertions(+) create mode 100644 security.md diff --git a/security.md b/security.md new file mode 100644 index 00000000000..ccbbdc59297 --- /dev/null +++ b/security.md @@ -0,0 +1,5 @@ +# Security Policy +Intel is committed to rapidly addressing security vulnerabilities affecting our customers and providing clear guidance on the solution, impact, severity and mitigation. + +## Reporting a Vulnerability +Please report any security vulnerabilities in this project [utilizing the guidelines here](https://www.intel.com/content/www/us/en/security-center/vulnerability-handling-guidelines.html). 
From 074d6af8da6bcfc845d91a5a23862fea8594e2b7 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 21 Mar 2023 17:05:22 +0900 Subject: [PATCH 14/34] Update security.md fix prettier issue --- security.md | 2 ++ 1 file changed, 2 insertions(+) diff --git a/security.md b/security.md index ccbbdc59297..6143e2263ee 100644 --- a/security.md +++ b/security.md @@ -1,5 +1,7 @@ # Security Policy + Intel is committed to rapidly addressing security vulnerabilities affecting our customers and providing clear guidance on the solution, impact, severity and mitigation. ## Reporting a Vulnerability + Please report any security vulnerabilities in this project [utilizing the guidelines here](https://www.intel.com/content/www/us/en/security-center/vulnerability-handling-guidelines.html). From 1d9fd767b6860480aed09e2a06d49733d0735b75 Mon Sep 17 00:00:00 2001 From: Yunchu Lee Date: Tue, 21 Mar 2023 17:29:51 +0900 Subject: [PATCH 15/34] Add custom exception class for CLI (#1919) * Add custom exception class for CLI * Fixed TCs for config_manager --------- Signed-off-by: Yunchu Lee --- otx/cli/manager/config_manager.py | 24 ++++++++++----- otx/cli/utils/errors.py | 30 +++++++++++++++++++ tests/fuzzing/cli_fuzzing.py | 3 ++ tests/unit/cli/manager/test_config_manager.py | 14 ++++++--- tox.ini | 2 +- 5 files changed, 60 insertions(+), 13 deletions(-) create mode 100644 otx/cli/utils/errors.py diff --git a/otx/cli/manager/config_manager.py b/otx/cli/manager/config_manager.py index 906f1d77e20..1136c12ec32 100644 --- a/otx/cli/manager/config_manager.py +++ b/otx/cli/manager/config_manager.py @@ -16,6 +16,12 @@ from otx.api.entities.model_template import ModelTemplate, parse_model_template from otx.cli.registry import Registry as OTXRegistry from otx.cli.utils.config import configure_dataset, override_parameters +from otx.cli.utils.errors import ( + CliException, + ConfigValueError, + FileNotExistError, + NotSupportedError, +) from otx.cli.utils.importing import get_otx_root_path from otx.cli.utils.parser import gen_param_help, gen_params_dict_from_args from otx.core.data.manager.dataset_manager import DatasetManager @@ -112,7 +118,7 @@ def data_config_file_path(self) -> Path: if "data" in self.args and self.args.data: if Path(self.args.data).exists(): return Path(self.args.data) - raise FileNotFoundError(f"Not found: {self.args.data}") + raise FileNotExistError(f"Not found: {self.args.data}") return self.workspace_root / "data.yaml" def check_workspace(self) -> bool: @@ -140,6 +146,8 @@ def configure_template(self, model: str = None) -> None: else: task_type = self.task_type if not task_type and not model: + if not hasattr(self.args, "train_data_roots"): + raise ConfigValueError("Can't find the argument 'train_data_roots'") task_type = self.auto_task_detection(self.args.train_data_roots) self.template = self._get_template(task_type, model=model) self.task_type = self.template.task_type @@ -149,7 +157,7 @@ def configure_template(self, model: str = None) -> None: def _check_rebuild(self): """Checking for Rebuild status.""" if self.args.task and str(self.template.task_type) != self.args.task.upper(): - raise NotImplementedError("Task Update is not yet supported.") + raise NotSupportedError("Task Update is not yet supported.") result = False if self.args.model and self.template.name != self.args.model.upper(): print(f"[*] Rebuild model: {self.template.name} -> {self.args.model.upper()}") @@ -189,7 +197,7 @@ def _get_train_type(self, ignore_args: bool = False) -> str: if hasattr(self.args, "train_type") and self.mode in 
("build", "train") and self.args.train_type: self.train_type = self.args.train_type.upper() if self.train_type not in TASK_TYPE_TO_SUB_DIR_NAME: - raise ValueError(f"{self.train_type} is not currently supported by otx.") + raise NotSupportedError(f"{self.train_type} is not currently supported by otx.") if self.train_type in TASK_TYPE_TO_SUB_DIR_NAME: return self.train_type @@ -202,7 +210,7 @@ def _get_train_type(self, ignore_args: bool = False) -> str: def auto_task_detection(self, data_roots: str) -> str: """Detect task type automatically.""" if not data_roots: - raise ValueError("Workspace must already exist or one of {task or model or train-data-roots} must exist.") + raise CliException("Workspace must already exist or one of {task or model or train-data-roots} must exist.") self.data_format = self.dataset_manager.get_data_format(data_roots) return self._get_task_type_from_data_format(self.data_format) @@ -225,7 +233,7 @@ def _get_task_type_from_data_format(self, data_format: str) -> str: self.task_type = task_key print(f"[*] Detected task type: {self.task_type}") return task_key - raise ValueError(f"Can't find proper task. we are not support {data_format} format, yet.") + raise ConfigValueError(f"Can't find proper task. we are not support {data_format} format, yet.") def auto_split_data(self, data_roots: str, task: str): """Automatically Split train data --> train/val dataset.""" @@ -372,7 +380,7 @@ def _get_template(self, task_type: str, model: Optional[str] = None) -> ModelTem if model: template_lst = [temp for temp in otx_registry.templates if temp.name.lower() == model.lower()] if not template_lst: - raise ValueError( + raise NotSupportedError( f"[*] {model} is not a type supported by OTX {task_type}." f"\n[*] Please refer to 'otx find --template --task {task_type}'" ) @@ -426,7 +434,7 @@ def build_workspace(self, new_workspace_path: Optional[str] = None) -> None: model_dir = template_dir.absolute() / train_type_rel_path if not model_dir.exists(): - raise ValueError(f"[*] {self.train_type} is not a type supported by OTX {self.task_type}") + raise NotSupportedError(f"[*] {self.train_type} is not a type supported by OTX {self.task_type}") train_type_dir = self.workspace_root / train_type_rel_path train_type_dir.mkdir(exist_ok=True) @@ -479,7 +487,7 @@ def _copy_config_files(self, target_dir: Path, file_name: str, dest_dir: Path) - config = MPAConfig.fromfile(str(target_dir / file_name)) config.dump(str(dest_dir / file_name)) except Exception as exc: - raise ImportError(f"{self.task_type} requires mmcv-full to be installed.") from exc + raise CliException(f"{self.task_type} requires mmcv-full to be installed.") from exc elif file_name.endswith((".yml", ".yaml")): config = OmegaConf.load(str(target_dir / file_name)) (dest_dir / file_name).write_text(OmegaConf.to_yaml(config)) diff --git a/otx/cli/utils/errors.py b/otx/cli/utils/errors.py new file mode 100644 index 00000000000..3b5bc9657b7 --- /dev/null +++ b/otx/cli/utils/errors.py @@ -0,0 +1,30 @@ +"""Utils for CLI errors.""" + +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# + + +class CliException(Exception): + """Custom exception class for CLI.""" + + +class ConfigValueError(CliException): + """Configuration value is not suitable for CLI.""" + + def __init__(self, message): + super().__init__(message) + + +class NotSupportedError(CliException): + """Not supported error.""" + + def __init__(self, message): + super().__init__(message) + + +class FileNotExistError(CliException): + """Not exist given 
configuration.""" + + def __init__(self, message): + super().__init__(message) diff --git a/tests/fuzzing/cli_fuzzing.py b/tests/fuzzing/cli_fuzzing.py index d1224a4ec3b..ed6d17f89ad 100644 --- a/tests/fuzzing/cli_fuzzing.py +++ b/tests/fuzzing/cli_fuzzing.py @@ -4,6 +4,7 @@ from helper import FuzzingHelper from otx.cli.tools.cli import main as cli_main +from otx.cli.utils.errors import CliException @atheris.instrument_func @@ -21,6 +22,8 @@ def fuzz_otx(input_bytes): # argparser will throw SystemExit with code 2 when some required arguments are missing if e.code != 2: raise + except CliException: + pass # some known exceptions can be catched here finally: sys.argv = backup_argv diff --git a/tests/unit/cli/manager/test_config_manager.py b/tests/unit/cli/manager/test_config_manager.py index d56c2add368..ce7900cef7e 100644 --- a/tests/unit/cli/manager/test_config_manager.py +++ b/tests/unit/cli/manager/test_config_manager.py @@ -10,6 +10,12 @@ set_workspace, ) from otx.cli.registry import Registry +from otx.cli.utils.errors import ( + CliException, + ConfigValueError, + FileNotExistError, + NotSupportedError, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit @@ -319,7 +325,7 @@ def test_data_config_file_path(self, mocker, tmp_dir_path): expected_file_path = tmp_dir_path / "data.yaml" args = parser.parse_args(["--data", str(expected_file_path)]) config_manager.args = args - with pytest.raises(FileNotFoundError): + with pytest.raises(FileNotExistError): config_manager.data_config_file_path mock_exists.return_value = True @@ -389,7 +395,7 @@ def test__check_rebuild(self, mocker): mock_args.template = mock_template config_manager = ConfigManager(mock_args) - with pytest.raises(NotImplementedError): + with pytest.raises(NotSupportedError): config_manager._check_rebuild() config_manager.template.task_type = "DETECTION" @@ -466,13 +472,13 @@ def test__get_train_type(self, mocker): def test_auto_task_detection(self, mocker): mock_args = mocker.MagicMock() config_manager = ConfigManager(args=mock_args) - with pytest.raises(ValueError): + with pytest.raises(CliException): config_manager.auto_task_detection("") mock_get_data_format = mocker.patch( "otx.cli.manager.config_manager.DatasetManager.get_data_format", return_value="Unexpected" ) - with pytest.raises(ValueError): + with pytest.raises(ConfigValueError): config_manager.auto_task_detection("data/roots") mock_get_data_format.return_value = "coco" diff --git a/tox.ini b/tox.ini index b63089fb958..63161ad0ee9 100644 --- a/tox.ini +++ b/tox.ini @@ -147,7 +147,7 @@ deps = use_develop = true commands = coverage erase - - coverage run tests/fuzzing/cli_fuzzing.py {posargs:-dict=tests/fuzzing/assets/cli/operations.dict -artifact_prefix={toxworkdir}/ -print_final_stats=1 -atheris_runs=100000} + - coverage run tests/fuzzing/cli_fuzzing.py {posargs:-dict=tests/fuzzing/assets/cli/operations.dict -artifact_prefix={toxworkdir}/ -print_final_stats=1 -atheris_runs=500000} coverage report --precision=2 ; coverage html -d {toxworkdir}/htmlcov From cd05fc1b8a3fe5893c1bf1afb9212545335e3f42 Mon Sep 17 00:00:00 2001 From: Emily Chun Date: Wed, 22 Mar 2023 10:43:50 +0900 Subject: [PATCH 16/34] Correct License to Apache --- .github/pull_request_template.md | 7 +++---- 1 file changed, 3 insertions(+), 4 deletions(-) diff --git a/.github/pull_request_template.md b/.github/pull_request_template.md index 94e1806c381..50ad5eb3d04 100644 --- a/.github/pull_request_template.md +++ b/.github/pull_request_template.md @@ -31,12 +31,11 @@ not fully covered by unit 
tests or manual testing can be complicated. --> ### License -- [ ] I submit _my code changes_ under the same [MIT License](https://github.com/openvinotoolkit/training_extensions/blob/develop/LICENSE) that covers the project. +- [ ] I submit _my code changes_ under the same [Apache License](https://github.com/openvinotoolkit/training_extensions/blob/develop/LICENSE) that covers the project. Feel free to contact the maintainers if that's a concern. - [ ] I have updated the license header for each file (see an example below). ```python -# Copyright (C) 2023 Intel Corporation -# -# SPDX-License-Identifier: MIT +# Copyright (C) 2022 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 ``` From 873d9b69e64d181902088825c310ba4a6c486e41 Mon Sep 17 00:00:00 2001 From: Songki Choi Date: Tue, 21 Mar 2023 03:37:59 +0900 Subject: [PATCH 17/34] Remove temp hot key values (#1916) Signed-off-by: Songki Choi --- otx/algorithms/classification/tasks/train.py | 2 -- otx/algorithms/common/tasks/nncf_base.py | 3 --- otx/algorithms/detection/tasks/train.py | 3 --- otx/algorithms/segmentation/tasks/train.py | 3 --- 4 files changed, 11 deletions(-) diff --git a/otx/algorithms/classification/tasks/train.py b/otx/algorithms/classification/tasks/train.py index 657e0c3bce4..37fbfa7e85e 100644 --- a/otx/algorithms/classification/tasks/train.py +++ b/otx/algorithms/classification/tasks/train.py @@ -162,8 +162,6 @@ def _init_train_data_cfg(self, dataset: DatasetEntity): labels=self._labels, ) - for label in self._labels: - label.hotkey = "a" return data_cfg def _generate_training_metrics_group(self, learning_curves): diff --git a/otx/algorithms/common/tasks/nncf_base.py b/otx/algorithms/common/tasks/nncf_base.py index cafaf77273f..758666596b2 100644 --- a/otx/algorithms/common/tasks/nncf_base.py +++ b/otx/algorithms/common/tasks/nncf_base.py @@ -132,9 +132,6 @@ def _init_train_data_cfg(self, dataset: DatasetEntity): labels=self._labels, ) - # Temparory remedy for cfg.pretty_text error - for label in self._labels: - label.hotkey = "a" return data_cfg def _init_nncf_cfg(self): diff --git a/otx/algorithms/detection/tasks/train.py b/otx/algorithms/detection/tasks/train.py index b74d6a086e1..0baecc1f810 100644 --- a/otx/algorithms/detection/tasks/train.py +++ b/otx/algorithms/detection/tasks/train.py @@ -217,9 +217,6 @@ def _init_train_data_cfg(self, dataset: DatasetEntity): labels=self._labels, ) - # Temparory remedy for cfg.pretty_text error - for label in self._labels: - label.hotkey = "a" return data_cfg @staticmethod diff --git a/otx/algorithms/segmentation/tasks/train.py b/otx/algorithms/segmentation/tasks/train.py index 1fd144467d8..51d4a87efda 100644 --- a/otx/algorithms/segmentation/tasks/train.py +++ b/otx/algorithms/segmentation/tasks/train.py @@ -169,9 +169,6 @@ def _init_train_data_cfg(self, dataset: DatasetEntity): labels=self._labels, ) - # Temparory remedy for cfg.pretty_text error - for label in self._labels: - label.hotkey = "a" return data_cfg def _generate_training_metrics_group(self, learning_curves): From 12e145cefe94f2a3b8a770d4817d506759c4bd39 Mon Sep 17 00:00:00 2001 From: Jihwan Eom Date: Wed, 22 Mar 2023 18:16:49 +0900 Subject: [PATCH 18/34] Refactor OTX classification phase 1: move modules from MPA to OTX (#1893) --------- Co-authored-by: Yunchu Lee --- .../guide/reference/mpa/modules/datasets.rst | 1 - .../guide/reference/mpa/modules/hooks.rst | 28 --- .../guide/reference/mpa/modules/index.rst | 1 - .../mpa/modules/models/classifiers.rst | 26 --- .../reference/mpa/modules/models/heads.rst | 42 
---- .../reference/mpa/modules/models/index.rst | 2 - .../reference/mpa/modules/models/losses.rst | 20 -- .../guide/reference/mpa/modules/optimizer.rst | 14 -- .../guide/reference/mpa/modules/ov/models.rst | 4 - .../classification/adapters/mmcls/__init__.py | 4 +- .../mmcls/{data => datasets}/__init__.py | 30 +-- .../datasets.py => datasets/otx_datasets.py} | 0 .../mmcls/datasets/pipelines/__init__.py | 36 ++++ .../pipelines/otx_pipelines.py} | 0 .../datasets/pipelines/transforms/__init__.py | 19 ++ .../datasets/pipelines/transforms/augmix.py | 99 +++++---- .../pipelines/transforms/otx_transforms.py} | 20 +- .../pipelines/transforms/random_augment.py | 194 +++++++++++++++++ .../pipelines/transforms/twocrop_transform.py | 4 +- .../adapters/mmcls/models/__init__.py | 51 ++++- .../mmcls/models/backbones/__init__.py | 19 ++ .../mmcls/models/backbones/mmov_backbone.py | 43 ++++ .../mmcls/models/classifiers/__init__.py | 8 +- .../models/classifiers/sam_classifier.py | 199 +++++++++--------- .../classifiers/sam_classifier_mixin.py | 9 +- .../models/classifiers/semisl_classifier.py | 6 +- .../semisl_multilabel_classifier.py | 13 +- .../models/classifiers/supcon_classifier.py | 6 +- .../adapters/mmcls/models/heads/__init__.py | 34 ++- .../adapters/mmcls/models}/heads/cls_head.py | 11 +- .../adapters/mmcls/models}/heads/conv_head.py | 25 ++- .../mmcls}/models/heads/custom_cls_head.py | 28 ++- .../custom_hierarchical_linear_cls_head.py | 32 ++- ...custom_hierarchical_non_linear_cls_head.py | 49 +++-- .../custom_multi_label_linear_cls_head.py | 32 ++- .../custom_multi_label_non_linear_cls_head.py | 42 ++-- .../mmcls/models}/heads/mmov_cls_head.py | 37 +++- .../models/heads/non_linear_cls_head.py | 36 ++-- .../mmcls}/models/heads/semisl_cls_head.py | 52 +++-- .../heads/semisl_multilabel_cls_head.py | 175 +++++++++++++-- .../mmcls}/models/heads/supcon_cls_head.py | 23 +- .../adapters/mmcls/models/losses/__init__.py | 29 +++ .../asymmetric_angular_loss_with_ignore.py | 23 +- .../losses/asymmetric_loss_with_ignore.py | 26 +-- .../mmcls}/models/losses/barlowtwins_loss.py | 24 +-- .../models/losses/cross_entropy_loss.py | 9 +- .../adapters/mmcls}/models/losses/ib_loss.py | 32 +-- .../adapters/mmcls/models/necks/__init__.py | 3 +- .../adapters/mmcls/models}/necks/mmov_neck.py | 10 +- .../adapters/mmcls/models/necks/selfsl_mlp.py | 6 +- .../adapters/mmcls/optimizer/__init__.py | 19 ++ .../adapters/mmcls}/optimizer/lars.py | 49 ++--- .../configs/base/data/semisl/data_pipeline.py | 2 +- .../supcon/model.py | 1 + .../common/adapters/mmcv/__init__.py | 16 ++ .../common/adapters/mmcv/hooks/__init__.py | 53 +++++ .../mmcv/{hooks.py => hooks/base_hook.py} | 0 .../adapters/mmcv}/hooks/checkpoint_hook.py | 6 +- .../common/adapters/mmcv}/hooks/eval_hook.py | 22 +- .../mmcv}/hooks/fp16_sam_optimizer_hook.py | 7 +- .../adapters/mmcv}/hooks/ib_loss_hook.py | 13 +- .../mmcv}/hooks/no_bias_decay_hook.py | 57 ++--- .../mmcv}/hooks/sam_optimizer_hook.py | 7 +- .../adapters/mmcv}/hooks/semisl_cls_hook.py | 11 +- .../common/adapters/mmcv/nncf/patches.py | 2 +- .../mmcv/pipelines/transforms/__init__.py | 10 + .../mmcv}/pipelines/transforms/augments.py | 94 ++++++--- .../transforms/cython_augments/__init__.py | 9 + .../transforms/cython_augments/cv_augment.pyx | 0 .../cython_augments/pil_augment.pyx | 0 otx/cli/utils/importing.py | 2 +- otx/mpa/cls/__init__.py | 22 -- otx/mpa/csrc/mpl/lib_mpl.cpp | 70 ------ otx/mpa/csrc/mpl/lib_mpl.h | 12 -- otx/mpa/csrc/mpl/pybind.cpp | 20 -- .../transforms/cython_augments/__init__.py | 0 
.../pipelines/transforms/random_augment.py | 171 --------------- otx/mpa/modules/hooks/__init__.py | 6 - .../modules/models/classifiers/__init__.py | 11 - otx/mpa/modules/models/heads/__init__.py | 5 - otx/mpa/modules/models/heads/utils.py | 56 ----- otx/mpa/modules/optimizer/__init__.py | 5 - otx/mpa/modules/ov/models/mmcls/__init__.py | 6 - .../ov/models/mmcls/backbones/__init__.py | 6 - .../models/mmcls/backbones/mmov_backbone.py | 27 --- .../modules/ov/models/mmcls/heads/__init__.py | 9 - .../modules/ov/models/mmcls/necks/__init__.py | 6 - pyproject.toml | 4 + setup.py | 4 +- .../adapters/mmcls/data/test_datasets.py | 4 +- .../adapters/mmcls/data/test_pipelines.py | 12 +- .../test_mmcls_data_params_validation.py | 2 +- .../pipelines/transforms/test_augments.py | 2 +- .../pipelines/transforms/test_augmix.py | 14 +- .../transforms/test_ote_transforms.py | 2 +- .../transforms/test_random_augment.py | 22 +- .../transforms/test_twocrop_transform.py | 2 +- .../mpa/modules/heads/test_custom_cls_head.py | 2 +- .../test_custom_hierarchical_cls_head.py | 6 +- .../heads/test_custom_multilabel_cls_head.py | 6 +- .../modules/heads/test_multilabel_semisl.py | 8 +- .../mpa/modules/heads/test_semisl_cls_head.py | 2 +- .../modules/hooks/test_mpa_checkpoint_hook.py | 6 +- .../mpa/modules/hooks/test_mpa_eval_hook.py | 6 +- .../hooks/test_mpa_fp16_sam_optimizer_hook.py | 4 +- .../modules/hooks/test_mpa_ib_loss_hook.py | 2 +- .../hooks/test_mpa_no_bias_decay_hook.py | 4 +- .../modules/hooks/test_mpa_semisl_cls_hook.py | 2 +- .../hooks/test_mpafp16_sam_optimizer_hook.py | 18 -- .../losses/test_asymmetric_multilabel.py | 4 +- .../mpa/modules/losses/test_cross_entropy.py | 4 +- .../models/classifiers/test_sam_classifier.py | 2 +- .../classifiers/test_semisl_classifier.py | 2 +- .../classifiers/test_semisl_mlc_classifier.py | 2 +- .../classifiers/test_supcon_classifier.py | 2 +- tests/unit/mpa/modules/optimizer/test_lars.py | 2 +- .../backbones/test_ov_mmcls_mmov_backbone.py | 4 +- .../mmcls/heads/test_ov_mmcls_cls_head.py | 2 +- .../mmcls/heads/test_ov_mmcls_conv_head.py | 4 +- .../heads/test_ov_mmcls_mmcv_cls_head.py | 4 +- .../mmcls/necks/test_ov_mmcls_mmov_neck.py | 2 +- tests/unit/mpa/test_augments.py | 2 +- 122 files changed, 1470 insertions(+), 1148 deletions(-) delete mode 100644 docs/source/guide/reference/mpa/modules/models/classifiers.rst delete mode 100644 docs/source/guide/reference/mpa/modules/models/heads.rst delete mode 100644 docs/source/guide/reference/mpa/modules/optimizer.rst rename otx/algorithms/classification/adapters/mmcls/{data => datasets}/__init__.py (69%) rename otx/algorithms/classification/adapters/mmcls/{data/datasets.py => datasets/otx_datasets.py} (100%) create mode 100644 otx/algorithms/classification/adapters/mmcls/datasets/pipelines/__init__.py rename otx/algorithms/classification/adapters/mmcls/{data/pipelines.py => datasets/pipelines/otx_pipelines.py} (100%) create mode 100644 otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/__init__.py rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/datasets/pipelines/transforms/augmix.py (74%) rename otx/{mpa/modules/datasets/pipelines/transforms/ote_transforms.py => algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/otx_transforms.py} (80%) create mode 100644 otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/random_augment.py rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/datasets/pipelines/transforms/twocrop_transform.py 
(81%) create mode 100644 otx/algorithms/classification/adapters/mmcls/models/backbones/__init__.py create mode 100644 otx/algorithms/classification/adapters/mmcls/models/backbones/mmov_backbone.py rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/classifiers/sam_classifier.py (59%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/classifiers/sam_classifier_mixin.py (54%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/classifiers/semisl_classifier.py (94%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/classifiers/semisl_multilabel_classifier.py (79%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/classifiers/supcon_classifier.py (82%) rename otx/{mpa/modules/ov/models/mmcls => algorithms/classification/adapters/mmcls/models}/heads/cls_head.py (67%) rename otx/{mpa/modules/ov/models/mmcls => algorithms/classification/adapters/mmcls/models}/heads/conv_head.py (75%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/custom_cls_head.py (74%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/custom_hierarchical_linear_cls_head.py (85%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/custom_hierarchical_non_linear_cls_head.py (81%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/custom_multi_label_linear_cls_head.py (80%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/custom_multi_label_non_linear_cls_head.py (76%) rename otx/{mpa/modules/ov/models/mmcls => algorithms/classification/adapters/mmcls/models}/heads/mmov_cls_head.py (57%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/non_linear_cls_head.py (71%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/semisl_cls_head.py (82%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/semisl_multilabel_cls_head.py (52%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/heads/supcon_cls_head.py (86%) create mode 100644 otx/algorithms/classification/adapters/mmcls/models/losses/__init__.py rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/losses/asymmetric_angular_loss_with_ignore.py (88%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/losses/asymmetric_loss_with_ignore.py (83%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/losses/barlowtwins_loss.py (70%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/losses/cross_entropy_loss.py (83%) rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/models/losses/ib_loss.py (59%) rename otx/{mpa/modules/ov/models/mmcls => algorithms/classification/adapters/mmcls/models}/necks/mmov_neck.py (60%) create mode 100644 otx/algorithms/classification/adapters/mmcls/optimizer/__init__.py rename otx/{mpa/modules => algorithms/classification/adapters/mmcls}/optimizer/lars.py (74%) create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/__init__.py rename otx/algorithms/common/adapters/mmcv/{hooks.py => hooks/base_hook.py} (100%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/checkpoint_hook.py (94%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/eval_hook.py (84%) rename otx/{mpa/modules => 
algorithms/common/adapters/mmcv}/hooks/fp16_sam_optimizer_hook.py (94%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/ib_loss_hook.py (77%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/no_bias_decay_hook.py (50%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/sam_optimizer_hook.py (94%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/semisl_cls_hook.py (83%) create mode 100644 otx/algorithms/common/adapters/mmcv/pipelines/transforms/__init__.py rename otx/{mpa/modules/datasets => algorithms/common/adapters/mmcv}/pipelines/transforms/augments.py (70%) create mode 100644 otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/__init__.py rename otx/{mpa/modules/datasets => algorithms/common/adapters/mmcv}/pipelines/transforms/cython_augments/cv_augment.pyx (100%) rename otx/{mpa/modules/datasets => algorithms/common/adapters/mmcv}/pipelines/transforms/cython_augments/pil_augment.pyx (100%) delete mode 100644 otx/mpa/csrc/mpl/lib_mpl.cpp delete mode 100644 otx/mpa/csrc/mpl/lib_mpl.h delete mode 100644 otx/mpa/csrc/mpl/pybind.cpp delete mode 100644 otx/mpa/modules/datasets/pipelines/transforms/cython_augments/__init__.py delete mode 100644 otx/mpa/modules/datasets/pipelines/transforms/random_augment.py delete mode 100644 otx/mpa/modules/models/classifiers/__init__.py delete mode 100644 otx/mpa/modules/models/heads/__init__.py delete mode 100644 otx/mpa/modules/models/heads/utils.py delete mode 100644 otx/mpa/modules/optimizer/__init__.py delete mode 100644 otx/mpa/modules/ov/models/mmcls/__init__.py delete mode 100644 otx/mpa/modules/ov/models/mmcls/backbones/__init__.py delete mode 100644 otx/mpa/modules/ov/models/mmcls/backbones/mmov_backbone.py delete mode 100644 otx/mpa/modules/ov/models/mmcls/heads/__init__.py delete mode 100644 otx/mpa/modules/ov/models/mmcls/necks/__init__.py delete mode 100644 tests/unit/mpa/modules/hooks/test_mpafp16_sam_optimizer_hook.py diff --git a/docs/source/guide/reference/mpa/modules/datasets.rst b/docs/source/guide/reference/mpa/modules/datasets.rst index 3d52750d3f6..5598e64d054 100644 --- a/docs/source/guide/reference/mpa/modules/datasets.rst +++ b/docs/source/guide/reference/mpa/modules/datasets.rst @@ -13,7 +13,6 @@ Datasets :members: :undoc-members: - .. automodule:: otx.mpa.modules.datasets.pipelines :members: :undoc-members: diff --git a/docs/source/guide/reference/mpa/modules/hooks.rst b/docs/source/guide/reference/mpa/modules/hooks.rst index 522a1d3a61f..127fb30a2ac 100644 --- a/docs/source/guide/reference/mpa/modules/hooks.rst +++ b/docs/source/guide/reference/mpa/modules/hooks.rst @@ -17,10 +17,6 @@ Hooks :members: :undoc-members: -.. automodule:: otx.mpa.modules.hooks.checkpoint_hook - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.hooks.composed_dataloaders_hook :members: :undoc-members: @@ -29,18 +25,6 @@ Hooks :members: :undoc-members: -.. automodule:: otx.mpa.modules.hooks.eval_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.fp16_sam_optimizer_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.ib_loss_hook - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.hooks.logger_replace_hook :members: :undoc-members: @@ -53,26 +37,14 @@ Hooks :members: :undoc-members: -.. automodule:: otx.mpa.modules.hooks.no_bias_decay_hook - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.hooks.recording_forward_hooks :members: :undoc-members: -.. 
automodule:: otx.mpa.modules.hooks.sam_optimizer_hook - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.hooks.save_initial_weight_hook :members: :undoc-members: -.. automodule:: otx.mpa.modules.hooks.semisl_cls_hook - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.hooks.task_adapt_hook :members: :undoc-members: diff --git a/docs/source/guide/reference/mpa/modules/index.rst b/docs/source/guide/reference/mpa/modules/index.rst index 0f463fdd61d..731dfbb571f 100644 --- a/docs/source/guide/reference/mpa/modules/index.rst +++ b/docs/source/guide/reference/mpa/modules/index.rst @@ -7,6 +7,5 @@ Modules models/index datasets hooks - optimizer ov/index utils diff --git a/docs/source/guide/reference/mpa/modules/models/classifiers.rst b/docs/source/guide/reference/mpa/modules/models/classifiers.rst deleted file mode 100644 index 7f11ac3ca4a..00000000000 --- a/docs/source/guide/reference/mpa/modules/models/classifiers.rst +++ /dev/null @@ -1,26 +0,0 @@ -Classifiers -^^^^^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.models.classifiers - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.classifiers.sam_classifier - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.classifiers.sam_classifier_mixin - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.classifiers.semisl_classifier - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.classifiers.supcon_classifier - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/heads.rst b/docs/source/guide/reference/mpa/modules/models/heads.rst deleted file mode 100644 index 8fbf6601254..00000000000 --- a/docs/source/guide/reference/mpa/modules/models/heads.rst +++ /dev/null @@ -1,42 +0,0 @@ -Heads -^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.models.heads - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.custom_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.custom_hierarchical_linear_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.custom_hierarchical_non_linear_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.custom_multi_label_linear_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.custom_multi_label_non_linear_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.non_linear_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.semisl_cls_head - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.heads.supcon_cls_head - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/index.rst b/docs/source/guide/reference/mpa/modules/models/index.rst index 65621b35cdc..b3afa8c1894 100644 --- a/docs/source/guide/reference/mpa/modules/models/index.rst +++ b/docs/source/guide/reference/mpa/modules/models/index.rst @@ -4,7 +4,5 @@ Models .. 
toctree:: :maxdepth: 1 - classifiers - heads losses utils \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/models/losses.rst b/docs/source/guide/reference/mpa/modules/models/losses.rst index f4c9301cb99..8242f2dbca7 100644 --- a/docs/source/guide/reference/mpa/modules/models/losses.rst +++ b/docs/source/guide/reference/mpa/modules/models/losses.rst @@ -9,26 +9,6 @@ Losses :members: :undoc-members: -.. automodule:: otx.mpa.modules.models.losses.asymmetric_angular_loss_with_ignore - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.asymmetric_loss_with_ignore - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.barlowtwins_loss - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.cross_entropy_loss - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.models.losses.ib_loss - :members: - :undoc-members: - .. automodule:: otx.mpa.modules.models.losses.utils :members: :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/optimizer.rst b/docs/source/guide/reference/mpa/modules/optimizer.rst deleted file mode 100644 index 767da78bc9f..00000000000 --- a/docs/source/guide/reference/mpa/modules/optimizer.rst +++ /dev/null @@ -1,14 +0,0 @@ -Optimizer -^^^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.optimizer - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.optimizer.lars - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/ov/models.rst b/docs/source/guide/reference/mpa/modules/ov/models.rst index 15ed83acb06..5531868fdea 100644 --- a/docs/source/guide/reference/mpa/modules/ov/models.rst +++ b/docs/source/guide/reference/mpa/modules/ov/models.rst @@ -20,7 +20,3 @@ Models .. automodule:: otx.mpa.modules.ov.models.parser_mixin :members: :undoc-members: - -.. automodule:: otx.mpa.modules.ov.models.mmcls - :members: - :undoc-members: diff --git a/otx/algorithms/classification/adapters/mmcls/__init__.py b/otx/algorithms/classification/adapters/mmcls/__init__.py index fbec047abe2..3fa9776764e 100644 --- a/otx/algorithms/classification/adapters/mmcls/__init__.py +++ b/otx/algorithms/classification/adapters/mmcls/__init__.py @@ -15,8 +15,9 @@ # and limitations under the License. -from .data import OTXClsDataset, SelfSLDataset +from .datasets import OTXClsDataset, SelfSLDataset from .models import BYOL, ConstrastiveHead, SelfSLMLP +from .optimizer import LARS # fmt: off # isort: off @@ -33,4 +34,5 @@ "BYOL", "SelfSLMLP", "ConstrastiveHead", + "LARS", ] diff --git a/otx/algorithms/classification/adapters/mmcls/data/__init__.py b/otx/algorithms/classification/adapters/mmcls/datasets/__init__.py similarity index 69% rename from otx/algorithms/classification/adapters/mmcls/data/__init__.py rename to otx/algorithms/classification/adapters/mmcls/datasets/__init__.py index 206425d34b1..0242a22643b 100644 --- a/otx/algorithms/classification/adapters/mmcls/data/__init__.py +++ b/otx/algorithms/classification/adapters/mmcls/datasets/__init__.py @@ -1,6 +1,6 @@ """OTX Algorithms - Classification Dataset.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
@@ -14,30 +14,30 @@ # See the License for the specific language governing permissions # and limitations under the License. -from .datasets import ( +from .otx_datasets import ( OTXClsDataset, OTXHierarchicalClsDataset, OTXMultilabelClsDataset, SelfSLDataset, ) -from .pipelines import ( - GaussianBlur, - LoadImageFromOTXDataset, - OTXColorJitter, - PILImageToNDArray, - PostAug, - RandomAppliedTrans, +from .pipelines.transforms import ( + AugMixAugment, + OTXRandAugment, + PILToTensor, + RandomRotate, + TensorNormalize, + TwoCropTransform, ) __all__ = [ + "AugMixAugment", + "PILToTensor", + "TensorNormalize", + "RandomRotate", + "OTXRandAugment", + "TwoCropTransform", "OTXClsDataset", "OTXMultilabelClsDataset", "OTXHierarchicalClsDataset", "SelfSLDataset", - "PostAug", - "PILImageToNDArray", - "LoadImageFromOTXDataset", - "RandomAppliedTrans", - "GaussianBlur", - "OTXColorJitter", ] diff --git a/otx/algorithms/classification/adapters/mmcls/data/datasets.py b/otx/algorithms/classification/adapters/mmcls/datasets/otx_datasets.py similarity index 100% rename from otx/algorithms/classification/adapters/mmcls/data/datasets.py rename to otx/algorithms/classification/adapters/mmcls/datasets/otx_datasets.py diff --git a/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/__init__.py b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/__init__.py new file mode 100644 index 00000000000..40f49b6d32d --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/__init__.py @@ -0,0 +1,36 @@ +"""OTX Algorithms - Classification pipelines.""" + +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 + +from .otx_pipelines import ( + GaussianBlur, + LoadImageFromOTXDataset, + OTXColorJitter, + PILImageToNDArray, + PostAug, + RandomAppliedTrans, +) +from .transforms import ( + AugMixAugment, + OTXRandAugment, + PILToTensor, + RandomRotate, + TensorNormalize, + TwoCropTransform, +) + +__all__ = [ + "PostAug", + "PILImageToNDArray", + "LoadImageFromOTXDataset", + "RandomAppliedTrans", + "GaussianBlur", + "OTXColorJitter", + "AugMixAugment", + "PILToTensor", + "RandomRotate", + "TensorNormalize", + "OTXRandAugment", + "TwoCropTransform", +] diff --git a/otx/algorithms/classification/adapters/mmcls/data/pipelines.py b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/otx_pipelines.py similarity index 100% rename from otx/algorithms/classification/adapters/mmcls/data/pipelines.py rename to otx/algorithms/classification/adapters/mmcls/datasets/pipelines/otx_pipelines.py diff --git a/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/__init__.py b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/__init__.py new file mode 100644 index 00000000000..5644dffbf01 --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/__init__.py @@ -0,0 +1,19 @@ +"""Module to init transforms for OTX classification.""" +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# +# flake8: noqa + +from .augmix import AugMixAugment +from .otx_transforms import PILToTensor, RandomRotate, TensorNormalize +from .random_augment import OTXRandAugment +from .twocrop_transform import TwoCropTransform + +__all__ = [ + "AugMixAugment", + "PILToTensor", + "TensorNormalize", + "RandomRotate", + "OTXRandAugment", + "TwoCropTransform", +] diff --git a/otx/mpa/modules/datasets/pipelines/transforms/augmix.py 
b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/augmix.py similarity index 74% rename from otx/mpa/modules/datasets/pipelines/transforms/augmix.py rename to otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/augmix.py index 704301ad3bd..1b9dd3e74e9 100644 --- a/otx/mpa/modules/datasets/pipelines/transforms/augmix.py +++ b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/augmix.py @@ -1,3 +1,4 @@ +"""Module for defining AugMix class used for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,9 +9,12 @@ import numpy as np from mmcls.datasets.builder import PIPELINES +from mmcv.utils import ConfigDict from PIL import Image -from otx.mpa.modules.datasets.pipelines.transforms.augments import CythonAugments +from otx.algorithms.common.adapters.mmcv.pipelines.transforms.augments import ( + CythonAugments, +) _AUGMIX_TRANSFORMS_GREY = [ "SharpnessIncreasing", # not in paper @@ -37,13 +41,15 @@ class OpsFabric: + """OpsFabric class.""" + def __init__(self, name, magnitude, hparams, prob=1.0): self.max_level = 10 self.prob = prob self.hparams = hparams # kwargs for augment functions self.aug_kwargs = dict(fillcolor=hparams["img_mean"], resample=(Image.BILINEAR, Image.BICUBIC)) - self.LEVEL_TO_ARG = { + self.level_to_arg = { "AutoContrast": None, "Equalize": None, "Rotate": self._rotate_level_to_arg, @@ -58,7 +64,7 @@ def __init__(self, name, magnitude, hparams, prob=1.0): "TranslateXRel": self._translate_rel_level_to_arg, "TranslateYRel": self._translate_rel_level_to_arg, } - self.NAME_TO_OP = { + self.name_to_op = { "AutoContrast": CythonAugments.autocontrast, "Equalize": CythonAugments.equalize, "Rotate": CythonAugments.rotate, @@ -73,15 +79,17 @@ def __init__(self, name, magnitude, hparams, prob=1.0): "TranslateXRel": CythonAugments.translate_x_rel, "TranslateYRel": CythonAugments.translate_y_rel, } - self.aug_fn = self.NAME_TO_OP[name] - self.level_fn = self.LEVEL_TO_ARG[name] - self.magnitude = magnitude - self.magnitude_std = self.hparams.get("magnitude_std", float("inf")) + self.aug_factory = ConfigDict( + aug_fn=self.name_to_op[name], + level_fn=self.level_to_arg[name], + magnitude=magnitude, + magnitude_std=self.hparams.get("magnitude_std", float("inf")), + ) @staticmethod - def randomly_negate(v): - """With 50% prob, negate the value""" - return -v if random.random() > 0.5 else v + def randomly_negate(value): + """With 50% prob, negate the value.""" + return -value if random.random() > 0.5 else value def _rotate_level_to_arg(self, level, _hparams): # range [-30, 30] @@ -129,95 +137,96 @@ def _solarize_increasing_level_to_arg(self, level, _hparams): return (256 - self._solarize_level_to_arg(level, _hparams)[0],) def __call__(self, img): + """Call method of OpsFabric class.""" if self.prob < 1.0 and random.random() > self.prob: return img - magnitude = self.magnitude - if self.magnitude_std: - if self.magnitude_std == float("inf"): + magnitude = self.aug_factory.magnitude + magnitude_std = self.aug_factory.magnitude_std + level_fn = self.aug_factory.level_fn + if magnitude_std: + if magnitude_std == float("inf"): magnitude = random.uniform(0, magnitude) - elif self.magnitude_std > 0: - magnitude = random.gauss(magnitude, self.magnitude_std) + elif magnitude_std > 0: + magnitude = random.gauss(magnitude, magnitude_std) magnitude = min(self.max_level, max(0, magnitude)) # clip to valid range - level_args = self.level_fn(magnitude, self.hparams) if 
self.level_fn is not None else tuple() - return self.aug_fn(img, *level_args, **self.aug_kwargs) + level_args = level_fn(magnitude, self.hparams) if level_fn is not None else tuple() + return self.aug_factory.aug_fn(img, *level_args, **self.aug_kwargs) @PIPELINES.register_module() -class AugMixAugment(object): - """AugMix Transform +class AugMixAugment: + """AugMix Transform. + Adapted and improved from impl here: https://github.com/google-research/augmix/blob/master/imagenet.py From paper: 'AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty - - https://arxiv.org/abs/1912.02781 + https://arxiv.org/abs/1912.02781. """ - def __init__(self, config_str, image_mean=None, grey=False, **kwargs): + def __init__(self, config_str, image_mean=None, grey=False): self.ops, self.alpha, self.width, self.depth = self._augmix_ops(config_str, image_mean, grey=grey) - def _apply_basic(self, img, mixing_weights, m): + def _apply_basic(self, img, mixing_weights, m): # pylint: disable=invalid-name # This is a literal adaptation of the paper/official implementation without normalizations and # PIL <-> Numpy conversions between every op. It is still quite CPU compute heavy compared to the # typical augmentation transforms, could use a GPU / Kornia implementation. mixed = (1 - m) * np.array(img, dtype=np.float32) - for mw in mixing_weights: + for mix_weight in mixing_weights: depth = self.depth if self.depth > 0 else np.random.randint(1, 4) ops = np.random.choice(self.ops, depth, replace=True) img_aug = deepcopy(img) - for op in ops: + for op in ops: # pylint: disable=invalid-name img_aug = op(img_aug) - CythonAugments.blend(img_aug, mixed, mw * m) + CythonAugments.blend(img_aug, mixed, mix_weight * m) np.clip(mixed, 0, 255.0, out=mixed) return Image.fromarray(mixed.astype(np.uint8)) def _augmix_ops(self, config_str, image_mean=None, translate_const=250, grey=False): if image_mean is None: image_mean = [0.485, 0.456, 0.406] # imagenet mean - magnitude = 3 - width = 3 - depth = -1 - alpha = 1.0 - p = 1.0 + aug_params = ConfigDict(magnitude=3, width=3, depth=-1, alpha=1.0, p=1.0) hparams = dict( translate_const=translate_const, - img_mean=tuple([int(c * 256) for c in image_mean]), + img_mean=tuple(int(c * 256) for c in image_mean), magnitude_std=float("inf"), ) config = config_str.split("-") assert config[0] == "augmix" config = config[1:] - for c in config: - cs = re.split(r"(\d.*)", c) - if len(cs) < 2: + for cfg in config: + cfgs = re.split(r"(\d.*)", cfg) + if len(cfgs) < 2: continue - key, val = cs[:2] + key, val = cfgs[:2] if key == "mstd": hparams.setdefault("magnitude_std", float(val)) elif key == "m": - magnitude = int(val) + aug_params.magnitude = int(val) elif key == "w": - width = int(val) + aug_params.width = int(val) elif key == "d": - depth = int(val) + aug_params.depth = int(val) elif key == "a": - alpha = float(val) + aug_params.alpha = float(val) elif key == "p": - p = float(val) + aug_params.p = float(val) else: assert False, "Unknown AugMix config section" aug_politics = _AUGMIX_TRANSFORMS_GREY if grey else _AUGMIX_TRANSFORMS return ( - [OpsFabric(name, magnitude, hparams, p) for name in aug_politics], - alpha, - width, - depth, + [OpsFabric(name, aug_params.magnitude, hparams, aug_params.p) for name in aug_politics], + aug_params.alpha, + aug_params.width, + aug_params.depth, ) def __call__(self, results): + """Call function applies augmix on image.""" for key in results.get("img_fields", ["img"]): img = results[key] if not Image.isImageType(img): img = 
Image.fromarray(img) mixing_weights = np.float32(np.random.dirichlet([self.alpha] * self.width)) - m = np.float32(np.random.beta(self.alpha, self.alpha)) + m = np.float32(np.random.beta(self.alpha, self.alpha)) # pylint: disable=invalid-name mixed = self._apply_basic(img, mixing_weights, m) results["augmix"] = True results[key] = mixed diff --git a/otx/mpa/modules/datasets/pipelines/transforms/ote_transforms.py b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/otx_transforms.py similarity index 80% rename from otx/mpa/modules/datasets/pipelines/transforms/ote_transforms.py rename to otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/otx_transforms.py index c5fdb8ccec7..8e790c8721e 100644 --- a/otx/mpa/modules/datasets/pipelines/transforms/ote_transforms.py +++ b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/otx_transforms.py @@ -1,3 +1,4 @@ +"""Module for defining transforms used for OTX classification.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -10,8 +11,11 @@ @PIPELINES.register_module() -class PILToTensor(object): +class PILToTensor: + """Convert PIL image to Tensor.""" + def __call__(self, results): + """Call function of PILToTensor class.""" for key in results.get("img_fields", ["img"]): img = results[key] if not Image.isImageType(img): @@ -23,13 +27,16 @@ def __call__(self, results): @PIPELINES.register_module() -class TensorNormalize(object): +class TensorNormalize: + """Normalize tensor object.""" + def __init__(self, mean, std, inplace=False): self.mean = mean self.std = std self.inplace = inplace def __call__(self, results): + """Call function of TensorNormalize class.""" for key in results.get("img_fields", ["img"]): img = results[key] img = F.normalize(img, self.mean, self.std, self.inplace) @@ -40,18 +47,17 @@ def __call__(self, results): # TODO [Jihwan]: Can be removed by mmcls.dataset.pipelines.auto_augment L398, Roate class @PIPELINES.register_module() -class RandomRotate(object): - """Random rotate - From torchreid.data.transforms - """ +class RandomRotate: + """Random rotate, from torchreid.data.transforms.""" - def __init__(self, p=0.5, angle=(-5, 5), values=None, **kwargs): + def __init__(self, p=0.5, angle=(-5, 5), values=None): self.p = p self.angle = angle self.discrete = values is not None and len([v for v in values if v != 0]) > 0 self.values = values def __call__(self, results, *args, **kwargs): + """Call function of RandomRotate class.""" if random.uniform(0, 1) > self.p: return results for key in results.get("img_fields", ["img"]): diff --git a/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/random_augment.py b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/random_augment.py new file mode 100644 index 00000000000..d0e4804ba67 --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/random_augment.py @@ -0,0 +1,194 @@ +# Copyright (C) 2022 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# +# pylint: disable=unused-argument +"""Code in this file is adapted from. 
+ +https://github.com/ildoonet/pytorch-randaugment/blob/master/RandAugment/augmentations.py +https://github.com/google-research/fixmatch/blob/master/third_party/auto_augment/augmentations.py +https://github.com/google-research/fixmatch/blob/master/libml/ctaugment.py +""" + +import random + +import numpy as np +import PIL +from mmcls.datasets.builder import PIPELINES + +PARAMETER_MAX = 10 + + +def auto_contrast(img, **kwargs): + """Applies auto contrast to an image.""" + return PIL.ImageOps.autocontrast(img), None + + +def brightness(img, value, max_value, bias=0): + """Applies brightness adjustment to an image.""" + value = _float_parameter(value, max_value) + bias + return PIL.ImageEnhance.Brightness(img).enhance(value), value + + +def color(img, value, max_value, bias=0): + """Applies color adjustment to an image.""" + value = _float_parameter(value, max_value) + bias + return PIL.ImageEnhance.Color(img).enhance(value), value + + +def contrast(img, value, max_value, bias=0): + """Applies contrast adjustment to an image.""" + value = _float_parameter(value, max_value) + bias + return PIL.ImageEnhance.Contrast(img).enhance(value), value + + +def cutout(img, value, max_value, bias=0): + """Applies cutout augmentation to an image.""" + if value == 0: + return img + value = _float_parameter(value, max_value) + bias + value = int(value * min(img.size)) + return cutout_abs(img, value), value + + +def cutout_abs(img, value, **kwargs): + """Applies cutout with absolute pixel size to an image.""" + w, h = img.size + x0 = np.random.uniform(0, w) + y0 = np.random.uniform(0, h) + x0 = int(max(0, x0 - value / 2.0)) + y0 = int(max(0, y0 - value / 2.0)) + x1 = int(min(w, x0 + value)) + y1 = int(min(h, y0 + value)) + xy = (x0, y0, x1, y1) + # gray + rec_color = (127, 127, 127) + img = img.copy() + PIL.ImageDraw.Draw(img).rectangle(xy, rec_color) + return img, xy, rec_color + + +def equalize(img, **kwargs): + """Applies equalization to an image.""" + return PIL.ImageOps.equalize(img), None + + +def identity(img, **kwargs): + """Returns the original image without any transformation.""" + return img, None + + +def posterize(img, value, max_value, bias=0): + """Applies posterization to an image.""" + value = _int_parameter(value, max_value) + bias + return PIL.ImageOps.posterize(img, value), value + + +def rotate(img, value, max_value, bias=0): + """Applies rotation to an image.""" + value = _int_parameter(value, max_value) + bias + if random.random() < 0.5: + value = -value + return img.rotate(value), value + + +def sharpness(img, value, max_value, bias=0): + """Applies Sharpness to an image.""" + value = _float_parameter(value, max_value) + bias + return PIL.ImageEnhance.Sharpness(img).enhance(value), value + + +def shear_x(img, value, max_value, bias=0): + """Applies ShearX to an image.""" + value = _float_parameter(value, max_value) + bias + if random.random() < 0.5: + value = -value + return img.transform(img.size, PIL.Image.AFFINE, (1, value, 0, 0, 1, 0)), value + + +def shear_y(img, value, max_value, bias=0): + """Applies ShearY to an image.""" + value = _float_parameter(value, max_value) + bias + if random.random() < 0.5: + value = -value + return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, value, 1, 0)), value + + +def solarize(img, value, max_value, bias=0): + """Applies Solarize to an image.""" + value = _int_parameter(value, max_value) + bias + return PIL.ImageOps.solarize(img, 256 - value), value + + +def translate_x(img, value, max_value, bias=0): + """Applies TranslateX to an image.""" 
+    value = _float_parameter(value, max_value) + bias
+    if random.random() < 0.5:
+        value = -value
+    value = int(value * img.size[0])
+    return img.transform(img.size, PIL.Image.AFFINE, (1, 0, value, 0, 1, 0)), value
+
+
+def translate_y(img, value, max_value, bias=0):
+    """Applies TranslateY to an image."""
+    value = _float_parameter(value, max_value) + bias
+    if random.random() < 0.5:
+        value = -value
+    value = int(value * img.size[1])
+    return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, 0, 1, value)), value
+
+
+def _float_parameter(value, max_value):
+    return float(value) * max_value / PARAMETER_MAX
+
+
+def _int_parameter(value, max_value):
+    return int(value * max_value / PARAMETER_MAX)
+
+
+rand_augment_pool = [
+    (auto_contrast, None, None),
+    (brightness, 0.9, 0.05),
+    (color, 0.9, 0.05),
+    (contrast, 0.9, 0.05),
+    (equalize, None, None),
+    (identity, None, None),
+    (posterize, 4, 4),
+    (rotate, 30, 0),
+    (sharpness, 0.9, 0.05),
+    (shear_x, 0.3, 0),
+    (shear_y, 0.3, 0),
+    (solarize, 256, 0),
+    (translate_x, 0.3, 0),
+    (translate_y, 0.3, 0),
+]
+
+
+# TODO: [Jihwan]: Can be removed by mmcls.datasets.pipeline.auto_augment Line 95 RandAugment class
+@PIPELINES.register_module()
+class OTXRandAugment:
+    """RandAugment class for OTX classification."""
+
+    def __init__(self, num_aug, magnitude, cutout_value=16):
+        assert num_aug >= 1
+        assert 1 <= magnitude <= 10
+        self.num_aug = num_aug
+        self.magnitude = magnitude
+        self.cutout_value = cutout_value
+        self.augment_pool = rand_augment_pool
+
+    def __call__(self, results):
+        """Call function of OTXRandAugment class."""
+        for key in results.get("img_fields", ["img"]):
+            img = results[key]
+            if not PIL.Image.isImageType(img):
+                img = PIL.Image.fromarray(results[key])
+            augs = random.choices(self.augment_pool, k=self.num_aug)
+            for aug, max_value, bias in augs:
+                value = np.random.randint(1, self.magnitude)
+                if random.random() < 0.5:
+                    img, value = aug(img, value=value, max_value=max_value, bias=bias)
+                    results[f"rand_mc_{aug.__name__}"] = value
+            img, xy, rec_color = cutout_abs(img, self.cutout_value)
+            results["CutoutAbs"] = (xy, self.cutout_value, rec_color)
+            results[key] = np.array(img)
+        return results
diff --git a/otx/mpa/modules/datasets/pipelines/transforms/twocrop_transform.py b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/twocrop_transform.py
similarity index 81%
rename from otx/mpa/modules/datasets/pipelines/transforms/twocrop_transform.py
rename to otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/twocrop_transform.py
index 8a9c7818ebc..4d8c5308c95 100644
--- a/otx/mpa/modules/datasets/pipelines/transforms/twocrop_transform.py
+++ b/otx/algorithms/classification/adapters/mmcls/datasets/pipelines/transforms/twocrop_transform.py
@@ -1,3 +1,4 @@
+"""Define TwoCropTransform used for self-sl in mmclassification."""
 # Copyright (C) 2022 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 
@@ -11,13 +12,14 @@
 @PIPELINES.register_module()
 class TwoCropTransform:
-    """Generate two different cropped views of an image"""
+    """Generate two different cropped views of an image."""
 
     def __init__(self, pipeline):
         self.pipeline1 = Compose([build_from_cfg(p, PIPELINES) for p in pipeline])
         self.pipeline2 = Compose([build_from_cfg(p, PIPELINES) for p in pipeline])
 
     def __call__(self, data):
+        """Call method for TwoCropTransform class."""
         data1 = self.pipeline1(deepcopy(data))
         data2 = self.pipeline2(deepcopy(data))
 
diff --git 
a/otx/algorithms/classification/adapters/mmcls/models/__init__.py b/otx/algorithms/classification/adapters/mmcls/models/__init__.py index 3a8398f2fed..ee3c040a319 100644 --- a/otx/algorithms/classification/adapters/mmcls/models/__init__.py +++ b/otx/algorithms/classification/adapters/mmcls/models/__init__.py @@ -14,8 +14,53 @@ # See the License for the specific language governing permissions # and limitations under the License. -from .classifiers import BYOL -from .heads import ConstrastiveHead +from .classifiers import BYOL, SAMImageClassifier, SemiSLClassifier, SupConClassifier +from .heads import ( + ClsHead, + ConstrastiveHead, + ConvClsHead, + CustomHierarchicalLinearClsHead, + CustomHierarchicalNonLinearClsHead, + CustomLinearClsHead, + CustomMultiLabelLinearClsHead, + CustomMultiLabelNonLinearClsHead, + CustomNonLinearClsHead, + MMOVClsHead, + SemiLinearMultilabelClsHead, + SemiNonLinearMultilabelClsHead, + SupConClsHead, +) +from .losses import ( + AsymmetricAngularLossWithIgnore, + AsymmetricLossWithIgnore, + BarlowTwinsLoss, + CrossEntropyLossWithIgnore, + IBLoss, +) from .necks import SelfSLMLP -__all__ = ["BYOL", "SelfSLMLP", "ConstrastiveHead"] +__all__ = [ + "BYOL", + "SAMImageClassifier", + "SemiSLClassifier", + "SupConClassifier", + "CustomLinearClsHead", + "CustomNonLinearClsHead", + "CustomMultiLabelNonLinearClsHead", + "CustomMultiLabelLinearClsHead", + "CustomHierarchicalLinearClsHead", + "CustomHierarchicalNonLinearClsHead", + "AsymmetricAngularLossWithIgnore", + "SemiLinearMultilabelClsHead", + "SemiNonLinearMultilabelClsHead", + "MMOVClsHead", + "ConvClsHead", + "ClsHead", + "AsymmetricLossWithIgnore", + "BarlowTwinsLoss", + "IBLoss", + "CrossEntropyLossWithIgnore", + "SelfSLMLP", + "ConstrastiveHead", + "SupConClsHead", +] diff --git a/otx/algorithms/classification/adapters/mmcls/models/backbones/__init__.py b/otx/algorithms/classification/adapters/mmcls/models/backbones/__init__.py new file mode 100644 index 00000000000..07decd4f247 --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/models/backbones/__init__.py @@ -0,0 +1,19 @@ +"""OTX Algorithms - Classification Backbones.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. + +from .mmov_backbone import MMOVBackbone + +__all__ = ["MMOVBackbone"] diff --git a/otx/algorithms/classification/adapters/mmcls/models/backbones/mmov_backbone.py b/otx/algorithms/classification/adapters/mmcls/models/backbones/mmov_backbone.py new file mode 100644 index 00000000000..fab9d87179a --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/models/backbones/mmov_backbone.py @@ -0,0 +1,43 @@ +"""Module for the MMOVBackbone class.""" + +from typing import Dict, List + +from mmcls.models.builder import BACKBONES + +from otx.mpa.modules.ov.graph.parsers.cls.cls_base_parser import cls_base_parser +from otx.mpa.modules.ov.models.mmov_model import MMOVModel + + +@BACKBONES.register_module() +class MMOVBackbone(MMOVModel): + """MMOVBackbone class. 
+ + Args: + *args: positional arguments. + **kwargs: keyword arguments. + """ + + @staticmethod + def parser(graph, **kwargs) -> Dict[str, List[str]]: + """Parses the input and output of the model. + + Args: + graph: input graph. + **kwargs: keyword arguments. + + Returns: + Dictionary containing input and output of the model. + """ + output = cls_base_parser(graph, "backbone") + if output is None: + raise ValueError("Parser can not determine input and output of model. Please provide them explicitly") + return output + + def init_weights(self, pretrained=None): # pylint: disable=unused-argument + """Initializes the weights of the model. + + Args: + pretrained: pretrained weights. Default: None. + """ + # TODO + return diff --git a/otx/algorithms/classification/adapters/mmcls/models/classifiers/__init__.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/__init__.py index e7a44b05002..ff42c4f6085 100644 --- a/otx/algorithms/classification/adapters/mmcls/models/classifiers/__init__.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/__init__.py @@ -1,6 +1,6 @@ """OTX Algorithms - Classification Classifiers.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -15,5 +15,9 @@ # and limitations under the License. from .byol import BYOL +from .sam_classifier import SAMImageClassifier +from .semisl_classifier import SemiSLClassifier +from .semisl_multilabel_classifier import SemiSLMultilabelClassifier +from .supcon_classifier import SupConClassifier -__all__ = ["BYOL"] +__all__ = ["BYOL", "SAMImageClassifier", "SemiSLClassifier", "SemiSLMultilabelClassifier", "SupConClassifier"] diff --git a/otx/mpa/modules/models/classifiers/sam_classifier.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py similarity index 59% rename from otx/mpa/modules/models/classifiers/sam_classifier.py rename to otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py index 86bb09edd68..9df002d9bc5 100644 --- a/otx/mpa/modules/models/classifiers/sam_classifier.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py @@ -1,12 +1,10 @@ +"""Module for defining SAMClassifier for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # - -from collections import OrderedDict from functools import partial from mmcls.models.builder import CLASSIFIERS -from mmcls.models.classifiers.base import BaseClassifier from mmcls.models.classifiers.image import ImageClassifier from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled @@ -20,7 +18,7 @@ @CLASSIFIERS.register_module() class SAMImageClassifier(SAMClassifierMixin, ImageClassifier): - """SAM-enabled ImageClassifier""" + """SAM-enabled ImageClassifier.""" def __init__(self, task_adapt=None, **kwargs): if "multilabel" in kwargs: @@ -75,119 +73,125 @@ def forward_train(self, img, gt_label, **kwargs): return losses @staticmethod - def state_dict_hook(module, state_dict, prefix, *args, **kwargs): - """Redirect model as output state_dict for OTX model compatibility""" + def state_dict_hook(module, state_dict, prefix, *args, **kwargs): # noqa: C901 + # pylint: disable=unused-argument, too-many-branches + """Redirect model as output state_dict for OTX model compatibility.""" backbone_type = type(module.backbone).__name__ if 
backbone_type not in ["OTXMobileNetV3", "OTXEfficientNet", "OTXEfficientNetV2"]: - return - - if backbone_type == "OTXMobileNetV3": - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("backbone"): - k = k.replace("backbone.", "", 1) - elif k.startswith("head"): - k = k.replace("head.", "", 1) - if "3" in k: # MPA uses "classifier.3", OTX uses "classifier.4". Convert for OTX compatibility. - k = k.replace("3", "4") + return None + + if backbone_type == "OTXMobileNetV3": # pylint: disable=too-many-nested-blocks + for key in list(state_dict.keys()): + val = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("backbone"): + key = key.replace("backbone.", "", 1) + elif key.startswith("head"): + key = key.replace("head.", "", 1) + if ( + "3" in key + ): # MPA uses "classifier.3", OTX uses "classifier.4". Convert for OTX compatibility. + key = key.replace("3", "4") if module.multilabel and not module.is_export: - v = v.t() - k = prefix + k - state_dict[k] = v + val = val.t() + key = prefix + key + state_dict[key] = val elif backbone_type == "OTXEfficientNet": - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("backbone"): - k = k.replace("backbone.", "", 1) - elif k.startswith("head"): - k = k.replace("head", "output", 1) + for key in list(state_dict.keys()): + val = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("backbone"): + key = key.replace("backbone.", "", 1) + elif key.startswith("head"): + key = key.replace("head", "output", 1) if not module.hierarchical and not module.is_export: - k = k.replace("fc", "asl") - v = v.t() - k = prefix + k - state_dict[k] = v + key = key.replace("fc", "asl") + val = val.t() + key = prefix + key + state_dict[key] = val elif backbone_type == "OTXEfficientNetV2": - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("backbone"): - k = k.replace("backbone.", "", 1) - elif k == "head.fc.weight": - k = k.replace("head.fc", "model.classifier") + for key in list(state_dict.keys()): + val = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("backbone"): + key = key.replace("backbone.", "", 1) + elif key == "head.fc.weight": + key = key.replace("head.fc", "model.classifier") if not module.hierarchical and not module.is_export: - v = v.t() - k = prefix + k - state_dict[k] = v + val = val.t() + key = prefix + key + state_dict[key] = val return state_dict @staticmethod - def load_state_dict_pre_hook(module, state_dict, prefix, *args, **kwargs): - """Redirect input state_dict to model for OTX model compatibility""" + def load_state_dict_pre_hook(module, state_dict, prefix, *args, **kwargs): # noqa: C901 + # pylint: disable=unused-argument, too-many-branches + """Redirect input state_dict to model for OTX model compatibility.""" backbone_type = type(module.backbone).__name__ if backbone_type not in ["OTXMobileNetV3", "OTXEfficientNet", "OTXEfficientNetV2"]: return - if backbone_type == "OTXMobileNetV3": - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("classifier."): - if "4" in k: - k 
= "head." + k.replace("4", "3") + if backbone_type == "OTXMobileNetV3": # pylint: disable=too-many-nested-blocks + for key in list(state_dict.keys()): + val = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("classifier."): + if "4" in key: + key = "head." + key.replace("4", "3") if module.multilabel: - v = v.t() + val = val.t() else: - k = "head." + k - elif k.startswith("act"): - k = "head." + k - elif not k.startswith("backbone."): - k = "backbone." + k - k = prefix + k - state_dict[k] = v + key = "head." + key + elif key.startswith("act"): + key = "head." + key + elif not key.startswith("backbone."): + key = "backbone." + key + key = prefix + key + state_dict[key] = val elif backbone_type == "OTXEfficientNet": - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("features.") and "activ" not in k: - k = "backbone." + k - elif k.startswith("output."): - k = k.replace("output", "head") + for key in list(state_dict.keys()): + val = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("features.") and "activ" not in key: + key = "backbone." + key + elif key.startswith("output."): + key = key.replace("output", "head") if not module.hierarchical: - k = k.replace("asl", "fc") - v = v.t() - k = prefix + k - state_dict[k] = v + key = key.replace("asl", "fc") + val = val.t() + key = prefix + key + state_dict[key] = val elif backbone_type == "OTXEfficientNetV2": - for k in list(state_dict.keys()): - v = state_dict.pop(k) - if not prefix or k.startswith(prefix): - k = k.replace(prefix, "", 1) - if k.startswith("model.classifier"): - k = k.replace("model.classifier", "head.fc") + for key in list(state_dict.keys()): + val = state_dict.pop(key) + if not prefix or key.startswith(prefix): + key = key.replace(prefix, "", 1) + if key.startswith("model.classifier"): + key = key.replace("model.classifier", "head.fc") if not module.hierarchical: - v = v.t() - elif k.startswith("model"): - k = "backbone." + k - k = prefix + k - state_dict[k] = v + val = val.t() + elif key.startswith("model"): + key = "backbone." + key + key = prefix + key + state_dict[key] = val else: logger.info("conversion is not required.") @staticmethod - def load_state_dict_mixing_hook(model, model_classes, chkpt_classes, chkpt_dict, prefix, *args, **kwargs): - """Modify input state_dict according to class name matching before weight loading""" + def load_state_dict_mixing_hook( + model, model_classes, chkpt_classes, chkpt_dict, prefix, *args, **kwargs + ): # pylint: disable=unused-argument, too-many-branches, too-many-locals + """Modify input state_dict according to class name matching before weight loading.""" backbone_type = type(model.backbone).__name__ if backbone_type not in ["OTXMobileNetV3", "OTXEfficientNet", "OTXEfficientNetV2"]: return @@ -244,15 +248,16 @@ def load_state_dict_mixing_hook(model, model_classes, chkpt_classes, chkpt_dict, # Mix weights chkpt_param = chkpt_dict[chkpt_name] - for m, c in enumerate(model2chkpt): + for module, c in enumerate(model2chkpt): if c >= 0: - model_param[m].copy_(chkpt_param[c]) + model_param[module].copy_(chkpt_param[c]) # Replace checkpoint weight by mixed weights chkpt_dict[chkpt_name] = model_param def extract_feat(self, img): - """Directly extract features from the backbone + neck + """Directly extract features from the backbone + neck. 
+ Overriding for OpenVINO export with features """ x = self.backbone(img) @@ -270,15 +275,16 @@ def extract_feat(self, img): if is_mmdeploy_enabled(): from mmdeploy.core import FUNCTION_REWRITER - from otx.mpa.modules.hooks.recording_forward_hooks import ( + from otx.mpa.modules.hooks.recording_forward_hooks import ( # pylint: disable=ungrouped-imports FeatureVectorHook, ReciproCAMHook, ) @FUNCTION_REWRITER.register_rewriter( - "otx.mpa.modules.models.classifiers.sam_classifier.SAMImageClassifier.extract_feat" + "otx.algorithms.classification.adapters.mmcls.models.classifiers.SAMImageClassifier.extract_feat" ) - def sam_image_classifier__extract_feat(ctx, self, img): + def sam_image_classifier__extract_feat(ctx, self, img): # pylint: disable=unused-argument + """Feature extraction function for SAMClassifier with mmdeploy.""" feat = self.backbone(img) # For Global Backbones (det/seg/etc..), # In case of tuple or list, only the feat of the last layer is used. @@ -290,9 +296,10 @@ def sam_image_classifier__extract_feat(ctx, self, img): return feat, backbone_feat @FUNCTION_REWRITER.register_rewriter( - "otx.mpa.modules.models.classifiers.sam_classifier.SAMImageClassifier.simple_test" + "otx.algorithms.classification.adapters.mmcls.models.classifiers.SAMImageClassifier.simple_test" ) - def sam_image_classifier__simple_test(ctx, self, img, img_metas): + def sam_image_classifier__simple_test(ctx, self, img, img_metas): # pylint: disable=unused-argument + """Simple test function used for inference for SAMClassifier with mmdeploy.""" feat, backbone_feat = self.extract_feat(img) logit = self.head.simple_test(feat) diff --git a/otx/mpa/modules/models/classifiers/sam_classifier_mixin.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier_mixin.py similarity index 54% rename from otx/mpa/modules/models/classifiers/sam_classifier_mixin.py rename to otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier_mixin.py index cd7354a6c51..5ee82cbc058 100644 --- a/otx/mpa/modules/models/classifiers/sam_classifier_mixin.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier_mixin.py @@ -1,14 +1,13 @@ +"""Module defining Mix-in class of SAMClassifier.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -class SAMClassifierMixin(object): - """SAM-enabled BaseClassifier mix-in""" +class SAMClassifierMixin: + """SAM-enabled BaseClassifier mix-in.""" def train_step(self, data, optimizer=None, **kwargs): - # Saving current batch data to compute SAM gradient - # Rest of SAM logics are implented in SAMOptimizerHook + """Saving current batch data to compute SAM gradient.""" self.current_batch = data - return super().train_step(data, optimizer, **kwargs) diff --git a/otx/mpa/modules/models/classifiers/semisl_classifier.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/semisl_classifier.py similarity index 94% rename from otx/mpa/modules/models/classifiers/semisl_classifier.py rename to otx/algorithms/classification/adapters/mmcls/models/classifiers/semisl_classifier.py index 88ff008c64a..0da33bdf6ed 100644 --- a/otx/mpa/modules/models/classifiers/semisl_classifier.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/semisl_classifier.py @@ -1,3 +1,4 @@ +"""Module for defining a semi-supervised classifier using mmcls.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -14,12 +15,13 @@ @CLASSIFIERS.register_module() class 
SemiSLClassifier(SAMImageClassifier): - """Semi-SL Classifier + """Semi-SL Classifier. + This classifier supports unlabeled data by overriding forward_train """ def forward_train(self, imgs, **kwargs): - """Data is transmitted as a classifier training function + """Data is transmitted as a classifier training function. Args: imgs (list[Tensor]): List of tensors of shape (1, C, H, W) diff --git a/otx/mpa/modules/models/classifiers/semisl_multilabel_classifier.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/semisl_multilabel_classifier.py similarity index 79% rename from otx/mpa/modules/models/classifiers/semisl_multilabel_classifier.py rename to otx/algorithms/classification/adapters/mmcls/models/classifiers/semisl_multilabel_classifier.py index 8f9de42f07d..eced0182d86 100644 --- a/otx/mpa/modules/models/classifiers/semisl_multilabel_classifier.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/semisl_multilabel_classifier.py @@ -1,3 +1,4 @@ +"""Module for defining a semi-supervised multi-label classifier using mmcls.""" # Copyright (C) 2023 Intel Corporation # # SPDX-License-Identifier: MIT @@ -13,15 +14,13 @@ @CLASSIFIERS.register_module() class SemiSLMultilabelClassifier(SAMImageClassifier): - """Semi-SL Multilabel Classifier - This classifier supports unlabeled data by overriding forward_train - """ + """Semi-SL Multilabel Classifier which supports unlabeled data by overriding forward_train.""" - def forward_train(self, imgs, gt_label, **kwargs): - """Data is transmitted as a classifier training function + def forward_train(self, img, gt_label, **kwargs): + """Data is transmitted as a classifier training function. Args: - imgs (list[Tensor]): List of tensors of shape (1, C, H, W) + img (list[Tensor]): List of tensors of shape (1, C, H, W) Typically these should be mean centered and std scaled. 
gt_label (Tensor): Ground truth labels for the input labeled images kwargs (keyword arguments): Specific to concrete implementation @@ -34,7 +33,7 @@ def forward_train(self, imgs, gt_label, **kwargs): target = gt_label.squeeze() unlabeled_data = kwargs["extra_0"] x = {} - x["labeled_weak"] = self.extract_feat(imgs) + x["labeled_weak"] = self.extract_feat(img) x["labeled_strong"] = self.extract_feat(kwargs["img_strong"]) img_uw = unlabeled_data["img"] diff --git a/otx/mpa/modules/models/classifiers/supcon_classifier.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/supcon_classifier.py similarity index 82% rename from otx/mpa/modules/models/classifiers/supcon_classifier.py rename to otx/algorithms/classification/adapters/mmcls/models/classifiers/supcon_classifier.py index 6403de55f5b..ed3456fcb4c 100644 --- a/otx/mpa/modules/models/classifiers/supcon_classifier.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/supcon_classifier.py @@ -1,3 +1,4 @@ +"""This module contains the SupConClassifier implementation for MMClassification.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -5,18 +6,19 @@ import torch from mmcls.models.builder import CLASSIFIERS from mmcls.models.classifiers.image import ImageClassifier -from torch.nn.functional import sigmoid, softmax @CLASSIFIERS.register_module() class SupConClassifier(ImageClassifier): + """SupConClassifier with support for classification tasks.""" + def __init__(self, backbone, neck=None, head=None, pretrained=None, **kwargs): self.multilabel = kwargs.pop("multilabel", False) self.hierarchical = kwargs.pop("hierarchical", False) super().__init__(backbone, neck=neck, head=head, pretrained=pretrained, **kwargs) def forward_train(self, img, gt_label, **kwargs): - # concatenate the different image views along the batch size + """Concatenate the different image views along the batch size.""" if len(img.shape) == 5: img = torch.cat([img[:, d, :, :, :] for d in range(img.shape[1])], dim=0) x = self.extract_feat(img) diff --git a/otx/algorithms/classification/adapters/mmcls/models/heads/__init__.py b/otx/algorithms/classification/adapters/mmcls/models/heads/__init__.py index 375f9e849a4..69a1ca77179 100644 --- a/otx/algorithms/classification/adapters/mmcls/models/heads/__init__.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/__init__.py @@ -14,6 +14,38 @@ # See the License for the specific language governing permissions # and limitations under the License. 
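The SupConClassifier.forward_train hunk above folds a multi-view batch into a single batch before feature extraction. A minimal, self-contained sketch of that reshaping step is shown below; the tensor shapes and names are illustrative only and nothing here depends on mmcls.

```python
import torch

def concat_views(img: torch.Tensor) -> torch.Tensor:
    """Concatenate the different image views along the batch dimension.

    A 5-D input of shape (N, V, C, H, W), holding V augmented views per
    sample, is flattened to (N * V, C, H, W) so an ordinary backbone can
    process it in one pass.
    """
    if img.dim() == 5:
        img = torch.cat([img[:, d, :, :, :] for d in range(img.shape[1])], dim=0)
    return img

# Toy usage: 2 samples, 2 views each, 3x8x8 images.
batch = torch.randn(2, 2, 3, 8, 8)
print(concat_views(batch).shape)  # torch.Size([4, 3, 8, 8])
```

The resulting view-major ordering places the two views of sample i at rows i and i + N, which is the layout a contrastive loss downstream would typically rely on.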
+from .cls_head import ClsHead from .contrastive_head import ConstrastiveHead +from .conv_head import ConvClsHead +from .custom_cls_head import CustomLinearClsHead, CustomNonLinearClsHead +from .custom_hierarchical_linear_cls_head import CustomHierarchicalLinearClsHead +from .custom_hierarchical_non_linear_cls_head import CustomHierarchicalNonLinearClsHead +from .custom_multi_label_linear_cls_head import CustomMultiLabelLinearClsHead +from .custom_multi_label_non_linear_cls_head import CustomMultiLabelNonLinearClsHead +from .mmov_cls_head import MMOVClsHead +from .non_linear_cls_head import NonLinearClsHead +from .semisl_cls_head import SemiLinearClsHead, SemiNonLinearClsHead +from .semisl_multilabel_cls_head import ( + SemiLinearMultilabelClsHead, + SemiNonLinearMultilabelClsHead, +) +from .supcon_cls_head import SupConClsHead -__all__ = ["ConstrastiveHead"] +__all__ = [ + "ConstrastiveHead", + "CustomLinearClsHead", + "CustomNonLinearClsHead", + "CustomHierarchicalLinearClsHead", + "CustomHierarchicalNonLinearClsHead", + "CustomMultiLabelLinearClsHead", + "CustomMultiLabelNonLinearClsHead", + "SemiLinearMultilabelClsHead", + "SemiNonLinearMultilabelClsHead", + "NonLinearClsHead", + "SemiLinearClsHead", + "SemiNonLinearClsHead", + "SupConClsHead", + "MMOVClsHead", + "ConvClsHead", + "ClsHead", +] diff --git a/otx/mpa/modules/ov/models/mmcls/heads/cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/cls_head.py similarity index 67% rename from otx/mpa/modules/ov/models/mmcls/heads/cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/cls_head.py index 2cdf09f9139..1cf23a187fb 100644 --- a/otx/mpa/modules/ov/models/mmcls/heads/cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/cls_head.py @@ -1,3 +1,4 @@ +"""Module defining Classification Head for MMOV inference.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,17 +9,25 @@ @HEADS.register_module(force=True) class ClsHead(OriginClsHead): + """Classification Head for MMOV inference.""" + def __init__(self, *args, **kwargs): do_squeeze = kwargs.pop("do_squeeze", False) - super(ClsHead, self).__init__(*args, **kwargs) + super().__init__(*args, **kwargs) self._do_squeeze = do_squeeze + def forward(self, x): + """Forward fuction of ClsHead class.""" + return self.simple_test(x) + def forward_train(self, cls_score, gt_label): + """Forward_train fuction of ClsHead class.""" if self._do_squeeze: cls_score = cls_score.unsqueeze(0).squeeze() return super().forward_train(cls_score, gt_label) def simple_test(self, cls_score): + """Test without augmentation.""" if self._do_squeeze: cls_score = cls_score.unsqueeze(0).squeeze() return super().simple_test(cls_score) diff --git a/otx/mpa/modules/ov/models/mmcls/heads/conv_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/conv_head.py similarity index 75% rename from otx/mpa/modules/ov/models/mmcls/heads/conv_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/conv_head.py index ee9fd89a59c..ca302d947f7 100644 --- a/otx/mpa/modules/ov/models/mmcls/heads/conv_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/conv_head.py @@ -1,11 +1,12 @@ +"""Module for defining ConvClsHead used for MMOV inference.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -import torch.nn as nn import torch.nn.functional as F from mmcls.models.builder import HEADS from mmcls.models.heads import ClsHead +from torch import nn 
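Several of the migrated heads (ClsHead and ConvClsHead here, and the custom heads that follow) gain a thin forward() that simply routes to simple_test(), plus an optional squeeze of stray singleton dimensions. The sketch below shows that pattern with a plain nn.Module stand-in rather than the real mmcls base classes; the TinyClsHead name and layer sizes are made up for illustration.

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyClsHead(nn.Module):
    """Stand-in head showing the forward -> simple_test delegation pattern."""

    def __init__(self, in_channels: int, num_classes: int, do_squeeze: bool = False):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_classes)
        self._do_squeeze = do_squeeze

    def forward(self, x):
        # Calling the head directly (e.g. while tracing for export) now goes
        # through the same code path as mmcls-style inference.
        return self.simple_test(x)

    def simple_test(self, x):
        """Inference without augmentation."""
        if self._do_squeeze:
            # Collapse a stray leading singleton dim such as (1, N, C) -> (N, C).
            x = x.unsqueeze(0).squeeze()
        return F.softmax(self.fc(x), dim=1)

head = TinyClsHead(in_channels=16, num_classes=4)
print(head(torch.randn(2, 16)).shape)  # torch.Size([2, 4])
```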
@HEADS.register_module() @@ -20,8 +21,9 @@ class ConvClsHead(ClsHead): Defaults to use dict(type='Normal', layer='Linear', std=0.01). """ - def __init__(self, num_classes, in_channels, init_cfg=dict(type="Kaiming", layer=["Conv2d"]), *args, **kwargs): - super(ConvClsHead, self).__init__(init_cfg=init_cfg, *args, **kwargs) + def __init__(self, num_classes, in_channels, init_cfg=None, **kwargs): + init_cfg = init_cfg if init_cfg else dict(type="Kaiming", layer=["Conv2d"]) + super().__init__(init_cfg=init_cfg, **kwargs) self.in_channels = in_channels self.num_classes = num_classes @@ -32,11 +34,12 @@ def __init__(self, num_classes, in_channels, init_cfg=dict(type="Kaiming", layer self.conv = nn.Conv2d(self.in_channels, self.num_classes, (1, 1)) def pre_logits(self, x): + """Preprocess logits.""" if isinstance(x, tuple): x = x[-1] return x - def simple_test(self, x, softmax=True, post_process=True): + def simple_test(self, cls_score, softmax=True, post_process=True): """Inference without augmentation. Args: @@ -56,7 +59,7 @@ def simple_test(self, x, softmax=True, post_process=True): - If post processing, the output is a multi-dimentional list of float and the dimensions are ``(num_samples, num_classes)``. """ - x = self.pre_logits(x) + x = self.pre_logits(cls_score) cls_score = self.conv(x).squeeze() if softmax: @@ -66,11 +69,15 @@ def simple_test(self, x, softmax=True, post_process=True): if post_process: return self.post_process(pred) - else: - return pred + return pred + + def forward(self, x): + """Forward fuction of ConvClsHead class.""" + return self.simple_test(x) - def forward_train(self, x, gt_label, **kwargs): - x = self.pre_logits(x) + def forward_train(self, cls_score, gt_label, **kwargs): + """Forward_train fuction of ConvClsHead class.""" + x = self.pre_logits(cls_score) cls_score = self.conv(x).squeeze() losses = self.loss(cls_score, gt_label, **kwargs) return losses diff --git a/otx/mpa/modules/models/heads/custom_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_cls_head.py similarity index 74% rename from otx/mpa/modules/models/heads/custom_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/custom_cls_head.py index 8947c44efdf..fcf2008e795 100644 --- a/otx/mpa/modules/models/heads/custom_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_cls_head.py @@ -1,3 +1,4 @@ +"""Module defining for OTX Custom Non-linear classification head.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -15,10 +16,11 @@ class CustomNonLinearClsHead(NonLinearClsHead): """Custom Nonlinear classifier head.""" def __init__(self, *args, **kwargs): - super(CustomNonLinearClsHead, self).__init__(*args, **kwargs) + super().__init__(*args, **kwargs) self.loss_type = kwargs.get("loss", dict(type="CrossEntropyLoss"))["type"] def loss(self, cls_score, gt_label, feature=None): + """Calculate loss for given cls_score/gt_label.""" num_samples = len(cls_score) losses = dict() # compute loss @@ -34,9 +36,14 @@ def loss(self, cls_score, gt_label, feature=None): losses["loss"] = loss return losses - def forward_train(self, x, gt_label): - cls_score = self.classifier(x) - losses = self.loss(cls_score, gt_label, feature=x) + def forward(self, x): + """Forward fuction of CustomNonLinearHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label): + """Forward_train fuction of CustomNonLinearHead class.""" + logit = self.classifier(cls_score) + losses = 
self.loss(logit, gt_label, feature=cls_score) return losses @@ -52,13 +59,13 @@ class CustomLinearClsHead(LinearClsHead): Defaults to use dict(type='Normal', layer='Linear', std=0.01). """ - def __init__( - self, num_classes, in_channels, init_cfg=dict(type="Normal", layer="Linear", std=0.01), *args, **kwargs - ): + def __init__(self, num_classes, in_channels, init_cfg=None, **kwargs): + init_cfg = init_cfg if init_cfg else dict(type="Normal", layer="Linear", std=0.01) + super().__init__(num_classes, in_channels, init_cfg=init_cfg, **kwargs) self.loss_type = kwargs.get("loss", dict(type="CrossEntropyLoss"))["type"] - super(CustomLinearClsHead, self).__init__(num_classes, in_channels, init_cfg=init_cfg, *args, **kwargs) def loss(self, cls_score, gt_label, feature=None): + """Calculate loss for given cls_score/gt_label.""" num_samples = len(cls_score) losses = dict() # compute loss @@ -85,7 +92,12 @@ def simple_test(self, img): return self.post_process(pred) + def forward(self, x): + """Forward fuction of CustomLinearHead class.""" + return self.simple_test(x) + def forward_train(self, x, gt_label): + """Forward_train fuction of CustomLinearHead class.""" cls_score = self.fc(x) losses = self.loss(cls_score, gt_label, feature=x) return losses diff --git a/otx/mpa/modules/models/heads/custom_hierarchical_linear_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_hierarchical_linear_cls_head.py similarity index 85% rename from otx/mpa/modules/models/heads/custom_hierarchical_linear_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/custom_hierarchical_linear_cls_head.py index a9190de0c87..d03f89afcdf 100644 --- a/otx/mpa/modules/models/heads/custom_hierarchical_linear_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_hierarchical_linear_cls_head.py @@ -1,17 +1,19 @@ +"""Module for defining Linear classification head for h-label classification.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch -import torch.nn as nn from mmcls.models.builder import HEADS, build_loss from mmcls.models.heads import MultiLabelClsHead from mmcv.cnn import normal_init +from torch import nn @HEADS.register_module() class CustomHierarchicalLinearClsHead(MultiLabelClsHead): """Custom Linear classification head for hierarchical classification task. + Args: num_classes (int): Number of categories. in_channels (int): Number of channels in the input feature map. 
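A change repeated across these head hunks is replacing mutable dict defaults (init_cfg, loss, act_cfg, ...) with a None sentinel resolved inside __init__. The short sketch below illustrates why the original signatures are fragile; the function names are hypothetical and the dict contents are only an example.

```python
def bad_head(init_cfg=dict(type="Kaiming", layer=[])):
    # The default dict is created once and shared by every call, so this
    # mutation leaks into all later calls.
    init_cfg["layer"].append("Conv2d")
    return init_cfg["layer"]

def good_head(init_cfg=None):
    # Resolve the default inside the function, as the patched heads do:
    # each call gets a fresh dict.
    init_cfg = init_cfg if init_cfg else dict(type="Kaiming", layer=[])
    init_cfg["layer"].append("Conv2d")
    return init_cfg["layer"]

print(bad_head())   # ['Conv2d']
print(bad_head())   # ['Conv2d', 'Conv2d'] -- the shared default has grown
print(good_head())  # ['Conv2d']
print(good_head())  # ['Conv2d'] -- fresh dict on every call
```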
@@ -23,13 +25,17 @@ def __init__( self, num_classes, in_channels, - loss=dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0), - multilabel_loss=dict(type="AsymmetricLoss", reduction="mean", loss_weight=1.0), + loss=None, + multilabel_loss=None, **kwargs, ): + loss = loss if loss else dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0) + multilabel_loss = ( + multilabel_loss if multilabel_loss else dict(type="AsymmetricLoss", reduction="mean", loss_weight=1.0) + ) self.hierarchical_info = kwargs.pop("hierarchical_info", None) assert self.hierarchical_info - super(CustomHierarchicalLinearClsHead, self).__init__(loss=loss) + super().__init__(loss=loss) if self.hierarchical_info["num_multiclass_heads"] + self.hierarchical_info["num_multilabel_classes"] == 0: raise ValueError("Invalid classification heads configuration") self.compute_multilabel_loss = False @@ -47,9 +53,11 @@ def _init_layers(self): self.fc = nn.Linear(self.in_channels, self.num_classes) def init_weights(self): + """Initialize weights of head.""" normal_init(self.fc, mean=0, std=0.01, bias=0) def loss(self, cls_score, gt_label, multilabel=False, valid_label_mask=None): + """Calculate loss for given cls_score/gt_label.""" num_samples = len(cls_score) # compute loss if multilabel: @@ -65,10 +73,15 @@ def loss(self, cls_score, gt_label, multilabel=False, valid_label_mask=None): return loss - def forward_train(self, x, gt_label, **kwargs): + def forward(self, x): + """Forward fuction of CustomHierarchicalLinearClsHead.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label, **kwargs): + """Forward_train fuction of CustomHierarchicalLinearClsHead class.""" img_metas = kwargs.get("img_metas", None) - gt_label = gt_label.type_as(x) - cls_score = self.fc(x) + gt_label = gt_label.type_as(cls_score) + cls_score = self.fc(cls_score) losses = dict(loss=0.0) for i in range(self.hierarchical_info["num_multiclass_heads"]): @@ -104,7 +117,7 @@ def forward_train(self, x, gt_label, **kwargs): valid_label_mask = self.get_valid_label_mask(img_metas=img_metas)[ :, self.hierarchical_info["num_single_label_classes"] : ] - valid_label_mask = valid_label_mask.to(x.device) + valid_label_mask = valid_label_mask.to(cls_score.device) valid_label_mask = valid_label_mask[valid_batch_mask] else: valid_label_mask = None @@ -150,8 +163,9 @@ def simple_test(self, img): return pred def get_valid_label_mask(self, img_metas): + """Get valid label with ignored_label mask.""" valid_label_mask = [] - for i, meta in enumerate(img_metas): + for meta in img_metas: mask = torch.Tensor([1 for _ in range(self.num_classes)]) if "ignored_labels" in meta and meta["ignored_labels"]: mask[meta["ignored_labels"]] = 0 diff --git a/otx/mpa/modules/models/heads/custom_hierarchical_non_linear_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_hierarchical_non_linear_cls_head.py similarity index 81% rename from otx/mpa/modules/models/heads/custom_hierarchical_non_linear_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/custom_hierarchical_non_linear_cls_head.py index ab737907299..88b7d88b67a 100644 --- a/otx/mpa/modules/models/heads/custom_hierarchical_non_linear_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_hierarchical_non_linear_cls_head.py @@ -1,17 +1,19 @@ +"""Non-linear classification head for hierarhical classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: 
Apache-2.0 # import torch -import torch.nn as nn from mmcls.models.builder import HEADS, build_loss from mmcls.models.heads import MultiLabelClsHead from mmcv.cnn import build_activation_layer, constant_init, normal_init +from torch import nn @HEADS.register_module() -class CustomHierarchicalNonLinearClsHead(MultiLabelClsHead): +class CustomHierarchicalNonLinearClsHead(MultiLabelClsHead): # pylint: disable=too-many-instance-attributes """Custom NonLinear classification head for hierarchical classification task. + Args: num_classes (int): Number of categories excluding the background category. @@ -27,15 +29,20 @@ def __init__( num_classes, in_channels, hid_channels=1280, - act_cfg=dict(type="ReLU"), - loss=dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0), - multilabel_loss=dict(type="AsymmetricLoss", reduction="mean", loss_weight=1.0), + act_cfg=None, + loss=None, + multilabel_loss=None, dropout=False, **kwargs, - ): + ): # pylint: disable=too-many-arguments + act_cfg = act_cfg if act_cfg else dict(type="ReLU") + loss = loss if loss else dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0) + multilabel_loss = ( + multilabel_loss if multilabel_loss else dict(type="AsymmetricLoss", reduction="mean", loss_weight=1.0) + ) self.hierarchical_info = kwargs.pop("hierarchical_info", None) assert self.hierarchical_info - super(CustomHierarchicalNonLinearClsHead, self).__init__(loss=loss) + super().__init__(loss=loss) if self.hierarchical_info["num_multiclass_heads"] + self.hierarchical_info["num_multilabel_classes"] == 0: raise ValueError("Invalid classification heads configuration") self.compute_multilabel_loss = False @@ -70,13 +77,15 @@ def _init_layers(self): ) def init_weights(self): - for m in self.classifier: - if isinstance(m, nn.Linear): - normal_init(m, mean=0, std=0.01, bias=0) - elif isinstance(m, nn.BatchNorm1d): - constant_init(m, 1) + """Iniitialize weights of classification head.""" + for module in self.classifier: + if isinstance(module, nn.Linear): + normal_init(module, mean=0, std=0.01, bias=0) + elif isinstance(module, nn.BatchNorm1d): + constant_init(module, 1) def loss(self, cls_score, gt_label, multilabel=False, valid_label_mask=None): + """Calculate loss for given cls_score and gt_label.""" num_samples = len(cls_score) # compute loss if multilabel: @@ -92,10 +101,15 @@ def loss(self, cls_score, gt_label, multilabel=False, valid_label_mask=None): return loss - def forward_train(self, x, gt_label, **kwargs): + def forward(self, x): + """Forward fuction of CustomHierarchicalNonLinearClsHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label, **kwargs): + """Forward_train fuction of CustomHierarchicalNonLinearClsHead class.""" img_metas = kwargs.get("img_metas", None) - gt_label = gt_label.type_as(x) - cls_score = self.classifier(x) + gt_label = gt_label.type_as(cls_score) + cls_score = self.classifier(cls_score) losses = dict(loss=0.0) for i in range(self.hierarchical_info["num_multiclass_heads"]): @@ -131,7 +145,7 @@ def forward_train(self, x, gt_label, **kwargs): valid_label_mask = self.get_valid_label_mask(img_metas=img_metas)[ :, self.hierarchical_info["num_single_label_classes"] : ] - valid_label_mask = valid_label_mask.to(x.device) + valid_label_mask = valid_label_mask.to(cls_score.device) valid_label_mask = valid_label_mask[valid_batch_mask] else: valid_label_mask = None @@ -177,8 +191,9 @@ def simple_test(self, img): return pred def get_valid_label_mask(self, img_metas): + 
"""Get valid label mask with ignored_label.""" valid_label_mask = [] - for i, meta in enumerate(img_metas): + for meta in img_metas: mask = torch.Tensor([1 for _ in range(self.num_classes)]) if "ignored_labels" in meta and meta["ignored_labels"]: mask[meta["ignored_labels"]] = 0 diff --git a/otx/mpa/modules/models/heads/custom_multi_label_linear_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_multi_label_linear_cls_head.py similarity index 80% rename from otx/mpa/modules/models/heads/custom_multi_label_linear_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/custom_multi_label_linear_cls_head.py index 75ec8770d6a..0fc9e9214af 100644 --- a/otx/mpa/modules/models/heads/custom_multi_label_linear_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_multi_label_linear_cls_head.py @@ -1,18 +1,20 @@ +"""Module for defining multi-label linear classification head.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch -import torch.nn as nn import torch.nn.functional as F from mmcls.models.builder import HEADS from mmcls.models.heads import MultiLabelClsHead from mmcv.cnn import normal_init +from torch import nn @HEADS.register_module() class CustomMultiLabelLinearClsHead(MultiLabelClsHead): """Custom Linear classification head for multilabel task. + Args: num_classes (int): Number of categories. in_channels (int): Number of channels in the input feature map. @@ -27,9 +29,10 @@ def __init__( in_channels, normalized=False, scale=1.0, - loss=dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0), + loss=None, ): - super(CustomMultiLabelLinearClsHead, self).__init__(loss=loss) + loss = loss if loss else dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0) + super().__init__(loss=loss) if num_classes <= 0: raise ValueError(f"num_classes={num_classes} must be a positive integer") @@ -46,10 +49,12 @@ def _init_layers(self): self.fc = nn.Linear(self.in_channels, self.num_classes) def init_weights(self): + """Initialize weights of head.""" if isinstance(self.fc, nn.Linear): normal_init(self.fc, mean=0, std=0.01, bias=0) def loss(self, cls_score, gt_label, valid_label_mask=None): + """Calculate loss for given cls_score/gt_label.""" gt_label = gt_label.type_as(cls_score) num_samples = len(cls_score) losses = dict() @@ -61,10 +66,15 @@ def loss(self, cls_score, gt_label, valid_label_mask=None): losses["loss"] = loss / self.scale return losses - def forward_train(self, x, gt_label, **kwargs): + def forward(self, x): + """Forward fuction of CustomMultiLabelLinearClsHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label, **kwargs): + """Forward_train fuction of CustomMultiLabelLinearClsHead.""" img_metas = kwargs.get("img_metas", False) - gt_label = gt_label.type_as(x) - cls_score = self.fc(x) * self.scale + gt_label = gt_label.type_as(cls_score) + cls_score = self.fc(cls_score) * self.scale valid_batch_mask = gt_label >= 0 gt_label = gt_label[ @@ -75,7 +85,7 @@ def forward_train(self, x, gt_label, **kwargs): ].view(cls_score.shape[0], -1) if img_metas: valid_label_mask = self.get_valid_label_mask(img_metas=img_metas) - valid_label_mask = valid_label_mask.to(x.device) + valid_label_mask = valid_label_mask.to(cls_score.device) valid_label_mask = valid_label_mask[ valid_batch_mask, ].view(valid_label_mask.shape[0], -1) @@ -96,8 +106,9 @@ def simple_test(self, img): return pred def 
get_valid_label_mask(self, img_metas): + """Get valid label mask using ignored_label.""" valid_label_mask = [] - for i, meta in enumerate(img_metas): + for meta in img_metas: mask = torch.Tensor([1 for _ in range(self.num_classes)]) if "ignored_labels" in meta and meta["ignored_labels"]: mask[meta["ignored_labels"]] = 0 @@ -107,13 +118,15 @@ def get_valid_label_mask(self, img_metas): class AnglularLinear(nn.Module): - """Computes cos of angles between input vectors and weights vectors + """Computes cos of angles between input vectors and weights vectors. + Args: in_features (int): Number of input features. out_features (int): Number of output cosine logits. """ def __init__(self, in_features, out_features): + """Init fuction of AngularLinear class.""" super().__init__() self.in_features = in_features self.out_features = out_features @@ -121,5 +134,6 @@ def __init__(self, in_features, out_features): self.weight.data.normal_().renorm_(2, 0, 1e-5).mul_(1e5) def forward(self, x): + """Forward fuction of AngularLinear class.""" cos_theta = F.normalize(x.view(x.shape[0], -1), dim=1).mm(F.normalize(self.weight.t(), p=2, dim=0)) return cos_theta.clamp(-1, 1) diff --git a/otx/mpa/modules/models/heads/custom_multi_label_non_linear_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_multi_label_non_linear_cls_head.py similarity index 76% rename from otx/mpa/modules/models/heads/custom_multi_label_non_linear_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/custom_multi_label_non_linear_cls_head.py index d5b8db4c095..f851d6f0bda 100644 --- a/otx/mpa/modules/models/heads/custom_multi_label_non_linear_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/custom_multi_label_non_linear_cls_head.py @@ -1,12 +1,13 @@ +"""This module contains the CustomMultiLabelNonLinearClsHead implementation for MMClassification.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch -import torch.nn as nn from mmcls.models.builder import HEADS from mmcls.models.heads import MultiLabelClsHead from mmcv.cnn import build_activation_layer, constant_init, normal_init +from torch import nn from .custom_multi_label_linear_cls_head import AnglularLinear @@ -14,6 +15,7 @@ @HEADS.register_module() class CustomMultiLabelNonLinearClsHead(MultiLabelClsHead): """Non-linear classification head for multilabel task. + Args: num_classes (int): Number of categories. in_channels (int): Number of channels in the input feature map. @@ -24,19 +26,21 @@ class CustomMultiLabelNonLinearClsHead(MultiLabelClsHead): normalized (bool): Normalize input features and weights in the last linar layer. 
""" + # pylint: disable=too-many-arguments def __init__( self, num_classes, in_channels, hid_channels=1280, - act_cfg=dict(type="ReLU"), + act_cfg=None, scale=1.0, - loss=dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0), + loss=None, dropout=False, normalized=False, ): - - super(CustomMultiLabelNonLinearClsHead, self).__init__(loss=loss) + act_cfg = act_cfg if act_cfg else dict(type="ReLU") + loss = loss if loss else dict(type="CrossEntropyLoss", use_sigmoid=True, reduction="mean", loss_weight=1.0) + super().__init__(loss=loss) self.in_channels = in_channels self.num_classes = num_classes @@ -66,13 +70,15 @@ def _init_layers(self, act_cfg): self.classifier = nn.Sequential(*modules) def init_weights(self): - for m in self.classifier: - if isinstance(m, nn.Linear): - normal_init(m, mean=0, std=0.01, bias=0) - elif isinstance(m, nn.BatchNorm1d): - constant_init(m, 1) + """Iniitalize weights of model.""" + for module in self.classifier: + if isinstance(module, nn.Linear): + normal_init(module, mean=0, std=0.01, bias=0) + elif isinstance(module, nn.BatchNorm1d): + constant_init(module, 1) def loss(self, cls_score, gt_label, valid_label_mask=None): + """Calculate loss for given cls_score/gt_label.""" gt_label = gt_label.type_as(cls_score) num_samples = len(cls_score) losses = dict() @@ -89,10 +95,15 @@ def loss(self, cls_score, gt_label, valid_label_mask=None): losses["loss"] = loss / self.scale return losses - def forward_train(self, x, gt_label, **kwargs): + def forward(self, x): + """Forward fuction of CustomMultiLabelNonLinearClsHead.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label, **kwargs): + """Forward_train fuction of CustomMultiLabelNonLinearClsHead.""" img_metas = kwargs.get("img_metas", False) - gt_label = gt_label.type_as(x) - cls_score = self.classifier(x) * self.scale + gt_label = gt_label.type_as(cls_score) + cls_score = self.classifier(cls_score) * self.scale valid_batch_mask = gt_label >= 0 gt_label = gt_label[ @@ -103,7 +114,7 @@ def forward_train(self, x, gt_label, **kwargs): ].view(cls_score.shape[0], -1) if img_metas: valid_label_mask = self.get_valid_label_mask(img_metas=img_metas) - valid_label_mask = valid_label_mask.to(x.device) + valid_label_mask = valid_label_mask.to(cls_score.device) valid_label_mask = valid_label_mask[ valid_batch_mask, ].view(valid_label_mask.shape[0], -1) @@ -124,8 +135,9 @@ def simple_test(self, img): return pred def get_valid_label_mask(self, img_metas): + """Get valid label with ignored_label mask.""" valid_label_mask = [] - for i, meta in enumerate(img_metas): + for meta in img_metas: mask = torch.Tensor([1 for _ in range(self.num_classes)]) if "ignored_labels" in meta and meta["ignored_labels"]: mask[meta["ignored_labels"]] = 0 diff --git a/otx/mpa/modules/ov/models/mmcls/heads/mmov_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/mmov_cls_head.py similarity index 57% rename from otx/mpa/modules/ov/models/mmcls/heads/mmov_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/mmov_cls_head.py index 804ee70babd..e158c2b6008 100644 --- a/otx/mpa/modules/ov/models/mmcls/heads/mmov_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/mmov_cls_head.py @@ -1,3 +1,4 @@ +"""Module for OpenVINO Classification Head adopted with mmclassification.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -9,12 +10,28 @@ from mmcls.models.builder import HEADS from mmcls.models.heads import 
ClsHead -from ....graph.parsers.cls import cls_base_parser -from ...mmov_model import MMOVModel +from otx.mpa.modules.ov.graph.parsers.cls.cls_base_parser import cls_base_parser +from otx.mpa.modules.ov.models.mmov_model import MMOVModel @HEADS.register_module() class MMOVClsHead(ClsHead): + """Head module for MMClassification that uses MMOV for inference. + + Args: + model_path_or_model (Union[str, ov.Model]): Path to the ONNX model file or + the ONNX model object. + weight_path (Optional[str]): Path to the weight file. + inputs (Optional[Union[Dict[str, Union[str, List[str]]], List[str], str]]): + Input shape(s) of the ONNX model. + outputs (Optional[Union[Dict[str, Union[str, List[str]]], List[str], str]]): + Output name(s) of the ONNX model. + init_weight (bool): Whether to initialize the weight from a normal + distribution. + verify_shape (bool): Whether to verify the input shape of the ONNX model. + softmax_at_test (bool): Whether to apply softmax during testing. + """ + def __init__( self, model_path_or_model: Union[str, ov.Model], @@ -25,7 +42,7 @@ def __init__( verify_shape: bool = True, softmax_at_test: bool = True, **kwargs, - ): + ): # pylint: disable=too-many-arguments kwargs.pop("in_channels", None) kwargs.pop("num_classes", None) super().__init__(**kwargs) @@ -49,15 +66,21 @@ def __init__( parser_kwargs=dict(component="head"), ) - def forward_train(self, x, gt_label, **kwargs): - cls_score = self.model(x) + def forward(self, x): + """Forward fuction of MMOVClsHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label, **kwargs): + """Forward_train fuction of MMOVClsHead.""" + cls_score = self.model(cls_score) while cls_score.dim() > 2: cls_score = cls_score.squeeze(2) losses = self.loss(cls_score, gt_label, **kwargs) return losses - def simple_test(self, x): - cls_score = self.model(x) + def simple_test(self, cls_score): + """Test without augmentation.""" + cls_score = self.model(cls_score) while cls_score.dim() > 2: cls_score = cls_score.squeeze(2) if self._softmax_at_test: diff --git a/otx/mpa/modules/models/heads/non_linear_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/non_linear_cls_head.py similarity index 71% rename from otx/mpa/modules/models/heads/non_linear_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/non_linear_cls_head.py index 11aa198a160..fa37c5c9c72 100644 --- a/otx/mpa/modules/models/heads/non_linear_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/non_linear_cls_head.py @@ -1,13 +1,14 @@ +"""Module for defining non-linear classification head.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch -import torch.nn as nn import torch.nn.functional as F from mmcls.models.builder import HEADS from mmcls.models.heads.cls_head import ClsHead from mmcv.cnn import build_activation_layer, constant_init, normal_init +from torch import nn @HEADS.register_module() @@ -29,15 +30,16 @@ def __init__( num_classes, in_channels, hid_channels=1280, - act_cfg=dict(type="ReLU"), - loss=dict(type="CrossEntropyLoss", loss_weight=1.0), + act_cfg=None, + loss=None, topk=(1,), dropout=False, - *args, **kwargs, - ): + ): # pylint: disable=too-many-arguments topk = (1,) if num_classes < 5 else (1, 5) - super(NonLinearClsHead, self).__init__(loss=loss, topk=topk, *args, **kwargs) + act_cfg = act_cfg if act_cfg else dict(type="ReLU") + loss = loss if loss else dict(type="CrossEntropyLoss", loss_weight=1.0) + 
super().__init__(loss=loss, topk=topk, **kwargs) self.in_channels = in_channels self.hid_channels = hid_channels self.num_classes = num_classes @@ -67,11 +69,12 @@ def _init_layers(self): ) def init_weights(self): - for m in self.classifier: - if isinstance(m, nn.Linear): - normal_init(m, mean=0, std=0.01, bias=0) - elif isinstance(m, nn.BatchNorm1d): - constant_init(m, 1) + """Initialize weights of head.""" + for module in self.classifier: + if isinstance(module, nn.Linear): + normal_init(module, mean=0, std=0.01, bias=0) + elif isinstance(module, nn.BatchNorm1d): + constant_init(module, 1) def simple_test(self, img): """Test without augmentation.""" @@ -84,7 +87,12 @@ def simple_test(self, img): pred = list(pred.detach().cpu().numpy()) return pred - def forward_train(self, x, gt_label): - cls_score = self.classifier(x) - losses = self.loss(cls_score, gt_label) + def forward(self, x): + """Forward fuction of NonLinearClsHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label): + """Forward_train fuction of NonLinearClsHead class.""" + logit = self.classifier(cls_score) + losses = self.loss(logit, gt_label) return losses diff --git a/otx/mpa/modules/models/heads/semisl_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/semisl_cls_head.py similarity index 82% rename from otx/mpa/modules/models/heads/semisl_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/semisl_cls_head.py index adbb8dd2efa..108dff95e69 100644 --- a/otx/mpa/modules/models/heads/semisl_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/semisl_cls_head.py @@ -1,3 +1,4 @@ +"""Module for defining semi-supervised learning for multi-class classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,7 +7,9 @@ from mmcls.models.builder import HEADS from mmcls.models.heads.linear_head import LinearClsHead -from otx.mpa.modules.models.heads.non_linear_cls_head import NonLinearClsHead +from otx.algorithms.classification.adapters.mmcls.models.heads.non_linear_cls_head import ( + NonLinearClsHead, +) class SemiClsHead: @@ -30,7 +33,7 @@ def __init__(self, unlabeled_coef=1.0, use_dynamic_threshold=True, min_threshold self.classwise_acc = self.classwise_acc.cuda() def loss(self, logits, gt_label, pseudo_label=None, mask=None): - """loss function in which unlabeled data is considered + """Loss function in which unlabeled data is considered. 
Args: logit (set): (labeled data logit, unlabeled data logit) @@ -46,22 +49,22 @@ def loss(self, logits, gt_label, pseudo_label=None, mask=None): losses = dict() # compute supervised loss - lx = self.compute_loss(logits_x, gt_label, avg_factor=num_samples) + labeled_loss = self.compute_loss(logits_x, gt_label, avg_factor=num_samples) - lu = 0 + unlabeled_loss = 0 if len(logits_u_s) > 0: # compute unsupervised loss - lu = self.compute_loss(logits_u_s, pseudo_label, avg_factor=len(logits_u_s)) * mask - losses["loss"] = lx + self.unlabeled_coef * lu - losses["unlabeled_loss"] = self.unlabeled_coef * lu + unlabeled_loss = self.compute_loss(logits_u_s, pseudo_label, avg_factor=len(logits_u_s)) * mask + losses["loss"] = labeled_loss + self.unlabeled_coef * unlabeled_loss + losses["unlabeled_loss"] = self.unlabeled_coef * unlabeled_loss # compute accuracy acc = self.compute_accuracy(logits_x, gt_label) losses["accuracy"] = {f"top-{k}": a for k, a in zip(self.topk, acc)} return losses - def forward_train(self, x, gt_label, final_layer=None): - """forward_train head using pseudo-label selected through threshold + def forward_train(self, x, gt_label, final_layer=None): # pylint: disable=too-many-locals + """Forward_train head using pseudo-label selected through threshold. Args: x (dict or Tensor): dict(labeled, unlabeled_weak, unlabeled_strong) or NxC input features. @@ -119,7 +122,7 @@ def forward_train(self, x, gt_label, final_layer=None): @HEADS.register_module() class SemiLinearClsHead(SemiClsHead, LinearClsHead): - """Linear classification head for Semi-SL + """Linear classification head for Semi-SL. This head is designed to support FixMatch algorithm. (https://arxiv.org/abs/2001.07685) - [OTX] supports dynamic threshold based on confidence for each class @@ -138,28 +141,34 @@ def __init__( self, num_classes, in_channels, - loss=dict(type="CrossEntropyLoss", loss_weight=1.0), - topk=(1,), + loss=None, + topk=None, unlabeled_coef=1.0, use_dynamic_threshold=True, min_threshold=0.5, - ): + ): # pylint: disable=too-many-arguments if in_channels <= 0: raise ValueError(f"in_channels={in_channels} must be a positive integer") if num_classes <= 0: raise ValueError("at least one class must be exist num_classes.") topk = (1,) if num_classes < 5 else (1, 5) + loss = loss if loss else dict(type="CrossEntropyLoss", loss_weight=1.0) LinearClsHead.__init__(self, num_classes, in_channels, loss=loss, topk=topk) SemiClsHead.__init__(self, unlabeled_coef, use_dynamic_threshold, min_threshold) + def forward(self, x): + """Forward fuction of SemiLinearClsHead class.""" + return self.simple_test(x) + def forward_train(self, x, gt_label): + """Forward_train fuction of SemiLinearClsHead class.""" return SemiClsHead.forward_train(self, x, gt_label, final_layer=self.fc) @HEADS.register_module() class SemiNonLinearClsHead(SemiClsHead, NonLinearClsHead): - """Non-linear classification head for Semi-SL + """Non-linear classification head for Semi-SL. This head is designed to support FixMatch algorithm. 
(https://arxiv.org/abs/2001.07685) - [OTX] supports dynamic threshold based on confidence for each class @@ -181,20 +190,22 @@ def __init__( num_classes, in_channels, hid_channels=1280, - act_cfg=dict(type="ReLU"), - loss=dict(type="CrossEntropyLoss", loss_weight=1.0), - topk=(1,), + act_cfg=None, + loss=None, + topk=None, dropout=False, unlabeled_coef=1.0, use_dynamic_threshold=True, min_threshold=0.5, - ): + ): # pylint: disable=too-many-arguments if in_channels <= 0: raise ValueError(f"in_channels={in_channels} must be a positive integer") if num_classes <= 0: raise ValueError("at least one class must be exist num_classes.") topk = (1,) if num_classes < 5 else (1, 5) + act_cfg = act_cfg if act_cfg else dict(type="ReLU") + loss = loss if loss else dict(type="CrossEntropyLoss", loss_weight=1.0) NonLinearClsHead.__init__( self, num_classes, @@ -207,5 +218,10 @@ def __init__( ) SemiClsHead.__init__(self, unlabeled_coef, use_dynamic_threshold, min_threshold) + def forward(self, x): + """Forward fuction of SemiNonLinearClsHead class.""" + return self.simple_test(x) + def forward_train(self, x, gt_label): + """Forward_train fuction of SemiNonLinearClsHead class.""" return SemiClsHead.forward_train(self, x, gt_label, final_layer=self.classifier) diff --git a/otx/mpa/modules/models/heads/semisl_multilabel_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/semisl_multilabel_cls_head.py similarity index 52% rename from otx/mpa/modules/models/heads/semisl_multilabel_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/semisl_multilabel_cls_head.py index d763714f9fe..9f694990744 100644 --- a/otx/mpa/modules/models/heads/semisl_multilabel_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/semisl_multilabel_cls_head.py @@ -1,18 +1,115 @@ +"""Module for defining semi-supervised classification head for multi-label classification task.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch from mmcls.models.builder import HEADS, build_loss +from torch import nn -from otx.mpa.modules.models.heads.custom_multi_label_linear_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_multi_label_linear_cls_head import ( CustomMultiLabelLinearClsHead, ) -from otx.mpa.modules.models.heads.custom_multi_label_non_linear_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_multi_label_non_linear_cls_head import ( CustomMultiLabelNonLinearClsHead, ) -from .utils import LossBalancer, generate_aux_mlp + +def generate_aux_mlp(aux_mlp_cfg: dict, in_channels: int): + """Generate auxiliary MLP.""" + out_channels = aux_mlp_cfg["out_channels"] + if out_channels <= 0: + raise ValueError(f"out_channels={out_channels} must be a positive integer") + if "hid_channels" in aux_mlp_cfg and aux_mlp_cfg["hid_channels"] > 0: + hid_channels = aux_mlp_cfg["hid_channels"] + mlp = nn.Sequential( + nn.Linear(in_features=in_channels, out_features=hid_channels), + nn.ReLU(inplace=True), + nn.Linear(in_features=hid_channels, out_features=out_channels), + ) + else: + mlp = nn.Linear(in_features=in_channels, out_features=out_channels) + + return mlp + + +class EMAMeter: + """Exponential Moving Average Meter class.""" + + def __init__(self, alpha=0.9): + """Initialize the Exponential Moving Average Meter. + + Args: + - alpha (float): Smoothing factor for the exponential moving average. Defaults to 0.9. 
+ + Returns: + - None + """ + self.alpha = alpha + self.val = 0 + + def reset(self): + """Reset the Exponential Moving Average Meter. + + Args: + - None + + Returns: + - None + """ + self.val = 0 + + def update(self, val): + """Update the Exponential Moving Average Meter with new value. + + Args: + - val (float): New value to update the meter. + + Returns: + - None + """ + self.val = self.alpha * self.val + (1 - self.alpha) * val + + +class LossBalancer: + """Loss Balancer class.""" + + def __init__(self, num_losses, weights=None, ema_weight=0.7) -> None: + """Initialize the Loss Balancer. + + Args: + - num_losses (int): Number of losses to balance. + - weights (list): List of weights to be applied to each loss. If None, equal weights are applied. + - ema_weight (float): Smoothing factor for the exponential moving average meter. Defaults to 0.7. + + Returns: + - None + """ + self.epsilon = 1e-9 + self.avg_estimators = [EMAMeter(ema_weight) for _ in range(num_losses)] + + if weights is not None: + assert len(weights) == num_losses + self.final_weights = weights + else: + self.final_weights = [1.0] * num_losses + + def balance_losses(self, losses): + """Balance the given losses using the weights and exponential moving average. + + Args: + - losses (list): List of losses to be balanced. + + Returns: + - total_loss (float): Balanced loss value. + """ + total_loss = 0.0 + for i, loss in enumerate(losses): + self.avg_estimators[i].update(float(loss)) + total_loss += ( + self.final_weights[i] * loss / (self.avg_estimators[i].val + self.epsilon) * self.avg_estimators[0].val + ) + return total_loss class SemiMultilabelClsHead: @@ -27,8 +124,11 @@ def __init__( self, unlabeled_coef=0.1, use_dynamic_loss_weighting=True, - aux_loss=dict(type="BarlowTwinsLoss", off_diag_penality=1.0 / 128.0, loss_weight=1.0), + aux_loss=None, ): + aux_loss = ( + aux_loss if aux_loss else dict(type="BarlowTwinsLoss", off_diag_penality=1.0 / 128.0, loss_weight=1.0) + ) self.unlabeled_coef = unlabeled_coef self.use_dynamic_loss_weighting = use_dynamic_loss_weighting self.aux_loss = build_loss(aux_loss) @@ -39,7 +139,7 @@ def __init__( self.num_pseudo_label = 0 def loss(self, logits, gt_label, features): - """loss function in which unlabeled data is considered + """Loss function in which unlabeled data is considered. Args: logit (Tensor): Labeled data logits @@ -71,7 +171,7 @@ def loss(self, logits, gt_label, features): return losses def forward_train_with_last_layers(self, x, gt_label, final_cls_layer, final_emb_layer): - """Forwards multilabel semi-sl head and losses + """Forwards multilabel semi-sl head and losses. Args: x (dict): dict(labeled_weak. labeled_strong, unlabeled_weak, unlabeled_strong) or NxC input features. @@ -92,7 +192,7 @@ def forward_train_with_last_layers(self, x, gt_label, final_cls_layer, final_emb @HEADS.register_module() class SemiLinearMultilabelClsHead(SemiMultilabelClsHead, CustomMultiLabelLinearClsHead): - """Linear multilabel classification head for Semi-SL + """Linear multilabel classification head for Semi-SL. 
Args: num_classes (int): The number of classes of dataset used for training @@ -111,29 +211,44 @@ def __init__( in_channels, scale=1.0, normalized=False, - aux_mlp=dict(hid_channels=0, out_channels=1024), - loss=dict(type="CrossEntropyLoss", loss_weight=1.0), + aux_mlp=None, + loss=None, unlabeled_coef=0.1, - aux_loss=dict(type="BarlowTwinsLoss", off_diag_penality=1.0 / 128.0, loss_weight=1.0), + aux_loss=None, use_dynamic_loss_weighting=True, - ): + ): # pylint: disable=too-many-arguments if in_channels <= 0: raise ValueError(f"in_channels={in_channels} must be a positive integer") if num_classes <= 0: raise ValueError("at least one class must be exist num_classes.") - + aux_mlp = aux_mlp if aux_mlp else dict(hid_channels=0, out_channels=1024) + loss = loss if loss else dict(type="CrossEntropyLoss", loss_weight=1.0) + aux_loss = ( + aux_loss if aux_loss else dict(type="BarlowTwinsLoss", off_diag_penality=1.0 / 128.0, loss_weight=1.0) + ) CustomMultiLabelLinearClsHead.__init__(self, num_classes, in_channels, normalized, scale, loss) SemiMultilabelClsHead.__init__(self, unlabeled_coef, use_dynamic_loss_weighting, aux_loss) self.aux_mlp = generate_aux_mlp(aux_mlp, in_channels) - def forward_train(self, x, gt_label): - return self.forward_train_with_last_layers(x, gt_label, final_cls_layer=self.fc, final_emb_layer=self.aux_mlp) + def loss(self, logits, gt_label, features): + """Calculate loss for given logits/gt_label.""" + return SemiMultilabelClsHead.loss(self, logits, gt_label, features) + + def forward(self, x): + """Forward fuction of SemiLinearMultilabelClsHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label): + """Forward_train fuction of SemiLinearMultilabelClsHead class.""" + return self.forward_train_with_last_layers( + cls_score, gt_label, final_cls_layer=self.fc, final_emb_layer=self.aux_mlp + ) @HEADS.register_module() class SemiNonLinearMultilabelClsHead(SemiMultilabelClsHead, CustomMultiLabelNonLinearClsHead): - """Non-linear classification head for Semi-SL + """Non-linear classification head for Semi-SL. 
Args: num_classes (int): The number of classes of dataset used for training @@ -156,19 +271,24 @@ def __init__( hid_channels=1280, scale=1.0, normalized=False, - aux_mlp=dict(hid_channels=0, out_channels=1024), - act_cfg=dict(type="ReLU"), - loss=dict(type="CrossEntropyLoss", loss_weight=1.0), - aux_loss=dict(type="BarlowTwinsLoss", off_diag_penality=1.0 / 128.0, loss_weight=1.0), + aux_mlp=None, + act_cfg=None, + loss=None, + aux_loss=None, dropout=False, unlabeled_coef=0.1, use_dynamic_loss_weighting=True, - ): + ): # pylint: disable=too-many-arguments if in_channels <= 0: raise ValueError(f"in_channels={in_channels} must be a positive integer") if num_classes <= 0: raise ValueError("at least one class must be exist num_classes.") - + aux_mlp = aux_mlp if aux_mlp else dict(hid_channels=0, out_channels=1024) + act_cfg = act_cfg if act_cfg else dict(type="ReLU") + loss = loss if loss else dict(type="CrossEntropyLoss", loss_weight=1.0) + aux_loss = ( + aux_loss if aux_loss else dict(type="BarlowTwinsLoss", off_diag_penality=1.0 / 128.0, loss_weight=1.0) + ) CustomMultiLabelNonLinearClsHead.__init__( self, num_classes, @@ -184,7 +304,16 @@ def __init__( self.aux_mlp = generate_aux_mlp(aux_mlp, in_channels) - def forward_train(self, x, gt_label): + def loss(self, logits, gt_label, features): + """Calculate loss for given logits/gt_label.""" + return SemiMultilabelClsHead.loss(self, logits, gt_label, features) + + def forward(self, x): + """Forward fuction of SemiNonLinearMultilabelClsHead class.""" + return self.simple_test(x) + + def forward_train(self, cls_score, gt_label): + """Forward_train fuction of SemiNonLinearMultilabelClsHead class.""" return self.forward_train_with_last_layers( - x, gt_label, final_cls_layer=self.classifier, final_emb_layer=self.aux_mlp + cls_score, gt_label, final_cls_layer=self.classifier, final_emb_layer=self.aux_mlp ) diff --git a/otx/mpa/modules/models/heads/supcon_cls_head.py b/otx/algorithms/classification/adapters/mmcls/models/heads/supcon_cls_head.py similarity index 86% rename from otx/mpa/modules/models/heads/supcon_cls_head.py rename to otx/algorithms/classification/adapters/mmcls/models/heads/supcon_cls_head.py index 528e200245f..2c9d0126a7f 100644 --- a/otx/mpa/modules/models/heads/supcon_cls_head.py +++ b/otx/algorithms/classification/adapters/mmcls/models/heads/supcon_cls_head.py @@ -1,3 +1,4 @@ +"""Module for defining classification head for supcon.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 @@ -10,8 +11,8 @@ @HEADS.register_module() class SupConClsHead(BaseHead): - """ - Supervised Contrastive Learning head for Classification using SelfSL + """Supervised Contrastive Learning head for Classification using SelfSL. 
+ Args: num_classes (int): The number of classes of dataset used for training in_channels (int): The channels of input data from the backbone @@ -21,7 +22,9 @@ class SupConClsHead(BaseHead): topk (set): evaluation topk score, default is (1, ) """ - def __init__(self, num_classes: int, in_channels: int, aux_mlp, loss, aux_loss, topk=(1,), init_cfg=None, **kwargs): + def __init__( + self, num_classes: int, in_channels: int, aux_mlp, loss, aux_loss, topk=(1,), init_cfg=None + ): # pylint: disable=too-many-arguments if in_channels <= 0: raise ValueError(f"in_channels={in_channels} must be a positive integer") if num_classes <= 0: @@ -56,11 +59,16 @@ def __init__(self, num_classes: int, in_channels: int, aux_mlp, loss, aux_loss, else: self.aux_mlp = nn.Linear(in_features=in_channels, out_features=out_channels) + def forward(self, x): + """Forward fuction of SupConClsHead class.""" + return self.simple_test(x) + def forward_train(self, x, gt_label): - """ - Forward train head using the Supervised Contrastive Loss + """Forward train head using the Supervised Contrastive Loss. + Args: x (Tensor): features from the backbone. + Returns: dict[str, Tensor]: A dictionary of loss components. """ @@ -80,11 +88,8 @@ def forward_train(self, x, gt_label): return losses def simple_test(self, img): - """ - Test without data augmentation. - """ + """Test without data augmentation.""" cls_score = self.fc(img) - if isinstance(cls_score, list): cls_score = sum(cls_score) / float(len(cls_score)) pred = F.softmax(cls_score, dim=1) if cls_score is not None else None diff --git a/otx/algorithms/classification/adapters/mmcls/models/losses/__init__.py b/otx/algorithms/classification/adapters/mmcls/models/losses/__init__.py new file mode 100644 index 00000000000..f43e9e11bfc --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/models/losses/__init__.py @@ -0,0 +1,29 @@ +"""OTX Algorithms - Classification Losses.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
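The SupConClsHead hunks above combine a standard classification loss on the fc output with an auxiliary loss computed on projections from aux_mlp. The sketch below shows that two-branch layout in plain PyTorch; the cosine-similarity term is only a stand-in for the registered auxiliary loss (e.g. a Barlow Twins-style loss), and all names and sizes are illustrative.

```python
import torch
import torch.nn.functional as F
from torch import nn

class TwoBranchHead(nn.Module):
    """Classification branch plus auxiliary projection branch."""

    def __init__(self, in_channels: int, num_classes: int, proj_dim: int = 128):
        super().__init__()
        self.fc = nn.Linear(in_channels, num_classes)    # classification logits
        self.aux_mlp = nn.Linear(in_channels, proj_dim)  # embeddings for the aux loss

    def forward_train(self, feats: torch.Tensor, gt_label: torch.Tensor) -> dict:
        cls_loss = F.cross_entropy(self.fc(feats), gt_label)

        # Stand-in auxiliary term: pull the two views of each sample together.
        emb = F.normalize(self.aux_mlp(feats), dim=1)
        half = emb.shape[0] // 2
        aux_loss = 1.0 - F.cosine_similarity(emb[:half], emb[half:], dim=1).mean()

        return {"loss": cls_loss + aux_loss}

head = TwoBranchHead(in_channels=16, num_classes=4)
feats = torch.randn(8, 16)              # e.g. 4 samples x 2 views, already concatenated
labels = torch.randint(0, 4, (8,))
print(head.forward_train(feats, labels)["loss"].item())
```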
+ +from .asymmetric_angular_loss_with_ignore import AsymmetricAngularLossWithIgnore +from .asymmetric_loss_with_ignore import AsymmetricLossWithIgnore +from .barlowtwins_loss import BarlowTwinsLoss +from .cross_entropy_loss import CrossEntropyLossWithIgnore +from .ib_loss import IBLoss + +__all__ = [ + "AsymmetricAngularLossWithIgnore", + "AsymmetricLossWithIgnore", + "BarlowTwinsLoss", + "CrossEntropyLossWithIgnore", + "IBLoss", +] diff --git a/otx/mpa/modules/models/losses/asymmetric_angular_loss_with_ignore.py b/otx/algorithms/classification/adapters/mmcls/models/losses/asymmetric_angular_loss_with_ignore.py similarity index 88% rename from otx/mpa/modules/models/losses/asymmetric_angular_loss_with_ignore.py rename to otx/algorithms/classification/adapters/mmcls/models/losses/asymmetric_angular_loss_with_ignore.py index c67444c52ea..93d014c90fa 100644 --- a/otx/mpa/modules/models/losses/asymmetric_angular_loss_with_ignore.py +++ b/otx/algorithms/classification/adapters/mmcls/models/losses/asymmetric_angular_loss_with_ignore.py @@ -1,11 +1,12 @@ +"""Module for defining AsymmetricAngularLossWithIgnore.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch -import torch.nn as nn from mmcls.models.builder import LOSSES from mmcls.models.losses.utils import weight_reduce_loss +from torch import nn def asymmetric_angular_loss_with_ignore( @@ -19,8 +20,9 @@ def asymmetric_angular_loss_with_ignore( k=0.8, reduction="mean", avg_factor=None, -): - """asymmetric angular loss +): # pylint: disable=too-many-arguments, too-many-locals + """Asymmetric angular loss. + Args: pred (torch.Tensor): The prediction with shape (N, *). target (torch.Tensor): The ground truth label of the prediction with @@ -37,6 +39,7 @@ def asymmetric_angular_loss_with_ignore( is same shape as pred and label. Defaults to 'mean'. avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. + Returns: torch.Tensor: Loss. """ @@ -54,11 +57,11 @@ def asymmetric_angular_loss_with_ignore( asymmetric_focus = gamma_pos > 0 or gamma_neg > 0 if asymmetric_focus: - pt0 = xs_neg * target - pt1 = xs_pos * anti_target - pt = pt0 + pt1 + pos_target0 = xs_neg * target + pos_target1 = xs_pos * anti_target + pos_target = pos_target0 + pos_target1 one_sided_gamma = gamma_pos * target + gamma_neg * anti_target - one_sided_w = torch.pow(pt, one_sided_gamma) + one_sided_w = torch.pow(pos_target, one_sided_gamma) loss = -k * target * torch.log(xs_pos.clamp(min=eps)) - (1 - k) * anti_target * torch.log(xs_neg.clamp(min=eps)) @@ -81,7 +84,8 @@ def asymmetric_angular_loss_with_ignore( @LOSSES.register_module() class AsymmetricAngularLossWithIgnore(nn.Module): - """Asymmetric angular loss + """Asymmetric angular loss. + Args: gamma_pos (float): positive focusing parameter. Defaults to 0.0. 
@@ -95,6 +99,7 @@ class AsymmetricAngularLossWithIgnore(nn.Module): """ def __init__(self, gamma_pos=0.0, gamma_neg=1.0, k=0.8, clip=0.05, reduction="mean", loss_weight=1.0): + """Init fuction of AsymmetricAngularLossWithIgnore class.""" super().__init__() self.gamma_pos = gamma_pos self.gamma_neg = gamma_neg @@ -104,7 +109,7 @@ def __init__(self, gamma_pos=0.0, gamma_neg=1.0, k=0.8, clip=0.05, reduction="me self.loss_weight = loss_weight def forward(self, pred, target, valid_label_mask=None, weight=None, avg_factor=None, reduction_override=None): - """asymmetric angular loss""" + """Asymmetric angular loss.""" assert reduction_override in (None, "none", "mean", "sum") reduction = reduction_override if reduction_override else self.reduction loss_cls = self.loss_weight * asymmetric_angular_loss_with_ignore( diff --git a/otx/mpa/modules/models/losses/asymmetric_loss_with_ignore.py b/otx/algorithms/classification/adapters/mmcls/models/losses/asymmetric_loss_with_ignore.py similarity index 83% rename from otx/mpa/modules/models/losses/asymmetric_loss_with_ignore.py rename to otx/algorithms/classification/adapters/mmcls/models/losses/asymmetric_loss_with_ignore.py index d7022675271..638b6bf2d03 100644 --- a/otx/mpa/modules/models/losses/asymmetric_loss_with_ignore.py +++ b/otx/algorithms/classification/adapters/mmcls/models/losses/asymmetric_loss_with_ignore.py @@ -1,11 +1,12 @@ +"""Module for defining AsymmetricLossWithIgnore.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # import torch -import torch.nn as nn from mmcls.models.builder import LOSSES from mmcls.models.losses.utils import weight_reduce_loss +from torch import nn def asymmetric_loss_with_ignore( @@ -18,10 +19,9 @@ def asymmetric_loss_with_ignore( clip=0.05, reduction="none", avg_factor=None, -): - """asymmetric loss - Please refer to the `paper `_ for - details. +): # pylint: disable=too-many-arguments + """Asymmetric loss, please refer to the `paper `_ for details. + Args: pred (torch.Tensor): The prediction with shape (N, *). target (torch.Tensor): The ground truth label of the prediction with @@ -37,6 +37,7 @@ def asymmetric_loss_with_ignore( is same shape as pred and label. Defaults to 'mean'. avg_factor (int, optional): Average factor that is used to average the loss. Defaults to None. + Returns: torch.Tensor: Loss. """ @@ -49,11 +50,11 @@ def asymmetric_loss_with_ignore( avg_factor = None # if we are not set this to None the exception will be throwed if clip and clip > 0: - pt = (1 - pred_sigmoid + clip).clamp(max=1) * (1 - target) + pred_sigmoid * target + pos_target = (1 - pred_sigmoid + clip).clamp(max=1) * (1 - target) + pred_sigmoid * target else: - pt = (1 - pred_sigmoid) * (1 - target) + pred_sigmoid * target - asymmetric_weight = (1 - pt).pow(gamma_pos * target + gamma_neg * (1 - target)) - loss = -torch.log(pt.clamp(min=eps)) * asymmetric_weight + pos_target = (1 - pred_sigmoid) * (1 - target) + pred_sigmoid * target + asymmetric_weight = (1 - pos_target).pow(gamma_pos * target + gamma_neg * (1 - target)) + loss = -torch.log(pos_target.clamp(min=eps)) * asymmetric_weight if valid_label_mask is not None: loss = loss * valid_label_mask @@ -69,7 +70,8 @@ def asymmetric_loss_with_ignore( @LOSSES.register_module() class AsymmetricLossWithIgnore(nn.Module): - """asymmetric loss + """Asymmetric loss. + Args: gamma_pos (float): positive focusing parameter. Defaults to 0.0. 
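# Sketch of the asymmetric focusing shared by the two losses above (a condensed
# stand-alone version, not the renamed code itself): each element is down-weighted by
# how "easy" it already is, with a stronger focusing exponent on the negative side and
# the negative probabilities optionally shifted by `clip` before weighting.
import torch

def asymmetric_focusing(pred, target, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, eps=1e-8):
    prob = pred.sigmoid()
    # probability assigned to the correct side: p for positives, (1 - p + clip) for negatives
    pos_target = (1 - prob + clip).clamp(max=1) * (1 - target) + prob * target
    weight = (1 - pos_target).pow(gamma_pos * target + gamma_neg * (1 - target))
    return -torch.log(pos_target.clamp(min=eps)) * weight  # element-wise loss, before reduction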
@@ -82,7 +84,7 @@ class AsymmetricLossWithIgnore(nn.Module): """ def __init__(self, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, reduction="none", loss_weight=1.0): - super(AsymmetricLossWithIgnore, self).__init__() + super().__init__() self.gamma_pos = gamma_pos self.gamma_neg = gamma_neg self.clip = clip @@ -90,7 +92,7 @@ def __init__(self, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, reduction="none", lo self.loss_weight = loss_weight def forward(self, pred, target, valid_label_mask=None, weight=None, avg_factor=None, reduction_override=None): - """asymmetric loss""" + """Forward fuction of asymmetric loss.""" assert reduction_override in (None, "none", "mean", "sum") reduction = reduction_override if reduction_override else self.reduction loss_cls = self.loss_weight * asymmetric_loss_with_ignore( diff --git a/otx/mpa/modules/models/losses/barlowtwins_loss.py b/otx/algorithms/classification/adapters/mmcls/models/losses/barlowtwins_loss.py similarity index 70% rename from otx/mpa/modules/models/losses/barlowtwins_loss.py rename to otx/algorithms/classification/adapters/mmcls/models/losses/barlowtwins_loss.py index 744dc7e5eb7..dfbdeabd139 100644 --- a/otx/mpa/modules/models/losses/barlowtwins_loss.py +++ b/otx/algorithms/classification/adapters/mmcls/models/losses/barlowtwins_loss.py @@ -1,24 +1,22 @@ +"""Module for defining BarlowTwinsLoss for supcon in classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 import torch -import torch.nn as nn from mmcls.models.builder import LOSSES -from torch import Tensor +from torch import Tensor, nn def off_diagonal(x: Tensor): - """ - return a tensor containing all the elements outside the diagonal of x - """ + """Return a tensor containing all the elements outside the diagonal of x.""" assert x.shape[0] == x.shape[1] return x.flatten()[:-1].view(x.shape[0] - 1, x.shape[0] + 1)[:, 1:].flatten() @LOSSES.register_module() class BarlowTwinsLoss(nn.Module): - """ - Barlow Twins Loss: https://arxiv.org/abs/2103.03230. + """Barlow Twins Loss: https://arxiv.org/abs/2103.03230. + Self-Supervised Learning via Redundancy Reduction Code adapted from https://github.com/facebookresearch/barlowtwins. """ @@ -28,13 +26,13 @@ def __init__(self, off_diag_penality, loss_weight=1.0): self.penalty = off_diag_penality self.loss_weight = loss_weight - def forward(self, feats1: Tensor, feats2: Tensor, **kwargs): - """ - Compute Barlow Twins Loss and, if labels are not none, - also the Cross-Entropy loss. + def forward(self, feats1: Tensor, feats2: Tensor): + """Compute Barlow Twins Loss and, if labels are not none, also the Cross-Entropy loss. + Args: - feats1, feats2: vectors of shape [bsz, ...]. Corresponding to - two views of the same samples + feats1 (torch.Tensor): vectors of shape [bsz, ...]. Corresponding to one of two views of the same samples. + feats2 (torch.Tensor): vectors of shape [bsz, ...]. Corresponding to one of two views of the same samples. 
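# The body of this forward is untouched by the patch (only the signature and docstring
# change), so it does not appear in the hunk. For orientation, Barlow Twins as described
# in the paper cited above reduces to roughly the following, with `penalty` playing the
# role of off_diag_penality; this paraphrases the reference implementation and is not the
# code in this file.
import torch

def barlow_twins(feats1, feats2, penalty=1.0 / 128.0):
    z1 = (feats1 - feats1.mean(0)) / (feats1.std(0) + 1e-6)  # standardize over the batch
    z2 = (feats2 - feats2.mean(0)) / (feats2.std(0) + 1e-6)
    n, d = z1.shape
    c = z1.T @ z2 / n                                        # d x d cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()           # push matching dims toward correlation 1
    # same trick as the off_diagonal() helper above: drop the diagonal, penalize the rest
    off_diag = (c.flatten()[:-1].view(d - 1, d + 1)[:, 1:]).pow(2).sum()
    return on_diag + penalty * off_diag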
+ Returns: A floating point number describing the Barlow Twins loss """ diff --git a/otx/mpa/modules/models/losses/cross_entropy_loss.py b/otx/algorithms/classification/adapters/mmcls/models/losses/cross_entropy_loss.py similarity index 83% rename from otx/mpa/modules/models/losses/cross_entropy_loss.py rename to otx/algorithms/classification/adapters/mmcls/models/losses/cross_entropy_loss.py index 0b99480c80f..28d8d2a57fd 100644 --- a/otx/mpa/modules/models/losses/cross_entropy_loss.py +++ b/otx/algorithms/classification/adapters/mmcls/models/losses/cross_entropy_loss.py @@ -1,14 +1,16 @@ +"""Module for defining cross entropy loss for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -import torch.nn as nn import torch.nn.functional as F from mmcls.models.builder import LOSSES from mmcls.models.losses.utils import weight_reduce_loss +from torch import nn def cross_entropy(pred, label, weight=None, reduction="mean", avg_factor=None, class_weight=None, ignore_index=None): + """Calculate cross entropy for given pred, label pairs.""" # element-wise losses if ignore_index is not None: loss = F.cross_entropy(pred, label, reduction="none", weight=class_weight, ignore_index=ignore_index) @@ -25,8 +27,10 @@ def cross_entropy(pred, label, weight=None, reduction="mean", avg_factor=None, c @LOSSES.register_module() class CrossEntropyLossWithIgnore(nn.Module): + """Defining CrossEntropyLossWothIgnore which supports ignored_label masking.""" + def __init__(self, reduction="mean", loss_weight=1.0, ignore_index=None): - super(CrossEntropyLossWithIgnore, self).__init__() + super().__init__() self.reduction = reduction self.loss_weight = loss_weight self.ignore_index = ignore_index @@ -34,6 +38,7 @@ def __init__(self, reduction="mean", loss_weight=1.0, ignore_index=None): self.cls_criterion = cross_entropy def forward(self, cls_score, label, weight=None, avg_factor=None, reduction_override=None, **kwargs): + """Forward function of CrossEntropyLossWithIgnore class.""" assert reduction_override in (None, "none", "mean", "sum") reduction = reduction_override if reduction_override else self.reduction loss_cls = self.loss_weight * self.cls_criterion( diff --git a/otx/mpa/modules/models/losses/ib_loss.py b/otx/algorithms/classification/adapters/mmcls/models/losses/ib_loss.py similarity index 59% rename from otx/mpa/modules/models/losses/ib_loss.py rename to otx/algorithms/classification/adapters/mmcls/models/losses/ib_loss.py index 3c5de67438e..ce627738628 100644 --- a/otx/mpa/modules/models/losses/ib_loss.py +++ b/otx/algorithms/classification/adapters/mmcls/models/losses/ib_loss.py @@ -1,3 +1,4 @@ +"""Module for defining IB Loss which alleviate effect of imbalanced dataset.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -11,16 +12,17 @@ @LOSSES.register_module() class IBLoss(CrossEntropyLoss): - def __init__(self, num_classes, start=5, alpha=1000.0, **kwargs): - """IB Loss - https://arxiv.org/abs/2110.02444 + """IB Loss, Influence-Balanced Loss for Imbalanced Visual Classification, https://arxiv.org/abs/2110.02444.""" + + def __init__(self, num_classes, start=5, alpha=1000.0): + """Init fuction of IBLoss. 
Args: num_classes (int): Number of classes in dataset start (int): Epoch to start finetuning with IB loss alpha (float): Hyper-parameter for an adjustment for IB loss re-weighting """ - super(IBLoss, self).__init__(loss_weight=1.0) + super().__init__(loss_weight=1.0) if alpha < 0: raise ValueError("Alpha for IB loss should be bigger than 0") self.alpha = alpha @@ -32,6 +34,7 @@ def __init__(self, num_classes, start=5, alpha=1000.0, **kwargs): @property def cur_epoch(self): + """Return current epoch.""" return self._cur_epoch @cur_epoch.setter @@ -39,6 +42,7 @@ def cur_epoch(self, epoch): self._cur_epoch = epoch def update_weight(self, cls_num_list): + """Update loss weight per class.""" if len(cls_num_list) == 0: raise ValueError("Cannot compute the IB loss weight with empty cls_num_list.") per_cls_weights = 1.0 / np.array(cls_num_list) @@ -46,14 +50,14 @@ def update_weight(self, cls_num_list): per_cls_weights = torch.FloatTensor(per_cls_weights) self.weight = per_cls_weights - def forward(self, input, target, feature): + def forward(self, x, target, feature): + """Forward fuction of IBLoss.""" if self._cur_epoch < self._start_epoch: - return super().forward(input, target) - else: - grads = torch.sum(torch.abs(F.softmax(input, dim=1) - F.one_hot(target, self.num_classes)), 1) - feature = torch.sum(torch.abs(feature), 1).reshape(-1, 1) - ib = grads * feature.reshape(-1) - ib = self.alpha / (ib + self.epsilon) - ce_loss = F.cross_entropy(input, target, weight=self.weight.to(input.get_device()), reduction="none") - loss = ce_loss * ib - return loss.mean() + return super().forward(x, target) + grads = torch.sum(torch.abs(F.softmax(x, dim=1) - F.one_hot(target, self.num_classes)), 1) + feature = torch.sum(torch.abs(feature), 1).reshape(-1, 1) + scaler = grads * feature.reshape(-1) + scaler = self.alpha / (scaler + self.epsilon) + ce_loss = F.cross_entropy(x, target, weight=self.weight.to(x.get_device()), reduction="none") + loss = ce_loss * scaler + return loss.mean() diff --git a/otx/algorithms/classification/adapters/mmcls/models/necks/__init__.py b/otx/algorithms/classification/adapters/mmcls/models/necks/__init__.py index 13803f3ccff..d2cb363f53f 100644 --- a/otx/algorithms/classification/adapters/mmcls/models/necks/__init__.py +++ b/otx/algorithms/classification/adapters/mmcls/models/necks/__init__.py @@ -14,6 +14,7 @@ # See the License for the specific language governing permissions # and limitations under the License. 
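# A condensed view of what IBLoss.forward (above) computes once cur_epoch reaches the
# start epoch. The per-class cross-entropy weight from update_weight() is omitted for
# brevity, and the epsilon value is an assumption; this is a sketch, not the class itself.
import torch
import torch.nn.functional as F

def ib_loss(logits, target, feature, num_classes, alpha=1000.0, eps=1e-3):
    # per-sample "influence": L1 gap between prediction and one-hot label,
    # times the L1 norm of that sample's feature vector
    grads = torch.sum(torch.abs(F.softmax(logits, dim=1) - F.one_hot(target, num_classes)), dim=1)
    feat_norm = torch.sum(torch.abs(feature), dim=1)
    scaler = alpha / (grads * feat_norm + eps)   # high-influence samples are down-weighted
    ce = F.cross_entropy(logits, target, reduction="none")
    return (ce * scaler).mean()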
+from .mmov_neck import MMOVNeck from .selfsl_mlp import SelfSLMLP -__all__ = ["SelfSLMLP"] +__all__ = ["SelfSLMLP", "MMOVNeck"] diff --git a/otx/mpa/modules/ov/models/mmcls/necks/mmov_neck.py b/otx/algorithms/classification/adapters/mmcls/models/necks/mmov_neck.py similarity index 60% rename from otx/mpa/modules/ov/models/mmcls/necks/mmov_neck.py rename to otx/algorithms/classification/adapters/mmcls/models/necks/mmov_neck.py index f37a6c2e699..48f01df34e1 100644 --- a/otx/mpa/modules/ov/models/mmcls/necks/mmov_neck.py +++ b/otx/algorithms/classification/adapters/mmcls/models/necks/mmov_neck.py @@ -1,3 +1,4 @@ +"""Module for defining MMOVNeck for inference.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,18 +7,21 @@ from mmcls.models.builder import NECKS -from ....graph.parsers.cls.cls_base_parser import cls_base_parser -from ...mmov_model import MMOVModel +from otx.mpa.modules.ov.graph.parsers.cls.cls_base_parser import cls_base_parser +from otx.mpa.modules.ov.models.mmov_model import MMOVModel @NECKS.register_module() class MMOVNeck(MMOVModel): + """Neck class for MMOV inference.""" + def __init__(self, *args, **kwargs): super().__init__(*args, **kwargs) @staticmethod def parser(graph, **kwargs) -> Dict[str, List[str]]: + """Parser function returns base_parser for given graph.""" output = cls_base_parser(graph, "neck") if output is None: - raise ValueError("Parser can not determine input and output of model. " "Please provide them explicitly") + raise ValueError("Parser can not determine input and output of model. Please provide them explicitly") return output diff --git a/otx/algorithms/classification/adapters/mmcls/models/necks/selfsl_mlp.py b/otx/algorithms/classification/adapters/mmcls/models/necks/selfsl_mlp.py index 9c7b61b05bc..b787a911851 100644 --- a/otx/algorithms/classification/adapters/mmcls/models/necks/selfsl_mlp.py +++ b/otx/algorithms/classification/adapters/mmcls/models/necks/selfsl_mlp.py @@ -6,7 +6,7 @@ # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -# pylint: disable=missing-module-docstring, dangerous-default-value +# pylint: disable=missing-module-docstring from typing import Any, Dict, List, Tuple, Union import torch @@ -34,11 +34,11 @@ def __init__( in_channels: int, hid_channels: int, out_channels: int, - norm_cfg: Dict[str, Any] = dict(type="BN1d"), + norm_cfg: Dict[str, Any] = None, use_conv: bool = False, with_avg_pool: bool = True, ): - + norm_cfg = norm_cfg if norm_cfg else dict(type="BN1d") super().__init__() self.with_avg_pool = with_avg_pool diff --git a/otx/algorithms/classification/adapters/mmcls/optimizer/__init__.py b/otx/algorithms/classification/adapters/mmcls/optimizer/__init__.py new file mode 100644 index 00000000000..271368e0d6a --- /dev/null +++ b/otx/algorithms/classification/adapters/mmcls/optimizer/__init__.py @@ -0,0 +1,19 @@ +"""OTX Algorithms - Classification Optimizers.""" + +# Copyright (C) 2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
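# Several signatures touched by this patch (aux_mlp, act_cfg, loss, aux_loss, norm_cfg, ...)
# move their dict defaults out of the argument list and resolve them inside the body.
# A tiny sketch of the pitfall that avoids: a default value is evaluated once at function
# definition time, so a dict default is a single shared object across calls.
def risky(cfg=dict(type="BN1d")):
    return cfg

def safe(cfg=None):
    cfg = cfg if cfg else dict(type="BN1d")  # fresh dict on every call
    return cfg

assert risky() is risky()    # same shared object every call (mutations leak between calls)
assert safe() is not safe()  # independent objects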
+ +from .lars import LARS + +__all__ = ["LARS"] diff --git a/otx/mpa/modules/optimizer/lars.py b/otx/algorithms/classification/adapters/mmcls/optimizer/lars.py similarity index 74% rename from otx/mpa/modules/optimizer/lars.py rename to otx/algorithms/classification/adapters/mmcls/optimizer/lars.py index 89fc83e98ba..cc453e40a51 100644 --- a/otx/mpa/modules/optimizer/lars.py +++ b/otx/algorithms/classification/adapters/mmcls/optimizer/lars.py @@ -1,3 +1,4 @@ +"""Module for defining LARS optimizer for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -45,15 +46,15 @@ def __init__( nesterov=False, mode=None, exclude_bn_from_weight_decay=False, - ): + ): # pylint: disable=too-many-arguments, too-many-locals if lr is not required and lr < 0.0: - raise ValueError("Invalid learning rate: {}".format(lr)) + raise ValueError(f"Invalid learning rate: {lr}") if momentum < 0.0: - raise ValueError("Invalid momentum value: {}".format(momentum)) + raise ValueError(f"Invalid momentum value: {momentum}") if weight_decay < 0.0: - raise ValueError("Invalid weight_decay value: {}".format(weight_decay)) + raise ValueError(f"Invalid weight_decay value: {weight_decay}") if eta < 0.0: - raise ValueError("Invalid LARS coefficient value: {}".format(eta)) + raise ValueError(f"Invalid LARS coefficient value: {eta}") defaults = dict( lr=lr, momentum=momentum, dampening=dampening, weight_decay=weight_decay, nesterov=nesterov, eta=eta @@ -92,10 +93,11 @@ def __init__( self.mode = mode - super(LARS, self).__init__(new_param_groups, defaults) + super().__init__(new_param_groups, defaults) def __setstate__(self, state): - super(LARS, self).__setstate__(state) + """Set state for parameter groups.""" + super().__setstate__(state) for group in self.param_groups: group.setdefault("nesterov", False) @@ -115,10 +117,8 @@ def step(self, closure=None): for group in self.param_groups: weight_decay = group["weight_decay"] momentum = group["momentum"] - dampening = group["dampening"] nesterov = group["nesterov"] eta = group["eta"] - lars_exclude = group.get("lars_exclude", False) for p in group["params"]: if p.grad is None: @@ -128,21 +128,15 @@ def step(self, closure=None): # Add weight decay before computing adaptive LR. # Seems to be pretty important in SIMclr style models. 
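# What follows is layer-wise adaptive rate scaling (LARS, https://arxiv.org/abs/1708.03888):
# each parameter's update direction d_p is rescaled by a per-parameter "trust ratio"
# before momentum is applied,
#     selfsl mode:  local_lr = eta * ||w|| / ||grad||                          (when both norms > 0)
#     otherwise:    local_lr = eta * ||w|| / (||grad|| + weight_decay * ||w||)
# while parameter groups flagged with `lars_exclude` (presumably bias/BN groups when
# exclude_bn_from_weight_decay is enabled) keep local_lr = 1.0, i.e. plain SGD behaviour.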
local_lr = 1.0 - if self.mode == "selfsl": - if weight_decay != 0: - d_p = d_p.add(p, alpha=weight_decay) - if not lars_exclude: - weight_norm = torch.norm(p).item() - grad_norm = torch.norm(d_p).item() - if weight_norm > 0 and grad_norm > 0: - local_lr = eta * weight_norm / grad_norm + if not group.get("lars_exclude", False): + weight_norm = torch.norm(p).item() + grad_norm = torch.norm(d_p).item() + if self.mode == "selfsl" and weight_norm > 0 and grad_norm > 0: + local_lr = eta * weight_norm / grad_norm else: - if not lars_exclude: - weight_norm = torch.norm(p).item() - grad_norm = torch.norm(d_p).item() - local_lr = eta * weight_norm / (grad_norm + weight_decay * weight_norm) - if weight_decay != 0: - d_p = d_p.add(p, alpha=weight_decay) + local_lr = eta * weight_norm / (grad_norm + weight_decay * weight_norm) + if weight_decay != 0: + d_p = d_p.add(p, alpha=weight_decay) d_p = d_p.mul(local_lr) @@ -152,12 +146,7 @@ def step(self, closure=None): buf = param_state["momentum_buffer"] = torch.clone(d_p).detach() else: buf = param_state["momentum_buffer"] - buf.mul_(momentum).add_(d_p, alpha=1 - dampening) - if nesterov: - d_p = d_p.add(buf, alpha=momentum) - else: - d_p = buf - + buf.mul_(momentum).add_(d_p, alpha=1 - group["dampening"]) + d_p = d_p.add(buf, alpha=momentum) if nesterov else buf p.add_(d_p, alpha=-group["lr"]) - return loss diff --git a/otx/algorithms/classification/configs/base/data/semisl/data_pipeline.py b/otx/algorithms/classification/configs/base/data/semisl/data_pipeline.py index e6ea7b73bbb..70ac93bbf8c 100644 --- a/otx/algorithms/classification/configs/base/data/semisl/data_pipeline.py +++ b/otx/algorithms/classification/configs/base/data/semisl/data_pipeline.py @@ -25,7 +25,7 @@ ] __strong_pipeline = [ - dict(type="MPARandAugment", n=8, m=10), + dict(type="OTXRandAugment", num_aug=8, magnitude=10), ] __train_pipeline = [ diff --git a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/supcon/model.py b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/supcon/model.py index 34976cf2668..80a8b25ea50 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/supcon/model.py +++ b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/supcon/model.py @@ -9,6 +9,7 @@ type="SupConClassifier", backbone=dict(mode="large"), head=dict( + _delete_=True, type="SupConClsHead", in_channels=-1, aux_mlp=dict(hid_channels=0, out_channels=1024), diff --git a/otx/algorithms/common/adapters/mmcv/__init__.py b/otx/algorithms/common/adapters/mmcv/__init__.py index 1d4d1c2cb89..16184f89f96 100644 --- a/otx/algorithms/common/adapters/mmcv/__init__.py +++ b/otx/algorithms/common/adapters/mmcv/__init__.py @@ -16,22 +16,37 @@ from .hooks import ( CancelTrainingHook, + CheckpointHookWithValResults, + CustomEvalHook, EarlyStoppingHook, EMAMomentumUpdateHook, EnsureCorrectBestCheckpointHook, + Fp16SAMOptimizerHook, + IBLossHook, + NoBiasDecayHook, OTXLoggerHook, OTXProgressHook, ReduceLROnPlateauLrUpdaterHook, + SAMOptimizerHook, + SemiSLClsHook, StopLossNanTrainingHook, TwoCropTransformHook, ) from .nncf.hooks import CompressionHook from .nncf.runners import AccuracyAwareRunner +from .pipelines.transforms import pil_augment from .runner import EpochRunnerWithCancel, IterBasedRunnerWithCancel __all__ = [ "EpochRunnerWithCancel", "IterBasedRunnerWithCancel", + "CheckpointHookWithValResults", + "CustomEvalHook", + "Fp16SAMOptimizerHook", + "IBLossHook", + "SAMOptimizerHook", + "NoBiasDecayHook", + "SemiSLClsHook", 
"CancelTrainingHook", "OTXLoggerHook", "OTXProgressHook", @@ -41,6 +56,7 @@ "StopLossNanTrainingHook", "EMAMomentumUpdateHook", "CompressionHook", + "pil_augment", "AccuracyAwareRunner", "TwoCropTransformHook", ] diff --git a/otx/algorithms/common/adapters/mmcv/hooks/__init__.py b/otx/algorithms/common/adapters/mmcv/hooks/__init__.py new file mode 100644 index 00000000000..08adf430c70 --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/hooks/__init__.py @@ -0,0 +1,53 @@ +"""Adapters for mmcv support.""" + +# Copyright (C) 2021-2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. + +from .base_hook import ( + CancelTrainingHook, + EarlyStoppingHook, + EMAMomentumUpdateHook, + EnsureCorrectBestCheckpointHook, + OTXLoggerHook, + OTXProgressHook, + ReduceLROnPlateauLrUpdaterHook, + StopLossNanTrainingHook, + TwoCropTransformHook, +) +from .checkpoint_hook import CheckpointHookWithValResults +from .eval_hook import CustomEvalHook +from .fp16_sam_optimizer_hook import Fp16SAMOptimizerHook +from .ib_loss_hook import IBLossHook +from .no_bias_decay_hook import NoBiasDecayHook +from .sam_optimizer_hook import SAMOptimizerHook +from .semisl_cls_hook import SemiSLClsHook + +__all__ = [ + "CheckpointHookWithValResults", + "CustomEvalHook", + "IBLossHook", + "NoBiasDecayHook", + "SAMOptimizerHook", + "Fp16SAMOptimizerHook", + "SemiSLClsHook", + "CancelTrainingHook", + "OTXLoggerHook", + "OTXProgressHook", + "EarlyStoppingHook", + "ReduceLROnPlateauLrUpdaterHook", + "EnsureCorrectBestCheckpointHook", + "StopLossNanTrainingHook", + "EMAMomentumUpdateHook", + "TwoCropTransformHook", +] diff --git a/otx/algorithms/common/adapters/mmcv/hooks.py b/otx/algorithms/common/adapters/mmcv/hooks/base_hook.py similarity index 100% rename from otx/algorithms/common/adapters/mmcv/hooks.py rename to otx/algorithms/common/adapters/mmcv/hooks/base_hook.py diff --git a/otx/mpa/modules/hooks/checkpoint_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py similarity index 94% rename from otx/mpa/modules/hooks/checkpoint_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py index b421b173e07..61f4e51472f 100644 --- a/otx/mpa/modules/hooks/checkpoint_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py @@ -1,3 +1,4 @@ +"""CheckpointHook with validation results for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -11,7 +12,7 @@ @HOOKS.register_module() -class CheckpointHookWithValResults(Hook): +class CheckpointHookWithValResults(Hook): # pylint: disable=too-many-instance-attributes """Save checkpoints periodically. 
Args: @@ -53,10 +54,12 @@ def __init__( self._best_model_weight: Optional[Path] = None def before_run(self, runner): + """Set output directopy if not set.""" if not self.out_dir: self.out_dir = runner.work_dir def after_train_epoch(self, runner): + """Checkpoint stuffs after train epoch.""" if not self.by_epoch or not self.every_n_epochs(runner, self.interval): return @@ -126,6 +129,7 @@ def _save_latest_checkpoint(self, runner): runner.meta["hook_msgs"]["last_ckpt"] = str(self.out_dir / cur_ckpt_filename) def after_train_iter(self, runner): + """Checkpoint stuffs after train iteration.""" if self.by_epoch or not self.every_n_iters(runner, self.interval): return diff --git a/otx/mpa/modules/hooks/eval_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py similarity index 84% rename from otx/mpa/modules/hooks/eval_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py index eed4e743996..f5a8920783a 100644 --- a/otx/mpa/modules/hooks/eval_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py @@ -1,3 +1,4 @@ +"""Module for definig CustomEvalHook for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -5,15 +6,14 @@ from os import path as osp import mmcv -import numpy as np import torch -from mmcv.runner import HOOKS, EvalHook, Hook +from mmcv.runner import HOOKS, EvalHook from torch.utils.data import DataLoader @HOOKS.register_module() class CustomEvalHook(EvalHook): - """Custom Evaluation hook for the MPA + """Custom Evaluation hook for the MPA. Args: dataloader (DataLoader): A PyTorch dataloader. @@ -44,7 +44,7 @@ def __init__( self.save_mode = self.eval_kwargs.get("save_mode", "score") def _do_evaluate(self, runner, ema=False): - """perform evaluation""" + """Perform evaluation.""" results = single_gpu_test(runner.model, self.dataloader) if ema and hasattr(runner, "ema_model") and (runner.epoch >= self.ema_eval_start_epoch): results_ema = single_gpu_test(runner.ema_model.module, self.dataloader) @@ -53,17 +53,20 @@ def _do_evaluate(self, runner, ema=False): self.evaluate(runner, results) def after_train_epoch(self, runner): + """Check whether current epoch is to be evaluated or not.""" if not self.by_epoch or not self.every_n_epochs(runner, self.interval): return self._do_evaluate(runner, ema=True) def after_train_iter(self, runner): + """Check whether current iteration is to be evaluated or not.""" if self.by_epoch or not self.every_n_iters(runner, self.interval): return runner.log_buffer.clear() self._do_evaluate(runner) def evaluate(self, runner, results, results_ema=None): + """Evaluate predictions from model with ground truth.""" eval_res = self.dataloader.dataset.evaluate(results, logger=runner.logger, **self.eval_kwargs) score = eval_res[self.metric] for name, val in eval_res.items(): @@ -84,11 +87,12 @@ def evaluate(self, runner, results, results_ema=None): def single_gpu_test(model, data_loader): + """Single gpu test for inference.""" model.eval() results = [] dataset = data_loader.dataset prog_bar = mmcv.ProgressBar(len(dataset)) - for i, data in enumerate(data_loader): + for data in data_loader: with torch.no_grad(): result = model(return_loss=False, **data) results.append(result) @@ -102,14 +106,16 @@ def single_gpu_test(model, data_loader): @HOOKS.register_module() class DistCustomEvalHook(CustomEvalHook): + """Distributed Custom Evaluation Hook for Multi-GPU environment.""" + def __init__(self, dataloader, interval=1, gpu_collect=False, by_epoch=True, **eval_kwargs): if not 
isinstance(dataloader, DataLoader): raise TypeError("dataloader must be a pytorch DataLoader, but got " f"{type(dataloader)}") self.gpu_collect = gpu_collect - super(DistCustomEvalHook, self).__init__(dataloader, interval, by_epoch=by_epoch, **eval_kwargs) + super().__init__(dataloader, interval, by_epoch=by_epoch, **eval_kwargs) def _do_evaluate(self, runner): - """perform evaluation""" + """Perform evaluation.""" from mmcls.apis import multi_gpu_test results = multi_gpu_test( @@ -120,11 +126,13 @@ def _do_evaluate(self, runner): self.evaluate(runner, results) def after_train_epoch(self, runner): + """Check whether current epoch is to be evaluated or not.""" if not self.by_epoch or not self.every_n_epochs(runner, self.interval): return self._do_evaluate(runner) def after_train_iter(self, runner): + """Check whether current iteration is to be evaluated or not.""" if self.by_epoch or not self.every_n_iters(runner, self.interval): return runner.log_buffer.clear() diff --git a/otx/mpa/modules/hooks/fp16_sam_optimizer_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py similarity index 94% rename from otx/mpa/modules/hooks/fp16_sam_optimizer_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py index 34fc575bf07..4aa0cc50c21 100644 --- a/otx/mpa/modules/hooks/fp16_sam_optimizer_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py @@ -1,3 +1,4 @@ +"""Module for Sharpness-aware Minimization optimizer hook implementation for MMCV Runners with FP16 precision.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,7 +9,7 @@ @HOOKS.register_module() class Fp16SAMOptimizerHook(Fp16OptimizerHook): - """Sharpness-aware Minimization optimizer hook + """Sharpness-aware Minimization optimizer hook. Implemented as OptimizerHook for MMCV Runners - Paper ref: https://arxiv.org/abs/2010.01412 @@ -23,7 +24,8 @@ def __init__(self, rho=0.05, start_epoch=1, **kwargs): raise ValueError("rho should be greater than 0 for SAM optimizer") def after_train_iter(self, runner): - """Perform SAM optimization + """Perform SAM optimization. + 0. compute current loss (DONE IN model.train_step()) 1. compute current gradient 2. move param to the approximate local maximum: w + e(w) = w + rho*norm_grad @@ -77,6 +79,7 @@ def after_train_iter(self, runner): runner.meta.setdefault("fp16", {})["loss_scaler"] = self.loss_scaler.state_dict() runner.log_buffer.update({"sharpness": float(max_loss - curr_loss), "max_loss": float(max_loss)}) + return None def _get_current_batch(self, model): if hasattr(model, "module"): diff --git a/otx/mpa/modules/hooks/ib_loss_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py similarity index 77% rename from otx/mpa/modules/hooks/ib_loss_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py index 7525ca87e9c..ea823de04a3 100644 --- a/otx/mpa/modules/hooks/ib_loss_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py @@ -1,3 +1,4 @@ +"""Module for defining a hook for IB loss using mmcls.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,9 +9,13 @@ @HOOKS.register_module() class IBLossHook(Hook): + """Hook for IB loss. + + It passes the number of data per class and current epoch to IB loss class. + """ + def __init__(self, dst_classes): - """Hook for IB loss. - It passes the number of data per class and current epoch to IB loss class. + """Initialize the IBLossHook. 
Args: dst_classes (list): A list of classes including new_classes to be newly learned @@ -19,10 +24,8 @@ def __init__(self, dst_classes): self.dst_classes = dst_classes def before_train_epoch(self, runner): - # get loss from model + """Get loss from model and pass the number of data per class and current epoch to IB loss.""" model_loss = self._get_model_loss(runner) - - # pass the number of data per class and current epoch to IB loss if runner.epoch == 0: dataset = runner.data_loader.dataset num_data = self._get_num_data(dataset) diff --git a/otx/mpa/modules/hooks/no_bias_decay_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py similarity index 50% rename from otx/mpa/modules/hooks/no_bias_decay_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py index a8bacfe61cf..82dae744d59 100644 --- a/otx/mpa/modules/hooks/no_bias_decay_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py @@ -1,9 +1,10 @@ +"""Module for NoBiasDecayHook used in classification.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -import torch.nn as nn from mmcv.runner import HOOKS, Hook +from torch import nn from otx.mpa.utils.logger import get_logger @@ -12,26 +13,27 @@ @HOOKS.register_module() class NoBiasDecayHook(Hook): - """Hook for No Bias Decay Method (Bag of Tricks for Image Classification) + """Hook for No Bias Decay Method (Bag of Tricks for Image Classification). This hook divides model's weight & bias to 3 parameter groups - [weight with decay, weight without decay, bias without decay] + [weight with decay, weight without decay, bias without decay]. """ def before_train_epoch(self, runner): + """Split weights into decay/no-decay groups.""" weight_decay, bias_no_decay, weight_no_decay = [], [], [] - for m in runner.model.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear): - weight_decay.append(m.weight) - if m.bias is not None: - bias_no_decay.append(m.bias) - elif hasattr(m, "weight") or hasattr(m, "bias"): - if hasattr(m, "weight"): - weight_no_decay.append(m.weight) - if hasattr(m, "bias"): - bias_no_decay.append(m.bias) - elif len(list(m.children())) == 0: - for p in m.parameters(): + for module in runner.model.modules(): + if isinstance(module, (nn.Conv2d, nn.Linear)): + weight_decay.append(module.weight) + if module.bias is not None: + bias_no_decay.append(module.bias) + elif hasattr(module, "weight") or hasattr(module, "bias"): + if hasattr(module, "weight"): + weight_no_decay.append(module.weight) + if hasattr(module, "bias"): + bias_no_decay.append(module.bias) + elif len(list(module.children())) == 0: + for p in module.parameters(): weight_decay.append(p) weight_decay_group = runner.optimizer.param_groups[0].copy() @@ -50,19 +52,20 @@ def before_train_epoch(self, runner): runner.optimizer.param_groups = param_groups def after_train_epoch(self, runner): + """Merge splited groups before saving checkpoint.""" params = [] - for m in runner.model.modules(): - if isinstance(m, nn.Conv2d) or isinstance(m, nn.Linear): - params.append(m.weight) - if m.bias is not None: - params.append(m.bias) - elif hasattr(m, "weight") or hasattr(m, "bias"): - if hasattr(m, "weight"): - params.append(m.weight) - if hasattr(m, "bias"): - params.append(m.bias) - elif len(list(m.children())) == 0: - for p in m.parameters(): + for module in runner.model.modules(): + if isinstance(module, (nn.Conv2d, nn.Linear)): + params.append(module.weight) + if module.bias is not None: + params.append(module.bias) + 
elif hasattr(module, "weight") or hasattr(module, "bias"): + if hasattr(module, "weight"): + params.append(module.weight) + if hasattr(module, "bias"): + params.append(module.bias) + elif len(list(module.children())) == 0: + for p in module.parameters(): params.append(p) param_groups = runner.optimizer.param_groups[0].copy() diff --git a/otx/mpa/modules/hooks/sam_optimizer_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py similarity index 94% rename from otx/mpa/modules/hooks/sam_optimizer_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py index 29357dfda0f..71b2f6799cd 100644 --- a/otx/mpa/modules/hooks/sam_optimizer_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py @@ -1,3 +1,4 @@ +"""This module contains the Sharpness-aware Minimization optimizer hook implementation for MMCV Runners.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,7 +9,7 @@ @HOOKS.register_module() class SAMOptimizerHook(OptimizerHook): - """Sharpness-aware Minimization optimizer hook + """Sharpness-aware Minimization optimizer hook. Implemented as OptimizerHook for MMCV Runners - Paper ref: https://arxiv.org/abs/2010.01412 @@ -23,7 +24,8 @@ def __init__(self, rho=0.05, start_epoch=1, **kwargs): raise ValueError("rho should be greater than 0 for SAM optimizer") def after_train_iter(self, runner): - """Perform SAM optimization + """Perform SAM optimization. + 0. compute current loss (DONE IN model.train_step()) 1. compute current gradient 2. move param to the approximate local maximum: w + e(w) = w + rho*norm_grad @@ -73,6 +75,7 @@ def after_train_iter(self, runner): # Shaprpness-aware param update runner.optimizer.step() # param -= lr * sam_grad runner.log_buffer.update({"sharpness": float(max_loss - curr_loss), "max_loss": float(max_loss)}) + return None def _get_current_batch(self, model): if hasattr(model, "module"): diff --git a/otx/mpa/modules/hooks/semisl_cls_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py similarity index 83% rename from otx/mpa/modules/hooks/semisl_cls_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py index d53006862ea..92ab0a9d581 100644 --- a/otx/mpa/modules/hooks/semisl_cls_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py @@ -1,3 +1,4 @@ +"""Module for defining hook for semi-supervised learning for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -10,7 +11,7 @@ @HOOKS.register_module() class SemiSLClsHook(Hook): - """Hook for SemiSL for classification + """Hook for SemiSL for classification. 
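# The SAMOptimizerHook / Fp16SAMOptimizerHook above follow the procedure sketched below
# (textbook SAM, https://arxiv.org/abs/2010.01412). This is a simplified stand-alone
# version, not the hooks' exact code; `compute_loss` is assumed to re-evaluate the loss
# on the current batch.
import torch

def sam_update(model, optimizer, compute_loss, rho=0.05):
    compute_loss().backward()                                  # 1. gradient at current weights
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                continue
            e = rho * p.grad / (grad_norm + 1e-12)             # 2. climb to w + rho * grad / ||grad||
            p.add_(e)
            eps.append((p, e))
    optimizer.zero_grad()
    compute_loss().backward()                                  # 3. sharpness-aware gradient
    with torch.no_grad():
        for p, e in eps:
            p.sub_(e)                                          # 4. restore the original weights
    optimizer.step()                                           # 5. param -= lr * sam_grad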
This hook includes unlabeled warm-up loss coefficient (default: True): unlabeled_coef = (0.5 - cos(min(pi, 2 * pi * k) / K)) / 2 @@ -25,14 +26,14 @@ class SemiSLClsHook(Hook): If False, Semi-SL uses 1 as unlabeled loss coefficient """ - def __init__(self, total_steps=0, unlabeled_warmup=True, **kwargs): + def __init__(self, total_steps=0, unlabeled_warmup=True): self.unlabeled_warmup = unlabeled_warmup self.total_steps = total_steps self.current_step, self.unlabeled_coef = 0, 0 self.num_pseudo_label = 0 def before_train_iter(self, runner): - # Calculate the unlabeled warm-up loss coefficient before training iteration + """Calculate the unlabeled warm-up loss coefficient before training iteration.""" if self.unlabeled_warmup and self.unlabeled_coef < 1.0: if self.total_steps == 0: self.total_steps = runner.max_iters @@ -44,12 +45,12 @@ def before_train_iter(self, runner): self.current_step += 1 def after_train_iter(self, runner): + """Add the number of pseudo-labels correctly selected from iteration.""" model = self._get_model(runner) - # Add the number of pseudo-labels currently selected from iteration self.num_pseudo_label += int(model.head.num_pseudo_label) def after_epoch(self, runner): - # Add data related to Semi-SL to the log + """Add data related to Semi-SL to the log.""" if self.unlabeled_warmup: runner.log_buffer.output.update({"unlabeled_coef": round(self.unlabeled_coef, 4)}) runner.log_buffer.output.update({"pseudo_label": self.num_pseudo_label}) diff --git a/otx/algorithms/common/adapters/mmcv/nncf/patches.py b/otx/algorithms/common/adapters/mmcv/nncf/patches.py index 0ebd3653ae4..aad2823b795 100644 --- a/otx/algorithms/common/adapters/mmcv/nncf/patches.py +++ b/otx/algorithms/common/adapters/mmcv/nncf/patches.py @@ -32,7 +32,7 @@ def _evaluation_wrapper(self, fn, runner, *args, **kwargs): NNCF_PATCHER.patch("mmcv.runner.EvalHook.evaluate", _evaluation_wrapper) -NNCF_PATCHER.patch("otx.mpa.modules.hooks.eval_hook.CustomEvalHook.evaluate", _evaluation_wrapper) +NNCF_PATCHER.patch("otx.algorithms.common.adapters.mmcv.hooks.eval_hook.CustomEvalHook.evaluate", _evaluation_wrapper) NNCF_PATCHER.patch( "otx.mpa.modules.hooks.recording_forward_hooks.FeatureVectorHook.func", diff --git a/otx/algorithms/common/adapters/mmcv/pipelines/transforms/__init__.py b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/__init__.py new file mode 100644 index 00000000000..2d2a189eb8b --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/__init__.py @@ -0,0 +1,10 @@ +"""Transforms for mmcv.""" +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# + +# flake8: noqa + +from .cython_augments import pil_augment + +__all__ = ["pil_augment"] diff --git a/otx/mpa/modules/datasets/pipelines/transforms/augments.py b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/augments.py similarity index 70% rename from otx/mpa/modules/datasets/pipelines/transforms/augments.py rename to otx/algorithms/common/adapters/mmcv/pipelines/transforms/augments.py index 8e92e9a41a7..80e95183cff 100644 --- a/otx/mpa/modules/datasets/pipelines/transforms/augments.py +++ b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/augments.py @@ -1,3 +1,4 @@ +"""Module for defining Augments and CythonArguments class used for classification task.""" # Copyright (C) 2022 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -5,181 +6,210 @@ import random from typing import Union -import numpy as np +from numpy import ndarray as CvImage from PIL import Image, 
ImageEnhance, ImageOps +from PIL.Image import Image as PILImage from PIL.Image import Resampling -import otx.mpa.modules.datasets.pipelines.transforms.cython_augments.pil_augment as pil_aug +from otx.algorithms.common.adapters.mmcv.pipelines.transforms.cython_augments import ( + pil_augment as pil_aug, +) -PILImage = Image.Image -CvImage = np.ndarray ImgTypes = Union[PILImage, CvImage] -class Augments: +class Augments: # pylint: disable=unused-argument + """Augments class that implements various augmentations via plain PIL.""" + + @staticmethod def _check_args_tf(kwargs): def _interpolation(kwargs): interpolation = kwargs.pop("resample", Resampling.BILINEAR) if isinstance(interpolation, (list, tuple)): return random.choice(interpolation) - else: - return interpolation + return interpolation - kwargs["resample"] = _interpolation(kwargs) + new_kwargs = {**kwargs, "resample": _interpolation(kwargs)} + return new_kwargs @staticmethod def autocontrast(img: PILImage, *args, **kwargs) -> PILImage: + """Apply autocontrast for an given image.""" return ImageOps.autocontrast(img) @staticmethod def equalize(img: PILImage, *args, **kwargs) -> PILImage: + """Apply equalize for an given image.""" return ImageOps.equalize(img) @staticmethod def solarize(img: PILImage, threshold: int, *args, **kwargs) -> PILImage: + """Apply solarize for an given image.""" return ImageOps.solarize(img, threshold) @staticmethod def posterize(img: PILImage, bits_to_keep: int, *args, **kwargs) -> PILImage: + """Apply posterize for an given image.""" if bits_to_keep >= 8: return img - return ImageOps.posterize(img, bits_to_keep) @staticmethod def color(img: PILImage, factor: float, *args, **kwargs) -> PILImage: + """Apply color for an given image.""" return ImageEnhance.Color(img).enhance(factor) @staticmethod def contrast(img: PILImage, factor: float, *args, **kwargs) -> PILImage: + """Apply contrast for an given image.""" return ImageEnhance.Contrast(img).enhance(factor) @staticmethod def brightness(img: PILImage, factor: float, *args, **kwargs) -> PILImage: + """Apply brightness for an given image.""" return ImageEnhance.Brightness(img).enhance(factor) @staticmethod def sharpness(img: PILImage, factor: float, *args, **kwargs) -> PILImage: + """Apply sharpness for an given image.""" return ImageEnhance.Sharpness(img).enhance(factor) @staticmethod def rotate(img: PILImage, degree: float, *args, **kwargs) -> PILImage: - Augments._check_args_tf(kwargs) + """Apply rotate for an given image.""" + kwargs = Augments._check_args_tf(kwargs) return img.rotate(degree, **kwargs) @staticmethod def shear_x(img: PILImage, factor: float, *args, **kwargs) -> PILImage: - Augments._check_args_tf(kwargs) + """Apply shear_x for an given image.""" + kwargs = Augments._check_args_tf(kwargs) return img.transform(img.size, Image.AFFINE, (1, factor, 0, 0, 1, 0), **kwargs) @staticmethod def shear_y(img: PILImage, factor: float, *args, **kwargs) -> PILImage: - Augments._check_args_tf(kwargs) + """Apply shear_y for an given image.""" + kwargs = Augments._check_args_tf(kwargs) return img.transform(img.size, Image.AFFINE, (1, 0, 0, factor, 1, 0), **kwargs) @staticmethod def translate_x_rel(img: PILImage, pct: float, *args, **kwargs) -> PILImage: - Augments._check_args_tf(kwargs) + """Apply translate_x_rel for an given image.""" + kwargs = Augments._check_args_tf(kwargs) pixels = pct * img.size[0] return img.transform(img.size, Image.AFFINE, (1, 0, pixels, 0, 1, 0), **kwargs) @staticmethod def translate_y_rel(img: PILImage, pct: float, *args, **kwargs) -> 
PILImage: - Augments._check_args_tf(kwargs) + """Apply translate_y_rel for an given image.""" + kwargs = Augments._check_args_tf(kwargs) pixels = pct * img.size[1] return img.transform(img.size, Image.AFFINE, (1, 0, 0, 0, 1, pixels), **kwargs) class CythonAugments(Augments): + """CythonAugments class that support faster augmentation with cythonizing.""" + + @staticmethod def autocontrast(img: ImgTypes, *args, **kwargs) -> ImgTypes: + """Apply autocontrast for an given image.""" if Image.isImageType(img): return pil_aug.autocontrast(img) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def equalize(img: ImgTypes, *args, **kwargs) -> ImgTypes: + """Apply equalize for an given image.""" if Image.isImageType(img): return pil_aug.equalize(img) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def solarize(img: ImgTypes, threshold: int, *args, **kwargs) -> ImgTypes: + """Apply solarize for an given image.""" if Image.isImageType(img): return pil_aug.solarize(img, threshold) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def posterize(img: ImgTypes, bits_to_keep: int, *args, **kwargs) -> ImgTypes: + """Apply posterize for an given image.""" if Image.isImageType(img): if bits_to_keep >= 8: return img - return pil_aug.posterize(img, bits_to_keep) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def color(img: ImgTypes, factor: float, *args, **kwargs) -> ImgTypes: + """Apply color for an given image.""" if Image.isImageType(img): return pil_aug.color(img, factor) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def contrast(img: ImgTypes, factor: float, *args, **kwargs) -> ImgTypes: + """Apply contrast for an given image.""" if Image.isImageType(img): return pil_aug.contrast(img, factor) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def brightness(img: ImgTypes, factor: float, *args, **kwargs) -> ImgTypes: + """Apply brightness for an given image.""" if Image.isImageType(img): return pil_aug.brightness(img, factor) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def sharpness(img: ImgTypes, factor: float, *args, **kwargs) -> ImgTypes: + """Apply sharpness for an given image.""" if Image.isImageType(img): return pil_aug.sharpness(img, factor) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def rotate(img: ImgTypes, degree: float, *args, **kwargs) -> ImgTypes: + """Apply rotate for an given image.""" Augments._check_args_tf(kwargs) if Image.isImageType(img): return pil_aug.rotate(img, degree) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def shear_x(img: ImgTypes, factor: float, *args, **kwargs) -> ImgTypes: + """Apply shear_x for an given image.""" Augments._check_args_tf(kwargs) - if Image.isImageType(img): return pil_aug.shear_x(img, factor) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def shear_y(img: ImgTypes, factor: float, *args, **kwargs) -> ImgTypes: + """Apply shear_y for an given image.""" if Image.isImageType(img): return pil_aug.shear_y(img, factor) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def translate_x_rel(img: ImgTypes, pct: float, *args, **kwargs) -> ImgTypes: + """Apply translate_x_rel for an given image.""" if Image.isImageType(img): return pil_aug.translate_x_rel(img, pct) - raise NotImplementedError(f"Unknown type: {type(img)}") + @staticmethod def translate_y_rel(img: ImgTypes, pct: 
float, *args, **kwargs) -> ImgTypes: + """Apply translate_y_rel for an given image.""" if Image.isImageType(img): return pil_aug.translate_y_rel(img, pct) - raise NotImplementedError(f"Unknown type: {type(img)}") - def blend(src: ImgTypes, dst: CvImage, weight: float): + @staticmethod + def blend(src: ImgTypes, dst: CvImage, weight: float = 0.0): + """Apply blend for an given image.""" assert isinstance(dst, CvImage), f"Type of dst should be numpy array, but type(dst)={type(dst)}." - if Image.isImageType(src): return pil_aug.blend(src, dst, weight) - raise NotImplementedError(f"Unknown type: {type(src)}") diff --git a/otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/__init__.py b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/__init__.py new file mode 100644 index 00000000000..d9f52923137 --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/__init__.py @@ -0,0 +1,9 @@ +"""Module to init cython augments.""" +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# ignore mypy attr-defined error by cython modules +# pylint: disable=import-self + +from . import pil_augment # type: ignore[attr-defined] + +__all__ = ["pil_augment"] diff --git a/otx/mpa/modules/datasets/pipelines/transforms/cython_augments/cv_augment.pyx b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/cv_augment.pyx similarity index 100% rename from otx/mpa/modules/datasets/pipelines/transforms/cython_augments/cv_augment.pyx rename to otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/cv_augment.pyx diff --git a/otx/mpa/modules/datasets/pipelines/transforms/cython_augments/pil_augment.pyx b/otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/pil_augment.pyx similarity index 100% rename from otx/mpa/modules/datasets/pipelines/transforms/cython_augments/pil_augment.pyx rename to otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/pil_augment.pyx diff --git a/otx/cli/utils/importing.py b/otx/cli/utils/importing.py index a00deb2cedc..173cdd8bf89 100644 --- a/otx/cli/utils/importing.py +++ b/otx/cli/utils/importing.py @@ -29,7 +29,7 @@ "mmseg": "mmseg.models", "torchvision": "otx.algorithms.common.adapters.mmcv.models", "pytorchcv": "mmdet.models", - "omz.mmcls": "otx.mpa.modules.ov.models.mmcls.backbones.mmov_backbone", + "omz.mmcls": "otx.algorithms.classification.adapters.mmcls.models.backbones.mmov_backbone", } diff --git a/otx/mpa/cls/__init__.py b/otx/mpa/cls/__init__.py index 59c5e4f775a..fe6ed053dd0 100644 --- a/otx/mpa/cls/__init__.py +++ b/otx/mpa/cls/__init__.py @@ -2,28 +2,6 @@ # SPDX-License-Identifier: Apache-2.0 # -import otx.mpa.modules.datasets.pipelines.transforms.augmix -import otx.mpa.modules.datasets.pipelines.transforms.ote_transforms -import otx.mpa.modules.datasets.pipelines.transforms.random_augment -import otx.mpa.modules.datasets.pipelines.transforms.twocrop_transform -import otx.mpa.modules.hooks -import otx.mpa.modules.models.classifiers -import otx.mpa.modules.models.heads.custom_cls_head -import otx.mpa.modules.models.heads.custom_hierarchical_linear_cls_head -import otx.mpa.modules.models.heads.custom_hierarchical_non_linear_cls_head -import otx.mpa.modules.models.heads.custom_multi_label_linear_cls_head -import otx.mpa.modules.models.heads.custom_multi_label_non_linear_cls_head -import otx.mpa.modules.models.heads.non_linear_cls_head -import otx.mpa.modules.models.heads.semisl_cls_head -import 
otx.mpa.modules.models.heads.semisl_multilabel_cls_head -import otx.mpa.modules.models.heads.supcon_cls_head -import otx.mpa.modules.models.losses.asymmetric_angular_loss_with_ignore -import otx.mpa.modules.models.losses.asymmetric_loss_with_ignore -import otx.mpa.modules.models.losses.barlowtwins_loss -import otx.mpa.modules.models.losses.cross_entropy_loss -import otx.mpa.modules.models.losses.ib_loss -import otx.mpa.modules.optimizer.lars - # flake8: noqa from . import ( evaluator, diff --git a/otx/mpa/csrc/mpl/lib_mpl.cpp b/otx/mpa/csrc/mpl/lib_mpl.cpp deleted file mode 100644 index 4ad403bcbcb..00000000000 --- a/otx/mpa/csrc/mpl/lib_mpl.cpp +++ /dev/null @@ -1,70 +0,0 @@ -// Copyright (c) 2018, Sergei Belousov -// SPDX-License-Identifier: BSD-3-Clause -// -// Copyright (C) 2022 Intel Corporation -// SPDX-License-Identifier: Apache-2.0 -// -// The original repo: https://github.com/bes-dev/mpl.pytorch - -#include -#include - -#include - -void compute_weights(int size, - const torch::Tensor losses, - const torch::Tensor indices, - torch::Tensor weights, - float ratio, - float p) { - const float* losses_data = losses.data_ptr(); - const int64_t* indices_data = indices.data_ptr(); - float* weights_data = weights.data_ptr(); - - // find a first nonzero element - int pos = 0; - while(losses_data[pos] < std::numeric_limits::epsilon()) { - ++pos; - } - - // Algorithm #1 - int n = size - pos; - int m = int(ratio * n); - if (n <= 0 || m <= 0) { - return; - } - - float q = p / (p - 1.0); - int c = m - n; - float a[2] = {0.0, 0.0}; - int i = pos; - float eta = 0.0; - for(; i < size && eta < std::numeric_limits::epsilon(); ++i) { - float loss_q = pow(losses_data[i] / losses_data[size - 1], q); - - a[0] = a[1]; - a[1] += loss_q; - - c += 1; - eta = float(c) * loss_q - a[1]; - } - - // compute alpha - float alpha; - if (eta < std::numeric_limits::epsilon()) { - c += 1; - a[0] = a[1]; - } - alpha = pow(a[0] / c, 1.0 / q) * losses_data[size - 1]; - - // compute weights - float tau = 1.0 / (pow(n, 1.0 / q) * pow(m, 1.0 / p)); - for (int k = i; k < size; ++k) { - weights_data[indices_data[k]] = tau; - } - if (alpha > -std::numeric_limits::epsilon()) { - for(int k = pos; k < i; ++k) { - weights_data[indices_data[k]] = tau * pow(losses_data[k] / alpha, q - 1); - } - } -} diff --git a/otx/mpa/csrc/mpl/lib_mpl.h b/otx/mpa/csrc/mpl/lib_mpl.h deleted file mode 100644 index b7e6b5def82..00000000000 --- a/otx/mpa/csrc/mpl/lib_mpl.h +++ /dev/null @@ -1,12 +0,0 @@ -// Copyright (c) 2018, Sergei Belousov -// SPDX-License-Identifier: BSD-3-Clause -// -// Copyright (C) 2022 Intel Corporation -// SPDX-License-Identifier: Apache-2.0 -// -void compute_weights(int size, - const torch::Tensor losses, - const torch::Tensor indices, - torch::Tensor weights, - float ratio, - float p); diff --git a/otx/mpa/csrc/mpl/pybind.cpp b/otx/mpa/csrc/mpl/pybind.cpp deleted file mode 100644 index 39bce9c6960..00000000000 --- a/otx/mpa/csrc/mpl/pybind.cpp +++ /dev/null @@ -1,20 +0,0 @@ -// Copyright (c) 2018, Sergei Belousov -// SPDX-License-Identifier: BSD-3-Clause -// -// Copyright (C) 2022 Intel Corporation -// SPDX-License-Identifier: Apache-2.0 -// -#include - -void compute_weights(int size, - const torch::Tensor losses, - const torch::Tensor indices, - torch::Tensor weights, - float ratio, - float p); - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) { - m.def("compute_weights", &compute_weights, "compute_weights", - py::arg("size"), py::arg("losses"), py::arg("indices"), - py::arg("weights"), py::arg("ratio"), py::arg("p")); -} diff 
--git a/otx/mpa/modules/datasets/pipelines/transforms/cython_augments/__init__.py b/otx/mpa/modules/datasets/pipelines/transforms/cython_augments/__init__.py deleted file mode 100644 index e69de29bb2d..00000000000 diff --git a/otx/mpa/modules/datasets/pipelines/transforms/random_augment.py b/otx/mpa/modules/datasets/pipelines/transforms/random_augment.py deleted file mode 100644 index 3b59eff3351..00000000000 --- a/otx/mpa/modules/datasets/pipelines/transforms/random_augment.py +++ /dev/null @@ -1,171 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# code in this file is adpated from -# https://github.com/ildoonet/pytorch-randaugment/blob/master/RandAugment/augmentations.py -# https://github.com/google-research/fixmatch/blob/master/third_party/auto_augment/augmentations.py -# https://github.com/google-research/fixmatch/blob/master/libml/ctaugment.py -import random - -import numpy as np -import PIL -from mmcls.datasets.builder import PIPELINES - -PARAMETER_MAX = 10 - - -def AutoContrast(img, **kwarg): - return PIL.ImageOps.autocontrast(img), None - - -def Brightness(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - return PIL.ImageEnhance.Brightness(img).enhance(v), v - - -def Color(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - return PIL.ImageEnhance.Color(img).enhance(v), v - - -def Contrast(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - return PIL.ImageEnhance.Contrast(img).enhance(v), v - - -def Cutout(img, v, max_v, bias=0): - if v == 0: - return img - v = _float_parameter(v, max_v) + bias - v = int(v * min(img.size)) - return CutoutAbs(img, v), v - - -def CutoutAbs(img, v, **kwarg): - w, h = img.size - x0 = np.random.uniform(0, w) - y0 = np.random.uniform(0, h) - x0 = int(max(0, x0 - v / 2.0)) - y0 = int(max(0, y0 - v / 2.0)) - x1 = int(min(w, x0 + v)) - y1 = int(min(h, y0 + v)) - xy = (x0, y0, x1, y1) - # gray - color = (127, 127, 127) - img = img.copy() - PIL.ImageDraw.Draw(img).rectangle(xy, color) - return img, xy, color - - -def Equalize(img, **kwarg): - return PIL.ImageOps.equalize(img), None - - -def Identity(img, **kwarg): - return img, None - - -def Posterize(img, v, max_v, bias=0): - v = _int_parameter(v, max_v) + bias - return PIL.ImageOps.posterize(img, v), v - - -def Rotate(img, v, max_v, bias=0): - v = _int_parameter(v, max_v) + bias - if random.random() < 0.5: - v = -v - return img.rotate(v), v - - -def Sharpness(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - return PIL.ImageEnhance.Sharpness(img).enhance(v), v - - -def ShearX(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - if random.random() < 0.5: - v = -v - return img.transform(img.size, PIL.Image.AFFINE, (1, v, 0, 0, 1, 0)), v - - -def ShearY(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - if random.random() < 0.5: - v = -v - return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, v, 1, 0)), v - - -def Solarize(img, v, max_v, bias=0): - v = _int_parameter(v, max_v) + bias - return PIL.ImageOps.solarize(img, 256 - v), v - - -def TranslateX(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - if random.random() < 0.5: - v = -v - v = int(v * img.size[0]) - return img.transform(img.size, PIL.Image.AFFINE, (1, 0, v, 0, 1, 0)), v - - -def TranslateY(img, v, max_v, bias=0): - v = _float_parameter(v, max_v) + bias - if random.random() < 0.5: - v = -v - v = int(v * img.size[1]) - return img.transform(img.size, PIL.Image.AFFINE, (1, 0, 0, 0, 1, v)), v - - 
-def _float_parameter(v, max_v): - return float(v) * max_v / PARAMETER_MAX - - -def _int_parameter(v, max_v): - return int(v * max_v / PARAMETER_MAX) - - -rand_augment_pool = [ - (AutoContrast, None, None), - (Brightness, 0.9, 0.05), - (Color, 0.9, 0.05), - (Contrast, 0.9, 0.05), - (Equalize, None, None), - (Identity, None, None), - (Posterize, 4, 4), - (Rotate, 30, 0), - (Sharpness, 0.9, 0.05), - (ShearX, 0.3, 0), - (ShearY, 0.3, 0), - (Solarize, 256, 0), - (TranslateX, 0.3, 0), - (TranslateY, 0.3, 0), -] - -# TODO: [Jihwan]: Can be removed by mmcls.datasets.pipeline.auto_augment Line 95 RandAugment class -@PIPELINES.register_module() -class MPARandAugment(object): - def __init__(self, n, m, cutout=16): - assert n >= 1 - assert 1 <= m <= 10 - self.n = n - self.m = m - self.cutout = cutout - self.augment_pool = rand_augment_pool - - def __call__(self, results): - for key in results.get("img_fields", ["img"]): - img = results[key] - if not PIL.Image.isImageType(img): - img = PIL.Image.fromarray(results[key]) - ops = random.choices(self.augment_pool, k=self.n) - for op, max_v, bias in ops: - v = np.random.randint(1, self.m) - if random.random() < 0.5: - img, v = op(img, v=v, max_v=max_v, bias=bias) - results["rand_mc_{}".format(op.__name__)] = v - img, xy, color = CutoutAbs(img, self.cutout) - results["CutoutAbs"] = (xy, self.cutout, color) - results[key] = np.array(img) - return results diff --git a/otx/mpa/modules/hooks/__init__.py b/otx/mpa/modules/hooks/__init__.py index 5bd14347002..8c751bbbe2b 100644 --- a/otx/mpa/modules/hooks/__init__.py +++ b/otx/mpa/modules/hooks/__init__.py @@ -5,19 +5,13 @@ # flake8: noqa from . import ( adaptive_training_hooks, - checkpoint_hook, composed_dataloaders_hook, early_stopping_hook, - fp16_sam_optimizer_hook, - ib_loss_hook, logger_replace_hook, model_ema_hook, model_ema_v2_hook, - no_bias_decay_hook, recording_forward_hooks, - sam_optimizer_hook, save_initial_weight_hook, - semisl_cls_hook, task_adapt_hook, unbiased_teacher_hook, workflow_hooks, diff --git a/otx/mpa/modules/models/classifiers/__init__.py b/otx/mpa/modules/models/classifiers/__init__.py deleted file mode 100644 index d1c98448092..00000000000 --- a/otx/mpa/modules/models/classifiers/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from . 
import ( - sam_classifier, - semisl_classifier, - semisl_multilabel_classifier, - supcon_classifier, -) diff --git a/otx/mpa/modules/models/heads/__init__.py b/otx/mpa/modules/models/heads/__init__.py deleted file mode 100644 index 4e1701262e2..00000000000 --- a/otx/mpa/modules/models/heads/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa diff --git a/otx/mpa/modules/models/heads/utils.py b/otx/mpa/modules/models/heads/utils.py deleted file mode 100644 index 9390fb85c31..00000000000 --- a/otx/mpa/modules/models/heads/utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (C) 2023 Intel Corporation -# SPDX-License-Identifier: MIT -# - -from torch import nn - - -def generate_aux_mlp(aux_mlp_cfg: dict, in_channels: int): - out_channels = aux_mlp_cfg["out_channels"] - if out_channels <= 0: - raise ValueError(f"out_channels={out_channels} must be a positive integer") - if "hid_channels" in aux_mlp_cfg and aux_mlp_cfg["hid_channels"] > 0: - hid_channels = aux_mlp_cfg["hid_channels"] - mlp = nn.Sequential( - nn.Linear(in_features=in_channels, out_features=hid_channels), - nn.ReLU(inplace=True), - nn.Linear(in_features=hid_channels, out_features=out_channels), - ) - else: - mlp = nn.Linear(in_features=in_channels, out_features=out_channels) - - return mlp - - -class EMAMeter: - def __init__(self, alpha=0.9): - self.alpha = alpha - self.reset() - - def reset(self): - self.val = 0 - - def update(self, val): - self.val = self.alpha * self.val + (1 - self.alpha) * val - - -class LossBalancer: - def __init__(self, num_losses, weights=None, ema_weight=0.7) -> None: - self.EPS = 1e-9 - self.avg_estimators = [EMAMeter(ema_weight) for _ in range(num_losses)] - - if weights is not None: - assert len(weights) == num_losses - self.final_weights = weights - else: - self.final_weights = [1.0] * num_losses - - def balance_losses(self, losses): - total_loss = 0.0 - for i, l in enumerate(losses): - self.avg_estimators[i].update(float(l)) - total_loss += ( - self.final_weights[i] * l / (self.avg_estimators[i].val + self.EPS) * self.avg_estimators[0].val - ) - - return total_loss diff --git a/otx/mpa/modules/optimizer/__init__.py b/otx/mpa/modules/optimizer/__init__.py deleted file mode 100644 index 4e1701262e2..00000000000 --- a/otx/mpa/modules/optimizer/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa diff --git a/otx/mpa/modules/ov/models/mmcls/__init__.py b/otx/mpa/modules/ov/models/mmcls/__init__.py deleted file mode 100644 index d054cbc3fa3..00000000000 --- a/otx/mpa/modules/ov/models/mmcls/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from . 
import backbones, heads, necks diff --git a/otx/mpa/modules/ov/models/mmcls/backbones/__init__.py b/otx/mpa/modules/ov/models/mmcls/backbones/__init__.py deleted file mode 100644 index 1ad81562177..00000000000 --- a/otx/mpa/modules/ov/models/mmcls/backbones/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from .mmov_backbone import MMOVBackbone diff --git a/otx/mpa/modules/ov/models/mmcls/backbones/mmov_backbone.py b/otx/mpa/modules/ov/models/mmcls/backbones/mmov_backbone.py deleted file mode 100644 index 5c901229f37..00000000000 --- a/otx/mpa/modules/ov/models/mmcls/backbones/mmov_backbone.py +++ /dev/null @@ -1,27 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from typing import Dict, List - -from mmcls.models.builder import BACKBONES - -from ....graph.parsers.cls.cls_base_parser import cls_base_parser -from ...mmov_model import MMOVModel - - -@BACKBONES.register_module() -class MMOVBackbone(MMOVModel): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - - @staticmethod - def parser(graph, **kwargs) -> Dict[str, List[str]]: - output = cls_base_parser(graph, "backbone") - if output is None: - raise ValueError("Parser can not determine input and output of model. " "Please provide them explicitly") - return output - - def init_weights(self, pretrained=None): - # TODO - pass diff --git a/otx/mpa/modules/ov/models/mmcls/heads/__init__.py b/otx/mpa/modules/ov/models/mmcls/heads/__init__.py deleted file mode 100644 index c7bb496ede5..00000000000 --- a/otx/mpa/modules/ov/models/mmcls/heads/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from .cls_head import ClsHead -from .conv_head import ConvClsHead - -# flake8: noqa -from .mmov_cls_head import MMOVClsHead diff --git a/otx/mpa/modules/ov/models/mmcls/necks/__init__.py b/otx/mpa/modules/ov/models/mmcls/necks/__init__.py deleted file mode 100644 index 300cf80ef3f..00000000000 --- a/otx/mpa/modules/ov/models/mmcls/necks/__init__.py +++ /dev/null @@ -1,6 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from .mmov_neck import MMOVNeck diff --git a/pyproject.toml b/pyproject.toml index f4ed51f28fc..560b922add7 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -110,6 +110,9 @@ good-names = [ "y2", "x", "y", + "xy", + "x0", + "y0", "r", "id", "type", @@ -122,6 +125,7 @@ good-names = [ "t", "w", "h", + "fc", ] [tool.pylint.imports] diff --git a/setup.py b/setup.py index 6660edad616..bbdb65e1987 100644 --- a/setup.py +++ b/setup.py @@ -104,8 +104,8 @@ def _cython_modules(): package_root = os.path.dirname(__file__) cython_files = [ - "otx/mpa/modules/datasets/pipelines/transforms/cython_augments/pil_augment.pyx", - "otx/mpa/modules/datasets/pipelines/transforms/cython_augments/cv_augment.pyx", + "otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/pil_augment.pyx", + "otx/algorithms/common/adapters/mmcv/pipelines/transforms/cython_augments/cv_augment.pyx" ] ext_modules = [ diff --git a/tests/unit/algorithms/classification/adapters/mmcls/data/test_datasets.py b/tests/unit/algorithms/classification/adapters/mmcls/data/test_datasets.py index 3fb7a4d2227..13e28c4e3e6 100644 --- a/tests/unit/algorithms/classification/adapters/mmcls/data/test_datasets.py +++ b/tests/unit/algorithms/classification/adapters/mmcls/data/test_datasets.py @@ 
-5,12 +5,12 @@ import numpy as np import pytest -from otx.algorithms.classification.adapters.mmcls.data import ( +from otx.algorithms.classification.adapters.mmcls.datasets import ( OTXClsDataset, OTXHierarchicalClsDataset, OTXMultilabelClsDataset, + SelfSLDataset, ) -from otx.algorithms.classification.adapters.mmcls.data.datasets import SelfSLDataset from otx.algorithms.classification.utils import get_multihead_class_info from otx.api.entities.annotation import ( Annotation, diff --git a/tests/unit/algorithms/classification/adapters/mmcls/data/test_pipelines.py b/tests/unit/algorithms/classification/adapters/mmcls/data/test_pipelines.py index be2c4e8eace..caaaab22845 100644 --- a/tests/unit/algorithms/classification/adapters/mmcls/data/test_pipelines.py +++ b/tests/unit/algorithms/classification/adapters/mmcls/data/test_pipelines.py @@ -2,7 +2,7 @@ import pytest from PIL import Image -from otx.algorithms.classification.adapters.mmcls.data.pipelines import ( +from otx.algorithms.classification.adapters.mmcls.datasets.pipelines.otx_pipelines import ( GaussianBlur, LoadImageFromOTXDataset, OTXColorJitter, @@ -57,7 +57,10 @@ def test_load_image_from_otx_dataset_call(to_float32): @e2e_pytest_unit def test_random_applied_transforms(mocker, inputs_np): """Test RandomAppliedTrans.""" - mocker.patch("otx.algorithms.classification.adapters.mmcls.data.pipelines.build_from_cfg", return_value=lambda x: x) + mocker.patch( + "otx.algorithms.classification.adapters.mmcls.datasets.pipelines.otx_pipelines.build_from_cfg", + return_value=lambda x: x, + ) random_applied_transforms = RandomAppliedTrans(transforms=[dict()]) @@ -106,7 +109,10 @@ def test_pil_image_to_nd_array(inputs_PIL) -> None: @e2e_pytest_unit def test_post_aug(mocker, inputs_np): """Test PostAug.""" - mocker.patch("otx.algorithms.classification.adapters.mmcls.data.pipelines.Compose", return_value=lambda x: x) + mocker.patch( + "otx.algorithms.classification.adapters.mmcls.datasets.pipelines.otx_pipelines.Compose", + return_value=lambda x: x, + ) post_aug = PostAug(keys=dict(orig=lambda x: x)) diff --git a/tests/unit/algorithms/classification/adapters/mmcls/test_mmcls_data_params_validation.py b/tests/unit/algorithms/classification/adapters/mmcls/test_mmcls_data_params_validation.py index 2e8b9a0bc00..b7d0dd6128d 100644 --- a/tests/unit/algorithms/classification/adapters/mmcls/test_mmcls_data_params_validation.py +++ b/tests/unit/algorithms/classification/adapters/mmcls/test_mmcls_data_params_validation.py @@ -4,7 +4,7 @@ import numpy as np import pytest -from otx.algorithms.classification.adapters.mmcls.data import OTXClsDataset +from otx.algorithms.classification.adapters.mmcls.datasets import OTXClsDataset from otx.api.entities.annotation import ( Annotation, AnnotationSceneEntity, diff --git a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augments.py b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augments.py index 4611962e10f..26f213dba45 100644 --- a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augments.py +++ b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augments.py @@ -10,7 +10,7 @@ import pytest from PIL import Image -from otx.mpa.modules.datasets.pipelines.transforms.augments import ( +from otx.algorithms.common.adapters.mmcv.pipelines.transforms.augments import ( Augments, CythonAugments, ) diff --git a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augmix.py b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augmix.py index de9695e07d0..fd078c54167 100644 --- 
a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augmix.py +++ b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_augmix.py @@ -10,11 +10,13 @@ import pytest from PIL import Image -from otx.mpa.modules.datasets.pipelines.transforms.augments import CythonAugments -from otx.mpa.modules.datasets.pipelines.transforms.augmix import ( +from otx.algorithms.classification.adapters.mmcls.datasets.pipelines.transforms.augmix import ( AugMixAugment, OpsFabric, ) +from otx.algorithms.common.adapters.mmcv.pipelines.transforms.augments import ( + CythonAugments, +) @pytest.fixture @@ -32,10 +34,10 @@ def test_init(self, ops_fabric: OpsFabric) -> None: "fillcolor": 128, "resample": (Image.BILINEAR, Image.BICUBIC), } - assert ops_fabric.magnitude == 5 - assert ops_fabric.magnitude_std == float("inf") - assert ops_fabric.level_fn == ops_fabric._rotate_level_to_arg - assert ops_fabric.aug_fn == CythonAugments.rotate + assert ops_fabric.aug_factory.magnitude == 5 + assert ops_fabric.aug_factory.magnitude_std == float("inf") + assert ops_fabric.aug_factory.level_fn == ops_fabric._rotate_level_to_arg + assert ops_fabric.aug_factory.aug_fn == CythonAugments.rotate def test_randomly_negate(self) -> None: """Test randomly_negate function.""" diff --git a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_ote_transforms.py b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_ote_transforms.py index 9900ed8a520..ae5d771dbcb 100644 --- a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_ote_transforms.py +++ b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_ote_transforms.py @@ -11,7 +11,7 @@ from PIL import Image from torchvision.transforms import functional as F -from otx.mpa.modules.datasets.pipelines.transforms.ote_transforms import ( +from otx.algorithms.classification.adapters.mmcls.datasets.pipelines.transforms.otx_transforms import ( PILToTensor, RandomRotate, TensorNormalize, diff --git a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_random_augment.py b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_random_augment.py index 7070248caf0..320bdbb4a3d 100644 --- a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_random_augment.py +++ b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_random_augment.py @@ -8,9 +8,9 @@ import pytest from PIL import Image -from otx.mpa.modules.datasets.pipelines.transforms.random_augment import ( - CutoutAbs, - MPARandAugment, +from otx.algorithms.classification.adapters.mmcls.datasets.pipelines.transforms.random_augment import ( + OTXRandAugment, + cutout_abs, rand_augment_pool, ) @@ -27,15 +27,15 @@ def sample_pil_image() -> Image: def test_all_transforms_return_valid_image(sample_pil_image: Image.Image) -> None: """Test all transforms return valid image.""" - for transform, v, max_v in rand_augment_pool: - img, *extra = transform(sample_pil_image, v=v, max_v=max_v) + for transform, value, max_value in rand_augment_pool: + img, *extra = transform(sample_pil_image, value=value, max_value=max_value) assert isinstance(img, Image.Image) assert img.size == sample_pil_image.size def test_cutoutabs_transform(sample_pil_image: Image.Image) -> None: - """Test CutoutAbs transform.""" - img, (x0, y0, x1, y1), color = CutoutAbs(sample_pil_image, 2) + """Test cutout_abs transform.""" + img, (x0, y0, x1, y1), color = cutout_abs(sample_pil_image, 2) assert isinstance(img, Image.Image) assert img.size == sample_pil_image.size assert x0 >= 0 and y0 >= 0 @@ -43,10 +43,10 @@ def 
test_cutoutabs_transform(sample_pil_image: Image.Image) -> None: assert color == (127, 127, 127) -class TestMPARandAugment: +class TestOTXRandAugment: def test_with_default_arguments(self, sample_np_image: np.ndarray) -> None: """Test case with default arguments.""" - transform = MPARandAugment(n=2, m=5, cutout=16) + transform = OTXRandAugment(num_aug=2, magnitude=5, cutout_value=16) data = {"img": sample_np_image} results = transform(data) @@ -56,7 +56,7 @@ def test_with_default_arguments(self, sample_np_image: np.ndarray) -> None: def test_with_img_fields_argument(self, sample_np_image: np.ndarray) -> None: """Test case with img_fields argument.""" - transform = MPARandAugment(n=2, m=5, cutout=16) + transform = OTXRandAugment(num_aug=2, magnitude=5, cutout_value=16) data = { "img1": sample_np_image, "img2": sample_np_image, @@ -69,7 +69,7 @@ def test_with_img_fields_argument(self, sample_np_image: np.ndarray) -> None: def test_with_pil_image_input(self, sample_pil_image: Image.Image) -> None: """Test case with PIL.Image input.""" - transform = MPARandAugment(n=2, m=5, cutout=16) + transform = OTXRandAugment(num_aug=2, magnitude=5, cutout_value=16) data = {"img": sample_pil_image} results = transform(data) diff --git a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_twocrop_transform.py b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_twocrop_transform.py index 9e69b779faf..8118b67d0c2 100644 --- a/tests/unit/mpa/modules/datasets/pipelines/transforms/test_twocrop_transform.py +++ b/tests/unit/mpa/modules/datasets/pipelines/transforms/test_twocrop_transform.py @@ -9,7 +9,7 @@ from mmcls.datasets.pipelines import Compose from mmcv.utils import build_from_cfg -from otx.mpa.modules.datasets.pipelines.transforms.twocrop_transform import ( +from otx.algorithms.classification.adapters.mmcls.datasets.pipelines.transforms.twocrop_transform import ( TwoCropTransform, ) diff --git a/tests/unit/mpa/modules/heads/test_custom_cls_head.py b/tests/unit/mpa/modules/heads/test_custom_cls_head.py index a3ee1db02a3..377966ad505 100644 --- a/tests/unit/mpa/modules/heads/test_custom_cls_head.py +++ b/tests/unit/mpa/modules/heads/test_custom_cls_head.py @@ -5,7 +5,7 @@ import pytest import torch -from otx.mpa.modules.models.heads.custom_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_cls_head import ( CustomLinearClsHead, CustomNonLinearClsHead, ) diff --git a/tests/unit/mpa/modules/heads/test_custom_hierarchical_cls_head.py b/tests/unit/mpa/modules/heads/test_custom_hierarchical_cls_head.py index 43c9d73ea2a..11f6e100996 100644 --- a/tests/unit/mpa/modules/heads/test_custom_hierarchical_cls_head.py +++ b/tests/unit/mpa/modules/heads/test_custom_hierarchical_cls_head.py @@ -5,13 +5,13 @@ import pytest import torch -from otx.mpa.modules.models.heads.custom_hierarchical_linear_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_hierarchical_linear_cls_head import ( CustomHierarchicalLinearClsHead, ) -from otx.mpa.modules.models.heads.custom_hierarchical_non_linear_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_hierarchical_non_linear_cls_head import ( CustomHierarchicalNonLinearClsHead, ) -from otx.mpa.modules.models.losses.asymmetric_loss_with_ignore import ( +from otx.algorithms.classification.adapters.mmcls.models.losses.asymmetric_loss_with_ignore import ( AsymmetricLossWithIgnore, ) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git 
a/tests/unit/mpa/modules/heads/test_custom_multilabel_cls_head.py b/tests/unit/mpa/modules/heads/test_custom_multilabel_cls_head.py index 628051b5c06..c67afc16840 100644 --- a/tests/unit/mpa/modules/heads/test_custom_multilabel_cls_head.py +++ b/tests/unit/mpa/modules/heads/test_custom_multilabel_cls_head.py @@ -5,13 +5,13 @@ import pytest import torch -from otx.mpa.modules.models.heads.custom_multi_label_linear_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_multi_label_linear_cls_head import ( CustomMultiLabelLinearClsHead, ) -from otx.mpa.modules.models.heads.custom_multi_label_non_linear_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.custom_multi_label_non_linear_cls_head import ( CustomMultiLabelNonLinearClsHead, ) -from otx.mpa.modules.models.losses.asymmetric_loss_with_ignore import ( +from otx.algorithms.classification.adapters.mmcls.models.losses.asymmetric_loss_with_ignore import ( AsymmetricLossWithIgnore, ) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/heads/test_multilabel_semisl.py b/tests/unit/mpa/modules/heads/test_multilabel_semisl.py index 45367bb426b..eef4be46304 100644 --- a/tests/unit/mpa/modules/heads/test_multilabel_semisl.py +++ b/tests/unit/mpa/modules/heads/test_multilabel_semisl.py @@ -5,14 +5,16 @@ import pytest import torch -from otx.mpa.modules.models.heads.semisl_multilabel_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.semisl_multilabel_cls_head import ( SemiLinearMultilabelClsHead, SemiNonLinearMultilabelClsHead, ) -from otx.mpa.modules.models.losses.asymmetric_loss_with_ignore import ( +from otx.algorithms.classification.adapters.mmcls.models.losses.asymmetric_loss_with_ignore import ( AsymmetricLossWithIgnore, ) -from otx.mpa.modules.models.losses.barlowtwins_loss import BarlowTwinsLoss +from otx.algorithms.classification.adapters.mmcls.models.losses.barlowtwins_loss import ( + BarlowTwinsLoss, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/heads/test_semisl_cls_head.py b/tests/unit/mpa/modules/heads/test_semisl_cls_head.py index 180bb5899ee..4f0ddbd0a04 100644 --- a/tests/unit/mpa/modules/heads/test_semisl_cls_head.py +++ b/tests/unit/mpa/modules/heads/test_semisl_cls_head.py @@ -2,7 +2,7 @@ import torch from mmcls.models.builder import build_head -from otx.mpa.modules.models.heads.semisl_cls_head import ( +from otx.algorithms.classification.adapters.mmcls.models.heads.semisl_cls_head import ( SemiLinearClsHead, SemiNonLinearClsHead, ) diff --git a/tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py b/tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py index 55e7e519a85..eac1710095d 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py +++ b/tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py @@ -5,7 +5,9 @@ from mmcv.utils import Config -from otx.mpa.modules.hooks.checkpoint_hook import CheckpointHookWithValResults +from otx.algorithms.common.adapters.mmcv.hooks.checkpoint_hook import ( + CheckpointHookWithValResults, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit @@ -52,7 +54,7 @@ def test_after_train_epoch(self, mocker) -> None: """Test after_train_epoch function.""" mocker.patch.object(CheckpointHookWithValResults, "every_n_epochs", return_value=True) - mocker.patch("otx.mpa.modules.hooks.checkpoint_hook.allreduce_params", return_value=True) + 
mocker.patch("otx.algorithms.common.adapters.mmcv.hooks.checkpoint_hook.allreduce_params", return_value=True) hook = CheckpointHookWithValResults(sync_buffer=True, out_dir="./tmp_dir/") runner = MockRunner() hook.after_train_epoch(runner) diff --git a/tests/unit/mpa/modules/hooks/test_mpa_eval_hook.py b/tests/unit/mpa/modules/hooks/test_mpa_eval_hook.py index 6d1626382e0..960705fd980 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_eval_hook.py +++ b/tests/unit/mpa/modules/hooks/test_mpa_eval_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.eval_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.eval_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,7 +8,7 @@ from mmcv.runner import BaseRunner from torch.utils.data import DataLoader -from otx.mpa.modules.hooks.eval_hook import ( +from otx.algorithms.common.adapters.mmcv.hooks.eval_hook import ( CustomEvalHook, DistCustomEvalHook, single_gpu_test, @@ -100,7 +100,7 @@ def test_do_evaluate(self, mocker) -> None: hook = CustomEvalHook(metric="accuracy", dataloader=MockDataloader()) runner = MockRunner() - mocker.patch("otx.mpa.modules.hooks.eval_hook.single_gpu_test", return_value=[]) + mocker.patch("otx.algorithms.common.adapters.mmcv.hooks.eval_hook.single_gpu_test", return_value=[]) mocker.patch.object(CustomEvalHook, "evaluate", return_value=True) hook._do_evaluate(runner, ema=False) hook.ema_eval_start_epoch = 3 diff --git a/tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py b/tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py index 3464c8dd385..91ec24017d4 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py +++ b/tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py @@ -3,7 +3,9 @@ # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.fp16_sam_optimizer_hook import Fp16SAMOptimizerHook +from otx.algorithms.common.adapters.mmcv.hooks.fp16_sam_optimizer_hook import ( + Fp16SAMOptimizerHook, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py b/tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py index 6c788ffa82f..63b95e2e780 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py +++ b/tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py @@ -3,7 +3,7 @@ # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.ib_loss_hook import IBLossHook +from otx.algorithms.common.adapters.mmcv.hooks.ib_loss_hook import IBLossHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_no_bias_decay_hook.py b/tests/unit/mpa/modules/hooks/test_mpa_no_bias_decay_hook.py index 5e28958cb0f..11e95fdfba2 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_no_bias_decay_hook.py +++ b/tests/unit/mpa/modules/hooks/test_mpa_no_bias_decay_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.no_bias_decay_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.no_bias_decay_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -6,7 +6,7 @@ import torch from mmcv.utils import Config -from otx.mpa.modules.hooks.no_bias_decay_hook import NoBiasDecayHook +from otx.algorithms.common.adapters.mmcv.hooks.no_bias_decay_hook import NoBiasDecayHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py 
b/tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py index 294f376bfed..2abfd2fc28b 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py +++ b/tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py @@ -3,7 +3,7 @@ # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.semisl_cls_hook import SemiSLClsHook +from otx.algorithms.common.adapters.mmcv.hooks.semisl_cls_hook import SemiSLClsHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpafp16_sam_optimizer_hook.py b/tests/unit/mpa/modules/hooks/test_mpafp16_sam_optimizer_hook.py deleted file mode 100644 index 3464c8dd385..00000000000 --- a/tests/unit/mpa/modules/hooks/test_mpafp16_sam_optimizer_hook.py +++ /dev/null @@ -1,18 +0,0 @@ -"""Unit test for otx.mpa.modules.hooks.fp16_sam_optimizer_hook.""" -# Copyright (C) 2023 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from otx.mpa.modules.hooks.fp16_sam_optimizer_hook import Fp16SAMOptimizerHook -from tests.test_suite.e2e_test_system import e2e_pytest_unit - - -class TestFp16SAMOptimizerHook: - @e2e_pytest_unit - def test_temp(self) -> None: - try: - hook = Fp16SAMOptimizerHook() - assert hook is None - except Exception as e: - print(e) - pass diff --git a/tests/unit/mpa/modules/losses/test_asymmetric_multilabel.py b/tests/unit/mpa/modules/losses/test_asymmetric_multilabel.py index 14e355401e1..5c9d6311a55 100644 --- a/tests/unit/mpa/modules/losses/test_asymmetric_multilabel.py +++ b/tests/unit/mpa/modules/losses/test_asymmetric_multilabel.py @@ -5,10 +5,10 @@ import pytest import torch -from otx.mpa.modules.models.losses.asymmetric_angular_loss_with_ignore import ( +from otx.algorithms.classification.adapters.mmcls.models.losses.asymmetric_angular_loss_with_ignore import ( AsymmetricAngularLossWithIgnore, ) -from otx.mpa.modules.models.losses.asymmetric_loss_with_ignore import ( +from otx.algorithms.classification.adapters.mmcls.models.losses.asymmetric_loss_with_ignore import ( AsymmetricLossWithIgnore, ) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/losses/test_cross_entropy.py b/tests/unit/mpa/modules/losses/test_cross_entropy.py index 6d20943853d..cadafc007f8 100644 --- a/tests/unit/mpa/modules/losses/test_cross_entropy.py +++ b/tests/unit/mpa/modules/losses/test_cross_entropy.py @@ -5,7 +5,9 @@ import pytest import torch -from otx.mpa.modules.models.losses.cross_entropy_loss import CrossEntropyLossWithIgnore +from otx.algorithms.classification.adapters.mmcls.models.losses.cross_entropy_loss import ( + CrossEntropyLossWithIgnore, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/models/classifiers/test_sam_classifier.py b/tests/unit/mpa/modules/models/classifiers/test_sam_classifier.py index 4c2a3b3a73f..77dbf87f2c7 100644 --- a/tests/unit/mpa/modules/models/classifiers/test_sam_classifier.py +++ b/tests/unit/mpa/modules/models/classifiers/test_sam_classifier.py @@ -3,7 +3,7 @@ import pytest import torch -from otx.mpa.modules.models.classifiers.sam_classifier import ( +from otx.algorithms.classification.adapters.mmcls.models.classifiers.sam_classifier import ( ImageClassifier, SAMImageClassifier, ) diff --git a/tests/unit/mpa/modules/models/classifiers/test_semisl_classifier.py b/tests/unit/mpa/modules/models/classifiers/test_semisl_classifier.py index 63109bb0b04..9612be1cf74 100644 --- a/tests/unit/mpa/modules/models/classifiers/test_semisl_classifier.py +++ 
b/tests/unit/mpa/modules/models/classifiers/test_semisl_classifier.py @@ -5,7 +5,7 @@ import pytest import torch -from otx.mpa.modules.models.classifiers.semisl_classifier import ( +from otx.algorithms.classification.adapters.mmcls.models.classifiers.semisl_classifier import ( SAMImageClassifier, SemiSLClassifier, ) diff --git a/tests/unit/mpa/modules/models/classifiers/test_semisl_mlc_classifier.py b/tests/unit/mpa/modules/models/classifiers/test_semisl_mlc_classifier.py index 3eae403dbe6..1d0d64fcebe 100644 --- a/tests/unit/mpa/modules/models/classifiers/test_semisl_mlc_classifier.py +++ b/tests/unit/mpa/modules/models/classifiers/test_semisl_mlc_classifier.py @@ -5,7 +5,7 @@ import pytest import torch -from otx.mpa.modules.models.classifiers.semisl_multilabel_classifier import ( +from otx.algorithms.classification.adapters.mmcls.models.classifiers.semisl_multilabel_classifier import ( SAMImageClassifier, SemiSLMultilabelClassifier, ) diff --git a/tests/unit/mpa/modules/models/classifiers/test_supcon_classifier.py b/tests/unit/mpa/modules/models/classifiers/test_supcon_classifier.py index a87357fb5e2..cff88b03e24 100644 --- a/tests/unit/mpa/modules/models/classifiers/test_supcon_classifier.py +++ b/tests/unit/mpa/modules/models/classifiers/test_supcon_classifier.py @@ -1,7 +1,7 @@ import pytest import torch -from otx.mpa.modules.models.classifiers.supcon_classifier import ( +from otx.algorithms.classification.adapters.mmcls.models.classifiers.supcon_classifier import ( ImageClassifier, SupConClassifier, ) diff --git a/tests/unit/mpa/modules/optimizer/test_lars.py b/tests/unit/mpa/modules/optimizer/test_lars.py index 2e31ea58276..218d57b540f 100644 --- a/tests/unit/mpa/modules/optimizer/test_lars.py +++ b/tests/unit/mpa/modules/optimizer/test_lars.py @@ -3,7 +3,7 @@ import torch import torch.nn as nn -from otx.mpa.modules.optimizer.lars import LARS +from otx.algorithms.classification.adapters.mmcls.optimizer.lars import LARS from tests.test_suite.e2e_test_system import e2e_pytest_unit from tests.unit.mpa.test_helpers import ( generate_random_torch_image, diff --git a/tests/unit/mpa/modules/ov/models/mmcls/backbones/test_ov_mmcls_mmov_backbone.py b/tests/unit/mpa/modules/ov/models/mmcls/backbones/test_ov_mmcls_mmov_backbone.py index 5651d064400..c3e48d1428b 100644 --- a/tests/unit/mpa/modules/ov/models/mmcls/backbones/test_ov_mmcls_mmov_backbone.py +++ b/tests/unit/mpa/modules/ov/models/mmcls/backbones/test_ov_mmcls_mmov_backbone.py @@ -5,7 +5,9 @@ import pytest import torch -from otx.mpa.modules.ov.models.mmcls.backbones.mmov_backbone import MMOVBackbone +from otx.algorithms.classification.adapters.mmcls.models.backbones.mmov_backbone import ( + MMOVBackbone, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit from tests.unit.mpa.modules.ov.models.mmcls.test_helpers import create_ov_model diff --git a/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_cls_head.py b/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_cls_head.py index 63676b1a64e..ea25d9794ea 100644 --- a/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_cls_head.py +++ b/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_cls_head.py @@ -5,7 +5,7 @@ import pytest import torch -from otx.mpa.modules.ov.models.mmcls.heads.cls_head import ClsHead +from otx.algorithms.classification.adapters.mmcls.models.heads.cls_head import ClsHead from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_conv_head.py 
b/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_conv_head.py index 490c1cd560f..a58d46f2d01 100644 --- a/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_conv_head.py +++ b/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_conv_head.py @@ -6,7 +6,9 @@ import pytest import torch -from otx.mpa.modules.ov.models.mmcls.heads.conv_head import ConvClsHead +from otx.algorithms.classification.adapters.mmcls.models.heads.conv_head import ( + ConvClsHead, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_mmcv_cls_head.py b/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_mmcv_cls_head.py index b5e5039dc43..71823180291 100644 --- a/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_mmcv_cls_head.py +++ b/tests/unit/mpa/modules/ov/models/mmcls/heads/test_ov_mmcls_mmcv_cls_head.py @@ -5,7 +5,9 @@ import pytest import torch -from otx.mpa.modules.ov.models.mmcls.heads.mmov_cls_head import MMOVClsHead +from otx.algorithms.classification.adapters.mmcls.models.heads.mmov_cls_head import ( + MMOVClsHead, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit from tests.unit.mpa.modules.ov.models.mmcls.test_helpers import create_ov_model diff --git a/tests/unit/mpa/modules/ov/models/mmcls/necks/test_ov_mmcls_mmov_neck.py b/tests/unit/mpa/modules/ov/models/mmcls/necks/test_ov_mmcls_mmov_neck.py index 2c2756e9583..8dff281798a 100644 --- a/tests/unit/mpa/modules/ov/models/mmcls/necks/test_ov_mmcls_mmov_neck.py +++ b/tests/unit/mpa/modules/ov/models/mmcls/necks/test_ov_mmcls_mmov_neck.py @@ -4,7 +4,7 @@ import pytest -from otx.mpa.modules.ov.models.mmcls.necks.mmov_neck import MMOVNeck +from otx.algorithms.classification.adapters.mmcls.models.necks import MMOVNeck from tests.test_suite.e2e_test_system import e2e_pytest_unit from tests.unit.mpa.modules.ov.models.mmcls.test_helpers import create_ov_model diff --git a/tests/unit/mpa/test_augments.py b/tests/unit/mpa/test_augments.py index 41beaece056..d7c93288745 100644 --- a/tests/unit/mpa/test_augments.py +++ b/tests/unit/mpa/test_augments.py @@ -11,7 +11,7 @@ import pytest from PIL import Image -from otx.mpa.modules.datasets.pipelines.transforms.augments import ( +from otx.algorithms.common.adapters.mmcv.pipelines.transforms.augments import ( Augments, CythonAugments, ) From beb807b0ba40c4284d98a501baa91f80ff9e48b5 Mon Sep 17 00:00:00 2001 From: Harim Kang Date: Thu, 23 Mar 2023 11:21:53 +0900 Subject: [PATCH 19/34] Revert TrainType typo (#1928) * Fix Conflict * Fix conflict * Fix unit-tests * Fix cli tests * Fix cli tests * Fix type --- .../multi_class_classification.rst | 2 +- .../segmentation/semantic_segmentation.rst | 2 +- .../quick_start_guide/cli_commands.rst | 4 +-- .../guide/tutorials/advanced/self_sl.rst | 30 ++++++++--------- .../guide/tutorials/advanced/semi_sl.rst | 18 +++++------ .../configs/classification/configuration.yaml | 6 ++-- .../classification/movinet/template.yaml | 2 +- .../configs/classification/x3d/template.yaml | 2 +- .../configs/detection/configuration.yaml | 6 ++-- .../detection/x3d_fast_rcnn/template.yaml | 2 +- .../classification/configs/configuration.yaml | 10 +++--- .../selfsl/hparam.yaml | 2 +- .../semisl/hparam.yaml | 2 +- .../efficientnet_b0_cls_incr/template.yaml | 2 +- .../selfsl/hparam.yaml | 2 +- .../semisl/hparam.yaml | 2 +- .../efficientnet_v2_s_cls_incr/template.yaml | 2 +- .../selfsl/hparam.yaml | 2 +- .../template_experiment.yaml | 2 +- .../selfsl/hparam.yaml | 2 +- 
.../semisl/hparam.yaml | 2 +- .../template.yaml | 2 +- .../selfsl/hparam.yaml | 2 +- .../template_experiment.yaml | 2 +- .../classification/tasks/inference.py | 18 +++++------ .../common/configs/training_base.py | 14 ++++---- otx/algorithms/common/tasks/training_base.py | 6 ++-- .../configs/detection/configuration.yaml | 8 ++--- .../cspdarknet_yolox/semisl/hparam.yaml | 2 +- .../detection/cspdarknet_yolox/template.yaml | 2 +- .../mobilenetv2_atss/semisl/hparam.yaml | 2 +- .../detection/mobilenetv2_atss/template.yaml | 2 +- .../mobilenetv2_ssd/semisl/hparam.yaml | 2 +- .../detection/mobilenetv2_ssd/template.yaml | 2 +- .../resnet50_vfnet/template_experimental.yaml | 2 +- .../instance_segmentation/configuration.yaml | 8 ++--- .../efficientnetb2b_maskrcnn/template.yaml | 2 +- .../resnet50_maskrcnn/template.yaml | 2 +- .../rotated_detection/configuration.yaml | 8 ++--- .../efficientnetb2b_maskrcnn/template.yaml | 2 +- .../resnet50_maskrcnn/template.yaml | 2 +- otx/algorithms/detection/tasks/inference.py | 8 ++--- .../segmentation/configs/configuration.yaml | 10 +++--- .../ocr_lite_hrnet_18/selfsl/hparam.yaml | 2 +- .../ocr_lite_hrnet_18/semisl/hparam.yaml | 2 +- .../configs/ocr_lite_hrnet_18/template.yaml | 2 +- .../ocr_lite_hrnet_18_mod2/selfsl/hparam.yaml | 2 +- .../ocr_lite_hrnet_18_mod2/semisl/hparam.yaml | 2 +- .../ocr_lite_hrnet_18_mod2/template.yaml | 2 +- .../ocr_lite_hrnet_s_mod2/selfsl/hparam.yaml | 2 +- .../ocr_lite_hrnet_s_mod2/semisl/hparam.yaml | 2 +- .../ocr_lite_hrnet_s_mod2/template.yaml | 2 +- .../ocr_lite_hrnet_x_mod3/selfsl/hparam.yaml | 2 +- .../ocr_lite_hrnet_x_mod3/semisl/hparam.yaml | 2 +- .../ocr_lite_hrnet_x_mod3/template.yaml | 2 +- .../segmentation/tasks/inference.py | 12 +++---- otx/cli/manager/config_manager.py | 22 ++++++------- otx/cli/tools/build.py | 2 +- otx/cli/tools/train.py | 2 +- otx/core/data/adapter/__init__.py | 32 +++++++++---------- .../cli/classification/test_classification.py | 6 ++-- tests/e2e/cli/detection/test_detection.py | 2 +- .../e2e/cli/segmentation/test_segmentation.py | 4 +-- .../cli/classification/test_classification.py | 8 ++--- .../cli/detection/test_detection.py | 2 +- .../cli/segmentation/test_segmentation.py | 4 +-- tests/integration/cli/test_cli.py | 12 +++---- .../classification/test_classification.py | 4 +-- tests/regression/detection/test_detection.py | 2 +- .../segmentation/test_segmentation.py | 4 +-- tests/test_suite/run_test_command.py | 2 +- .../test_action_sample_classification.py | 2 +- .../tools/test_action_sample_detection.py | 2 +- tests/unit/cli/manager/test_config_manager.py | 28 ++++++++-------- tests/unit/cli/tools/test_build.py | 4 +-- tests/unit/core/data/adapter/test_init.py | 6 ++-- 76 files changed, 200 insertions(+), 198 deletions(-) diff --git a/docs/source/guide/explanation/algorithms/classification/multi_class_classification.rst b/docs/source/guide/explanation/algorithms/classification/multi_class_classification.rst index 07b571cefec..3923b077434 100644 --- a/docs/source/guide/explanation/algorithms/classification/multi_class_classification.rst +++ b/docs/source/guide/explanation/algorithms/classification/multi_class_classification.rst @@ -206,7 +206,7 @@ Unlike other tasks, ``--val-data-root`` is not needed. 
$ otx train otx/algorithms/classification/configs/efficientnet_b0_cls_incr/template.yaml \ --train-data-root=tests/assets/imagenet_dataset_class_incremental \ params \ - --algo_backend.train_type=SELFSUPERVISED + --algo_backend.train_type=Selfsupervised After self-supervised training, pretrained weights can be use for supervised (incremental) learning like the below command: diff --git a/docs/source/guide/explanation/algorithms/segmentation/semantic_segmentation.rst b/docs/source/guide/explanation/algorithms/segmentation/semantic_segmentation.rst index ede1c246464..5ae38d350fb 100644 --- a/docs/source/guide/explanation/algorithms/segmentation/semantic_segmentation.rst +++ b/docs/source/guide/explanation/algorithms/segmentation/semantic_segmentation.rst @@ -165,7 +165,7 @@ To enable self-supervised training, the command below can be executed: $ otx train otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml \ --train-data-roots=tests/assets/common_semantic_segmentation_dataset/train/images \ params \ - --algo_backend.train_type=SELFSUPERVISED + --algo_backend.train_type=Selfsupervised After self-supervised training, pretrained weights can be use for supervised (incremental) learning like the below command: diff --git a/docs/source/guide/get_started/quick_start_guide/cli_commands.rst b/docs/source/guide/get_started/quick_start_guide/cli_commands.rst index 2d628d9c4a9..de32e70d1ab 100644 --- a/docs/source/guide/get_started/quick_start_guide/cli_commands.rst +++ b/docs/source/guide/get_started/quick_start_guide/cli_commands.rst @@ -92,7 +92,7 @@ Building workspace folder Comma-separated paths to unlabeled file list --task TASK The currently supported options: ('CLASSIFICATION', 'DETECTION', 'INSTANCE_SEGMENTATION', 'SEGMENTATION', 'ACTION_CLASSIFICATION', 'ACTION_DETECTION', 'ANOMALY_CLASSIFICATION', 'ANOMALY_DETECTION', 'ANOMALY_SEGMENTATION'). --train-type TRAIN_TYPE - The currently supported options: dict_keys(['INCREMENTAL', 'SEMISUPERVISED', 'SELFSUPERVISED']). + The currently supported options: dict_keys(['Incremental', 'Semisupervised', 'Selfsupervised']). --work-dir WORK_DIR Location where the workspace. --model MODEL Enter the name of the model you want to use. (Ex. EfficientNet-B0). --backbone BACKBONE Available Backbone Type can be found using 'otx find --backbone {framework}'. @@ -181,7 +181,7 @@ However, if you created a workspace with ``otx build``, the training process can --unlabeled-file-list UNLABELED_FILE_LIST Comma-separated paths to unlabeled file list --train-type TRAIN_TYPE - The currently supported options: dict_keys(['INCREMENTAL', 'SEMISUPERVISED', 'SELFSUPERVISED']). + The currently supported options: dict_keys(['Incremental', 'Semisupervised', 'Selfsupervised']). --load-weights LOAD_WEIGHTS Load model weights from previously saved checkpoint. --resume-from RESUME_FROM diff --git a/docs/source/guide/tutorials/advanced/self_sl.rst b/docs/source/guide/tutorials/advanced/self_sl.rst index 96de2beb42c..6de99a97d50 100644 --- a/docs/source/guide/tutorials/advanced/self_sl.rst +++ b/docs/source/guide/tutorials/advanced/self_sl.rst @@ -64,23 +64,23 @@ for **self-supervised learning** by running the following command: .. 
code-block:: - (otx) ...$ otx build --train-data-roots data/flower_photos --model MobileNet-V3-large-1x --train-type SELFSUPERVISED --work-dir otx-workspace-CLASSIFICATION-SELFSUPERVISED + (otx) ...$ otx build --train-data-roots data/flower_photos --model MobileNet-V3-large-1x --train-type Selfsupervised --work-dir otx-workspace-CLASSIFICATION-Selfsupervised - [*] Workspace Path: otx-workspace-CLASSIFICATION-SELFSUPERVISED + [*] Workspace Path: otx-workspace-CLASSIFICATION-Selfsupervised [*] Load Model Template ID: Custom_Image_Classification_MobileNet-V3-large-1x - [*] Load Model Name: MobileNet-V3-large-1x[*] - Updated: otx-workspace-CLASSIFICATION-SELFSUPERVISED/selfsl/model.py - [*] - Updated: otx-workspace-CLASSIFICATION-SELFSUPERVISED/selfsl/data_pipeline.py - [*] - Updated: otx-workspace-CLASSIFICATION-SELFSUPERVISED/deployment.py - [*] - Updated: otx-workspace-CLASSIFICATION-SELFSUPERVISED/hpo_config.yaml - [*] - Updated: otx-workspace-CLASSIFICATION-SELFSUPERVISED/model_hierarchical.py - [*] - Updated: otx-workspace-CLASSIFICATION-SELFSUPERVISED/model_multilabel.py - [*] Update data configuration file to: otx-workspace-CLASSIFICATION-SELFSUPERVISED/data.yaml + [*] Load Model Name: MobileNet-V3-large-1x[*] - Updated: otx-workspace-CLASSIFICATION-Selfsupervised/selfsl/model.py + [*] - Updated: otx-workspace-CLASSIFICATION-Selfsupervised/selfsl/data_pipeline.py + [*] - Updated: otx-workspace-CLASSIFICATION-Selfsupervised/deployment.py + [*] - Updated: otx-workspace-CLASSIFICATION-Selfsupervised/hpo_config.yaml + [*] - Updated: otx-workspace-CLASSIFICATION-Selfsupervised/model_hierarchical.py + [*] - Updated: otx-workspace-CLASSIFICATION-Selfsupervised/model_multilabel.py + [*] Update data configuration file to: otx-workspace-CLASSIFICATION-Selfsupervised/data.yaml .. note:: Three things must be considered to set the workspace for self-supervised learning: - 1. add ``--train-type SELFSUPERVISED`` in the command to get the training components for self-supervised learning, + 1. add ``--train-type Selfsupervised`` in the command to get the training components for self-supervised learning, 2. update the path set as ``train-data-roots``, 3. and add ``--work-dir`` to distinguish self-supervised learning workspace from supervised learning workspace. @@ -102,7 +102,7 @@ After the workspace creation, the workspace structure is as follows: │   ├── train │   └── val └── template.yaml - otx-workspace-CLASSIFICATION-SELFSUPERVISED + otx-workspace-CLASSIFICATION-Selfsupervised ├── configuration.yaml ├── data.yaml ├── deployment.py @@ -121,20 +121,20 @@ After the workspace creation, the workspace structure is as follows: For `VOC2012 dataset `_ used in :doc:`semantic segmentation tutorial <../base/how_to_train/semantic_segmentation>`, for example, the path ``data/VOCdevkit/VOC2012/JPEGImages`` must be set instead of ``data/VOCdevkit/VOC2012``. Please refer to :ref:`Explanation of Self-Supervised Learning for Semantic Segmentation `. - And don't forget to add ``--train-type SELFSUPERVISED``. + And don't forget to add ``--train-type Selfsupervised``. .. code-block:: (otx) ...$ otx build --train-data-roots data/VOCdevkit/VOC2012/JPEGImages \ --model Lite-HRNet-18-mod2 \ - --train-type SELFSUPERVISED + --train-type Selfsupervised 4. To start training we need to call ``otx train`` command in **self-supervised learning** workspace: .. 
code-block:: - (otx) ...$ cd otx-workspace-CLASSIFICATION-SELFSUPERVISED + (otx) ...$ cd otx-workspace-CLASSIFICATION-Selfsupervised (otx) ...$ otx train --data ../otx-workspace-CLASSIFICATION/data.yaml ... @@ -168,7 +168,7 @@ After pre-training progress, start fine-tuning by calling the below command with .. code-block:: (otx) ...$ cd ../otx-workspace-CLASSIFICATION - (otx) ...$ otx train --load-weights ../otx-workspace-CLASSIFICATION-SELFSUPERVISED/models/weights.pth + (otx) ...$ otx train --load-weights ../otx-workspace-CLASSIFICATION-Selfsupervised/models/weights.pth ... diff --git a/docs/source/guide/tutorials/advanced/semi_sl.rst b/docs/source/guide/tutorials/advanced/semi_sl.rst index cce334631e9..ef81cf598d5 100644 --- a/docs/source/guide/tutorials/advanced/semi_sl.rst +++ b/docs/source/guide/tutorials/advanced/semi_sl.rst @@ -73,7 +73,7 @@ Enable via ``otx build`` 1. To enable semi-supervsied learning via ``otx build``, we need to add arguments ``--unlabeled-data-roots`` and ``--train-type``. OpenVINO™ Training Extensions receives the root path where unlabeled images are by ``--unlabeled-data-roots``. -We should put the path where unlabeled data are contained. It also provides us ``--train-type`` to select the type of training scheme. All we have to do for that is specifying it as **SEMISUPERVISED**. +We should put the path where unlabeled data are contained. It also provides us ``--train-type`` to select the type of training scheme. All we have to do for that is specifying it as **Semisupervised**. .. note:: @@ -85,7 +85,7 @@ We should put the path where unlabeled data are contained. It also provides us ` .. code-block:: - (otx) ...$ otx build --train-data-roots data/flower_photos --unlabeled-data-roots tests/assets/imagenet_dataset --model MobileNet-V3-large-1x --train-type SEMISUPERVISED + (otx) ...$ otx build --train-data-roots data/flower_photos --unlabeled-data-roots tests/assets/imagenet_dataset --model MobileNet-V3-large-1x --train-type Semisupervised [*] Workspace Path: otx-workspace-CLASSIFICATION @@ -107,14 +107,14 @@ command in our workspace: (otx) ...$ otx train -In the train log, you can check that the train type is set to **SEMISUPERVISED** and related configurations are properly loaded as following: +In the train log, you can check that the train type is set to **Semisupervised** and related configurations are properly loaded as following: .. code-block:: ... 2023-02-22 06:21:54,492 | INFO : called _init_recipe() - 2023-02-22 06:21:54,492 | INFO : train type = SEMISUPERVISED - 2023-02-22 06:21:54,492 | INFO : train type = SEMISUPERVISED - loading training_extensions/otx/recipes/stages/classification/semisl.yaml + 2023-02-22 06:21:54,492 | INFO : train type = Semisupervised + 2023-02-22 06:21:54,492 | INFO : train type = Semisupervised - loading training_extensions/otx/recipes/stages/classification/semisl.yaml 2023-02-22 06:21:54,500 | INFO : Replacing runner from EpochRunnerWithCancel to EpochRunnerWithCancel. 2023-02-22 06:21:54,503 | INFO : initialized recipe = training_extensions/otx/recipes/stages/classification/semisl.yaml ... 
@@ -135,16 +135,16 @@ which is one of template-specific parameters (details are provided in `quick sta (otx) ...$ otx train otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml \ --train-data-roots data/flower_photos \ --unlabeled-data-roots tests/assets/imagenet_dataset \ - params --algo_backend.train_type SEMISUPERVISED + params --algo_backend.train_type Semisupervised -In the train log, you can check that the train type is set to **SEMISUPERVISED** and related configurations are properly loaded as following: +In the train log, you can check that the train type is set to **Semisupervised** and related configurations are properly loaded as following: .. code-block:: ... 2023-02-22 06:21:54,492 | INFO : called _init_recipe() - 2023-02-22 06:21:54,492 | INFO : train type = SEMISUPERVISED - 2023-02-22 06:21:54,492 | INFO : train type = SEMISUPERVISED - loading training_extensions/otx/recipes/stages/classification/semisl.yaml + 2023-02-22 06:21:54,492 | INFO : train type = Semisupervised + 2023-02-22 06:21:54,492 | INFO : train type = Semisupervised - loading training_extensions/otx/recipes/stages/classification/semisl.yaml 2023-02-22 06:21:54,500 | INFO : Replacing runner from EpochRunnerWithCancel to EpochRunnerWithCancel. 2023-02-22 06:21:54,503 | INFO : initialized recipe = training_extensions/otx/recipes/stages/classification/semisl.yaml ... diff --git a/otx/algorithms/action/configs/classification/configuration.yaml b/otx/algorithms/action/configs/classification/configuration.yaml index 5221eaaa03b..fde36852b7d 100644 --- a/otx/algorithms/action/configs/classification/configuration.yaml +++ b/otx/algorithms/action/configs/classification/configuration.yaml @@ -245,20 +245,20 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: NONE - default_value: INCREMENTAL + default_value: Incremental description: Quantization preset that defines quantization scheme editable: false enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" + Incremental: "Incremental" type: SELECTABLE ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null mem_cache_size: diff --git a/otx/algorithms/action/configs/classification/movinet/template.yaml b/otx/algorithms/action/configs/classification/movinet/template.yaml index 6fee18320db..c9b9b570714 100644 --- a/otx/algorithms/action/configs/classification/movinet/template.yaml +++ b/otx/algorithms/action/configs/classification/movinet/template.yaml @@ -45,7 +45,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/action/configs/classification/x3d/template.yaml b/otx/algorithms/action/configs/classification/x3d/template.yaml index c214e0a282d..f217dac0e4a 100644 --- a/otx/algorithms/action/configs/classification/x3d/template.yaml +++ b/otx/algorithms/action/configs/classification/x3d/template.yaml @@ -45,7 +45,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/action/configs/detection/configuration.yaml b/otx/algorithms/action/configs/detection/configuration.yaml index 5221eaaa03b..fde36852b7d 100644 --- a/otx/algorithms/action/configs/detection/configuration.yaml +++ b/otx/algorithms/action/configs/detection/configuration.yaml @@ -245,20 +245,20 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: NONE - default_value: INCREMENTAL + default_value: Incremental description: Quantization preset that defines quantization scheme editable: false enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" + Incremental: "Incremental" type: SELECTABLE ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null mem_cache_size: diff --git a/otx/algorithms/action/configs/detection/x3d_fast_rcnn/template.yaml b/otx/algorithms/action/configs/detection/x3d_fast_rcnn/template.yaml index 63ed3f682bd..b1d204d6cb0 100644 --- a/otx/algorithms/action/configs/detection/x3d_fast_rcnn/template.yaml +++ b/otx/algorithms/action/configs/detection/x3d_fast_rcnn/template.yaml @@ -45,7 +45,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/configuration.yaml b/otx/algorithms/classification/configs/configuration.yaml index dd2a93c51a0..f64bf911c01 100644 --- a/otx/algorithms/classification/configs/configuration.yaml +++ b/otx/algorithms/classification/configs/configuration.yaml @@ -336,22 +336,22 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: TRAINING - default_value: INCREMENTAL + default_value: Incremental description: Training scheme option that determines how to train the model editable: True enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" - SEMISUPERVISED: "SEMISUPERVISED" - SELFSUPERVISED: "SELFSUPERVISED" + Incremental: "Incremental" + Semisupervised: "Semisupervised" + Selfsupervised: "Selfsupervised" type: SELECTABLE ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null mem_cache_size: diff --git a/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/selfsl/hparam.yaml b/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/selfsl/hparam.yaml index 6b1e96ff7e0..9be2286c001 100644 --- a/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/selfsl/hparam.yaml +++ b/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/selfsl/hparam.yaml @@ -18,7 +18,7 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/semisl/hparam.yaml b/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/semisl/hparam.yaml index fce1fb2f832..d282469bb8e 100644 --- a/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/semisl/hparam.yaml +++ b/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/semisl/hparam.yaml @@ -20,4 +20,4 @@ hyper_parameters: default_value: 90 algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/template.yaml b/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/template.yaml index 6743404bb1d..9e195689a68 100644 --- a/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/template.yaml +++ b/otx/algorithms/classification/configs/efficientnet_b0_cls_incr/template.yaml @@ -46,7 +46,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/selfsl/hparam.yaml b/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/selfsl/hparam.yaml index 6b1e96ff7e0..9be2286c001 100644 --- a/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/selfsl/hparam.yaml +++ b/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/selfsl/hparam.yaml @@ -18,7 +18,7 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/semisl/hparam.yaml b/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/semisl/hparam.yaml index 62026f71da9..87c19f5bd01 100644 --- a/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/semisl/hparam.yaml +++ b/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/semisl/hparam.yaml @@ -20,4 +20,4 @@ hyper_parameters: default_value: 90 algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/template.yaml b/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/template.yaml index eaf31fa0afe..1dc17e1470b 100644 --- a/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/template.yaml +++ b/otx/algorithms/classification/configs/efficientnet_v2_s_cls_incr/template.yaml @@ -46,7 +46,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/selfsl/hparam.yaml b/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/selfsl/hparam.yaml index 6b1e96ff7e0..9be2286c001 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/selfsl/hparam.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/selfsl/hparam.yaml @@ -18,7 +18,7 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/template_experiment.yaml b/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/template_experiment.yaml index 86ed2150a43..cdf0f76bbc1 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/template_experiment.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_large_075_cls_incr/template_experiment.yaml @@ -37,7 +37,7 @@ hyper_parameters: default_value: 20 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/selfsl/hparam.yaml b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/selfsl/hparam.yaml index 6b1e96ff7e0..9be2286c001 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/selfsl/hparam.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/selfsl/hparam.yaml @@ -18,7 +18,7 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/semisl/hparam.yaml b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/semisl/hparam.yaml index 928ffab95bb..2e116c4acdb 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/semisl/hparam.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/semisl/hparam.yaml @@ -20,4 +20,4 @@ hyper_parameters: default_value: 90 algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml index 0076c9f6ee3..7d06fd5fbf3 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml @@ -46,7 +46,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/selfsl/hparam.yaml b/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/selfsl/hparam.yaml index 6b1e96ff7e0..9be2286c001 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/selfsl/hparam.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/selfsl/hparam.yaml @@ -18,7 +18,7 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised # Training resources. max_nodes: 1 diff --git a/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/template_experiment.yaml b/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/template_experiment.yaml index 7418bb3e3bd..4b0ce523e05 100644 --- a/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/template_experiment.yaml +++ b/otx/algorithms/classification/configs/mobilenet_v3_small_cls_incr/template_experiment.yaml @@ -37,7 +37,7 @@ hyper_parameters: default_value: 20 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/classification/tasks/inference.py b/otx/algorithms/classification/tasks/inference.py index d49fc8c0e86..d2b9b937ad6 100644 --- a/otx/algorithms/classification/tasks/inference.py +++ b/otx/algorithms/classification/tasks/inference.py @@ -71,9 +71,9 @@ TASK_CONFIG = ClassificationConfig RECIPE_TRAIN_TYPE = { - TrainType.SEMISUPERVISED: "semisl.yaml", - TrainType.INCREMENTAL: "incremental.yaml", - TrainType.SELFSUPERVISED: "selfsl.yaml", + TrainType.Semisupervised: "semisl.yaml", + TrainType.Incremental: "incremental.yaml", + TrainType.Selfsupervised: "selfsl.yaml", } @@ -113,7 +113,7 @@ def __init__(self, task_environment: TaskEnvironment, **kwargs): if not self._multilabel and not self._hierarchical: logger.info("Classification mode: multiclass") - if self._hyperparams.algo_backend.train_type == TrainType.SELFSUPERVISED: + if self._hyperparams.algo_backend.train_type == TrainType.Selfsupervised: self._selfsl = True @check_input_parameters_type({"dataset": DatasetParamTypeCheck}) @@ -423,7 +423,7 @@ def _init_recipe_hparam(self) -> dict: runner=runner, ) - if self._train_type.value == "SEMISUPERVISED": + if self._train_type.value == "Semisupervised": unlabeled_config = ConfigDict( data=ConfigDict( unlabeled_dataloader=ConfigDict( @@ -443,7 +443,7 @@ def _init_recipe(self): # pylint: disable=too-many-boolean-expressions if ( self._train_type in RECIPE_TRAIN_TYPE - and self._train_type == TrainType.INCREMENTAL + and self._train_type == TrainType.Incremental and not self._multilabel and not self._hierarchical and self._hyperparams.learning_parameters.enable_supcon @@ -453,7 +453,7 @@ def _init_recipe(self): self._recipe_cfg = self._init_model_cfg() - # FIXME[Soobee] : if train type is not in cfg, it raises an error in default INCREMENTAL mode. + # FIXME[Soobee] : if train type is not in cfg, it raises an error in default Incremental mode. 
# During semi-implementation, this line should be fixed to -> self._recipe_cfg.train_type = train_type self._recipe_cfg.train_type = self._train_type.name @@ -479,7 +479,7 @@ def _init_recipe(self): patch_evaluation(self._recipe_cfg, **options_for_patch_evaluation) # for OTX compatibility # TODO: make cfg_path loaded from custom model cfg file corresponding to train_type - # model.py contains heads/classifier only for INCREMENTAL setting + # model.py contains heads/classifier only for Incremental setting # error log : ValueError: Unexpected type of 'data_loader' parameter def _init_model_cfg(self): if self._multilabel: @@ -512,7 +512,7 @@ def _init_test_data_cfg(self, dataset: DatasetEntity): return data_cfg def _update_stage_module(self, stage_module): - module_prefix = {TrainType.INCREMENTAL: "Incr", TrainType.SEMISUPERVISED: "SemiSL"} + module_prefix = {TrainType.Incremental: "Incr", TrainType.Semisupervised: "SemiSL"} if self._train_type in module_prefix and stage_module in ["ClsTrainer", "ClsInferrer"]: stage_module = module_prefix[self._train_type] + stage_module return stage_module diff --git a/otx/algorithms/common/configs/training_base.py b/otx/algorithms/common/configs/training_base.py index 4c397554c54..c1a85446eb8 100644 --- a/otx/algorithms/common/configs/training_base.py +++ b/otx/algorithms/common/configs/training_base.py @@ -32,15 +32,17 @@ from .configuration_enums import POTQuantizationPreset +# pylint: disable=invalid-name + class TrainType(ConfigurableEnum): """TrainType for OTX Algorithms.""" - FINETUNE = "FINETUNE" - SEMISUPERVISED = "SEMISUPERVISED" - SELFSUPERVISED = "SELFSUPERVISED" - INCREMENTAL = "INCREMENTAL" - FUTUREWORK = "FUTUREWORK" + Finetune = "Finetune" + Semisupervised = "Semisupervised" + Selfsupervised = "Selfsupervised" + Incremental = "Incremental" + Futurework = "Futurework" class LearningRateSchedule(ConfigurableEnum): @@ -275,7 +277,7 @@ class BaseAlgoBackendParameters(ParameterGroup): """BaseAlgoBackendParameters for OTX Algorithms.""" train_type = selectable( - default_value=TrainType.INCREMENTAL, + default_value=TrainType.Incremental, header="train type", description="Training scheme option that determines how to train the model", editable=False, diff --git a/otx/algorithms/common/tasks/training_base.py b/otx/algorithms/common/tasks/training_base.py index 3016685fd4e..24775730976 100644 --- a/otx/algorithms/common/tasks/training_base.py +++ b/otx/algorithms/common/tasks/training_base.py @@ -59,9 +59,9 @@ logger = get_logger() TRAIN_TYPE_DIR_PATH = { - TrainType.INCREMENTAL.name: ".", - TrainType.SELFSUPERVISED.name: "selfsl", - TrainType.SEMISUPERVISED.name: "semisl", + TrainType.Incremental.name: ".", + TrainType.Selfsupervised.name: "selfsl", + TrainType.Semisupervised.name: "semisl", } diff --git a/otx/algorithms/detection/configs/detection/configuration.yaml b/otx/algorithms/detection/configs/detection/configuration.yaml index cd8ba0eadff..e53fe639e0f 100644 --- a/otx/algorithms/detection/configs/detection/configuration.yaml +++ b/otx/algorithms/detection/configs/detection/configuration.yaml @@ -245,21 +245,21 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: TRAINING - default_value: INCREMENTAL + default_value: Incremental description: Training scheme option that determines how to train the model editable: True enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" - SEMISUPERVISED: "SEMISUPERVISED" + Incremental: "Incremental" + Semisupervised: "Semisupervised" type: SELECTABLE 
ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null mem_cache_size: diff --git a/otx/algorithms/detection/configs/detection/cspdarknet_yolox/semisl/hparam.yaml b/otx/algorithms/detection/configs/detection/cspdarknet_yolox/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/detection/configs/detection/cspdarknet_yolox/semisl/hparam.yaml +++ b/otx/algorithms/detection/configs/detection/cspdarknet_yolox/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/detection/configs/detection/cspdarknet_yolox/template.yaml b/otx/algorithms/detection/configs/detection/cspdarknet_yolox/template.yaml index cf1546e24a1..6360cba3b74 100644 --- a/otx/algorithms/detection/configs/detection/cspdarknet_yolox/template.yaml +++ b/otx/algorithms/detection/configs/detection/cspdarknet_yolox/template.yaml @@ -46,7 +46,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/detection/configs/detection/mobilenetv2_atss/semisl/hparam.yaml b/otx/algorithms/detection/configs/detection/mobilenetv2_atss/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/detection/configs/detection/mobilenetv2_atss/semisl/hparam.yaml +++ b/otx/algorithms/detection/configs/detection/mobilenetv2_atss/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/detection/configs/detection/mobilenetv2_atss/template.yaml b/otx/algorithms/detection/configs/detection/mobilenetv2_atss/template.yaml index 49b91e29014..cc4051f4611 100644 --- a/otx/algorithms/detection/configs/detection/mobilenetv2_atss/template.yaml +++ b/otx/algorithms/detection/configs/detection/mobilenetv2_atss/template.yaml @@ -46,7 +46,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/semisl/hparam.yaml b/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/semisl/hparam.yaml +++ b/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/template.yaml b/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/template.yaml index bdb817d21cf..a90ac70b124 100644 --- a/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/template.yaml +++ b/otx/algorithms/detection/configs/detection/mobilenetv2_ssd/template.yaml @@ -46,7 +46,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/detection/configs/detection/resnet50_vfnet/template_experimental.yaml b/otx/algorithms/detection/configs/detection/resnet50_vfnet/template_experimental.yaml index 624b6331ae4..6605ee5bed6 100644 --- a/otx/algorithms/detection/configs/detection/resnet50_vfnet/template_experimental.yaml +++ b/otx/algorithms/detection/configs/detection/resnet50_vfnet/template_experimental.yaml @@ -34,7 +34,7 @@ hyper_parameters: default_value: 100 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/detection/configs/instance_segmentation/configuration.yaml b/otx/algorithms/detection/configs/instance_segmentation/configuration.yaml index 57693128302..bd8b078dd3f 100644 --- a/otx/algorithms/detection/configs/instance_segmentation/configuration.yaml +++ b/otx/algorithms/detection/configs/instance_segmentation/configuration.yaml @@ -245,21 +245,21 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: TRAINING - default_value: INCREMENTAL + default_value: Incremental description: Training scheme option that determines how to train the model editable: True enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" - SEMISUPEVISED: "SEMISUPERVISED" + Incremental: "Incremental" + SEMISUPEVISED: "Semisupervised" type: SELECTABLE ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null mem_cache_size: diff --git a/otx/algorithms/detection/configs/instance_segmentation/efficientnetb2b_maskrcnn/template.yaml b/otx/algorithms/detection/configs/instance_segmentation/efficientnetb2b_maskrcnn/template.yaml index 8b08cff74c9..70ff835290f 100644 --- a/otx/algorithms/detection/configs/instance_segmentation/efficientnetb2b_maskrcnn/template.yaml +++ b/otx/algorithms/detection/configs/instance_segmentation/efficientnetb2b_maskrcnn/template.yaml @@ -49,7 +49,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/detection/configs/instance_segmentation/resnet50_maskrcnn/template.yaml b/otx/algorithms/detection/configs/instance_segmentation/resnet50_maskrcnn/template.yaml index 9f62ac6fea5..c8ddb1ab624 100644 --- a/otx/algorithms/detection/configs/instance_segmentation/resnet50_maskrcnn/template.yaml +++ b/otx/algorithms/detection/configs/instance_segmentation/resnet50_maskrcnn/template.yaml @@ -49,7 +49,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/detection/configs/rotated_detection/configuration.yaml b/otx/algorithms/detection/configs/rotated_detection/configuration.yaml index 4f6c2420a4b..9091a47232f 100644 --- a/otx/algorithms/detection/configs/rotated_detection/configuration.yaml +++ b/otx/algorithms/detection/configs/rotated_detection/configuration.yaml @@ -245,21 +245,21 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: TRAINING - default_value: INCREMENTAL + default_value: Incremental description: Training scheme option that determines how to train the model editable: True enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" - SEMISUPEVISED: "SEMISUPERVISED" + Incremental: "Incremental" + SEMISUPEVISED: "Semisupervised" type: SELECTABLE ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null type: PARAMETER_GROUP diff --git a/otx/algorithms/detection/configs/rotated_detection/efficientnetb2b_maskrcnn/template.yaml b/otx/algorithms/detection/configs/rotated_detection/efficientnetb2b_maskrcnn/template.yaml index c863f176cc6..c652c833a0a 100644 --- a/otx/algorithms/detection/configs/rotated_detection/efficientnetb2b_maskrcnn/template.yaml +++ b/otx/algorithms/detection/configs/rotated_detection/efficientnetb2b_maskrcnn/template.yaml @@ -49,7 +49,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/detection/configs/rotated_detection/resnet50_maskrcnn/template.yaml b/otx/algorithms/detection/configs/rotated_detection/resnet50_maskrcnn/template.yaml index 3593f5e6a09..ff7992857e0 100644 --- a/otx/algorithms/detection/configs/rotated_detection/resnet50_maskrcnn/template.yaml +++ b/otx/algorithms/detection/configs/rotated_detection/resnet50_maskrcnn/template.yaml @@ -49,7 +49,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/detection/tasks/inference.py b/otx/algorithms/detection/tasks/inference.py index 89434ae1e60..efcd43788c1 100644 --- a/otx/algorithms/detection/tasks/inference.py +++ b/otx/algorithms/detection/tasks/inference.py @@ -81,8 +81,8 @@ logger = get_logger() RECIPE_TRAIN_TYPE = { - TrainType.SEMISUPERVISED: "semisl.py", - TrainType.INCREMENTAL: "incremental.py", + TrainType.Semisupervised: "semisl.py", + TrainType.Incremental: "incremental.py", } @@ -356,8 +356,8 @@ def _init_test_data_cfg(self, dataset: DatasetEntity): return data_cfg def _update_stage_module(self, stage_module): - module_prefix = {TrainType.INCREMENTAL: "Incr", TrainType.SEMISUPERVISED: "SemiSL"} - if self._train_type == TrainType.SEMISUPERVISED and stage_module == "DetectionExporter": + module_prefix = {TrainType.Incremental: "Incr", TrainType.Semisupervised: "SemiSL"} + if self._train_type == TrainType.Semisupervised and stage_module == "DetectionExporter": stage_module = "SemiSLDetectionExporter" elif self._train_type in module_prefix and stage_module in [ "DetectionTrainer", diff --git a/otx/algorithms/segmentation/configs/configuration.yaml b/otx/algorithms/segmentation/configs/configuration.yaml index 0da91d335ba..132ab25a4d6 100644 --- a/otx/algorithms/segmentation/configs/configuration.yaml +++ b/otx/algorithms/segmentation/configs/configuration.yaml @@ -274,22 +274,22 @@ algo_backend: header: Algo backend parameters train_type: affects_outcome_of: TRAINING - default_value: INCREMENTAL + default_value: Incremental description: Training scheme option that determines how to train the model editable: True enum_name: TrainType header: Train type options: - INCREMENTAL: "INCREMENTAL" - SEMISUPERVISED: "SEMISUPERVISED" - SELFSUPERVISED: "SELFSUPERVISED" + Incremental: "Incremental" + Semisupervised: "Semisupervised" + Selfsupervised: "Selfsupervised" type: SELECTABLE ui_rules: action: DISABLE_EDITING operator: AND rules: [] type: UI_RULES - value: INCREMENTAL + value: Incremental visible_in_ui: True warning: null mem_cache_size: diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/selfsl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/selfsl/hparam.yaml index c2259b834a8..c5ea68bbd4f 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/selfsl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/selfsl/hparam.yaml @@ -16,4 +16,4 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/semisl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/semisl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/template.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/template.yaml index 548e2676bbc..ef3acd94560 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/template.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18/template.yaml @@ -38,7 +38,7 @@ hyper_parameters: default_value: 300 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # 
Training resources. max_nodes: 1 diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/selfsl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/selfsl/hparam.yaml index c2259b834a8..c5ea68bbd4f 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/selfsl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/selfsl/hparam.yaml @@ -16,4 +16,4 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/semisl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/semisl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/template.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/template.yaml index fc1f1dfa889..92ba2428e52 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/template.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_18_mod2/template.yaml @@ -47,7 +47,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/selfsl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/selfsl/hparam.yaml index c2259b834a8..c5ea68bbd4f 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/selfsl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/selfsl/hparam.yaml @@ -16,4 +16,4 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/semisl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/semisl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml index 78b229b2e1d..c079a116738 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_s_mod2/template.yaml @@ -48,7 +48,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. 
max_nodes: 1 diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/selfsl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/selfsl/hparam.yaml index c2259b834a8..c5ea68bbd4f 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/selfsl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/selfsl/hparam.yaml @@ -16,4 +16,4 @@ hyper_parameters: default_value: false algo_backend: train_type: - default_value: SELFSUPERVISED + default_value: Selfsupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/semisl/hparam.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/semisl/hparam.yaml index 55395b0d84c..580462daa1e 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/semisl/hparam.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/semisl/hparam.yaml @@ -3,4 +3,4 @@ hyper_parameters: parameter_overrides: algo_backend: train_type: - default_value: SEMISUPERVISED + default_value: Semisupervised diff --git a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/template.yaml b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/template.yaml index af1383f58b7..2e95673f035 100644 --- a/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/template.yaml +++ b/otx/algorithms/segmentation/configs/ocr_lite_hrnet_x_mod3/template.yaml @@ -48,7 +48,7 @@ hyper_parameters: default_value: 1.0 algo_backend: train_type: - default_value: INCREMENTAL + default_value: Incremental # Training resources. max_nodes: 1 diff --git a/otx/algorithms/segmentation/tasks/inference.py b/otx/algorithms/segmentation/tasks/inference.py index 408696e3530..9cdd457dab2 100644 --- a/otx/algorithms/segmentation/tasks/inference.py +++ b/otx/algorithms/segmentation/tasks/inference.py @@ -74,9 +74,9 @@ RECIPE_TRAIN_TYPE = { - TrainType.SEMISUPERVISED: "semisl.py", - TrainType.INCREMENTAL: "incremental.py", - TrainType.SELFSUPERVISED: "selfsl.py", + TrainType.Semisupervised: "semisl.py", + TrainType.Incremental: "incremental.py", + TrainType.Selfsupervised: "selfsl.py", } @@ -191,7 +191,7 @@ def _init_recipe(self): # TODO: Need to remove the hard coding for supcon only. 
if ( self._train_type in RECIPE_TRAIN_TYPE - and self._train_type == TrainType.INCREMENTAL + and self._train_type == TrainType.Incremental and self._hyperparams.learning_parameters.enable_supcon and not self._model_dir.endswith("supcon") ): @@ -218,8 +218,8 @@ def _init_recipe(self): remove_from_configs_by_type(self._recipe_cfg.custom_hooks, "FreezeLayers") def _update_stage_module(self, stage_module: str): - module_prefix = {TrainType.SEMISUPERVISED: "SemiSL", TrainType.INCREMENTAL: "Incr"} - if self._train_type == TrainType.SEMISUPERVISED and stage_module == "SegExporter": + module_prefix = {TrainType.Semisupervised: "SemiSL", TrainType.Incremental: "Incr"} + if self._train_type == TrainType.Semisupervised and stage_module == "SegExporter": stage_module = "SemiSLSegExporter" elif self._train_type in module_prefix and stage_module in ["SegTrainer", "SegInferrer"]: stage_module = module_prefix[self._train_type] + stage_module diff --git a/otx/cli/manager/config_manager.py b/otx/cli/manager/config_manager.py index 1136c12ec32..8a4a59d0347 100644 --- a/otx/cli/manager/config_manager.py +++ b/otx/cli/manager/config_manager.py @@ -60,9 +60,9 @@ } TASK_TYPE_TO_SUB_DIR_NAME = { - "INCREMENTAL": "", - "SEMISUPERVISED": "semisl", - "SELFSUPERVISED": "selfsl", + "Incremental": "", + "Semisupervised": "semisl", + "Selfsupervised": "selfsl", } @@ -163,8 +163,8 @@ def _check_rebuild(self): print(f"[*] Rebuild model: {self.template.name} -> {self.args.model.upper()}") result = True template_train_type = self._get_train_type(ignore_args=True) - if self.args.train_type and template_train_type != self.args.train_type.upper(): - self.train_type = self.args.train_type.upper() + if self.args.train_type and template_train_type != self.args.train_type: + self.train_type = self.args.train_type print(f"[*] Rebuild train-type: {template_train_type} -> {self.train_type}") result = True return result @@ -192,10 +192,10 @@ def _get_train_type(self, ignore_args: bool = False) -> str: args_hyper_parameters = gen_params_dict_from_args(self.args) arg_algo_backend = args_hyper_parameters.get("algo_backend", False) if arg_algo_backend: - train_type = arg_algo_backend.get("train_type", {"value": "INCREMENTAL"}) # type: ignore - return train_type.get("value", "INCREMENTAL") + train_type = arg_algo_backend.get("train_type", {"value": "Incremental"}) # type: ignore + return train_type.get("value", "Incremental") if hasattr(self.args, "train_type") and self.mode in ("build", "train") and self.args.train_type: - self.train_type = self.args.train_type.upper() + self.train_type = self.args.train_type if self.train_type not in TASK_TYPE_TO_SUB_DIR_NAME: raise NotSupportedError(f"{self.train_type} is not currently supported by otx.") if self.train_type in TASK_TYPE_TO_SUB_DIR_NAME: @@ -203,9 +203,9 @@ def _get_train_type(self, ignore_args: bool = False) -> str: algo_backend = self.template.hyper_parameters.parameter_overrides.get("algo_backend", False) if algo_backend: - train_type = algo_backend.get("train_type", {"default_value": "INCREMENTAL"}) - return train_type.get("default_value", "INCREMENTAL") - return "INCREMENTAL" + train_type = algo_backend.get("train_type", {"default_value": "Incremental"}) + return train_type.get("default_value", "Incremental") + return "Incremental" def auto_task_detection(self, data_roots: str) -> str: """Detect task type automatically.""" diff --git a/otx/cli/tools/build.py b/otx/cli/tools/build.py index 0c5a4b3761f..246b2efb954 100644 --- a/otx/cli/tools/build.py +++ b/otx/cli/tools/build.py @@ 
-65,7 +65,7 @@ def get_args(): "--train-type", help=f"The currently supported options: {TASK_TYPE_TO_SUB_DIR_NAME.keys()}.", type=str, - default="incremental", + default="Incremental", ) parser.add_argument( "--work-dir", diff --git a/otx/cli/tools/train.py b/otx/cli/tools/train.py index 164210366ba..faffff85635 100644 --- a/otx/cli/tools/train.py +++ b/otx/cli/tools/train.py @@ -65,7 +65,7 @@ def get_args(): "--train-type", help=f"The currently supported options: {TASK_TYPE_TO_SUB_DIR_NAME.keys()}.", type=str, - default="incremental", + default="Incremental", ) parser.add_argument( "--load-weights", diff --git a/otx/core/data/adapter/__init__.py b/otx/core/data/adapter/__init__.py index 140d644728e..a51ef7c32f6 100644 --- a/otx/core/data/adapter/__init__.py +++ b/otx/core/data/adapter/__init__.py @@ -23,53 +23,53 @@ ADAPTERS = { TaskType.CLASSIFICATION: { - "INCREMENTAL": { + "Incremental": { "module_name": "classification_dataset_adapter", "class": "ClassificationDatasetAdapter", } }, TaskType.DETECTION: { - "INCREMENTAL": { + "Incremental": { "module_name": "detection_dataset_adapter", "class": "DetectionDatasetAdapter", } }, TaskType.ROTATED_DETECTION: { - "INCREMENTAL": { + "Incremental": { "module_name": "detection_dataset_adapter", "class": "DetectionDatasetAdapter", } }, TaskType.INSTANCE_SEGMENTATION: { - "INCREMENTAL": { + "Incremental": { "module_name": "detection_dataset_adapter", "class": "DetectionDatasetAdapter", } }, TaskType.SEGMENTATION: { - "INCREMENTAL": { + "Incremental": { "module_name": "segmentation_dataset_adapter", "class": "SegmentationDatasetAdapter", }, - "SELFSUPERVISED": { + "Selfsupervised": { "module_name": "segmentation_dataset_adapter", "class": "SelfSLSegmentationDatasetAdapter", }, }, TaskType.ANOMALY_CLASSIFICATION: { - "INCREMENTAL": { + "Incremental": { "module_name": "anomaly_dataset_adapter", "class": "AnomalyClassificationDatasetAdapter", } }, TaskType.ANOMALY_DETECTION: { - "INCREMENTAL": { + "Incremental": { "module_name": "anomaly_dataset_adapter", "class": "AnomalyDetectionDatasetAdapter", } }, TaskType.ANOMALY_SEGMENTATION: { - "INCREMENTAL": { + "Incremental": { "module_name": "anomaly_dataset_adapter", "class": "AnomalySegmentationDatasetAdapter", } @@ -79,13 +79,13 @@ ADAPTERS.update( { TaskType.ACTION_CLASSIFICATION: { - "INCREMENTAL": { + "Incremental": { "module_name": "action_dataset_adapter", "class": "ActionClassificationDatasetAdapter", } }, TaskType.ACTION_DETECTION: { - "INCREMENTAL": { + "Incremental": { "module_name": "action_dataset_adapter", "class": "ActionDetectionDatasetAdapter", } @@ -107,18 +107,18 @@ def get_dataset_adapter( Args: task_type: A task type such as ANOMALY_CLASSIFICATION, ANOMALY_DETECTION, ANOMALY_SEGMENTATION, CLASSIFICATION, INSTANCE_SEGMENTATION, DETECTION, CLASSIFICATION, ROTATED_DETECTION, SEGMENTATION. - train_type: train type such as INCREMENTAL and SELFSUPERVISED. - SELFSUPERVISED is only supported for SEGMENTATION. + train_type: train type such as Incremental and Selfsupervised. + Selfsupervised is only supported for SEGMENTATION. 
train_data_roots: the path of data root for training data val_data_roots: the path of data root for validation data test_data_roots: the path of data root for test data unlabeled_data_roots: the path of data root for unlabeled data """ - train_type_to_be_called = TrainType.INCREMENTAL.value + train_type_to_be_called = TrainType.Incremental.value # FIXME : Hardcoded solution for self-sl for seg - if task_type == TaskType.SEGMENTATION and train_type == TrainType.SELFSUPERVISED.value: - train_type_to_be_called = TrainType.SELFSUPERVISED.value + if task_type == TaskType.SEGMENTATION and train_type == TrainType.Selfsupervised.value: + train_type_to_be_called = TrainType.Selfsupervised.value module_root = "otx.core.data.adapter." module = importlib.import_module(module_root + ADAPTERS[task_type][train_type_to_be_called]["module_name"]) diff --git a/tests/e2e/cli/classification/test_classification.py b/tests/e2e/cli/classification/test_classification.py index a3278f0eb13..fe0e12498e8 100644 --- a/tests/e2e/cli/classification/test_classification.py +++ b/tests/e2e/cli/classification/test_classification.py @@ -303,7 +303,7 @@ def test_otx_train(self, template, tmp_dir_path): tmp_dir_path = tmp_dir_path / "multi_class_cls/test_semisl" args_semisl = copy.deepcopy(args0) args_semisl["--unlabeled-data-roots"] = args["--train-data-roots"] - args_semisl["train_params"].extend(["--algo_backend.train_type", "SEMISUPERVISED"]) + args_semisl["train_params"].extend(["--algo_backend.train_type", "Semisupervised"]) otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl) @e2e_pytest_component @@ -322,7 +322,7 @@ def test_otx_multi_gpu_train_semisl(self, template, tmp_dir_path): tmp_dir_path = tmp_dir_path / "multi_class_cls/test_multi_gpu_semisl" args_semisl_multigpu = copy.deepcopy(args0) args_semisl_multigpu["--unlabeled-data-roots"] = args["--train-data-roots"] - args_semisl_multigpu["train_params"].extend(["--algo_backend.train_type", "SEMISUPERVISED"]) + args_semisl_multigpu["train_params"].extend(["--algo_backend.train_type", "Semisupervised"]) args_semisl_multigpu["--gpus"] = "0,1" otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl_multigpu) @@ -757,7 +757,7 @@ def test_otx_multi_gpu_train(self, template, tmp_dir_path): "--learning_parameters.learning_rate", "1e-07", "--algo_backend.train_type", - "SELFSUPERVISED", + "Selfsupervised", ], } diff --git a/tests/e2e/cli/detection/test_detection.py b/tests/e2e/cli/detection/test_detection.py index 24c5d5b6274..da82aec22c6 100644 --- a/tests/e2e/cli/detection/test_detection.py +++ b/tests/e2e/cli/detection/test_detection.py @@ -68,7 +68,7 @@ "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ], } diff --git a/tests/e2e/cli/segmentation/test_segmentation.py b/tests/e2e/cli/segmentation/test_segmentation.py index 96086fa25eb..1df46ecfba1 100644 --- a/tests/e2e/cli/segmentation/test_segmentation.py +++ b/tests/e2e/cli/segmentation/test_segmentation.py @@ -276,7 +276,7 @@ def test_otx_multi_gpu_train(self, template, tmp_dir_path): "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ], } @@ -317,7 +317,7 @@ def test_otx_multi_gpu_train_semisl(self, template, tmp_dir_path): "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SELFSUPERVISED", + "Selfsupervised", ], } diff --git a/tests/integration/cli/classification/test_classification.py b/tests/integration/cli/classification/test_classification.py index 
ee26e5cbc35..459ef89c033 100644 --- a/tests/integration/cli/classification/test_classification.py +++ b/tests/integration/cli/classification/test_classification.py @@ -53,7 +53,7 @@ "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SELFSUPERVISED", + "Selfsupervised", ], } @@ -190,7 +190,7 @@ def test_otx_train_semisl(self, template, tmp_dir_path): tmp_dir_path = tmp_dir_path / "multi_class_cls/test_semisl" args_semisl = copy.deepcopy(args) args_semisl["--unlabeled-data-roots"] = args["--train-data-roots"] - args_semisl["train_params"].extend(["--algo_backend.train_type", "SEMISUPERVISED"]) + args_semisl["train_params"].extend(["--algo_backend.train_type", "Semisupervised"]) otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl) @e2e_pytest_component @@ -201,7 +201,7 @@ def test_otx_multi_gpu_train_semisl(self, template, tmp_dir_path): tmp_dir_path = tmp_dir_path / "multi_class_cls/test_multi_gpu_semisl" args_semisl_multigpu = copy.deepcopy(args) args_semisl_multigpu["--unlabeled-data-roots"] = args["--train-data-roots"] - args_semisl_multigpu["train_params"].extend(["--algo_backend.train_type", "SEMISUPERVISED"]) + args_semisl_multigpu["train_params"].extend(["--algo_backend.train_type", "Semisupervised"]) args_semisl_multigpu["--gpus"] = "0,1" otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl_multigpu) @@ -308,7 +308,7 @@ def test_otx_train_semisl(self, template, tmp_dir_path): tmp_dir_path = tmp_dir_path / "multi_label_cls" / "test_semisl" args_semisl = copy.deepcopy(args_m) args_semisl["--unlabeled-data-roots"] = args_m["--train-data-roots"] - args_semisl["train_params"].extend(["--algo_backend.train_type", "SEMISUPERVISED"]) + args_semisl["train_params"].extend(["--algo_backend.train_type", "Semisupervised"]) otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl) diff --git a/tests/integration/cli/detection/test_detection.py b/tests/integration/cli/detection/test_detection.py index 7e071a1ee35..44a92ad812c 100644 --- a/tests/integration/cli/detection/test_detection.py +++ b/tests/integration/cli/detection/test_detection.py @@ -48,7 +48,7 @@ "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ], } diff --git a/tests/integration/cli/segmentation/test_segmentation.py b/tests/integration/cli/segmentation/test_segmentation.py index be027226fb6..54557124f8a 100644 --- a/tests/integration/cli/segmentation/test_segmentation.py +++ b/tests/integration/cli/segmentation/test_segmentation.py @@ -54,7 +54,7 @@ "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ], } @@ -68,7 +68,7 @@ "--learning_parameters.batch_size", "4", "--algo_backend.train_type", - "SELFSUPERVISED", + "Selfsupervised", ], } diff --git a/tests/integration/cli/test_cli.py b/tests/integration/cli/test_cli.py index 1653f0cf939..825a811750b 100644 --- a/tests/integration/cli/test_cli.py +++ b/tests/integration/cli/test_cli.py @@ -36,9 +36,9 @@ "default": "EfficientNet-B0", "--task": "classification", "--model": "MobileNet-V3-large-1x", - "--train-type": "semisupervised", + "--train-type": "Semisupervised", }, - "detection": {"default": "ATSS", "--task": "detection", "--model": "SSD", "--train-type": "semisupervised"}, + "detection": {"default": "ATSS", "--task": "detection", "--model": "SSD", "--train-type": "Semisupervised"}, } @@ -59,19 +59,19 @@ def test_otx_build_rebuild(self, tmp_dir_path, case): tmp_dir_path = tmp_dir_path / "test_rebuild" / case # 1. 
Only Task build_arg = {"--task": rebuild_args[case]["--task"]} - expected = {"model": rebuild_args[case]["default"], "train_type": "INCREMENTAL"} + expected = {"model": rebuild_args[case]["default"], "train_type": "Incremental"} otx_build_testing(tmp_dir_path, build_arg, expected=expected) # 2. Change Model build_arg = {"--model": rebuild_args[case]["--model"]} - expected = {"model": rebuild_args[case]["--model"], "train_type": "INCREMENTAL"} + expected = {"model": rebuild_args[case]["--model"], "train_type": "Incremental"} otx_build_testing(tmp_dir_path, build_arg, expected=expected) # 3. Change Train-type build_arg = {"--train-type": rebuild_args[case]["--train-type"]} expected = {"model": rebuild_args[case]["--model"], "train_type": rebuild_args[case]["--train-type"]} otx_build_testing(tmp_dir_path, build_arg, expected=expected) # 4. Change to Default - build_arg = {"--model": rebuild_args[case]["default"], "--train-type": "INCREMENTAL"} - expected = {"model": rebuild_args[case]["default"], "train_type": "INCREMENTAL"} + build_arg = {"--model": rebuild_args[case]["default"], "--train-type": "Incremental"} + expected = {"model": rebuild_args[case]["default"], "train_type": "Incremental"} otx_build_testing(tmp_dir_path, build_arg, expected=expected) diff --git a/tests/regression/classification/test_classification.py b/tests/regression/classification/test_classification.py index 49a0593f014..f19fc6a72fd 100644 --- a/tests/regression/classification/test_classification.py +++ b/tests/regression/classification/test_classification.py @@ -170,7 +170,7 @@ def test_otx_train_semisl(self, template, tmp_dir_path): "--learning_parameters.num_iters", REGRESSION_TEST_EPOCHS, "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ] train_start_time = timer() otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl) @@ -228,7 +228,7 @@ def test_otx_train_selfsl(self, template, tmp_dir_path): "--learning_parameters.num_iters", "10", "--algo_backend.train_type", - "SELFSUPERVISED", + "Selfsupervised", ] # Self-supervised Training diff --git a/tests/regression/detection/test_detection.py b/tests/regression/detection/test_detection.py index 5837682e381..309fc539804 100644 --- a/tests/regression/detection/test_detection.py +++ b/tests/regression/detection/test_detection.py @@ -169,7 +169,7 @@ def test_otx_train_semisl(self, template, tmp_dir_path): "--learning_parameters.num_iters", REGRESSION_TEST_EPOCHS, "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ] train_start_time = timer() otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl) diff --git a/tests/regression/segmentation/test_segmentation.py b/tests/regression/segmentation/test_segmentation.py index e179559ce2c..f194245fd5a 100644 --- a/tests/regression/segmentation/test_segmentation.py +++ b/tests/regression/segmentation/test_segmentation.py @@ -170,7 +170,7 @@ def test_otx_train_semisl(self, template, tmp_dir_path): "--learning_parameters.num_iters", REGRESSION_TEST_EPOCHS, "--algo_backend.train_type", - "SEMISUPERVISED", + "Semisupervised", ] train_start_time = timer() otx_train_testing(template, tmp_dir_path, otx_dir, args_semisl) @@ -221,7 +221,7 @@ def test_otx_train_selfsl(self, template, tmp_dir_path): args_selfsl = config_selfsl["data_path"] selfsl_train_args = copy.deepcopy(args_selfsl) - selfsl_train_args["train_params"] = ["params", "--algo_backend.train_type", "SELFSUPERVISED"] + selfsl_train_args["train_params"] = ["params", "--algo_backend.train_type", "Selfsupervised"] # Self-supervised 
Training train_start_time = timer() diff --git a/tests/test_suite/run_test_command.py b/tests/test_suite/run_test_command.py index f385ce47382..f443b61a598 100644 --- a/tests/test_suite/run_test_command.py +++ b/tests/test_suite/run_test_command.py @@ -813,7 +813,7 @@ def otx_build_testing(root, args: Dict[str, str], expected: Dict[str, str]): assert template_config.name == expected["model"] assert ( template_config.hyper_parameters.parameter_overrides.algo_backend.train_type.default_value - == expected["train_type"].upper() + == expected["train_type"] ) diff --git a/tests/unit/algorithms/action/tools/test_action_sample_classification.py b/tests/unit/algorithms/action/tools/test_action_sample_classification.py index 472907637dc..f821864bb39 100644 --- a/tests/unit/algorithms/action/tools/test_action_sample_classification.py +++ b/tests/unit/algorithms/action/tools/test_action_sample_classification.py @@ -50,7 +50,7 @@ def test_load_test_dataset() -> None: class MockTemplate: task_type = TaskType.ACTION_CLASSIFICATION hyper_parameters = Config( - {"parameter_overrides": {"algo_backend": {"train_type": {"default_value": TrainType.INCREMENTAL.value}}}} + {"parameter_overrides": {"algo_backend": {"train_type": {"default_value": TrainType.Incremental.value}}}} ) dataset, label_schema = load_test_dataset(MockTemplate()) diff --git a/tests/unit/algorithms/action/tools/test_action_sample_detection.py b/tests/unit/algorithms/action/tools/test_action_sample_detection.py index c002337526b..b774a6af8b7 100644 --- a/tests/unit/algorithms/action/tools/test_action_sample_detection.py +++ b/tests/unit/algorithms/action/tools/test_action_sample_detection.py @@ -51,7 +51,7 @@ def test_load_test_dataset() -> None: class MockTemplate: task_type = TaskType.ACTION_DETECTION hyper_parameters = Config( - {"parameter_overrides": {"algo_backend": {"train_type": {"default_value": TrainType.INCREMENTAL.value}}}} + {"parameter_overrides": {"algo_backend": {"train_type": {"default_value": TrainType.Incremental.value}}}} ) dataset, label_schema = load_test_dataset(MockTemplate()) diff --git a/tests/unit/cli/manager/test_config_manager.py b/tests/unit/cli/manager/test_config_manager.py index ce7900cef7e..c53d56f1153 100644 --- a/tests/unit/cli/manager/test_config_manager.py +++ b/tests/unit/cli/manager/test_config_manager.py @@ -138,7 +138,7 @@ def test_export_data_cfg(self, mocker, config_manager): def test_build_workspace(self, mocker): # Setup task_type = "CLASSIFICATION" - train_type = "SEMISUPERVISED" + train_type = "Semisupervised" workspace_path = "./otx-workspace" args = mocker.Mock() args.autosplit = None @@ -345,7 +345,7 @@ def test_configure_template(self, mocker): "otx.cli.manager.config_manager.ConfigManager.check_workspace", return_value=True ) mocker.patch("otx.cli.manager.config_manager.ConfigManager._get_template", return_value=mock_template) - mocker.patch("otx.cli.manager.config_manager.ConfigManager._get_train_type", return_value="INCREMENTAL") + mocker.patch("otx.cli.manager.config_manager.ConfigManager._get_train_type", return_value="Incremental") mock_parse_model_template = mocker.patch( "otx.cli.manager.config_manager.parse_model_template", return_value=mock_template ) @@ -358,7 +358,7 @@ def test_configure_template(self, mocker): # Then assert config_manager.task_type == "CLASSIFICATION" assert config_manager.model == "template_name" - assert config_manager.train_type == "INCREMENTAL" + assert config_manager.train_type == "Incremental" config_manager.mode = "build" 
mocker.patch("otx.cli.manager.config_manager.ConfigManager._check_rebuild", return_value=True) @@ -366,7 +366,7 @@ def test_configure_template(self, mocker): assert config_manager.rebuild assert config_manager.task_type == "CLASSIFICATION" assert config_manager.model == "template_name" - assert config_manager.train_type == "INCREMENTAL" + assert config_manager.train_type == "Incremental" mock_check_workspace.return_value = False mocker.patch("pathlib.Path.exists", return_value=True) @@ -382,7 +382,7 @@ def test_configure_template(self, mocker): config_manager.configure_template() assert config_manager.task_type == "CLASSIFICATION" assert config_manager.model == "template_name" - assert config_manager.train_type == "INCREMENTAL" + assert config_manager.train_type == "Incremental" @e2e_pytest_unit def test__check_rebuild(self, mocker): @@ -405,7 +405,7 @@ def test__check_rebuild(self, mocker): config_manager.args.model = "SSD" config_manager.template.name = "ATSS" - config_manager.args.train_type = "SEMISUPERVISED" + config_manager.args.train_type = "Semisupervised" assert config_manager._check_rebuild() @e2e_pytest_unit @@ -433,7 +433,7 @@ def test_configure_data_config(self, mocker): mock_args.mode = "build" config_manager = ConfigManager(mock_args) - config_manager.train_type = "INCREMENTAL" + config_manager.train_type = "Incremental" config_manager.configure_data_config(update_data_yaml=True) mock_configure_dataset.assert_called_once() @@ -446,27 +446,27 @@ def test_configure_data_config(self, mocker): @e2e_pytest_unit def test__get_train_type(self, mocker): mock_args = mocker.MagicMock() - mock_params_dict = {"algo_backend": {"train_type": {"value": "SEMISUPERVISED"}}} + mock_params_dict = {"algo_backend": {"train_type": {"value": "Semisupervised"}}} mock_configure_dataset = mocker.patch( "otx.cli.manager.config_manager.gen_params_dict_from_args", return_value=mock_params_dict ) config_manager = ConfigManager(args=mock_args) config_manager.mode = "build" - assert config_manager._get_train_type() == "SEMISUPERVISED" + assert config_manager._get_train_type() == "Semisupervised" - config_manager.args.train_type = "INCREMENTAL" + config_manager.args.train_type = "Incremental" mock_configure_dataset.return_value = {} - assert config_manager._get_train_type() == "INCREMENTAL" + assert config_manager._get_train_type() == "Incremental" mock_template = mocker.MagicMock() mock_template.hyper_parameters.parameter_overrides = { - "algo_backend": {"train_type": {"default_value": "SELFSUPERVISED"}} + "algo_backend": {"train_type": {"default_value": "Selfsupervised"}} } config_manager.template = mock_template - assert config_manager._get_train_type(ignore_args=True) == "SELFSUPERVISED" + assert config_manager._get_train_type(ignore_args=True) == "Selfsupervised" config_manager.template.hyper_parameters.parameter_overrides = {} - assert config_manager._get_train_type(ignore_args=True) == "INCREMENTAL" + assert config_manager._get_train_type(ignore_args=True) == "Incremental" @e2e_pytest_unit def test_auto_task_detection(self, mocker): diff --git a/tests/unit/cli/tools/test_build.py b/tests/unit/cli/tools/test_build.py index d54652c58bf..3ddb60689bc 100644 --- a/tests/unit/cli/tools/test_build.py +++ b/tests/unit/cli/tools/test_build.py @@ -16,7 +16,7 @@ def test_get_args(mocker): "--unlabeled-data-roots": "unlabeled/data/root", "--unlabeled-file-list": "unlabeled/file/list", "--task": "detection", - "--train-type": "SEMISUPERVISED", + "--train-type": "Semisupervised", "--work-dir": "work/dir/path", 
"--model": "SSD", "--backbone": "torchvision.resnet18", @@ -37,7 +37,7 @@ def test_get_args(mocker): assert parsed_args.unlabeled_file_list == "unlabeled/file/list" assert parsed_args.work_dir == "work/dir/path" assert parsed_args.task == "detection" - assert parsed_args.train_type == "SEMISUPERVISED" + assert parsed_args.train_type == "Semisupervised" assert parsed_args.model == "SSD" assert parsed_args.backbone == "torchvision.resnet18" diff --git a/tests/unit/core/data/adapter/test_init.py b/tests/unit/core/data/adapter/test_init.py index 031a8b2ebd6..b647a53715f 100644 --- a/tests/unit/core/data/adapter/test_init.py +++ b/tests/unit/core/data/adapter/test_init.py @@ -13,7 +13,7 @@ @e2e_pytest_unit @pytest.mark.parametrize("task_name", TASK_NAME_TO_TASK_TYPE.keys()) -@pytest.mark.parametrize("train_type", [TrainType.INCREMENTAL.value]) +@pytest.mark.parametrize("train_type", [TrainType.Incremental.value]) def test_get_dataset_adapter_incremental(task_name, train_type): root_path = os.getcwd() task_type = TASK_NAME_TO_TASK_TYPE[task_name] @@ -35,7 +35,7 @@ def test_get_dataset_adapter_incremental(task_name, train_type): @e2e_pytest_unit @pytest.mark.parametrize("task_name", ["classification"]) -@pytest.mark.parametrize("train_type", [TrainType.SELFSUPERVISED.value]) +@pytest.mark.parametrize("train_type", [TrainType.Selfsupervised.value]) def test_get_dataset_adapter_selfsl_classification(task_name, train_type): root_path = os.getcwd() task_type = TASK_NAME_TO_TASK_TYPE[task_name] @@ -56,7 +56,7 @@ def test_get_dataset_adapter_selfsl_classification(task_name, train_type): @e2e_pytest_unit @pytest.mark.parametrize("task_name", ["segmentation"]) -@pytest.mark.parametrize("train_type", [TrainType.SELFSUPERVISED.value]) +@pytest.mark.parametrize("train_type", [TrainType.Selfsupervised.value]) def test_get_dataset_adapter_selfsl_segmentation(task_name, train_type): root_path = os.getcwd() task_type = TASK_NAME_TO_TASK_TYPE[task_name] From 0e2610664ef7632a482ec88ad75a0ef076d16b7e Mon Sep 17 00:00:00 2001 From: Soobee Lee Date: Thu, 23 Mar 2023 17:07:32 +0900 Subject: [PATCH 20/34] Move all hooks in MPA into OTX common mmcv adapter (#1922) --- .../guide/reference/mpa/modules/hooks.rst | 58 -- .../guide/reference/mpa/modules/index.rst | 1 - .../models/classifiers/sam_classifier.py | 2 +- .../common/adapters/mmcv/hooks/__init__.py | 74 +- .../mmcv/hooks/adaptive_training_hook.py} | 15 +- .../common/adapters/mmcv/hooks/base_hook.py | 750 ------------------ .../common/adapters/mmcv/hooks/cancel_hook.py | 89 +++ .../adapters/mmcv/hooks/checkpoint_hook.py | 36 +- .../mmcv}/hooks/composed_dataloaders_hook.py | 20 +- .../mmcv/hooks/custom_model_ema_hook.py | 113 +++ .../mmcv/hooks/dual_model_ema_hook.py} | 34 +- .../mmcv}/hooks/early_stopping_hook.py | 94 ++- .../common/adapters/mmcv/hooks/eval_hook.py | 2 +- .../adapters/mmcv/hooks/force_train_hook.py | 38 + .../mmcv/hooks/fp16_sam_optimizer_hook.py | 2 +- .../adapters/mmcv/hooks/ib_loss_hook.py | 2 +- .../common/adapters/mmcv/hooks/logger_hook.py | 87 ++ .../adapters/mmcv}/hooks/model_ema_v2_hook.py | 21 +- .../adapters/mmcv/hooks/no_bias_decay_hook.py | 2 +- .../adapters/mmcv/hooks/progress_hook.py | 101 +++ .../mmcv/hooks/recording_forward_hook.py} | 70 +- .../adapters/mmcv/hooks/sam_optimizer_hook.py | 2 +- .../adapters/mmcv/hooks/semisl_cls_hook.py | 2 +- .../adapters/mmcv}/hooks/task_adapt_hook.py | 6 +- .../mmcv/hooks/two_crop_transform_hook.py | 92 +++ .../mmcv}/hooks/unbiased_teacher_hook.py | 11 +- .../adapters/mmcv/hooks/workflow_hook.py} | 
55 +- .../common/adapters/mmcv/nncf/patches.py | 6 +- otx/algorithms/common/tasks/training_base.py | 4 +- .../mmdet/hooks/det_saliency_map_hook.py | 4 +- .../models/detectors/custom_atss_detector.py | 4 +- .../detectors/custom_maskrcnn_detector.py | 4 +- .../detectors/custom_single_stage_detector.py | 4 +- .../models/detectors/custom_yolox_detector.py | 4 +- .../models/segmentors/otx_encoder_decoder.py | 2 +- otx/mpa/builder.py | 6 +- otx/mpa/cls/explainer.py | 10 +- otx/mpa/cls/inferrer.py | 8 +- otx/mpa/det/__init__.py | 4 +- otx/mpa/det/explainer.py | 8 +- otx/mpa/det/inferrer.py | 8 +- otx/mpa/modules/hooks/__init__.py | 18 - .../modules/hooks/cancel_interface_hook.py | 40 - otx/mpa/modules/hooks/logger_replace_hook.py | 24 - .../modules/hooks/save_initial_weight_hook.py | 19 - otx/mpa/seg/__init__.py | 2 +- otx/mpa/seg/inferrer.py | 4 +- .../api/xai/test_api_xai_validity.py | 4 +- .../hooks/test_adaptive_training_hooks.py} | 10 +- .../mmcv/hooks/test_cancel_interface_hook.py} | 4 +- .../mmcv/hooks/test_checkpoint_hook.py} | 2 +- .../hooks/test_composed_dataloader_hook.py} | 6 +- .../mmcv/hooks/test_early_stopping_hook.py} | 26 +- .../adapters/mmcv/hooks/test_ema_v2_hook.py} | 7 +- .../adapters/mmcv/hooks/test_eval_hook.py} | 0 .../hooks/test_fp16_sam_optimizer_hook.py} | 2 +- .../adapters/mmcv/hooks/test_ib_loss_hook.py} | 2 +- .../mmcv/hooks/test_logger_replace_hook.py} | 4 +- .../mmcv/hooks/test_model_ema_hook.py} | 7 +- .../mmcv/hooks/test_no_bias_decay_hook.py} | 0 .../hooks/test_recording_forward_hooks.py} | 4 +- .../hooks/test_save_initial_weight_hook.py} | 4 +- .../mmcv/hooks/test_semisl_cls_hook.py} | 2 +- .../mmcv/hooks/test_task_adapt_hook.py} | 4 +- .../mmcv/hooks/test_unbiased_teacher_hook.py} | 6 +- .../mmcv/hooks/test_workflow_hooks.py} | 4 +- tests/unit/mpa/cls/test_cls_explanier.py | 4 +- 67 files changed, 928 insertions(+), 1136 deletions(-) delete mode 100644 docs/source/guide/reference/mpa/modules/hooks.rst rename otx/{mpa/modules/hooks/adaptive_training_hooks.py => algorithms/common/adapters/mmcv/hooks/adaptive_training_hook.py} (92%) delete mode 100644 otx/algorithms/common/adapters/mmcv/hooks/base_hook.py create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/cancel_hook.py rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/composed_dataloaders_hook.py (65%) create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/custom_model_ema_hook.py rename otx/{mpa/modules/hooks/model_ema_hook.py => algorithms/common/adapters/mmcv/hooks/dual_model_ema_hook.py} (84%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/early_stopping_hook.py (85%) create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/force_train_hook.py create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/logger_hook.py rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/model_ema_v2_hook.py (92%) create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/progress_hook.py rename otx/{mpa/modules/hooks/recording_forward_hooks.py => algorithms/common/adapters/mmcv/hooks/recording_forward_hook.py} (80%) rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/task_adapt_hook.py (92%) create mode 100644 otx/algorithms/common/adapters/mmcv/hooks/two_crop_transform_hook.py rename otx/{mpa/modules => algorithms/common/adapters/mmcv}/hooks/unbiased_teacher_hook.py (86%) rename otx/{mpa/modules/hooks/workflow_hooks.py => algorithms/common/adapters/mmcv/hooks/workflow_hook.py} (69%) delete mode 100644 otx/mpa/modules/hooks/__init__.py delete 
mode 100644 otx/mpa/modules/hooks/cancel_interface_hook.py delete mode 100644 otx/mpa/modules/hooks/logger_replace_hook.py delete mode 100644 otx/mpa/modules/hooks/save_initial_weight_hook.py rename tests/unit/{mpa/modules/hooks/test_mpa_adaptive_training_hooks.py => algorithms/common/adapters/mmcv/hooks/test_adaptive_training_hooks.py} (89%) rename tests/unit/{mpa/modules/hooks/test_mpa_cancel_interface_hook.py => algorithms/common/adapters/mmcv/hooks/test_cancel_interface_hook.py} (89%) rename tests/unit/{mpa/modules/hooks/test_mpa_checkpoint_hook.py => algorithms/common/adapters/mmcv/hooks/test_checkpoint_hook.py} (95%) rename tests/unit/{mpa/modules/hooks/test_mpa_composed_dataloader_hook.py => algorithms/common/adapters/mmcv/hooks/test_composed_dataloader_hook.py} (65%) rename tests/unit/{mpa/modules/hooks/test_mpa_early_stopping_hook.py => algorithms/common/adapters/mmcv/hooks/test_early_stopping_hook.py} (90%) rename tests/unit/{mpa/modules/hooks/test_mpa_ema_v2_hook.py => algorithms/common/adapters/mmcv/hooks/test_ema_v2_hook.py} (75%) rename tests/unit/{mpa/modules/hooks/test_mpa_eval_hook.py => algorithms/common/adapters/mmcv/hooks/test_eval_hook.py} (100%) rename tests/unit/{mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py => algorithms/common/adapters/mmcv/hooks/test_fp16_sam_optimizer_hook.py} (85%) rename tests/unit/{mpa/modules/hooks/test_mpa_ib_loss_hook.py => algorithms/common/adapters/mmcv/hooks/test_ib_loss_hook.py} (85%) rename tests/unit/{mpa/modules/hooks/test_mpa_logger_replace_hook.py => algorithms/common/adapters/mmcv/hooks/test_logger_replace_hook.py} (72%) rename tests/unit/{mpa/modules/hooks/test_mpa_model_ema_hook.py => algorithms/common/adapters/mmcv/hooks/test_model_ema_hook.py} (94%) rename tests/unit/{mpa/modules/hooks/test_mpa_no_bias_decay_hook.py => algorithms/common/adapters/mmcv/hooks/test_no_bias_decay_hook.py} (100%) rename tests/unit/{mpa/modules/hooks/test_mpa_recording_forward_hooks.py => algorithms/common/adapters/mmcv/hooks/test_recording_forward_hooks.py} (95%) rename tests/unit/{mpa/modules/hooks/test_mpa_save_initial_weight_hook.py => algorithms/common/adapters/mmcv/hooks/test_save_initial_weight_hook.py} (70%) rename tests/unit/{mpa/modules/hooks/test_mpa_semisl_cls_hook.py => algorithms/common/adapters/mmcv/hooks/test_semisl_cls_hook.py} (85%) rename tests/unit/{mpa/modules/hooks/test_mpa_task_adapt_hook.py => algorithms/common/adapters/mmcv/hooks/test_task_adapt_hook.py} (69%) rename tests/unit/{mpa/modules/hooks/test_mpa_unbiased_teacher_hook.py => algorithms/common/adapters/mmcv/hooks/test_unbiased_teacher_hook.py} (66%) rename tests/unit/{mpa/modules/hooks/test_mpa_workflow_hooks.py => algorithms/common/adapters/mmcv/hooks/test_workflow_hooks.py} (89%) diff --git a/docs/source/guide/reference/mpa/modules/hooks.rst b/docs/source/guide/reference/mpa/modules/hooks.rst deleted file mode 100644 index 127fb30a2ac..00000000000 --- a/docs/source/guide/reference/mpa/modules/hooks.rst +++ /dev/null @@ -1,58 +0,0 @@ -Hooks -^^^^^^^ - -.. toctree:: - :maxdepth: 3 - :caption: Contents: - -.. automodule:: otx.mpa.modules.hooks - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.adaptive_training_hooks - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.cancel_interface_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.composed_dataloaders_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.early_stopping_hook - :members: - :undoc-members: - -.. 
automodule:: otx.mpa.modules.hooks.logger_replace_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.model_ema_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.model_ema_v2_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.recording_forward_hooks - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.save_initial_weight_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.task_adapt_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.unbiased_teacher_hook - :members: - :undoc-members: - -.. automodule:: otx.mpa.modules.hooks.workflow_hooks - :members: - :undoc-members: \ No newline at end of file diff --git a/docs/source/guide/reference/mpa/modules/index.rst b/docs/source/guide/reference/mpa/modules/index.rst index 731dfbb571f..c5664b1c06b 100644 --- a/docs/source/guide/reference/mpa/modules/index.rst +++ b/docs/source/guide/reference/mpa/modules/index.rst @@ -6,6 +6,5 @@ Modules models/index datasets - hooks ov/index utils diff --git a/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py b/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py index 9df002d9bc5..3324f858a1b 100644 --- a/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py +++ b/otx/algorithms/classification/adapters/mmcls/models/classifiers/sam_classifier.py @@ -275,7 +275,7 @@ def extract_feat(self, img): if is_mmdeploy_enabled(): from mmdeploy.core import FUNCTION_REWRITER - from otx.mpa.modules.hooks.recording_forward_hooks import ( # pylint: disable=ungrouped-imports + from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( # pylint: disable=ungrouped-imports FeatureVectorHook, ReciproCAMHook, ) diff --git a/otx/algorithms/common/adapters/mmcv/hooks/__init__.py b/otx/algorithms/common/adapters/mmcv/hooks/__init__.py index 08adf430c70..4c48242bc35 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/__init__.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/__init__.py @@ -1,6 +1,6 @@ """Adapters for mmcv support.""" -# Copyright (C) 2021-2023 Intel Corporation +# Copyright (C) 2022-2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -14,40 +14,76 @@ # See the License for the specific language governing permissions # and limitations under the License. 
-from .base_hook import ( - CancelTrainingHook, - EarlyStoppingHook, - EMAMomentumUpdateHook, +from .adaptive_training_hook import AdaptiveTrainSchedulingHook +from .cancel_hook import CancelInterfaceHook, CancelTrainingHook +from .checkpoint_hook import ( + CheckpointHookWithValResults, EnsureCorrectBestCheckpointHook, - OTXLoggerHook, - OTXProgressHook, + SaveInitialWeightHook, +) +from .composed_dataloaders_hook import ComposedDataLoadersHook +from .custom_model_ema_hook import CustomModelEMAHook, EMAMomentumUpdateHook +from .dual_model_ema_hook import DualModelEMAHook +from .early_stopping_hook import ( + EarlyStoppingHook, + LazyEarlyStoppingHook, ReduceLROnPlateauLrUpdaterHook, StopLossNanTrainingHook, - TwoCropTransformHook, ) -from .checkpoint_hook import CheckpointHookWithValResults -from .eval_hook import CustomEvalHook +from .eval_hook import CustomEvalHook, DistCustomEvalHook +from .force_train_hook import ForceTrainModeHook from .fp16_sam_optimizer_hook import Fp16SAMOptimizerHook from .ib_loss_hook import IBLossHook +from .logger_hook import LoggerReplaceHook, OTXLoggerHook +from .model_ema_v2_hook import ModelEmaV2Hook from .no_bias_decay_hook import NoBiasDecayHook +from .progress_hook import OTXProgressHook +from .recording_forward_hook import ( + ActivationMapHook, + BaseRecordingForwardHook, + EigenCamHook, + FeatureVectorHook, +) from .sam_optimizer_hook import SAMOptimizerHook from .semisl_cls_hook import SemiSLClsHook +from .task_adapt_hook import TaskAdaptHook +from .two_crop_transform_hook import TwoCropTransformHook +from .unbiased_teacher_hook import UnbiasedTeacherHook +from .workflow_hook import WorkflowHook __all__ = [ + "AdaptiveTrainSchedulingHook", + "CancelInterfaceHook", + "CancelTrainingHook", "CheckpointHookWithValResults", + "EnsureCorrectBestCheckpointHook", + "ComposedDataLoadersHook", "CustomEvalHook", + "DistCustomEvalHook", + "EarlyStoppingHook", + "LazyEarlyStoppingHook", + "ReduceLROnPlateauLrUpdaterHook", + "EMAMomentumUpdateHook", + "ForceTrainModeHook", + "Fp16SAMOptimizerHook", + "StopLossNanTrainingHook", "IBLossHook", + "OTXLoggerHook", + "LoggerReplaceHook", + "CustomModelEMAHook", + "DualModelEMAHook", + "ModelEmaV2Hook", "NoBiasDecayHook", + "OTXProgressHook", + "BaseRecordingForwardHook", + "EigenCamHook", + "ActivationMapHook", + "FeatureVectorHook", "SAMOptimizerHook", - "Fp16SAMOptimizerHook", + "SaveInitialWeightHook", "SemiSLClsHook", - "CancelTrainingHook", - "OTXLoggerHook", - "OTXProgressHook", - "EarlyStoppingHook", - "ReduceLROnPlateauLrUpdaterHook", - "EnsureCorrectBestCheckpointHook", - "StopLossNanTrainingHook", - "EMAMomentumUpdateHook", + "TaskAdaptHook", "TwoCropTransformHook", + "UnbiasedTeacherHook", + "WorkflowHook", ] diff --git a/otx/mpa/modules/hooks/adaptive_training_hooks.py b/otx/algorithms/common/adapters/mmcv/hooks/adaptive_training_hook.py similarity index 92% rename from otx/mpa/modules/hooks/adaptive_training_hooks.py rename to otx/algorithms/common/adapters/mmcv/hooks/adaptive_training_hook.py index cbd31b095f4..862ffe21f7c 100644 --- a/otx/mpa/modules/hooks/adaptive_training_hooks.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/adaptive_training_hook.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Adaptive training schedule hook.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,15 +9,19 @@ from mmcv.runner.hooks.checkpoint import CheckpointHook from mmcv.runner.hooks.evaluation import EvalHook -from otx.mpa.modules.hooks.early_stopping_hook import 
EarlyStoppingHook +from otx.algorithms.common.adapters.mmcv.hooks.early_stopping_hook import ( + EarlyStoppingHook, +) from otx.mpa.utils.logger import get_logger logger = get_logger() +# pylint: disable=too-many-arguments, too-many-instance-attributes + @HOOKS.register_module() class AdaptiveTrainSchedulingHook(Hook): - """Adaptive Training Scheduling Hook + """Adaptive Training Scheduling Hook. Depending on the size of iteration per epoch, adaptively update the validation interval and related values. @@ -58,6 +63,7 @@ def __init__( self._original_interval = None def before_run(self, runner): + """Before run.""" if self.enable_eval_before_run: hook = self.get_evalhook(runner) if hook is None: @@ -68,6 +74,7 @@ def before_run(self, runner): hook.start = 0 def before_train_iter(self, runner): + """Before train iter.""" if self.enable_eval_before_run and self._original_interval is not None: hook = self.get_evalhook(runner) hook.interval = self._original_interval @@ -110,10 +117,12 @@ def before_train_iter(self, runner): self._initialized = True def get_adaptive_interval(self, iter_per_epoch): + """Get adaptive interval.""" adaptive_interval = max(round(math.exp(self.decay * iter_per_epoch) * self.max_interval), 1) return adaptive_interval def get_evalhook(self, runner): + """Get evaluation hook.""" target_hook = None for hook in runner.hooks: if isinstance(hook, EvalHook): diff --git a/otx/algorithms/common/adapters/mmcv/hooks/base_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/base_hook.py deleted file mode 100644 index 1a13c5e68e0..00000000000 --- a/otx/algorithms/common/adapters/mmcv/hooks/base_hook.py +++ /dev/null @@ -1,750 +0,0 @@ -"""Collections of hooks for common OTX algorithms.""" - -# Copyright (C) 2021-2022 Intel Corporation -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, -# software distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions -# and limitations under the License. - -import math -import os -from collections import defaultdict -from math import cos, inf, isnan, pi -from typing import Any, Dict, List, Optional - -from mmcv.parallel import is_module_wrapper -from mmcv.runner import BaseRunner, EpochBasedRunner -from mmcv.runner.dist_utils import master_only -from mmcv.runner.hooks import HOOKS, Hook, LoggerHook, LrUpdaterHook -from mmcv.utils import print_log - -from otx.api.usecases.reporting.time_monitor_callback import TimeMonitorCallback -from otx.api.utils.argument_checks import check_input_parameters_type -from otx.mpa.utils.logger import get_logger - -logger = get_logger() - - -# pylint: disable=too-many-instance-attributes, protected-access, too-many-arguments, unused-argument -@HOOKS.register_module() -class CancelTrainingHook(Hook): - """CancelTrainingHook for Training Stopping.""" - - @check_input_parameters_type() - def __init__(self, interval: int = 5): - """Periodically check whether whether a stop signal is sent to the runner during model training. - - Every 'check_interval' iterations, the work_dir for the runner is checked to see if a file '.stop_training' - is present. If it is, training is stopped. 
- - :param interval: Period for checking for stop signal, given in iterations. - - """ - self.interval = interval - - @staticmethod - def _check_for_stop_signal(runner: BaseRunner): - """Log _check_for_stop_signal for CancelTrainingHook.""" - work_dir = runner.work_dir - stop_filepath = os.path.join(work_dir, ".stop_training") - if os.path.exists(stop_filepath): - if isinstance(runner, EpochBasedRunner): - epoch = runner.epoch - runner._max_epochs = epoch # Force runner to stop by pretending it has reached it's max_epoch - runner.should_stop = True # Set this flag to true to stop the current training epoch - os.remove(stop_filepath) - - @check_input_parameters_type() - def after_train_iter(self, runner: BaseRunner): - """Log after_train_iter for CancelTrainingHook.""" - if not self.every_n_iters(runner, self.interval): - return - self._check_for_stop_signal(runner) - - -@HOOKS.register_module() -class EnsureCorrectBestCheckpointHook(Hook): - """EnsureCorrectBestCheckpointHook. - - This hook makes sure that the 'best_mAP' checkpoint points properly to the best model, even if the best model is - created in the last epoch. - """ - - @check_input_parameters_type() - def after_run(self, runner: BaseRunner): - """Called after train epoch hooks.""" - runner.call_hook("after_train_epoch") - - -@HOOKS.register_module() -class OTXLoggerHook(LoggerHook): - """OTXLoggerHook for Logging.""" - - class Curve: - """Curve with x (epochs) & y (scores).""" - - def __init__(self): - self.x = [] - self.y = [] - - def __repr__(self): - """Repr function.""" - points = [] - for x, y in zip(self.x, self.y): - points.append(f"({x},{y})") - return "curve[" + ",".join(points) + "]" - - @check_input_parameters_type() - def __init__( - self, - curves: Optional[Dict[Any, Curve]] = None, - interval: int = 10, - ignore_last: bool = True, - reset_flag: bool = True, - by_epoch: bool = True, - ): - super().__init__(interval, ignore_last, reset_flag, by_epoch) - self.curves = curves if curves is not None else defaultdict(self.Curve) - - @master_only - @check_input_parameters_type() - def log(self, runner: BaseRunner): - """Log function for OTXLoggerHook.""" - tags = self.get_loggable_tags(runner, allow_text=False, tags_to_skip=()) - if runner.max_epochs is not None: - normalized_iter = self.get_iter(runner) / runner.max_iters * runner.max_epochs - else: - normalized_iter = self.get_iter(runner) - for tag, value in tags.items(): - curve = self.curves[tag] - # Remove duplicates. - if len(curve.x) > 0 and curve.x[-1] == normalized_iter: - curve.x.pop() - curve.y.pop() - curve.x.append(normalized_iter) - curve.y.append(value) - - @check_input_parameters_type() - def after_train_epoch(self, runner: BaseRunner): - """Called after_train_epoch in OTXLoggerHook.""" - # Iteration counter is increased right after the last iteration in the epoch, - # temporarily decrease it back. 
- runner._iter -= 1 - super().after_train_epoch(runner) - runner._iter += 1 - - -@HOOKS.register_module() -class OTXProgressHook(Hook): - """OTXProgressHook for getting progress.""" - - @check_input_parameters_type() - def __init__(self, time_monitor: TimeMonitorCallback, verbose: bool = False): - super().__init__() - self.time_monitor = time_monitor - self.verbose = verbose - self.print_threshold = 1 - - @check_input_parameters_type() - def before_run(self, runner: BaseRunner): - """Called before_run in OTXProgressHook.""" - total_epochs = runner.max_epochs if runner.max_epochs is not None else 1 - self.time_monitor.total_epochs = total_epochs - self.time_monitor.train_steps = runner.max_iters // total_epochs if total_epochs else 1 - self.time_monitor.steps_per_epoch = self.time_monitor.train_steps + self.time_monitor.val_steps - self.time_monitor.total_steps = max(math.ceil(self.time_monitor.steps_per_epoch * total_epochs), 1) - self.time_monitor.current_step = 0 - self.time_monitor.current_epoch = 0 - self.time_monitor.on_train_begin() - - @check_input_parameters_type() - def before_epoch(self, runner: BaseRunner): - """Called before_epoch in OTXProgressHook.""" - self.time_monitor.on_epoch_begin(runner.epoch) - - @check_input_parameters_type() - def after_epoch(self, runner: BaseRunner): - """Called after_epoch in OTXProgressHook.""" - # put some runner's training status to use on the other hooks - runner.log_buffer.output["current_iters"] = runner.iter - self.time_monitor.on_epoch_end(runner.epoch, runner.log_buffer.output) - - @check_input_parameters_type() - def before_iter(self, runner: BaseRunner): - """Called before_iter in OTXProgressHook.""" - self.time_monitor.on_train_batch_begin(1) - - @check_input_parameters_type() - def after_iter(self, runner: BaseRunner): - """Called after_iter in OTXProgressHook.""" - # put some runner's training status to use on the other hooks - runner.log_buffer.output["current_iters"] = runner.iter - self.time_monitor.on_train_batch_end(1) - if self.verbose: - progress = self.progress - if progress >= self.print_threshold: - logger.warning(f"training progress {progress:.0f}%") - self.print_threshold = (progress + 10) // 10 * 10 - - @check_input_parameters_type() - def before_val_iter(self, runner: BaseRunner): - """Called before_val_iter in OTXProgressHook.""" - self.time_monitor.on_test_batch_begin(1, logger) - - @check_input_parameters_type() - def after_val_iter(self, runner: BaseRunner): - """Called after_val_iter in OTXProgressHook.""" - self.time_monitor.on_test_batch_end(1, logger) - - @check_input_parameters_type() - def after_run(self, runner: BaseRunner): - """Called after_run in OTXProgressHook.""" - self.time_monitor.on_train_end(1) - if self.time_monitor.update_progress_callback: - self.time_monitor.update_progress_callback(int(self.time_monitor.get_progress())) - - @property - def progress(self): - """Getting Progress from time monitor.""" - return self.time_monitor.get_progress() - - -@HOOKS.register_module() -class EarlyStoppingHook(Hook): - """Cancel training when a metric has stopped improving. - - Early Stopping hook monitors a metric quantity and if no improvement is seen for a ‘patience’ - number of epochs, the training is cancelled. - - :param interval: the number of intervals for checking early stop. The interval number should be - the same as the evaluation interval - the `interval` variable set in - `evaluation` config. - :param metric: the metric name to be monitored - :param rule: greater or less. 
In `less` mode, training will stop when the metric has stopped - decreasing and in `greater` mode it will stop when the metric has stopped - increasing. - :param patience: Number of epochs with no improvement after which the training will be reduced. - For example, if patience = 2, then we will ignore the first 2 epochs with no - improvement, and will only cancel the training after the 3rd epoch if the - metric still hasn’t improved then - :param iteration_patience: Number of iterations must be trained after the last improvement - before training stops. The same as patience but the training - continues if the number of iteration is lower than iteration_patience - This variable makes sure a model is trained enough for some - iterations after the last improvement before stopping. - :param min_delta: Minimal decay applied to lr. If the difference between new and old lr is - smaller than eps, the update is ignored - """ - - rule_map = {"greater": lambda x, y: x > y, "less": lambda x, y: x < y} - init_value_map = {"greater": -inf, "less": inf} - greater_keys = [ - "acc", - "top", - "AR@", - "auc", - "precision", - "mAP", - "mDice", - "mIoU", - "mAcc", - "aAcc", - ] - less_keys = ["loss"] - - @check_input_parameters_type() - def __init__( - self, - interval: int, - metric: str = "bbox_mAP", - rule: Optional[str] = None, - patience: int = 5, - iteration_patience: int = 500, - min_delta: float = 0.0, - ): - super().__init__() - self.patience = patience - self.iteration_patience = iteration_patience - self.interval = interval - self.min_delta = min_delta - self._init_rule(rule, metric) - - self.min_delta *= 1 if self.rule == "greater" else -1 - self.last_iter = 0 - self.wait_count = 0 - self.by_epoch = True - self.warmup_iters = 0 - self.best_score = self.init_value_map[self.rule] - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Here is the rule to determine which rule is used for key indicator - when the rule is not specific: - 1. If the key indicator is in ``self.greater_keys``, the rule will be - specified as 'greater'. - 2. Or if the key indicator is in ``self.less_keys``, the rule will be - specified as 'less'. - 3. Or if the key indicator is equal to the substring in any one item - in ``self.greater_keys``, the rule will be specified as 'greater'. - 4. Or if the key indicator is equal to the substring in any one item - in ``self.less_keys``, the rule will be specified as 'less'. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. - """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f"rule must be greater, less or None, " f"but got {rule}.") - - if rule is None: - if key_indicator in self.greater_keys or any(key in key_indicator for key in self.greater_keys): - rule = "greater" - elif key_indicator in self.less_keys or any(key in key_indicator for key in self.less_keys): - rule = "less" - else: - raise ValueError( - f"Cannot infer the rule for key " f"{key_indicator}, thus a specific rule " f"must be specified." 
- ) - self.rule = rule - self.key_indicator = key_indicator - self.compare_func = self.rule_map[self.rule] - - @check_input_parameters_type() - def before_run(self, runner: BaseRunner): - """Called before_run in EarlyStoppingHook.""" - self.by_epoch = runner.max_epochs is not None - for hook in runner.hooks: - if isinstance(hook, LrUpdaterHook): - self.warmup_iters = hook.warmup_iters - break - - @check_input_parameters_type() - def after_train_iter(self, runner: BaseRunner): - """Called after every training iter to evaluate the results.""" - if not self.by_epoch: - self._do_check_stopping(runner) - - @check_input_parameters_type() - def after_train_epoch(self, runner: BaseRunner): - """Called after every training epoch to evaluate the results.""" - if self.by_epoch: - self._do_check_stopping(runner) - - def _do_check_stopping(self, runner): - """Called _do_check_stopping in EarlyStoppingHook.""" - if not self._should_check_stopping(runner) or self.warmup_iters > runner.iter: - return - - if runner.rank == 0: - if self.key_indicator not in runner.log_buffer.output: - raise KeyError( - f"metric {self.key_indicator} does not exist in buffer. Please check " - f"{self.key_indicator} is cached in evaluation output buffer" - ) - - key_score = runner.log_buffer.output[self.key_indicator] - if self.compare_func(key_score - self.min_delta, self.best_score): - self.best_score = key_score - self.wait_count = 0 - self.last_iter = runner.iter - else: - self.wait_count += 1 - if self.wait_count >= self.patience: - if runner.iter - self.last_iter < self.iteration_patience: - print_log( - f"\nSkip early stopping. Accumulated iteration " - f"{runner.iter - self.last_iter} from the last " - f"improvement must be larger than {self.iteration_patience} to trigger " - f"Early Stopping.", - logger=runner.logger, - ) - return - stop_point = runner.epoch if self.by_epoch else runner.iter - print_log( - f"\nEarly Stopping at :{stop_point} with " f"best {self.key_indicator}: {self.best_score}", - logger=runner.logger, - ) - runner.should_stop = True - - def _should_check_stopping(self, runner): - """Called _should_check_stopping in EarlyStoppingHook.""" - check_time = self.every_n_epochs if self.by_epoch else self.every_n_iters - if not check_time(runner, self.interval): - # No evaluation during the interval. - return False - return True - - -@HOOKS.register_module(force=True) -class ReduceLROnPlateauLrUpdaterHook(LrUpdaterHook): - """Reduce learning rate when a metric has stopped improving. - - Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. - This scheduler reads a metrics quantity and if no improvement is seen for a ‘patience’ - number of epochs, the learning rate is reduced. - - :param min_lr: minimum learning rate. The lower bound of the desired learning rate. - :param interval: the number of intervals for checking the hook. The interval number should be - the same as the evaluation interval - the `interval` variable set in - `evaluation` config. - :param metric: the metric name to be monitored - :param rule: greater or less. In `less` mode, learning rate will be dropped if the metric has - stopped decreasing and in `greater` mode it will be dropped when the metric has - stopped increasing. - :param patience: Number of epochs with no improvement after which learning rate will be reduced. 
- For example, if patience = 2, then we will ignore the first 2 epochs with no - improvement, and will only drop LR after the 3rd epoch if the metric still - hasn’t improved then - :param iteration_patience: Number of iterations must be trained after the last improvement - before LR drops. The same as patience but the LR remains the same if - the number of iteration is lower than iteration_patience. This - variable makes sure a model is trained enough for some iterations - after the last improvement before dropping the LR. - :param factor: Factor to be multiply with the learning rate. - For example, new_lr = current_lr * factor - """ - - rule_map = {"greater": lambda x, y: x > y, "less": lambda x, y: x < y} - init_value_map = {"greater": -inf, "less": inf} - greater_keys = [ - "acc", - "top", - "AR@", - "auc", - "precision", - "mAP", - "mDice", - "mIoU", - "mAcc", - "aAcc", - ] - less_keys = ["loss"] - - @check_input_parameters_type() - def __init__( - self, - min_lr: float, - interval: int, - metric: str = "bbox_mAP", - rule: Optional[str] = None, - factor: float = 0.1, - patience: int = 3, - iteration_patience: int = 300, - **kwargs, - ): - super().__init__(**kwargs) - self.interval = interval - self.min_lr = min_lr - self.factor = factor - self.patience = patience - self.iteration_patience = iteration_patience - self.metric = metric - self.bad_count = 0 - self.last_iter = 0 - self.current_lr = -1.0 - self.base_lr = [] # type: List - self._init_rule(rule, metric) - self.best_score = self.init_value_map[self.rule] - - def _init_rule(self, rule, key_indicator): - """Initialize rule, key_indicator, comparison_func, and best score. - - Here is the rule to determine which rule is used for key indicator - when the rule is not specific: - 1. If the key indicator is in ``self.greater_keys``, the rule will be - specified as 'greater'. - 2. Or if the key indicator is in ``self.less_keys``, the rule will be - specified as 'less'. - 3. Or if the key indicator is equal to the substring in any one item - in ``self.greater_keys``, the rule will be specified as 'greater'. - 4. Or if the key indicator is equal to the substring in any one item - in ``self.less_keys``, the rule will be specified as 'less'. - - Args: - rule (str | None): Comparison rule for best score. - key_indicator (str | None): Key indicator to determine the - comparison rule. - """ - if rule not in self.rule_map and rule is not None: - raise KeyError(f"rule must be greater, less or None, " f"but got {rule}.") - - if rule is None: - if key_indicator in self.greater_keys or any(key in key_indicator for key in self.greater_keys): - rule = "greater" - elif key_indicator in self.less_keys or any(key in key_indicator for key in self.less_keys): - rule = "less" - else: - raise ValueError( - f"Cannot infer the rule for key " f"{key_indicator}, thus a specific rule " f"must be specified." 
- ) - self.rule = rule - self.key_indicator = key_indicator - self.compare_func = self.rule_map[self.rule] - - def _is_check_timing(self, runner: BaseRunner) -> bool: - """Check whether current epoch or iter is multiple of self.interval, skip during warmup interations.""" - check_time = self.after_each_n_epochs if self.by_epoch else self.after_each_n_iters - return check_time(runner, self.interval) and (self.warmup_iters <= runner.iter) - - def after_each_n_epochs(self, runner: BaseRunner, interval: int) -> bool: - """Check whether current epoch is a next epoch after multiples of interval.""" - return runner.epoch % interval == 0 if interval > 0 and runner.epoch != 0 else False - - def after_each_n_iters(self, runner: BaseRunner, interval: int) -> bool: - """Check whether current iter is a next iter after multiples of interval.""" - return runner.iter % interval == 0 if interval > 0 and runner.iter != 0 else False - - @check_input_parameters_type() - def get_lr(self, runner: BaseRunner, base_lr: float): - """Called get_lr in ReduceLROnPlateauLrUpdaterHook.""" - if self.current_lr < 0: - self.current_lr = base_lr - - if not self._is_check_timing(runner): - return self.current_lr - - if hasattr(runner, "all_metrics"): - score = runner.all_metrics.get(self.metric, 0.0) - else: - return self.current_lr - - if self.compare_func(score, self.best_score): - self.best_score = score - self.bad_count = 0 - self.last_iter = runner.iter - else: - self.bad_count += 1 - - print_log( - f"\nBest Score: {self.best_score}, Current Score: {score}, Patience: {self.patience} " - f"Count: {self.bad_count}", - logger=runner.logger, - ) - - if self.bad_count >= self.patience: - if runner.iter - self.last_iter < self.iteration_patience: - print_log( - f"\nSkip LR dropping. Accumulated iteration " - f"{runner.iter - self.last_iter} from the last " - f"improvement must be larger than {self.iteration_patience} to trigger " - f"LR dropping.", - logger=runner.logger, - ) - return self.current_lr - self.last_iter = runner.iter - self.bad_count = 0 - print_log( - f"\nDrop LR from: {self.current_lr}, to: " f"{max(self.current_lr * self.factor, self.min_lr)}", - logger=runner.logger, - ) - self.current_lr = max(self.current_lr * self.factor, self.min_lr) - return self.current_lr - - @check_input_parameters_type() - def before_run(self, runner: BaseRunner): - """Called before_run in ReduceLROnPlateauLrUpdaterHook.""" - # TODO: remove overloaded method after fixing the issue - # https://github.com/open-mmlab/mmdetection/issues/6572 - for group in runner.optimizer.param_groups: - group.setdefault("initial_lr", group["lr"]) - self.base_lr = [group["initial_lr"] for group in runner.optimizer.param_groups] - self.bad_count = 0 - self.last_iter = 0 - self.current_lr = -1.0 - self.best_score = self.init_value_map[self.rule] - - -@HOOKS.register_module(force=True) -class StopLossNanTrainingHook(Hook): - """StopLossNanTrainingHook.""" - - @check_input_parameters_type() - def after_train_iter(self, runner: BaseRunner): - """Called after_train_iter in StopLossNanTrainingHook.""" - if isnan(runner.outputs["loss"].item()): - logger.warning("Early Stopping since loss is NaN") - runner.should_stop = True - - -@HOOKS.register_module() -class EMAMomentumUpdateHook(Hook): - """Exponential moving average (EMA) momentum update hook for self-supervised methods. - - This hook includes momentum adjustment in self-supervised methods following: - m = 1 - ( 1- m_0) * (cos(pi * k / K) + 1) / 2, - k: current step, K: total steps. 
- - :param end_momentum: The final momentum coefficient for the target network, defaults to 1. - :param update_interval: Interval to update new momentum, defaults to 1. - :param by_epoch: Whether updating momentum by epoch or not, defaults to False. - """ - - def __init__(self, end_momentum: float = 1.0, update_interval: int = 1, by_epoch: bool = False, **kwargs): - self.by_epoch = by_epoch - self.end_momentum = end_momentum - self.update_interval = update_interval - - def before_train_epoch(self, runner: BaseRunner): - """Called before_train_epoch in EMAMomentumUpdateHook.""" - if not self.by_epoch: - return - - if is_module_wrapper(runner.model): - model = runner.model.module - else: - model = runner.model - - if not hasattr(model, "momentum"): - raise AttributeError('The model must have attribute "momentum".') - if not hasattr(model, "base_momentum"): - raise AttributeError('The model must have attribute "base_momentum".') - - if self.every_n_epochs(runner, self.update_interval): - cur_epoch = runner.epoch - max_epoch = runner.max_epochs - base_m = model.base_momentum - updated_m = ( - self.end_momentum - (self.end_momentum - base_m) * (cos(pi * cur_epoch / float(max_epoch)) + 1) / 2 - ) - model.momentum = updated_m - - def before_train_iter(self, runner: BaseRunner): - """Called before_train_iter in EMAMomentumUpdateHook.""" - if self.by_epoch: - return - - if is_module_wrapper(runner.model): - model = runner.model.module - else: - model = runner.model - - if not hasattr(model, "momentum"): - raise AttributeError('The model must have attribute "momentum".') - if not hasattr(model, "base_momentum"): - raise AttributeError('The model must have attribute "base_momentum".') - - if self.every_n_iters(runner, self.update_interval): - cur_iter = runner.iter - max_iter = runner.max_iters - base_m = model.base_momentum - updated_m = ( - self.end_momentum - (self.end_momentum - base_m) * (cos(pi * cur_iter / float(max_iter)) + 1) / 2 - ) - model.momentum = updated_m - - def after_train_iter(self, runner: BaseRunner): - """Called after_train_iter in EMAMomentumUpdateHook.""" - if self.every_n_iters(runner, self.update_interval): - if is_module_wrapper(runner.model): - runner.model.module.momentum_update() - else: - runner.model.momentum_update() - - -@HOOKS.register_module() -class ForceTrainModeHook(Hook): - """Force train mode for model. - - This is a workaround of a bug in EvalHook from MMCV. - If a model evaluation is enabled before training by setting 'start=0' in EvalHook, - EvalHook does not put a model in a training mode again after evaluation. - - This simple hook forces to put a model in a training mode before every train epoch - with the lowest priority. - """ - - def before_train_epoch(self, runner): - """Make sure to put a model in a training mode before train epoch.""" - runner.model.train() - - -@HOOKS.register_module() -class TwoCropTransformHook(Hook): - """TwoCropTransformHook with every specific interval. - - This hook decides whether using single pipeline or two pipelines - implemented in `TwoCropTransform` for the current iteration. - - Args: - interval (int): If `interval` == 1, both pipelines is used. - If `interval` > 1, the first pipeline is used and then - both pipelines are used every `interval`. Defaults to 1. - by_epoch (bool): (TODO) Use `interval` by epoch. Defaults to False. - """ - - @check_input_parameters_type() - def __init__(self, interval: int = 1, by_epoch: bool = False): - assert interval > 0, f"interval (={interval}) must be positive value." 
- if by_epoch: - raise NotImplementedError("by_epoch is not implemented.") - - self.interval = interval - self.cnt = 0 - - @check_input_parameters_type() - def _get_dataset(self, runner: BaseRunner): - """Get dataset to handle `is_both`.""" - if hasattr(runner.data_loader.dataset, "dataset"): - # for RepeatDataset - dataset = runner.data_loader.dataset.dataset - else: - dataset = runner.data_loader.dataset - - return dataset - - # pylint: disable=inconsistent-return-statements - @check_input_parameters_type() - def _find_two_crop_transform(self, transforms: List[object]): - """Find TwoCropTransform among transforms.""" - for transform in transforms: - if transform.__class__.__name__ == "TwoCropTransform": - return transform - - @check_input_parameters_type() - def before_train_epoch(self, runner: BaseRunner): - """Called before_train_epoch in TwoCropTransformHook.""" - # Always keep `TwoCropTransform` enabled. - if self.interval == 1: - return - - dataset = self._get_dataset(runner) - two_crop_transform = self._find_two_crop_transform(dataset.pipeline.transforms) - if self.cnt == self.interval - 1: - # start using both pipelines - two_crop_transform.is_both = True - else: - two_crop_transform.is_both = False - - @check_input_parameters_type() - def after_train_iter(self, runner: BaseRunner): - """Called after_train_iter in TwoCropTransformHook.""" - # Always keep `TwoCropTransform` enabled. - if self.interval == 1: - return - - if self.cnt < self.interval - 1: - # Instead of using `runner.every_n_iters` or `runner.every_n_inner_iters`, - # this condition is used to compare `self.cnt` with `self.interval` throughout the entire epochs. - self.cnt += 1 - - if self.cnt == self.interval - 1: - dataset = self._get_dataset(runner) - two_crop_transform = self._find_two_crop_transform(dataset.pipeline.transforms) - if not two_crop_transform.is_both: - # If `self.cnt` == `self.interval`-1, there are two cases, - # 1. `self.cnt` was updated in L709, so `is_both` must be on for the next iter. - # 2. if the current iter was already conducted, `is_both` must be off. - two_crop_transform.is_both = True - else: - two_crop_transform.is_both = False - self.cnt = 0 diff --git a/otx/algorithms/common/adapters/mmcv/hooks/cancel_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/cancel_hook.py new file mode 100644 index 00000000000..1c59dd28e82 --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/hooks/cancel_hook.py @@ -0,0 +1,89 @@ +"""Cancel hooks.""" +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# + + +import os +from typing import Callable + +from mmcv.runner import BaseRunner, EpochBasedRunner +from mmcv.runner.hooks import HOOKS, Hook + +from otx.api.utils.argument_checks import check_input_parameters_type +from otx.mpa.utils.logger import get_logger + +logger = get_logger() + + +# pylint: disable=too-many-instance-attributes, protected-access, too-many-arguments, unused-argument +@HOOKS.register_module() +class CancelTrainingHook(Hook): + """CancelTrainingHook for Training Stopping.""" + + @check_input_parameters_type() + def __init__(self, interval: int = 5): + """Periodically check whether whether a stop signal is sent to the runner during model training. + + Every 'check_interval' iterations, the work_dir for the runner is checked to see if a file '.stop_training' + is present. If it is, training is stopped. + + :param interval: Period for checking for stop signal, given in iterations. 
+ + """ + self.interval = interval + + @staticmethod + def _check_for_stop_signal(runner: BaseRunner): + """Log _check_for_stop_signal for CancelTrainingHook.""" + work_dir = runner.work_dir + stop_filepath = os.path.join(work_dir, ".stop_training") + if os.path.exists(stop_filepath): + if isinstance(runner, EpochBasedRunner): + epoch = runner.epoch + runner._max_epochs = epoch # Force runner to stop by pretending it has reached it's max_epoch + runner.should_stop = True # Set this flag to true to stop the current training epoch + os.remove(stop_filepath) + + @check_input_parameters_type() + def after_train_iter(self, runner: BaseRunner): + """Log after_train_iter for CancelTrainingHook.""" + if not self.every_n_iters(runner, self.interval): + return + self._check_for_stop_signal(runner) + + +@HOOKS.register_module() +class CancelInterfaceHook(Hook): + """Cancel interface. If called, running job will be terminated.""" + + def __init__(self, init_callback: Callable, interval=5): + self.on_init_callback = init_callback + self.runner = None + self.interval = interval + + def cancel(self): + """Cancel.""" + logger.info("CancelInterfaceHook.cancel() is called.") + if self.runner is None: + logger.warning("runner is not configured yet. ignored this request.") + return + + if self.runner.should_stop: + logger.warning("cancel already requested.") + return + + if isinstance(self.runner, EpochBasedRunner): + epoch = self.runner.epoch + self.runner._max_epochs = epoch # Force runner to stop by pretending it has reached it's max_epoch + self.runner.should_stop = True # Set this flag to true to stop the current training epoch + logger.info("requested stopping to the runner") + + def before_run(self, runner): + """Before run.""" + self.runner = runner + self.on_init_callback(self) + + def after_run(self, runner): + """After run.""" + self.runner = None diff --git a/otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py index 61f4e51472f..234cb2dd49a 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/checkpoint_hook.py @@ -1,5 +1,5 @@ """CheckpointHook with validation results for classification task.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -7,9 +7,12 @@ from pathlib import Path from typing import Optional +from mmcv.runner import BaseRunner from mmcv.runner.dist_utils import allreduce_params, master_only from mmcv.runner.hooks.hook import HOOKS, Hook +from otx.api.utils.argument_checks import check_input_parameters_type + @HOOKS.register_module() class CheckpointHookWithValResults(Hook): # pylint: disable=too-many-instance-attributes @@ -140,3 +143,34 @@ def after_train_iter(self, runner): allreduce_params(runner.model.buffers()) self._save_checkpoint(runner) runner.save_ckpt = False + + +@HOOKS.register_module() +class EnsureCorrectBestCheckpointHook(Hook): + """EnsureCorrectBestCheckpointHook. + + This hook makes sure that the 'best_mAP' checkpoint points properly to the best model, even if the best model is + created in the last epoch. 
+ """ + + @check_input_parameters_type() + def after_run(self, runner: BaseRunner): + """Called after train epoch hooks.""" + runner.call_hook("after_train_epoch") + + +@HOOKS.register_module() +class SaveInitialWeightHook(Hook): + """Save the initial weights before training.""" + + def __init__(self, save_path, file_name: str = "weights.pth", **kwargs): + self._save_path = save_path + self._file_name = file_name + self._args = kwargs + + def before_run(self, runner): + """Save initial the weights before training.""" + runner.logger.info("Saving weight before training") + runner.save_checkpoint( + self._save_path, filename_tmpl=self._file_name, save_optimizer=False, create_symlink=False, **self._args + ) diff --git a/otx/mpa/modules/hooks/composed_dataloaders_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/composed_dataloaders_hook.py similarity index 65% rename from otx/mpa/modules/hooks/composed_dataloaders_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/composed_dataloaders_hook.py index 64fe0ebdd2f..d327118fc0c 100644 --- a/otx/mpa/modules/hooks/composed_dataloaders_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/composed_dataloaders_hook.py @@ -1,8 +1,9 @@ -# Copyright (C) 2022 Intel Corporation +"""Composed dataloader hook.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from typing import Sequence, Union +from typing import List, Sequence, Union from mmcv.runner import HOOKS, Hook from torch.utils.data import DataLoader @@ -15,16 +16,22 @@ @HOOKS.register_module() class ComposedDataLoadersHook(Hook): + """Composed dataloader hook, which makes a composed dataloader which can combine multiple data loaders. + + Especially used for semi-supervised learning to aggregate a unlabeled dataloader and a labeled dataloader. + """ + def __init__( self, data_loaders: Union[Sequence[DataLoader], DataLoader], ): - self.data_loaders = [] + self.data_loaders = [] # type: List[DataLoader] self.composed_loader = None self.add_dataloaders(data_loaders) def add_dataloaders(self, data_loaders: Union[Sequence[DataLoader], DataLoader]): + """Create data_loaders to be added into composed dataloader.""" if isinstance(data_loaders, DataLoader): data_loaders = [data_loaders] else: @@ -34,12 +41,9 @@ def add_dataloaders(self, data_loaders: Union[Sequence[DataLoader], DataLoader]) self.composed_loader = None def before_epoch(self, runner): + """Create composedDL before running epoch.""" if self.composed_loader is None: - logger.info( - "Creating ComposedDL " - f"(runner's -> {runner.data_loader}, " - f"hook's -> {[i for i in self.data_loaders]})" - ) + logger.info("Creating ComposedDL " f"(runner's -> {runner.data_loader}, " f"hook's -> {self.data_loaders})") self.composed_loader = ComposedDL([runner.data_loader, *self.data_loaders]) # Per-epoch replacement: train-only loader -> train loader + additional loaders # (It's similar to local variable in epoch. Need to update every epoch...) 
diff --git a/otx/algorithms/common/adapters/mmcv/hooks/custom_model_ema_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/custom_model_ema_hook.py new file mode 100644 index 00000000000..03c3351ef7b --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/hooks/custom_model_ema_hook.py @@ -0,0 +1,113 @@ +"""EMA hooks.""" +# Copyright (C) 2023 Intel Corporation +# SPDX-License-Identifier: Apache-2.0 +# + +import math +from math import cos, pi + +from mmcv.parallel import is_module_wrapper +from mmcv.runner import HOOKS, BaseRunner, Hook +from mmcv.runner.hooks.ema import EMAHook + +from otx.mpa.utils.logger import get_logger + +logger = get_logger() + + +@HOOKS.register_module() +class CustomModelEMAHook(EMAHook): + """Custom EMAHook to update momentum for ema over training.""" + + def __init__(self, momentum=0.0002, epoch_momentum=0.0, interval=1, **kwargs): + super().__init__(momentum=momentum, interval=interval, **kwargs) + self.momentum = momentum + self.epoch_momentum = epoch_momentum + self.interval = interval + + def before_train_epoch(self, runner): + """Update the momentum.""" + if self.epoch_momentum > 0.0: + iter_per_epoch = len(runner.data_loader) + epoch_decay = 1 - self.epoch_momentum + iter_decay = math.pow(epoch_decay, self.interval / iter_per_epoch) + self.momentum = 1 - iter_decay + logger.info(f"Update EMA momentum: {self.momentum}") + self.epoch_momentum = 0.0 # disable re-compute + + super().before_train_epoch(runner) + + +@HOOKS.register_module() +class EMAMomentumUpdateHook(Hook): + """Exponential moving average (EMA) momentum update hook for self-supervised methods. + + This hook includes momentum adjustment in self-supervised methods following: + m = 1 - ( 1- m_0) * (cos(pi * k / K) + 1) / 2, + k: current step, K: total steps. + + :param end_momentum: The final momentum coefficient for the target network, defaults to 1. + :param update_interval: Interval to update new momentum, defaults to 1. + :param by_epoch: Whether updating momentum by epoch or not, defaults to False. 
+ """ + + def __init__(self, end_momentum: float = 1.0, update_interval: int = 1, by_epoch: bool = False): + self.by_epoch = by_epoch + self.end_momentum = end_momentum + self.update_interval = update_interval + + def before_train_epoch(self, runner: BaseRunner): + """Called before_train_epoch in EMAMomentumUpdateHook.""" + if not self.by_epoch: + return + + if is_module_wrapper(runner.model): + model = runner.model.module + else: + model = runner.model + + if not hasattr(model, "momentum"): + raise AttributeError('The model must have attribute "momentum".') + if not hasattr(model, "base_momentum"): + raise AttributeError('The model must have attribute "base_momentum".') + + if self.every_n_epochs(runner, self.update_interval): + cur_epoch = runner.epoch + max_epoch = runner.max_epochs + base_m = model.base_momentum + updated_m = ( + self.end_momentum - (self.end_momentum - base_m) * (cos(pi * cur_epoch / float(max_epoch)) + 1) / 2 + ) + model.momentum = updated_m + + def before_train_iter(self, runner: BaseRunner): + """Called before_train_iter in EMAMomentumUpdateHook.""" + if self.by_epoch: + return + + if is_module_wrapper(runner.model): + model = runner.model.module + else: + model = runner.model + + if not hasattr(model, "momentum"): + raise AttributeError('The model must have attribute "momentum".') + if not hasattr(model, "base_momentum"): + raise AttributeError('The model must have attribute "base_momentum".') + + if self.every_n_iters(runner, self.update_interval): + cur_iter = runner.iter + max_iter = runner.max_iters + base_m = model.base_momentum + updated_m = ( + self.end_momentum - (self.end_momentum - base_m) * (cos(pi * cur_iter / float(max_iter)) + 1) / 2 + ) + model.momentum = updated_m + + def after_train_iter(self, runner: BaseRunner): + """Called after_train_iter in EMAMomentumUpdateHook.""" + if self.every_n_iters(runner, self.update_interval): + if is_module_wrapper(runner.model): + runner.model.module.momentum_update() + else: + runner.model.momentum_update() diff --git a/otx/mpa/modules/hooks/model_ema_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/dual_model_ema_hook.py similarity index 84% rename from otx/mpa/modules/hooks/model_ema_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/dual_model_ema_hook.py index 8d96bd4b97c..be9af01a039 100644 --- a/otx/mpa/modules/hooks/model_ema_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/dual_model_ema_hook.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Dual model EMA hooks.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -7,16 +8,17 @@ import torch from mmcv.parallel import is_module_wrapper from mmcv.runner import HOOKS, Hook -from mmcv.runner.hooks.ema import EMAHook from otx.mpa.utils.logger import get_logger logger = get_logger() +# pylint: disable=too-many-instance-attributes + @HOOKS.register_module() class DualModelEMAHook(Hook): - """Generalized re-implementation of mmcv.runner.EMAHook + r"""Generalized re-implementation of mmcv.runner.EMAHook. 
Source model paramters would be exponentially averaged onto destination model pararmeters on given intervals @@ -55,8 +57,12 @@ def __init__( self.epoch_momentum = epoch_momentum self.interval = interval self.start_epoch = start_epoch + self.src_model = None + self.dst_model = None self.src_model_name = src_model_name self.dst_model_name = dst_model_name + self.src_params = None + self.dst_params = None self.enabled = False def before_run(self, runner): @@ -78,6 +84,7 @@ def before_run(self, runner): logger.info(f"model_s model_t diff: {self._diff_model()}") def before_train_epoch(self, runner): + """Momentum update.""" if self.epoch_momentum > 0.0: iter_per_epoch = len(runner.data_loader) epoch_decay = 1 - self.epoch_momentum @@ -104,6 +111,7 @@ def after_train_iter(self, runner): self._ema_model() def after_train_epoch(self, runner): + """Log difference between models if enabled.""" if self.enabled: logger.info(f"model_s model_t diff: {self._diff_model()}") @@ -141,23 +149,3 @@ def _diff_model(self): diff = ((src_param - dst_param) ** 2).sum() diff_sum += diff return diff_sum - - -@HOOKS.register_module() -class CustomModelEMAHook(EMAHook): - def __init__(self, momentum=0.0002, epoch_momentum=0.0, interval=1, **kwargs): - super().__init__(momentum=momentum, interval=interval, **kwargs) - self.momentum = momentum - self.epoch_momentum = epoch_momentum - self.interval = interval - - def before_train_epoch(self, runner): - if self.epoch_momentum > 0.0: - iter_per_epoch = len(runner.data_loader) - epoch_decay = 1 - self.epoch_momentum - iter_decay = math.pow(epoch_decay, self.interval / iter_per_epoch) - self.momentum = 1 - iter_decay - logger.info(f"Update EMA momentum: {self.momentum}") - self.epoch_momentum = 0.0 # disable re-compute - - super().before_train_epoch(runner) diff --git a/otx/mpa/modules/hooks/early_stopping_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/early_stopping_hook.py similarity index 85% rename from otx/mpa/modules/hooks/early_stopping_hook.py rename to otx/algorithms/common/adapters/mmcv/hooks/early_stopping_hook.py index 75bd2faeed7..c44cdd69f37 100644 --- a/otx/mpa/modules/hooks/early_stopping_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/early_stopping_hook.py @@ -1,24 +1,26 @@ -# Copyright (C) 2022 Intel Corporation +"""Early stopping hooks.""" +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # from math import inf, isnan -from typing import Optional +from typing import List, Optional from mmcv.runner import BaseRunner, LrUpdaterHook from mmcv.runner.hooks import HOOKS, Hook from mmcv.utils import print_log +from otx.api.utils.argument_checks import check_input_parameters_type from otx.mpa.utils.logger import get_logger logger = get_logger() +# pylint: disable=too-many-arguments, too-many-instance-attributes -# Temp copy from detection_task -# TODO: refactoing + +@HOOKS.register_module() class EarlyStoppingHook(Hook): - """ - Cancel training when a metric has stopped improving. + """Cancel training when a metric has stopped improving. Early Stopping hook monitors a metric quantity and if no improvement is seen for a ‘patience’ number of epochs, the training is cancelled. 
@@ -48,6 +50,7 @@ class EarlyStoppingHook(Hook): greater_keys = ["acc", "top", "AR@", "auc", "precision", "mAP", "mDice", "mIoU", "mAcc", "aAcc", "MHAcc"] less_keys = ["loss"] + @check_input_parameters_type() def __init__( self, interval: int, @@ -68,6 +71,8 @@ def __init__( self.last_iter = 0 self.wait_count = 0 self.best_score = self.init_value_map[self.rule] + self.warmup_iters = None + self.by_epoch = True def _init_rule(self, rule, key_indicator): """Initialize rule, key_indicator, comparison_func, and best score. @@ -104,8 +109,11 @@ def _init_rule(self, rule, key_indicator): self.key_indicator = key_indicator self.compare_func = self.rule_map[self.rule] + @check_input_parameters_type() def before_run(self, runner: BaseRunner): - self.by_epoch = False if runner.max_epochs is None else True + """Called before_run in EarlyStoppingHook.""" + if runner.max_epochs is None: + self.by_epoch = False for hook in runner.hooks: if isinstance(hook, LrUpdaterHook): self.warmup_iters = hook.warmup_iters @@ -113,17 +121,20 @@ def before_run(self, runner: BaseRunner): if getattr(self, "warmup_iters", None) is None: raise ValueError("LrUpdaterHook must be registered to runner.") + @check_input_parameters_type() def after_train_iter(self, runner: BaseRunner): """Called after every training iter to evaluate the results.""" if not self.by_epoch: self._do_check_stopping(runner) + @check_input_parameters_type() def after_train_epoch(self, runner: BaseRunner): """Called after every training epoch to evaluate the results.""" if self.by_epoch: self._do_check_stopping(runner) def _do_check_stopping(self, runner): + """Called _do_check_stopping in EarlyStoppingHook.""" if not self._should_check_stopping(runner) or self.warmup_iters > runner.iter: return @@ -159,6 +170,7 @@ def _do_check_stopping(self, runner): runner.should_stop = True def _should_check_stopping(self, runner): + """Called _should_check_stopping in EarlyStoppingHook.""" check_time = self.every_n_epochs if self.by_epoch else self.every_n_iters if not check_time(runner, self.interval): # No evaluation during the interval. @@ -168,6 +180,8 @@ def _should_check_stopping(self, runner): @HOOKS.register_module() class LazyEarlyStoppingHook(EarlyStoppingHook): + """Lazy early stop hook.""" + def __init__( self, interval: int, @@ -179,7 +193,7 @@ def __init__( start: int = None, ): self.start = start - super(LazyEarlyStoppingHook, self).__init__(interval, metric, rule, patience, iteration_patience, min_delta) + super().__init__(interval, metric, rule, patience, iteration_patience, min_delta) def _should_check_stopping(self, runner): if self.by_epoch: @@ -201,10 +215,9 @@ def _should_check_stopping(self, runner): return True -@HOOKS.register_module() +@HOOKS.register_module(force=True) class ReduceLROnPlateauLrUpdaterHook(LrUpdaterHook): - """ - Reduce learning rate when a metric has stopped improving. + """Reduce learning rate when a metric has stopped improving. Models often benefit from reducing the learning rate by a factor of 2-10 once learning stagnates. 
This scheduler reads a metrics quantity and if no improvement is seen for a ‘patience’ @@ -233,9 +246,10 @@ class ReduceLROnPlateauLrUpdaterHook(LrUpdaterHook): rule_map = {"greater": lambda x, y: x > y, "less": lambda x, y: x < y} init_value_map = {"greater": -inf, "less": inf} - greater_keys = ["acc", "top", "AR@", "auc", "precision", "mAP", "mDice", "mIoU", "mAcc", "aAcc"] + greater_keys = ["acc", "top", "AR@", "auc", "precision", "mAP", "mDice", "mIoU", "mAcc", "aAcc", "MHAcc"] less_keys = ["loss"] + @check_input_parameters_type() def __init__( self, min_lr: float, @@ -256,7 +270,8 @@ def __init__( self.metric = metric self.bad_count = 0 self.last_iter = 0 - self.current_lr = None + self.current_lr = -1.0 + self.base_lr = [] # type: List self._init_rule(rule, metric) self.best_score = self.init_value_map[self.rule] @@ -295,30 +310,33 @@ def _init_rule(self, rule, key_indicator): self.key_indicator = key_indicator self.compare_func = self.rule_map[self.rule] - def _should_check_stopping(self, runner): - check_time = self.every_n_epochs if self.by_epoch else self.every_n_iters - if not check_time(runner, self.interval): - # No evaluation during the interval. - return False - return True + def _is_check_timing(self, runner: BaseRunner) -> bool: + """Check whether current epoch or iter is multiple of self.interval, skip during warmup interations.""" + check_time = self.after_each_n_epochs if self.by_epoch else self.after_each_n_iters + return check_time(runner, self.interval) and (self.warmup_iters <= runner.iter) - def get_lr(self, runner: BaseRunner, base_lr: float): - if not self._should_check_stopping(runner) or self.warmup_iters > runner.iter: - return self.current_lr if self.current_lr is not None else base_lr + def after_each_n_epochs(self, runner: BaseRunner, interval: int) -> bool: + """Check whether current epoch is a next epoch after multiples of interval.""" + return runner.epoch % interval == 0 if interval > 0 and runner.epoch != 0 else False + + def after_each_n_iters(self, runner: BaseRunner, interval: int) -> bool: + """Check whether current iter is a next iter after multiples of interval.""" + return runner.iter % interval == 0 if interval > 0 and runner.iter != 0 else False - if self.current_lr is None: + @check_input_parameters_type() + def get_lr(self, runner: BaseRunner, base_lr: float): + """Called get_lr in ReduceLROnPlateauLrUpdaterHook.""" + if self.current_lr < 0: self.current_lr = base_lr - if hasattr(runner, self.metric): - score = getattr(runner, self.metric, 0.0) + if not self._is_check_timing(runner): + return self.current_lr + + if hasattr(runner, "all_metrics"): + score = runner.all_metrics.get(self.metric, 0.0) else: return self.current_lr - print_log( - f"\nBest Score: {self.best_score}, Current Score: {score}, Patience: {self.patience} " - f"Count: {self.bad_count}", - logger=runner.logger, - ) if self.compare_func(score, self.best_score): self.best_score = score self.bad_count = 0 @@ -326,6 +344,12 @@ def get_lr(self, runner: BaseRunner, base_lr: float): else: self.bad_count += 1 + print_log( + f"\nBest Score: {self.best_score}, Current Score: {score}, Patience: {self.patience} " + f"Count: {self.bad_count}", + logger=runner.logger, + ) + if self.bad_count >= self.patience: if runner.iter - self.last_iter < self.iteration_patience: print_log( @@ -345,7 +369,9 @@ def get_lr(self, runner: BaseRunner, base_lr: float): self.current_lr = max(self.current_lr * self.factor, self.min_lr) return self.current_lr + @check_input_parameters_type() def 
before_run(self, runner: BaseRunner):
+        """Called before_run in ReduceLROnPlateauLrUpdaterHook."""
         # TODO: remove overloaded method after fixing the issue
         # https://github.com/open-mmlab/mmdetection/issues/6572
         for group in runner.optimizer.param_groups:
@@ -353,13 +379,17 @@ def before_run(self, runner: BaseRunner):
         self.base_lr = [group["initial_lr"] for group in runner.optimizer.param_groups]
         self.bad_count = 0
         self.last_iter = 0
-        self.current_lr = None
+        self.current_lr = -1.0
         self.best_score = self.init_value_map[self.rule]
 
 
-@HOOKS.register_module()
+@HOOKS.register_module(force=True)
 class StopLossNanTrainingHook(Hook):
+    """StopLossNanTrainingHook."""
+
+    @check_input_parameters_type()
     def after_train_iter(self, runner: BaseRunner):
+        """Called after_train_iter in StopLossNanTrainingHook."""
         if isnan(runner.outputs["loss"].item()):
             logger.warning("Early Stopping since loss is NaN")
             runner.should_stop = True
diff --git a/otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py
index f5a8920783a..667667d3fa4 100644
--- a/otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py
+++ b/otx/algorithms/common/adapters/mmcv/hooks/eval_hook.py
@@ -1,5 +1,5 @@
 """Module for definig CustomEvalHook for classification task."""
-# Copyright (C) 2022 Intel Corporation
+# Copyright (C) 2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 
diff --git a/otx/algorithms/common/adapters/mmcv/hooks/force_train_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/force_train_hook.py
new file mode 100644
index 00000000000..28b10c2def2
--- /dev/null
+++ b/otx/algorithms/common/adapters/mmcv/hooks/force_train_hook.py
@@ -0,0 +1,38 @@
+"""Collections of hooks for common OTX algorithms."""
+
+# Copyright (C) 2021-2023 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions
+# and limitations under the License.
+
+from mmcv.runner.hooks import HOOKS, Hook
+
+from otx.mpa.utils.logger import get_logger
+
+logger = get_logger()
+
+
+@HOOKS.register_module()
+class ForceTrainModeHook(Hook):
+    """Force train mode for model.
+
+    This is a workaround for a bug in EvalHook from MMCV.
+    If model evaluation is enabled before training by setting 'start=0' in EvalHook,
+    EvalHook does not put the model back into training mode after evaluation.
+
+    This simple hook forces the model back into training mode before every train epoch
+    with the lowest priority.
+ """ + + def before_train_epoch(self, runner): + """Make sure to put a model in a training mode before train epoch.""" + runner.model.train() diff --git a/otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py index 4aa0cc50c21..410b4ee65dc 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/fp16_sam_optimizer_hook.py @@ -1,5 +1,5 @@ """Module for Sharpness-aware Minimization optimizer hook implementation for MMCV Runners with FP16 precision.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py index ea823de04a3..a9a2fdf3007 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/ib_loss_hook.py @@ -1,5 +1,5 @@ """Module for defining a hook for IB loss using mmcls.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/common/adapters/mmcv/hooks/logger_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/logger_hook.py new file mode 100644 index 00000000000..fe1b41c3df8 --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/hooks/logger_hook.py @@ -0,0 +1,87 @@ +"""Logger hooks.""" +from collections import defaultdict +from typing import Any, Dict, Optional + +from mmcv.runner import BaseRunner +from mmcv.runner.dist_utils import master_only +from mmcv.runner.hooks import HOOKS, Hook, LoggerHook + +from otx.api.utils.argument_checks import check_input_parameters_type +from otx.mpa.utils.logger import get_logger + +logger = get_logger() + + +@HOOKS.register_module() +class OTXLoggerHook(LoggerHook): + """OTXLoggerHook for Logging.""" + + class Curve: + """Curve with x (epochs) & y (scores).""" + + def __init__(self): + self.x = [] + self.y = [] + + def __repr__(self): + """Repr function.""" + points = [] + for x, y in zip(self.x, self.y): + points.append(f"({x},{y})") + return "curve[" + ",".join(points) + "]" + + @check_input_parameters_type() + def __init__( + self, + curves: Optional[Dict[Any, Curve]] = None, + interval: int = 10, + ignore_last: bool = True, + reset_flag: bool = True, + by_epoch: bool = True, + ): + super().__init__(interval, ignore_last, reset_flag, by_epoch) + self.curves = curves if curves is not None else defaultdict(self.Curve) + + @master_only + @check_input_parameters_type() + def log(self, runner: BaseRunner): + """Log function for OTXLoggerHook.""" + tags = self.get_loggable_tags(runner, allow_text=False, tags_to_skip=()) + if runner.max_epochs is not None: + normalized_iter = self.get_iter(runner) / runner.max_iters * runner.max_epochs + else: + normalized_iter = self.get_iter(runner) + for tag, value in tags.items(): + curve = self.curves[tag] + # Remove duplicates. + if len(curve.x) > 0 and curve.x[-1] == normalized_iter: + curve.x.pop() + curve.y.pop() + curve.x.append(normalized_iter) + curve.y.append(value) + + @check_input_parameters_type() + def after_train_epoch(self, runner: BaseRunner): + """Called after_train_epoch in OTXLoggerHook.""" + # Iteration counter is increased right after the last iteration in the epoch, + # temporarily decrease it back. 
+        runner._iter -= 1
+        super().after_train_epoch(runner)
+        runner._iter += 1
+
+
+@HOOKS.register_module()
+class LoggerReplaceHook(Hook):
+    """Replace the logger in the runner with the MPA logger.
+
+    DO NOT INCLUDE this hook in the recipe directly.
+    MPA adds this hook to every recipe internally.
+    """
+
+    def __init__(self, **kwargs):
+        super().__init__(**kwargs)
+
+    def before_run(self, runner):
+        """Replace logger."""
+        runner.logger = logger
+        logger.info("logger in the runner is replaced with the MPA logger")
diff --git a/otx/mpa/modules/hooks/model_ema_v2_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/model_ema_v2_hook.py
similarity index 92%
rename from otx/mpa/modules/hooks/model_ema_v2_hook.py
rename to otx/algorithms/common/adapters/mmcv/hooks/model_ema_v2_hook.py
index b98c646dee1..914e376c433 100644
--- a/otx/mpa/modules/hooks/model_ema_v2_hook.py
+++ b/otx/algorithms/common/adapters/mmcv/hooks/model_ema_v2_hook.py
@@ -1,12 +1,13 @@
-# Copyright (C) 2022 Intel Corporation
+"""Model EMA V2 hooks."""
+# Copyright (C) 2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 
 from copy import deepcopy
 
 import torch
-import torch.nn as nn
 from mmcv.runner import HOOKS, Hook
+from torch import nn
 
 from otx.mpa.utils.logger import get_logger
 
@@ -15,7 +16,8 @@
 
 @HOOKS.register_module()
 class ModelEmaV2Hook(Hook):
-    """
+    r"""ModelEmaV2Hook.
+
     Source model paramters would be exponentially averaged onto destination model pararmeters on given intervals
 
     .. math::
 
@@ -37,8 +39,10 @@ def __init__(self, ema_decay=0.9995, interval=1, start_epoch=0, dataset_len_thr=
         self.interval = interval
         self.start_epoch = start_epoch
         self.dataset_len_thr = dataset_len_thr
+        self.use_ema = None
 
     def before_train_epoch(self, runner):
+        """Make emav2 model before run epoch."""
         if not hasattr(self, "use_ema"):
             self.use_ema = len(runner.data_loader.dataset) > self.dataset_len_thr
 
@@ -48,6 +52,7 @@
             runner.ema_model = ema_model
 
     def before_run(self, runner):
+        """Log before run."""
         logger.info("\t* EMA V2 Enable")
 
     def after_train_iter(self, runner):
@@ -67,7 +72,8 @@
 
 
 class ModelEmaV2(nn.Module):
-    """Model Exponential Moving Average V2
+    """Model Exponential Moving Average V2.
+
     Keep a moving average of everything in the model state_dict (parameters and buffers).
     V2 of this module is simpler, it does not match params/buffers based on name but simply
     iterates in order. It works with torchscript (JIT of full model).
@@ -86,7 +92,7 @@ class ModelEmaV2(nn.Module): """ def __init__(self, model, decay=0.9999, dataset_len_thr=None, device=None): - super(ModelEmaV2, self).__init__() + super().__init__() # make a copy of the model for accumulating moving average of weights self.module = deepcopy(model) self.module.eval() @@ -98,6 +104,10 @@ def __init__(self, model, decay=0.9999, dataset_len_thr=None, device=None): if self.device is not None: self.module.to(device=device) + def forward(self): + """Forward.""" + return + def _update(self, update_fn): with torch.no_grad(): for ema_v, model_v in zip(self.dst_model.values(), self.src_model.values()): @@ -106,4 +116,5 @@ def _update(self, update_fn): ema_v.copy_(update_fn(ema_v, model_v)) def update(self): + """Update.""" self._update(update_fn=lambda e, m: self.decay * e + (1.0 - self.decay) * m) diff --git a/otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py index 82dae744d59..810a98739b6 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/no_bias_decay_hook.py @@ -1,5 +1,5 @@ """Module for NoBiasDecayHook used in classification.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/common/adapters/mmcv/hooks/progress_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/progress_hook.py new file mode 100644 index 00000000000..36ed55d627d --- /dev/null +++ b/otx/algorithms/common/adapters/mmcv/hooks/progress_hook.py @@ -0,0 +1,101 @@ +"""Collections of hooks for common OTX algorithms.""" + +# Copyright (C) 2021-2023 Intel Corporation +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, +# software distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions +# and limitations under the License. 
+ +import math + +from mmcv.runner import BaseRunner +from mmcv.runner.hooks import HOOKS, Hook + +from otx.api.usecases.reporting.time_monitor_callback import TimeMonitorCallback +from otx.api.utils.argument_checks import check_input_parameters_type +from otx.mpa.utils.logger import get_logger + +logger = get_logger() + + +@HOOKS.register_module() +class OTXProgressHook(Hook): + """OTXProgressHook for getting progress.""" + + @check_input_parameters_type() + def __init__(self, time_monitor: TimeMonitorCallback, verbose: bool = False): + super().__init__() + self.time_monitor = time_monitor + self.verbose = verbose + self.print_threshold = 1 + + @check_input_parameters_type() + def before_run(self, runner: BaseRunner): + """Called before_run in OTXProgressHook.""" + total_epochs = runner.max_epochs if runner.max_epochs is not None else 1 + self.time_monitor.total_epochs = total_epochs + self.time_monitor.train_steps = runner.max_iters // total_epochs if total_epochs else 1 + self.time_monitor.steps_per_epoch = self.time_monitor.train_steps + self.time_monitor.val_steps + self.time_monitor.total_steps = max(math.ceil(self.time_monitor.steps_per_epoch * total_epochs), 1) + self.time_monitor.current_step = 0 + self.time_monitor.current_epoch = 0 + self.time_monitor.on_train_begin() + + @check_input_parameters_type() + def before_epoch(self, runner: BaseRunner): + """Called before_epoch in OTXProgressHook.""" + self.time_monitor.on_epoch_begin(runner.epoch) + + @check_input_parameters_type() + def after_epoch(self, runner: BaseRunner): + """Called after_epoch in OTXProgressHook.""" + # put some runner's training status to use on the other hooks + runner.log_buffer.output["current_iters"] = runner.iter + self.time_monitor.on_epoch_end(runner.epoch, runner.log_buffer.output) + + @check_input_parameters_type() + def before_iter(self, runner: BaseRunner): + """Called before_iter in OTXProgressHook.""" + self.time_monitor.on_train_batch_begin(1) + + @check_input_parameters_type() + def after_iter(self, runner: BaseRunner): + """Called after_iter in OTXProgressHook.""" + # put some runner's training status to use on the other hooks + runner.log_buffer.output["current_iters"] = runner.iter + self.time_monitor.on_train_batch_end(1) + if self.verbose: + progress = self.progress + if progress >= self.print_threshold: + logger.warning(f"training progress {progress:.0f}%") + self.print_threshold = (progress + 10) // 10 * 10 + + @check_input_parameters_type() + def before_val_iter(self, runner: BaseRunner): + """Called before_val_iter in OTXProgressHook.""" + self.time_monitor.on_test_batch_begin(1, logger) + + @check_input_parameters_type() + def after_val_iter(self, runner: BaseRunner): + """Called after_val_iter in OTXProgressHook.""" + self.time_monitor.on_test_batch_end(1, logger) + + @check_input_parameters_type() + def after_run(self, runner: BaseRunner): + """Called after_run in OTXProgressHook.""" + self.time_monitor.on_train_end(1) + if self.time_monitor.update_progress_callback: + self.time_monitor.update_progress_callback(int(self.time_monitor.get_progress())) + + @property + def progress(self): + """Getting Progress from time monitor.""" + return self.time_monitor.get_progress() diff --git a/otx/mpa/modules/hooks/recording_forward_hooks.py b/otx/algorithms/common/adapters/mmcv/hooks/recording_forward_hook.py similarity index 80% rename from otx/mpa/modules/hooks/recording_forward_hooks.py rename to otx/algorithms/common/adapters/mmcv/hooks/recording_forward_hook.py index 
4b3fc7011e2..5b3662b53f3 100644 --- a/otx/mpa/modules/hooks/recording_forward_hooks.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/recording_forward_hook.py @@ -1,4 +1,5 @@ -# Copyright (C) 2022 Intel Corporation +"""Recording forward hooks for explain mode.""" +# Copyright (C) 2023 Intel Corporation # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. @@ -15,7 +16,7 @@ from __future__ import annotations from abc import ABC, abstractmethod -from typing import Sequence, Union +from typing import List, Sequence, Union import torch @@ -27,11 +28,13 @@ class BaseRecordingForwardHook(ABC): """While registered with the designated PyTorch module, this class caches feature vector during forward pass. + Example:: with BaseRecordingForwardHook(model.module.backbone) as hook: with torch.no_grad(): result = model(return_loss=False, **data) print(hook.records) + Args: module (torch.nn.Module): The PyTorch module to be registered in forward pass fpn_idx (int, optional): The layer index to be processed if the model is a FPN. @@ -41,60 +44,72 @@ class BaseRecordingForwardHook(ABC): def __init__(self, module: torch.nn.Module, fpn_idx: int = -1) -> None: self._module = module self._handle = None - self._records = [] + self._records = [] # type: List[torch.Tensor] self._fpn_idx = fpn_idx @property def records(self): + """Return records.""" return self._records @abstractmethod def func(self, feature_map: torch.Tensor, fpn_idx: int = -1) -> torch.Tensor: """This method get the feature vector or saliency map from the output of the module. + Args: x (torch.Tensor): Feature map from the backbone module fpn_idx (int, optional): The layer index to be processed if the model is a FPN. Defaults to 0 which uses the largest feature map from FPN. 
+ Returns: torch.Tensor (torch.Tensor): Saliency map for feature vector """ raise NotImplementedError - def _recording_forward(self, _: torch.nn.Module, input: torch.Tensor, output: torch.Tensor): + def _recording_forward( + self, _: torch.nn.Module, x: torch.Tensor, output: torch.Tensor + ): # pylint: disable=unused-argument tensors = self.func(output) tensors = tensors.detach().cpu().numpy() for tensor in tensors: self._records.append(tensor) def __enter__(self) -> BaseRecordingForwardHook: + """Enter.""" self._handle = self._module.backbone.register_forward_hook(self._recording_forward) return self def __exit__(self, exc_type, exc_value, traceback): + """Exit.""" self._handle.remove() class EigenCamHook(BaseRecordingForwardHook): + """EigenCamHook.""" + @staticmethod def func(feature_map: Union[torch.Tensor, Sequence[torch.Tensor]], fpn_idx: int = -1) -> torch.Tensor: + """Generate the saliency map.""" if isinstance(feature_map, (list, tuple)): feature_map = feature_map[fpn_idx] x = feature_map.type(torch.float) - bs, c, h, w = x.size() - reshaped_fmap = x.reshape((bs, c, h * w)).transpose(1, 2) + batch_size, channel, h, w = x.size() + reshaped_fmap = x.reshape((batch_size, channel, h * w)).transpose(1, 2) reshaped_fmap = reshaped_fmap - reshaped_fmap.mean(1)[:, None, :] - U, S, V = torch.linalg.svd(reshaped_fmap, full_matrices=True) - saliency_map = (reshaped_fmap @ V[:, 0][:, :, None]).squeeze(-1) + _, _, vh = torch.linalg.svd(reshaped_fmap, full_matrices=True) # pylint: disable=invalid-name + saliency_map = (reshaped_fmap @ vh[:, 0][:, :, None]).squeeze(-1) max_values, _ = torch.max(saliency_map, -1) min_values, _ = torch.min(saliency_map, -1) saliency_map = 255 * (saliency_map - min_values[:, None]) / ((max_values - min_values + 1e-12)[:, None]) - saliency_map = saliency_map.reshape((bs, h, w)) + saliency_map = saliency_map.reshape((batch_size, h, w)) saliency_map = saliency_map.to(torch.uint8) return saliency_map class ActivationMapHook(BaseRecordingForwardHook): + """ActivationMapHook.""" + @staticmethod def func(feature_map: Union[torch.Tensor, Sequence[torch.Tensor]], fpn_idx: int = -1) -> torch.Tensor: """Generate the saliency map by average feature maps then normalizing to (0, 255).""" @@ -104,20 +119,22 @@ def func(feature_map: Union[torch.Tensor, Sequence[torch.Tensor]], fpn_idx: int ), f"fpn_idx: {fpn_idx} is out of scope of feature_map length {len(feature_map)}!" 
feature_map = feature_map[fpn_idx] - bs, c, h, w = feature_map.size() + batch_size, _, h, w = feature_map.size() activation_map = torch.mean(feature_map, dim=1) - activation_map = activation_map.reshape((bs, h * w)) + activation_map = activation_map.reshape((batch_size, h * w)) max_values, _ = torch.max(activation_map, -1) min_values, _ = torch.min(activation_map, -1) activation_map = 255 * (activation_map - min_values[:, None]) / (max_values - min_values + 1e-12)[:, None] - activation_map = activation_map.reshape((bs, h, w)) + activation_map = activation_map.reshape((batch_size, h, w)) activation_map = activation_map.to(torch.uint8) return activation_map class FeatureVectorHook(BaseRecordingForwardHook): + """FeatureVectorHook.""" + @staticmethod - def func(feature_map: Union[torch.Tensor, Sequence[torch.Tensor]]) -> torch.Tensor: + def func(feature_map: Union[torch.Tensor, Sequence[torch.Tensor]], fpn_idx: int = -1) -> torch.Tensor: """Generate the feature vector by average pooling feature maps.""" if isinstance(feature_map, (list, tuple)): # aggregate feature maps from Feature Pyramid Network @@ -129,8 +146,8 @@ def func(feature_map: Union[torch.Tensor, Sequence[torch.Tensor]]) -> torch.Tens class ReciproCAMHook(BaseRecordingForwardHook): - """ - Implementation of recipro-cam for class-wise saliency map + """Implementation of recipro-cam for class-wise saliency map. + recipro-cam: gradient-free reciprocal class activation map (https://arxiv.org/pdf/2209.14074.pdf) """ @@ -141,8 +158,7 @@ def __init__(self, module: torch.nn.Module, fpn_idx: int = -1) -> None: self._num_classes = module.head.num_classes def func(self, feature_map: Union[torch.Tensor, Sequence[torch.Tensor]], fpn_idx: int = -1) -> torch.Tensor: - """ - Generate the class-wise saliency maps using Recipro-CAM and then normalizing to (0, 255). + """Generate the class-wise saliency maps using Recipro-CAM and then normalizing to (0, 255). 
Args: feature_map (Union[torch.Tensor, List[torch.Tensor]]): feature maps from backbone or list of feature maps @@ -156,18 +172,18 @@ def func(self, feature_map: Union[torch.Tensor, Sequence[torch.Tensor]], fpn_idx if isinstance(feature_map, (list, tuple)): feature_map = feature_map[fpn_idx] - bs, c, h, w = feature_map.size() - saliency_maps = torch.empty(bs, self._num_classes, h, w) - for f in range(bs): - mosaic_feature_map = self._get_mosaic_feature_map(feature_map[f], c, h, w) + batch_size, channel, h, w = feature_map.size() + saliency_maps = torch.empty(batch_size, self._num_classes, h, w) + for f in range(batch_size): + mosaic_feature_map = self._get_mosaic_feature_map(feature_map[f], channel, h, w) mosaic_prediction = self._predict_from_feature_map(mosaic_feature_map) saliency_maps[f] = mosaic_prediction.transpose(0, 1).reshape((self._num_classes, h, w)) - saliency_maps = saliency_maps.reshape((bs, self._num_classes, h * w)) + saliency_maps = saliency_maps.reshape((batch_size, self._num_classes, h * w)) max_values, _ = torch.max(saliency_maps, -1) min_values, _ = torch.min(saliency_maps, -1) saliency_maps = 255 * (saliency_maps - min_values[:, :, None]) / (max_values - min_values + 1e-12)[:, :, None] - saliency_maps = saliency_maps.reshape((bs, self._num_classes, h, w)) + saliency_maps = saliency_maps.reshape((batch_size, self._num_classes, h, w)) saliency_maps = saliency_maps.to(torch.uint8) return saliency_maps @@ -182,11 +198,9 @@ def _predict_from_feature_map(self, x: torch.Tensor) -> torch.Tensor: def _get_mosaic_feature_map(self, feature_map: torch.Tensor, c: int, h: int, w: int) -> torch.Tensor: if MMCLS_AVAILABLE and self._neck is not None and isinstance(self._neck, GlobalAveragePooling): - """ - Optimization workaround for the GAP case (simulate GAP with more simple compute graph) - Possible due to static sparsity of mosaic_feature_map - Makes the downstream GAP operation to be dummy - """ + # Optimization workaround for the GAP case (simulate GAP with more simple compute graph) + # Possible due to static sparsity of mosaic_feature_map + # Makes the downstream GAP operation to be dummy feature_map_transposed = torch.flatten(feature_map, start_dim=1).transpose(0, 1)[:, :, None, None] mosaic_feature_map = feature_map_transposed / (h * w) else: diff --git a/otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py index 71b2f6799cd..ffa08b12ae8 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/sam_optimizer_hook.py @@ -1,5 +1,5 @@ """This module contains the Sharpness-aware Minimization optimizer hook implementation for MMCV Runners.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py index 92ab0a9d581..178e3c9b3aa 100644 --- a/otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py +++ b/otx/algorithms/common/adapters/mmcv/hooks/semisl_cls_hook.py @@ -1,5 +1,5 @@ """Module for defining hook for semi-supervised learning for classification task.""" -# Copyright (C) 2022 Intel Corporation +# Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/otx/mpa/modules/hooks/task_adapt_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/task_adapt_hook.py similarity index 92% rename from 
otx/mpa/modules/hooks/task_adapt_hook.py
rename to otx/algorithms/common/adapters/mmcv/hooks/task_adapt_hook.py
index 116115bada5..8a5ab890e2f 100644
--- a/otx/mpa/modules/hooks/task_adapt_hook.py
+++ b/otx/algorithms/common/adapters/mmcv/hooks/task_adapt_hook.py
@@ -1,4 +1,5 @@
-# Copyright (C) 2022 Intel Corporation
+"""Task adapt hook which selects a proper sampler."""
+# Copyright (C) 2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 
@@ -14,7 +15,7 @@
 
 @HOOKS.register_module()
 class TaskAdaptHook(Hook):
-    """Task Adaptation Hook for Task-Inc & Class-Inc
+    """Task Adaptation Hook for Task-Inc & Class-Inc.
 
     Args:
         src_classes (list): A list of old classes used in the existing model
@@ -46,6 +47,7 @@ def __init__(
         logger.info(f"- Sampler flag: {self.sampler_flag}")
 
     def before_epoch(self, runner):
+        """Produce a proper sampler for task-adaptation."""
         if self.sampler_flag:
             dataset = runner.data_loader.dataset
             batch_size = runner.data_loader.batch_size
diff --git a/otx/algorithms/common/adapters/mmcv/hooks/two_crop_transform_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/two_crop_transform_hook.py
new file mode 100644
index 00000000000..1317322f1e2
--- /dev/null
+++ b/otx/algorithms/common/adapters/mmcv/hooks/two_crop_transform_hook.py
@@ -0,0 +1,92 @@
+"""Two crop transform hook."""
+from typing import List
+
+from mmcv.runner import BaseRunner
+from mmcv.runner.hooks import HOOKS, Hook
+
+from otx.api.utils.argument_checks import check_input_parameters_type
+from otx.mpa.utils.logger import get_logger
+
+logger = get_logger()
+
+
+@HOOKS.register_module()
+class TwoCropTransformHook(Hook):
+    """TwoCropTransformHook applied at every specified interval.
+
+    This hook decides whether to use a single pipeline or both pipelines
+    implemented in `TwoCropTransform` for the current iteration.
+
+    Args:
+        interval (int): If `interval` == 1, both pipelines are used.
+            If `interval` > 1, the first pipeline is used and then
+            both pipelines are used every `interval`. Defaults to 1.
+        by_epoch (bool): (TODO) Use `interval` by epoch. Defaults to False.
+    """
+
+    @check_input_parameters_type()
+    def __init__(self, interval: int = 1, by_epoch: bool = False):
+        assert interval > 0, f"interval (={interval}) must be a positive value."
+        if by_epoch:
+            raise NotImplementedError("by_epoch is not implemented.")
+
+        self.interval = interval
+        self.cnt = 0
+
+    @check_input_parameters_type()
+    def _get_dataset(self, runner: BaseRunner):
+        """Get dataset to handle `is_both`."""
+        if hasattr(runner.data_loader.dataset, "dataset"):
+            # for RepeatDataset
+            dataset = runner.data_loader.dataset.dataset
+        else:
+            dataset = runner.data_loader.dataset
+
+        return dataset
+
+    # pylint: disable=inconsistent-return-statements
+    @check_input_parameters_type()
+    def _find_two_crop_transform(self, transforms: List[object]):
+        """Find TwoCropTransform among transforms."""
+        for transform in transforms:
+            if transform.__class__.__name__ == "TwoCropTransform":
+                return transform
+
+    @check_input_parameters_type()
+    def before_train_epoch(self, runner: BaseRunner):
+        """Called before_train_epoch in TwoCropTransformHook."""
+        # Always keep `TwoCropTransform` enabled.
+        if self.interval == 1:
+            return
+
+        dataset = self._get_dataset(runner)
+        two_crop_transform = self._find_two_crop_transform(dataset.pipeline.transforms)
+        if self.cnt == self.interval - 1:
+            # start using both pipelines
+            two_crop_transform.is_both = True
+        else:
+            two_crop_transform.is_both = False
+
+    @check_input_parameters_type()
+    def after_train_iter(self, runner: BaseRunner):
+        """Called after_train_iter in TwoCropTransformHook."""
+        # Always keep `TwoCropTransform` enabled.
+        if self.interval == 1:
+            return
+
+        if self.cnt < self.interval - 1:
+            # Instead of using `runner.every_n_iters` or `runner.every_n_inner_iters`,
+            # this condition is used to compare `self.cnt` with `self.interval` throughout the entire training.
+            self.cnt += 1
+
+        if self.cnt == self.interval - 1:
+            dataset = self._get_dataset(runner)
+            two_crop_transform = self._find_two_crop_transform(dataset.pipeline.transforms)
+            if not two_crop_transform.is_both:
+                # If `self.cnt` == `self.interval`-1, there are two cases:
+                # 1. `self.cnt` was updated in L709, so `is_both` must be on for the next iter.
+                # 2. if the current iter was already conducted, `is_both` must be off.
+                two_crop_transform.is_both = True
+            else:
+                two_crop_transform.is_both = False
+                self.cnt = 0
diff --git a/otx/mpa/modules/hooks/unbiased_teacher_hook.py b/otx/algorithms/common/adapters/mmcv/hooks/unbiased_teacher_hook.py
similarity index 86%
rename from otx/mpa/modules/hooks/unbiased_teacher_hook.py
rename to otx/algorithms/common/adapters/mmcv/hooks/unbiased_teacher_hook.py
index 922fe1b05ed..6fa7428a259 100644
--- a/otx/mpa/modules/hooks/unbiased_teacher_hook.py
+++ b/otx/algorithms/common/adapters/mmcv/hooks/unbiased_teacher_hook.py
@@ -1,24 +1,29 @@
-# Copyright (C) 2022 Intel Corporation
+"""Unbiased-teacher hook."""
+# Copyright (C) 2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 
 from mmcv.runner import HOOKS
 
+from otx.algorithms.common.adapters.mmcv.hooks.dual_model_ema_hook import (
+    DualModelEMAHook,
+)
 from otx.mpa.utils.logger import get_logger
 
-from .model_ema_hook import DualModelEMAHook
-
 logger = get_logger()
 
 
 @HOOKS.register_module()
 class UnbiasedTeacherHook(DualModelEMAHook):
+    """UnbiasedTeacherHook for semi-supervised learning."""
+
     def __init__(self, min_pseudo_label_ratio=0.1, **kwargs):
         super().__init__(**kwargs)
         self.min_pseudo_label_ratio = min_pseudo_label_ratio
         self.unlabeled_loss_enabled = False
 
     def before_train_epoch(self, runner):
+        """Enable unlabeled loss if over start epoch."""
         super().before_train_epoch(runner)
 
         if runner.epoch + 1 < self.start_epoch:
diff --git a/otx/mpa/modules/hooks/workflow_hooks.py b/otx/algorithms/common/adapters/mmcv/hooks/workflow_hook.py
similarity index 69%
rename from otx/mpa/modules/hooks/workflow_hooks.py
rename to otx/algorithms/common/adapters/mmcv/hooks/workflow_hook.py
index 42da9a02b4a..80e905d7368 100644
--- a/otx/mpa/modules/hooks/workflow_hooks.py
+++ b/otx/algorithms/common/adapters/mmcv/hooks/workflow_hook.py
@@ -1,4 +1,5 @@
-# Copyright (C) 2022 Intel Corporation
+"""Workflow hooks."""
+# Copyright (C) 2023 Intel Corporation
 # SPDX-License-Identifier: Apache-2.0
 #
 
@@ -13,60 +14,75 @@
 logger = get_logger()
 
 WORKFLOW_HOOKS = Registry("workflow_hooks")
 
+# pylint: disable=unused-argument
+
 
 def build_workflow_hook(config, *args, **kwargs):
+    """Build a workflow hook."""
     logger.info(f"called build_workflow_hook({config})")
     whook_type = config.pop("type")
     # event = config.pop('event')
     if whook_type not in WORKFLOW_HOOKS:
         raise KeyError(f"not supported workflow 
hook type {whook_type}") - else: - whook_cls = WORKFLOW_HOOKS.get(whook_type) + whook_cls = WORKFLOW_HOOKS.get(whook_type) return whook_cls(*args, **kwargs, **config) class WorkflowHook: + """Workflow hook.""" + def __init__(self, name): self.name = name def before_workflow(self, workflow, idx=-1, results=None): - pass + """Before workflow.""" + return def after_workflow(self, workflow, idx=-1, results=None): - pass + """After workflow.""" + return def before_stage(self, workflow, idx, results=None): - pass + """Before stage.""" + return def after_stage(self, workflow, idx, results=None): - pass + """After stage.""" + return @WORKFLOW_HOOKS.register_module() class SampleLoggingHook(WorkflowHook): + """Sample logging hook.""" + def __init__(self, name=__name__, log_level="DEBUG"): - super(SampleLoggingHook, self).__init__(name) + super().__init__(name) self.logging = getattr(logger, log_level.lower()) - def before_stage(self, wf, stage_idx, results): + def before_stage(self, workflow, idx, results=None): + """Before stage.""" self.logging(f"called {self.name}.run()") - self.logging(f"stage index {stage_idx}, results keys = {results.keys()}") - result_key = f"{self.name}|{stage_idx}" + self.logging(f"stage index {idx}, results keys = {results.keys()}") + result_key = f"{self.name}|{idx}" results[result_key] = dict(message=f"this is a sample result of the {__name__} hook") @WORKFLOW_HOOKS.register_module() class WFProfileHook(WorkflowHook): + """Workflow profile hook.""" + def __init__(self, name=__name__, output_path=None): - super(WFProfileHook, self).__init__(name) + super().__init__(name) self.output_path = output_path self.profile = dict(start=0, end=0, elapsed=0, stages=dict()) logger.info(f"initialized {__name__}....") - def before_workflow(self, wf, idx=-1, results=None): + def before_workflow(self, workflow, idx=-1, results=None): + """Before workflow.""" self.profile["start"] = datetime.datetime.now() - def after_workflow(self, wf, idx=-1, results=None): + def after_workflow(self, workflow, idx=-1, results=None): + """After workflow.""" self.profile["end"] = datetime.datetime.now() self.profile["elapsed"] = self.profile["end"] - self.profile["start"] @@ -74,15 +90,17 @@ def after_workflow(self, wf, idx=-1, results=None): logger.info("** workflow profile results **") logger.info(str_dumps) if self.output_path is not None: - with open(self.output_path, "w") as f: + with open(self.output_path, "w") as f: # pylint: disable=unspecified-encoding f.write(str_dumps) - def before_stage(self, wf, idx=-1, results=None): + def before_stage(self, workflow, idx=-1, results=None): + """Before stage.""" stages = self.profile.get("stages") stages[f"{idx}"] = {} stages[f"{idx}"]["start"] = datetime.datetime.now() - def after_stage(self, wf, idx=-1, results=None): + def after_stage(self, workflow, idx=-1, results=None): + """After stage.""" stages = self.profile.get("stages") stages[f"{idx}"]["end"] = datetime.datetime.now() stages[f"{idx}"]["elapsed"] = stages[f"{idx}"]["end"] - stages[f"{idx}"]["start"] @@ -90,11 +108,14 @@ def after_stage(self, wf, idx=-1, results=None): @WORKFLOW_HOOKS.register_module() class AfterStageWFHook(WorkflowHook): + """After stage workflow hook.""" + def __init__(self, name, stage_cfg_updated_callback): self.callback = stage_cfg_updated_callback super().__init__(name) def after_stage(self, workflow, idx, results=None): + """After stage.""" logger.info(f"{__name__}: called after_stage()") name = copy.deepcopy(workflow.stages[idx].name) cfg = 
copy.deepcopy(workflow.stages[idx].cfg) diff --git a/otx/algorithms/common/adapters/mmcv/nncf/patches.py b/otx/algorithms/common/adapters/mmcv/nncf/patches.py index aad2823b795..bf0adc3e489 100644 --- a/otx/algorithms/common/adapters/mmcv/nncf/patches.py +++ b/otx/algorithms/common/adapters/mmcv/nncf/patches.py @@ -35,14 +35,14 @@ def _evaluation_wrapper(self, fn, runner, *args, **kwargs): NNCF_PATCHER.patch("otx.algorithms.common.adapters.mmcv.hooks.eval_hook.CustomEvalHook.evaluate", _evaluation_wrapper) NNCF_PATCHER.patch( - "otx.mpa.modules.hooks.recording_forward_hooks.FeatureVectorHook.func", + "otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook.FeatureVectorHook.func", no_nncf_trace_wrapper, ) NNCF_PATCHER.patch( - "otx.mpa.modules.hooks.recording_forward_hooks.ActivationMapHook.func", + "otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook.ActivationMapHook.func", no_nncf_trace_wrapper, ) NNCF_PATCHER.patch( - "otx.mpa.modules.hooks.recording_forward_hooks.ReciproCAMHook.func", + "otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook.ReciproCAMHook.func", no_nncf_trace_wrapper, ) diff --git a/otx/algorithms/common/tasks/training_base.py b/otx/algorithms/common/tasks/training_base.py index 24775730976..2b73c2edc94 100644 --- a/otx/algorithms/common/tasks/training_base.py +++ b/otx/algorithms/common/tasks/training_base.py @@ -28,6 +28,7 @@ from mmcv.utils.config import Config, ConfigDict from otx.algorithms.common.adapters.mmcv.hooks import OTXLoggerHook +from otx.algorithms.common.adapters.mmcv.hooks.cancel_hook import CancelInterfaceHook from otx.algorithms.common.adapters.mmcv.utils import ( align_data_config_with_recipe, get_configs_by_pairs, @@ -47,7 +48,6 @@ from otx.api.utils.argument_checks import check_input_parameters_type from otx.core.data import caching from otx.mpa.builder import build -from otx.mpa.modules.hooks.cancel_interface_hook import CancelInterfaceHook from otx.mpa.stage import Stage from otx.mpa.utils.config_utils import ( MPAConfig, @@ -113,7 +113,7 @@ def __init__(self, task_config, task_environment: TaskEnvironment, output_path: self._learning_curves = UncopiableDefaultDict(OTXLoggerHook.Curve) self._is_training = False self._should_stop = False - self.cancel_interface = None + self.cancel_interface = None # type: Optional[CancelInterfaceHook] self.reserved_cancel = False self.on_hook_initialized = self.OnHookInitialized(self) diff --git a/otx/algorithms/detection/adapters/mmdet/hooks/det_saliency_map_hook.py b/otx/algorithms/detection/adapters/mmdet/hooks/det_saliency_map_hook.py index 103996184f5..17d96acab0c 100644 --- a/otx/algorithms/detection/adapters/mmdet/hooks/det_saliency_map_hook.py +++ b/otx/algorithms/detection/adapters/mmdet/hooks/det_saliency_map_hook.py @@ -7,6 +7,9 @@ import torch import torch.nn.functional as F +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + BaseRecordingForwardHook, +) from otx.algorithms.detection.adapters.mmdet.models.heads.custom_atss_head import ( CustomATSSHead, ) @@ -19,7 +22,6 @@ from otx.algorithms.detection.adapters.mmdet.models.heads.custom_yolox_head import ( CustomYOLOXHead, ) -from otx.mpa.modules.hooks.recording_forward_hooks import BaseRecordingForwardHook # pylint: disable=too-many-locals diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py index 04eba1011f9..59ce085f6d6 100644 --- 
a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_atss_detector.py @@ -9,11 +9,13 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.atss import ATSS +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + FeatureVectorHook, +) from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py index 56b8da87164..c400c96e46e 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_maskrcnn_detector.py @@ -9,11 +9,11 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.mask_rcnn import MaskRCNN -from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled -from otx.mpa.modules.hooks.recording_forward_hooks import ( +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( ActivationMapHook, FeatureVectorHook, ) +from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py index 1ad6a744819..f1db1ec0b9b 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_single_stage_detector.py @@ -9,11 +9,13 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.single_stage import SingleStageDetector +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + FeatureVectorHook, +) from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py index 20432c8fb0b..3aec6332e59 100644 --- a/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py +++ b/otx/algorithms/detection/adapters/mmdet/models/detectors/custom_yolox_detector.py @@ -9,11 +9,13 @@ from mmdet.models.builder import DETECTORS from mmdet.models.detectors.yolox import YOLOX +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + FeatureVectorHook, +) from otx.algorithms.common.adapters.mmdeploy.utils import is_mmdeploy_enabled from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from 
otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.modules.utils.task_adapt import map_class_names from otx.mpa.utils.logger import get_logger diff --git a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py index 6bacc0d59cb..33cbf38bf78 100644 --- a/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py +++ b/otx/algorithms/segmentation/adapters/mmseg/models/segmentors/otx_encoder_decoder.py @@ -38,7 +38,7 @@ def simple_test(self, img, img_meta, rescale=True, output_logits=False): if is_mmdeploy_enabled(): from mmdeploy.core import FUNCTION_REWRITER - from otx.mpa.modules.hooks.recording_forward_hooks import ( # pylint: disable=ungrouped-imports + from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( # pylint: disable=ungrouped-imports FeatureVectorHook, ) diff --git a/otx/mpa/builder.py b/otx/mpa/builder.py index 7abc0d7e0f8..6818761efdd 100644 --- a/otx/mpa/builder.py +++ b/otx/mpa/builder.py @@ -8,7 +8,11 @@ from mmcv import Config, ConfigDict, build_from_cfg -from .modules.hooks.workflow_hooks import WorkflowHook, build_workflow_hook +from otx.algorithms.common.adapters.mmcv.hooks.workflow_hook import ( + WorkflowHook, + build_workflow_hook, +) + from .registry import STAGES from .stage import get_available_types from .utils.config_utils import MPAConfig diff --git a/otx/mpa/cls/explainer.py b/otx/mpa/cls/explainer.py index 315ad1f4cb3..326878e4d9c 100644 --- a/otx/mpa/cls/explainer.py +++ b/otx/mpa/cls/explainer.py @@ -6,16 +6,16 @@ from mmcls.datasets import build_dataloader as mmcls_build_dataloader from mmcls.datasets import build_dataset as mmcls_build_dataset +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + ActivationMapHook, + EigenCamHook, + ReciproCAMHook, +) from otx.algorithms.common.adapters.mmcv.utils import ( build_data_parallel, build_dataloader, build_dataset, ) -from otx.mpa.modules.hooks.recording_forward_hooks import ( - ActivationMapHook, - EigenCamHook, - ReciproCAMHook, -) from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger diff --git a/otx/mpa/cls/inferrer.py b/otx/mpa/cls/inferrer.py index 9c7e5770219..579747458b3 100644 --- a/otx/mpa/cls/inferrer.py +++ b/otx/mpa/cls/inferrer.py @@ -12,15 +12,15 @@ from mmcv import Config, ConfigDict from otx.algorithms import TRANSFORMER_BACKBONES +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + FeatureVectorHook, + ReciproCAMHook, +) from otx.algorithms.common.adapters.mmcv.utils import ( build_data_parallel, build_dataloader, build_dataset, ) -from otx.mpa.modules.hooks.recording_forward_hooks import ( - FeatureVectorHook, - ReciproCAMHook, -) from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger diff --git a/otx/mpa/det/__init__.py b/otx/mpa/det/__init__.py index 8fd0f6a8480..c9c5d51835b 100644 --- a/otx/mpa/det/__init__.py +++ b/otx/mpa/det/__init__.py @@ -2,6 +2,8 @@ # SPDX-License-Identifier: Apache-2.0 # +import otx.algorithms.common.adapters.mmcv.hooks +import otx.algorithms.common.adapters.mmcv.hooks.composed_dataloaders_hook import otx.algorithms.detection.adapters.mmdet.datasets.pipelines.torchvision2mmdet import otx.algorithms.detection.adapters.mmdet.datasets.task_adapt_dataset import otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook @@ -9,8 +11,6 @@ import 
otx.algorithms.detection.adapters.mmdet.models.detectors import otx.algorithms.detection.adapters.mmdet.models.heads import otx.algorithms.detection.adapters.mmdet.models.losses -import otx.mpa.modules.hooks -import otx.mpa.modules.hooks.composed_dataloaders_hook # flake8: noqa from . import explainer, exporter, incremental, inferrer, semisl, stage, trainer diff --git a/otx/mpa/det/explainer.py b/otx/mpa/det/explainer.py index 237e3f46b57..d1d024877c0 100644 --- a/otx/mpa/det/explainer.py +++ b/otx/mpa/det/explainer.py @@ -8,6 +8,10 @@ from mmdet.datasets import build_dataset as mmdet_build_dataset from mmdet.datasets import replace_ImageToTensor +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + ActivationMapHook, + EigenCamHook, +) from otx.algorithms.common.adapters.mmcv.utils import ( build_data_parallel, build_dataloader, @@ -17,10 +21,6 @@ from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.modules.hooks.recording_forward_hooks import ( - ActivationMapHook, - EigenCamHook, -) from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger diff --git a/otx/mpa/det/inferrer.py b/otx/mpa/det/inferrer.py index 066e44971c4..dee4889e019 100644 --- a/otx/mpa/det/inferrer.py +++ b/otx/mpa/det/inferrer.py @@ -12,6 +12,10 @@ from mmdet.datasets import replace_ImageToTensor from mmdet.models.detectors import TwoStageDetector +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + ActivationMapHook, + FeatureVectorHook, +) from otx.algorithms.common.adapters.mmcv.utils import ( build_data_parallel, build_dataloader, @@ -21,10 +25,6 @@ from otx.algorithms.detection.adapters.mmdet.hooks.det_saliency_map_hook import ( DetSaliencyMapHook, ) -from otx.mpa.modules.hooks.recording_forward_hooks import ( - ActivationMapHook, - FeatureVectorHook, -) from otx.mpa.registry import STAGES from otx.mpa.utils.logger import get_logger diff --git a/otx/mpa/modules/hooks/__init__.py b/otx/mpa/modules/hooks/__init__.py deleted file mode 100644 index 8c751bbbe2b..00000000000 --- a/otx/mpa/modules/hooks/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -# flake8: noqa -from . import ( - adaptive_training_hooks, - composed_dataloaders_hook, - early_stopping_hook, - logger_replace_hook, - model_ema_hook, - model_ema_v2_hook, - recording_forward_hooks, - save_initial_weight_hook, - task_adapt_hook, - unbiased_teacher_hook, - workflow_hooks, -) diff --git a/otx/mpa/modules/hooks/cancel_interface_hook.py b/otx/mpa/modules/hooks/cancel_interface_hook.py deleted file mode 100644 index 1cadb1e7af4..00000000000 --- a/otx/mpa/modules/hooks/cancel_interface_hook.py +++ /dev/null @@ -1,40 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from mmcv.runner import HOOKS, EpochBasedRunner, Hook - -from otx.mpa.utils.logger import get_logger - -logger = get_logger() - - -@HOOKS.register_module() -class CancelInterfaceHook(Hook): - def __init__(self, init_callback: callable, interval=5): - self.on_init_callback = init_callback - self.runner = None - self.interval = interval - - def cancel(self): - logger.info("CancelInterfaceHook.cancel() is called.") - if self.runner is None: - logger.warning("runner is not configured yet. 
ignored this request.") - return - - if self.runner.should_stop: - logger.warning("cancel already requested.") - return - - if isinstance(self.runner, EpochBasedRunner): - epoch = self.runner.epoch - self.runner._max_epochs = epoch # Force runner to stop by pretending it has reached it's max_epoch - self.runner.should_stop = True # Set this flag to true to stop the current training epoch - logger.info("requested stopping to the runner") - - def before_run(self, runner): - self.runner = runner - self.on_init_callback(self) - - def after_run(self, runner): - self.runner = None diff --git a/otx/mpa/modules/hooks/logger_replace_hook.py b/otx/mpa/modules/hooks/logger_replace_hook.py deleted file mode 100644 index 3cb64b40a4d..00000000000 --- a/otx/mpa/modules/hooks/logger_replace_hook.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from mmcv.runner import HOOKS, Hook - -from otx.mpa.utils.logger import get_logger - -logger = get_logger() - - -@HOOKS.register_module() -class LoggerReplaceHook(Hook): - """replace logger in the runner to the MPA logger. - DO NOT INCLUDE this hook to the recipe directly. - mpa will add this hook to all recipe internally. - """ - - def __init__(self, **kwargs): - super().__init__(**kwargs) - - def before_run(self, runner): - runner.logger = logger - logger.info("logger in the runner is replaced to the MPA logger") diff --git a/otx/mpa/modules/hooks/save_initial_weight_hook.py b/otx/mpa/modules/hooks/save_initial_weight_hook.py deleted file mode 100644 index 0cafd1ce283..00000000000 --- a/otx/mpa/modules/hooks/save_initial_weight_hook.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (C) 2022 Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# - -from mmcv.runner import HOOKS, Hook - - -@HOOKS.register_module() -class SaveInitialWeightHook(Hook): - def __init__(self, save_path, file_name: str = "weights.pth", **kwargs): - self._save_path = save_path - self._file_name = file_name - self._args = kwargs - - def before_run(self, runner): - runner.logger.info("Saving weight before training") - runner.save_checkpoint( - self._save_path, filename_tmpl=self._file_name, save_optimizer=False, create_symlink=False, **self._args - ) diff --git a/otx/mpa/seg/__init__.py b/otx/mpa/seg/__init__.py index b7b2c4ffb47..0e55e6ee2fa 100644 --- a/otx/mpa/seg/__init__.py +++ b/otx/mpa/seg/__init__.py @@ -2,10 +2,10 @@ # SPDX-License-Identifier: Apache-2.0 # +import otx.algorithms.common.adapters.mmcv.hooks import otx.algorithms.segmentation.adapters.mmseg import otx.algorithms.segmentation.adapters.mmseg.models import otx.algorithms.segmentation.adapters.mmseg.models.schedulers -import otx.mpa.modules.hooks from otx.mpa.seg.incremental import IncrSegInferrer, IncrSegTrainer from otx.mpa.seg.semisl import SemiSLSegExporter, SemiSLSegInferrer, SemiSLSegTrainer diff --git a/otx/mpa/seg/inferrer.py b/otx/mpa/seg/inferrer.py index c6bb695b895..5480357c856 100644 --- a/otx/mpa/seg/inferrer.py +++ b/otx/mpa/seg/inferrer.py @@ -10,12 +10,14 @@ from mmseg.datasets import build_dataloader as mmseg_build_dataloader from mmseg.datasets import build_dataset as mmseg_build_dataset +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + FeatureVectorHook, +) from otx.algorithms.common.adapters.mmcv.utils import ( build_data_parallel, build_dataloader, build_dataset, ) -from otx.mpa.modules.hooks.recording_forward_hooks import FeatureVectorHook from otx.mpa.registry import STAGES from otx.mpa.stage 
import Stage from otx.mpa.utils.logger import get_logger diff --git a/tests/integration/api/xai/test_api_xai_validity.py b/tests/integration/api/xai/test_api_xai_validity.py index 26b8660ce50..a543099bfb4 100644 --- a/tests/integration/api/xai/test_api_xai_validity.py +++ b/tests/integration/api/xai/test_api_xai_validity.py @@ -11,10 +11,12 @@ from mmdet.models import build_detector from otx.algorithms.classification.tasks import ClassificationInferenceTask # noqa +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + ReciproCAMHook, +) from otx.algorithms.detection.adapters.mmdet.hooks import DetSaliencyMapHook from otx.cli.registry import Registry from otx.mpa.det.stage import DetectionStage # noqa -from otx.mpa.modules.hooks.recording_forward_hooks import ReciproCAMHook from otx.mpa.utils.config_utils import MPAConfig from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_adaptive_training_hooks.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_adaptive_training_hooks.py similarity index 89% rename from tests/unit/mpa/modules/hooks/test_mpa_adaptive_training_hooks.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_adaptive_training_hooks.py index 3d3a9a0eba8..51a30756b97 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_adaptive_training_hooks.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_adaptive_training_hooks.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.adaptive_training_hooks.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.adaptive_training_hooks.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -8,8 +8,12 @@ from mmcv.runner.hooks.evaluation import EvalHook from mmcv.utils import Config -from otx.mpa.modules.hooks.adaptive_training_hooks import AdaptiveTrainSchedulingHook -from otx.mpa.modules.hooks.early_stopping_hook import EarlyStoppingHook +from otx.algorithms.common.adapters.mmcv.hooks.adaptive_training_hook import ( + AdaptiveTrainSchedulingHook, +) +from otx.algorithms.common.adapters.mmcv.hooks.early_stopping_hook import ( + EarlyStoppingHook, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_cancel_interface_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_cancel_interface_hook.py similarity index 89% rename from tests/unit/mpa/modules/hooks/test_mpa_cancel_interface_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_cancel_interface_hook.py index e7a4610f2c3..1147fa5a73b 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_cancel_interface_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_cancel_interface_hook.py @@ -1,11 +1,11 @@ -"""Unit test for otx.mpa.modules.hooks.cancel_interface_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.cancel_interface_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # from mmcv.runner import EpochBasedRunner -from otx.mpa.modules.hooks.cancel_interface_hook import CancelInterfaceHook +from otx.algorithms.common.adapters.mmcv.hooks.cancel_hook import CancelInterfaceHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_checkpoint_hook.py similarity index 95% rename from tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py rename to 
tests/unit/algorithms/common/adapters/mmcv/hooks/test_checkpoint_hook.py index eac1710095d..4b05a5f7089 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_checkpoint_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_checkpoint_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.checkpoint_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.checkpoint_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/tests/unit/mpa/modules/hooks/test_mpa_composed_dataloader_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_composed_dataloader_hook.py similarity index 65% rename from tests/unit/mpa/modules/hooks/test_mpa_composed_dataloader_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_composed_dataloader_hook.py index 97a6b07fd9c..c97204b9aed 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_composed_dataloader_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_composed_dataloader_hook.py @@ -1,9 +1,11 @@ -"""Unit test for otx.mpa.modules.hooks.composed_dataloaders_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.composed_dataloaders_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.composed_dataloaders_hook import ComposedDataLoadersHook +from otx.algorithms.common.adapters.mmcv.hooks.composed_dataloaders_hook import ( + ComposedDataLoadersHook, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_early_stopping_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_early_stopping_hook.py similarity index 90% rename from tests/unit/mpa/modules/hooks/test_mpa_early_stopping_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_early_stopping_hook.py index a27291f2e81..fd952042d27 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_early_stopping_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_early_stopping_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.early_stopping_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.early_stopping_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -11,7 +11,7 @@ from mmcv.runner import BaseRunner, LrUpdaterHook from mmcv.utils import Config -from otx.mpa.modules.hooks.early_stopping_hook import ( +from otx.algorithms.common.adapters.mmcv.hooks.early_stopping_hook import ( EarlyStoppingHook, LazyEarlyStoppingHook, ReduceLROnPlateauLrUpdaterHook, @@ -204,36 +204,32 @@ def test_init_rule(self) -> None: assert hook.compare_func(5, 9) is True @e2e_pytest_unit - def test_should_check_stopping(self) -> None: + def test_is_check_timing(self) -> None: """Test _should_check_stopping function.""" hook = ReduceLROnPlateauLrUpdaterHook(interval=5, min_lr=1e-5) hook.by_epoch = False runner = MockRunner() - assert hook._should_check_stopping(runner) is True - - runner._iter = 8 - assert hook._should_check_stopping(runner) is False + assert hook._is_check_timing(runner) is False @e2e_pytest_unit def test_get_lr(self, mocker) -> None: """Test function for get_lr.""" - mocker.patch.object(ReduceLROnPlateauLrUpdaterHook, "_should_check_stopping", return_value=False) + mocker.patch.object(ReduceLROnPlateauLrUpdaterHook, "_is_check_timing", return_value=False) hook = ReduceLROnPlateauLrUpdaterHook(interval=5, min_lr=1e-5) hook.warmup_iters = 3 runner = MockRunner() assert 
hook.get_lr(runner, 1e-2) == 1e-2 - mocker.patch.object(ReduceLROnPlateauLrUpdaterHook, "_should_check_stopping", return_value=True) + mocker.patch.object(ReduceLROnPlateauLrUpdaterHook, "_is_check_timing", return_value=True) hook = ReduceLROnPlateauLrUpdaterHook(interval=5, min_lr=1e-5) hook.warmup_iters = 3 runner = MockRunner() assert hook.get_lr(runner, 1e-2) == 1e-2 assert hook.bad_count == 0 - assert hook.last_iter == 9 - mocker.patch.object(ReduceLROnPlateauLrUpdaterHook, "_should_check_stopping", return_value=True) + mocker.patch.object(ReduceLROnPlateauLrUpdaterHook, "_is_check_timing", return_value=True) hook = ReduceLROnPlateauLrUpdaterHook(interval=5, min_lr=1e-5) hook.best_score = 90 hook.warmup_iters = 3 @@ -250,9 +246,9 @@ def test_get_lr(self, mocker) -> None: hook.iteration_patience = 5 hook.last_iter = 2 runner = MockRunner() - assert hook.get_lr(runner, 1e-2) == 1e-3 - assert hook.last_iter == 9 - assert hook.bad_count == 0 + assert hook.get_lr(runner, 1e-3) == 1e-3 + assert hook.last_iter == 2 + assert hook.bad_count == 2 @e2e_pytest_unit def test_before_run(self) -> None: @@ -264,7 +260,7 @@ def test_before_run(self) -> None: assert hook.base_lr == [1e-4] assert hook.bad_count == 0 assert hook.last_iter == 0 - assert hook.current_lr is None + assert hook.current_lr == -1.0 assert hook.best_score == -inf diff --git a/tests/unit/mpa/modules/hooks/test_mpa_ema_v2_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_ema_v2_hook.py similarity index 75% rename from tests/unit/mpa/modules/hooks/test_mpa_ema_v2_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_ema_v2_hook.py index ff46b406906..eeb1c8d3b1a 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_ema_v2_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_ema_v2_hook.py @@ -1,9 +1,12 @@ -"""Unit test for otx.mpa.modules.hooks.model_ema_v2_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.model_ema_v2_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.model_ema_v2_hook import ModelEmaV2, ModelEmaV2Hook +from otx.algorithms.common.adapters.mmcv.hooks.model_ema_v2_hook import ( + ModelEmaV2, + ModelEmaV2Hook, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_eval_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_eval_hook.py similarity index 100% rename from tests/unit/mpa/modules/hooks/test_mpa_eval_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_eval_hook.py diff --git a/tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_fp16_sam_optimizer_hook.py similarity index 85% rename from tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_fp16_sam_optimizer_hook.py index 91ec24017d4..bb1c24217e1 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_fp16_sam_optimizer_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_fp16_sam_optimizer_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.fp16_sam_optimizer_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.fp16_sam_optimizer_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_ib_loss_hook.py similarity 
index 85% rename from tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_ib_loss_hook.py index 63b95e2e780..e6086c0b1d2 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_ib_loss_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_ib_loss_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.ib_loss_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.ib_loss_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/tests/unit/mpa/modules/hooks/test_mpa_logger_replace_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_logger_replace_hook.py similarity index 72% rename from tests/unit/mpa/modules/hooks/test_mpa_logger_replace_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_logger_replace_hook.py index 1a5c5324d72..f0c5b523737 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_logger_replace_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_logger_replace_hook.py @@ -1,9 +1,9 @@ -"""Unit test for otx.mpa.modules.hooks.logger_replace_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.logger_replace_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.logger_replace_hook import LoggerReplaceHook +from otx.algorithms.common.adapters.mmcv.hooks import LoggerReplaceHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_model_ema_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_model_ema_hook.py similarity index 94% rename from tests/unit/mpa/modules/hooks/test_mpa_model_ema_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_model_ema_hook.py index b07bd97dd29..f387ae604f0 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_model_ema_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_model_ema_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.model_ema_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.model_ema_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -9,7 +9,10 @@ from mmcv.runner import BaseRunner from mmcv.runner.hooks.ema import EMAHook -from otx.mpa.modules.hooks.model_ema_hook import CustomModelEMAHook, DualModelEMAHook +from otx.algorithms.common.adapters.mmcv.hooks import ( + CustomModelEMAHook, + DualModelEMAHook, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_no_bias_decay_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_no_bias_decay_hook.py similarity index 100% rename from tests/unit/mpa/modules/hooks/test_mpa_no_bias_decay_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_no_bias_decay_hook.py diff --git a/tests/unit/mpa/modules/hooks/test_mpa_recording_forward_hooks.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_recording_forward_hooks.py similarity index 95% rename from tests/unit/mpa/modules/hooks/test_mpa_recording_forward_hooks.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_recording_forward_hooks.py index c1574ef8edb..b994a166bf4 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_recording_forward_hooks.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_recording_forward_hooks.py @@ -1,4 +1,4 @@ -"""Unit test for 
otx.mpa.modules.hooks.recording_forward_hooks.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # @@ -7,7 +7,7 @@ import pytest import torch -from otx.mpa.modules.hooks.recording_forward_hooks import ( +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( ActivationMapHook, BaseRecordingForwardHook, EigenCamHook, diff --git a/tests/unit/mpa/modules/hooks/test_mpa_save_initial_weight_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_save_initial_weight_hook.py similarity index 70% rename from tests/unit/mpa/modules/hooks/test_mpa_save_initial_weight_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_save_initial_weight_hook.py index 584cc2aa5aa..e666e75cd2b 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_save_initial_weight_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_save_initial_weight_hook.py @@ -1,9 +1,9 @@ -"""Unit test for otx.mpa.modules.hooks.save_initial_weight_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.save_initial_weight_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.save_initial_weight_hook import SaveInitialWeightHook +from otx.algorithms.common.adapters.mmcv.hooks import SaveInitialWeightHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_semisl_cls_hook.py similarity index 85% rename from tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_semisl_cls_hook.py index 2abfd2fc28b..47086c34391 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_semisl_cls_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_semisl_cls_hook.py @@ -1,4 +1,4 @@ -"""Unit test for otx.mpa.modules.hooks.semisl_cls_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.semisl_cls_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # diff --git a/tests/unit/mpa/modules/hooks/test_mpa_task_adapt_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_task_adapt_hook.py similarity index 69% rename from tests/unit/mpa/modules/hooks/test_mpa_task_adapt_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_task_adapt_hook.py index 5d8155572d8..15fdb8c8cee 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_task_adapt_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_task_adapt_hook.py @@ -1,9 +1,9 @@ -"""Unit test for otx.mpa.modules.hooks.task_adapt_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.task_adapt_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.task_adapt_hook import TaskAdaptHook +from otx.algorithms.common.adapters.mmcv.hooks.task_adapt_hook import TaskAdaptHook from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_unbiased_teacher_hook.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_unbiased_teacher_hook.py similarity index 66% rename from tests/unit/mpa/modules/hooks/test_mpa_unbiased_teacher_hook.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_unbiased_teacher_hook.py index 2b845d4c9d5..ea3877683d0 100644 --- 
a/tests/unit/mpa/modules/hooks/test_mpa_unbiased_teacher_hook.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_unbiased_teacher_hook.py @@ -1,9 +1,11 @@ -"""Unit test for otx.mpa.modules.hooks.unbiased_teacher_hook.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.unbiased_teacher_hook.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.unbiased_teacher_hook import UnbiasedTeacherHook +from otx.algorithms.common.adapters.mmcv.hooks.unbiased_teacher_hook import ( + UnbiasedTeacherHook, +) from tests.test_suite.e2e_test_system import e2e_pytest_unit diff --git a/tests/unit/mpa/modules/hooks/test_mpa_workflow_hooks.py b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_workflow_hooks.py similarity index 89% rename from tests/unit/mpa/modules/hooks/test_mpa_workflow_hooks.py rename to tests/unit/algorithms/common/adapters/mmcv/hooks/test_workflow_hooks.py index 973f973a0f1..1fe3709436c 100644 --- a/tests/unit/mpa/modules/hooks/test_mpa_workflow_hooks.py +++ b/tests/unit/algorithms/common/adapters/mmcv/hooks/test_workflow_hooks.py @@ -1,9 +1,9 @@ -"""Unit test for otx.mpa.modules.hooks.workflow_hooks.""" +"""Unit test for otx.algorithms.common.adapters.mmcv.hooks.workflow_hooks.""" # Copyright (C) 2023 Intel Corporation # SPDX-License-Identifier: Apache-2.0 # -from otx.mpa.modules.hooks.workflow_hooks import ( +from otx.algorithms.common.adapters.mmcv.hooks.workflow_hook import ( AfterStageWFHook, SampleLoggingHook, WFProfileHook, diff --git a/tests/unit/mpa/cls/test_cls_explanier.py b/tests/unit/mpa/cls/test_cls_explanier.py index f7137791e2d..d28b4953323 100644 --- a/tests/unit/mpa/cls/test_cls_explanier.py +++ b/tests/unit/mpa/cls/test_cls_explanier.py @@ -2,9 +2,11 @@ import pytest +from otx.algorithms.common.adapters.mmcv.hooks.recording_forward_hook import ( + ActivationMapHook, +) from otx.mpa.cls.explainer import ClsExplainer from otx.mpa.cls.stage import ClsStage -from otx.mpa.modules.hooks.recording_forward_hooks import ActivationMapHook from tests.test_suite.e2e_test_system import e2e_pytest_unit from tests.unit.algorithms.classification.test_helper import ( generate_cls_dataset, From 72fc45591fd5b2978cdb16ce4fe5537c4db561f2 Mon Sep 17 00:00:00 2001 From: Galina Zalesskaya Date: Thu, 23 Mar 2023 10:12:18 +0200 Subject: [PATCH 21/34] Add explanation for XAI & minor doc fixes (#1923) * [CI] Updated daily workflow (#1904) Updated daily workflow - remove if statement to allow running on any branch by manually * [FIX] re-bugfix: ATSS head loss (#1907) re bugfix * Fix typos * Explanation of Explanation * Add images & typo fixes * Fixes from comments * Add accuracy for OD explanation * Tutorial update * Add accuracy for BCCD and WGISD * Fix --------- Co-authored-by: Yunchu Lee Co-authored-by: Eunwoo Shin --- .../explanation/additional_features/index.rst | 1 + .../explanation/additional_features/xai.rst | 95 ++++++++++++++++++ .../object_detection/object_detection.rst | 33 +++--- .../quick_start_guide/cli_commands.rst | 2 +- .../guide/tutorials/advanced/self_sl.rst | 2 +- .../guide/tutorials/advanced/semi_sl.rst | 4 +- docs/source/guide/tutorials/base/demo.rst | 6 +- docs/source/guide/tutorials/base/explain.rst | 23 ++++- .../base/how_to_train/classification.rst | 3 +- .../tutorials/base/how_to_train/detection.rst | 10 +- docs/source/guide/tutorials/index.rst | 1 + docs/utils/images/xai_cls.jpg | Bin 0 -> 251865 bytes docs/utils/images/xai_det.jpg | Bin 0 -> 171342 bytes 
 docs/utils/images/xai_example.jpg             | Bin 0 -> 80216 bytes
 .../classification/movinet/template.yaml      |   2 +-
 .../configs/classification/x3d/template.yaml  |   2 +-
 otx/cli/manager/config_manager.py             |   2 +-
 17 files changed, 155 insertions(+), 31 deletions(-)
 create mode 100644 docs/source/guide/explanation/additional_features/xai.rst
 create mode 100644 docs/utils/images/xai_cls.jpg
 create mode 100644 docs/utils/images/xai_det.jpg
 create mode 100644 docs/utils/images/xai_example.jpg

diff --git a/docs/source/guide/explanation/additional_features/index.rst b/docs/source/guide/explanation/additional_features/index.rst
index 5bfdaf77e16..9e76843e82e 100644
--- a/docs/source/guide/explanation/additional_features/index.rst
+++ b/docs/source/guide/explanation/additional_features/index.rst
@@ -9,3 +9,4 @@ Additional Features
    models_optimization
    hpo
    auto_configuration
+   xai
diff --git a/docs/source/guide/explanation/additional_features/xai.rst b/docs/source/guide/explanation/additional_features/xai.rst
new file mode 100644
index 00000000000..3c91c2c71e1
--- /dev/null
+++ b/docs/source/guide/explanation/additional_features/xai.rst
@@ -0,0 +1,95 @@
+Explainable AI (XAI)
+====================
+
+**Explainable AI (XAI)** is a field of research that aims to make machine learning models more transparent and interpretable to humans.
+The goal is to help users understand how and why AI systems make decisions and to provide insight into their inner workings. It allows us to detect, analyze, and prevent common mistakes, for example, when the model uses irrelevant features to make a prediction.
+XAI can help to build trust in AI, make sure that the model is safe for development, and increase its adoption in various domains.
+
+Most XAI methods generate **saliency maps** as a result. A saliency map is a visual representation, suitable for human comprehension, that highlights the most important parts of the image from the model's point of view.
+It looks like a heatmap, where warm-colored areas mark the regions the model pays the most attention to.
+
+
+.. figure:: ../../../../utils/images/xai_example.jpg
+   :width: 600
+   :alt: this image shows the result of the XAI algorithm
+
+   These images are taken from the `D-RISE paper `_.
+
+
+We can generate saliency maps for a model trained in OpenVINO™ Training Extensions using the ``otx explain`` command. Learn more about its usage in the :doc:`../../tutorials/base/explain` tutorial.
+
+*********************************
+XAI algorithms for classification
+*********************************
+
+.. image:: ../../../../utils/images/xai_cls.jpg
+   :width: 600
+   :align: center
+   :alt: this image shows the comparison of XAI classification algorithms
+
+
+For classification networks, the following algorithms are used to generate saliency maps:
+
+- **Activation Map** - this is the most basic and naive approach. It takes the output of the model's feature extractor (backbone) and averages it over the channel dimension. The result depends only on the backbone and ignores the neck and head computations, but it is fast and gives a reasonably good map.
+
+- `Eigen-Cam `_ uses Principal Component Analysis (PCA). It returns the first principal component of the feature extractor output, which most of the time corresponds to the dominant object. The result also depends only on the backbone and ignores the neck and head computations.
+
+- `Recipro-CAM `_ uses Class Activation Mapping (CAM) to weight the activation map for each class, so it can generate a different saliency map per class. Recipro-CAM is a fast, gradient-free Reciprocal CAM method: it spatially masks the extracted feature maps to exploit the correlation between activation maps and network predictions for the target classes.
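+
+The short, self-contained sketch below is **not** part of OpenVINO™ Training Extensions; it only illustrates, for an assumed backbone output of shape ``(C, H, W)``, how the two backbone-only methods above reduce a feature map to a saliency map (the function names and tensor shapes are made up for the example).
+
+.. code-block:: python
+
+   import torch
+
+   def activation_map(features: torch.Tensor) -> torch.Tensor:
+       """Average a (C, H, W) feature map over the channel dimension."""
+       saliency = features.mean(dim=0)
+       saliency = saliency - saliency.min()
+       return saliency / (saliency.max() + 1e-12)  # normalize to [0, 1] for heatmap rendering
+
+   def eigen_cam(features: torch.Tensor) -> torch.Tensor:
+       """Project a (C, H, W) feature map onto its first principal component."""
+       c, h, w = features.shape
+       flat = features.reshape(c, h * w).T            # (H*W, C) observations
+       flat = flat - flat.mean(dim=0, keepdim=True)   # center before PCA
+       _, _, vh = torch.linalg.svd(flat, full_matrices=False)
+       saliency = (flat @ vh[0]).reshape(h, w)        # score along the first principal component
+       saliency = saliency - saliency.min()
+       return saliency / (saliency.max() + 1e-12)
+
+   features = torch.randn(256, 7, 7)                  # stand-in for a real backbone output
+   print(activation_map(features).shape, eigen_cam(features).shape)
+
+Recipro-CAM is not sketched here because, unlike the two single-shot methods above, it re-infers the neck and head on spatially masked copies of the feature map, which is what makes it per-class but slower (see the comparison table below).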
+
+Below we show the comparison of the described algorithms. ``Access to the model internal state`` means the necessity to modify the model's outputs and dump inner features.
+``Per-class explanation support`` means generating different saliency maps for different classes.
+
++-------------------------------------+----------------+-----------+-------------------------------------------------------------------+
+| Classification algorithm            | Activation Map | Eigen-Cam | Recipro-CAM                                                       |
++=====================================+================+===========+===================================================================+
+| Need access to model internal state | Yes            | Yes       | Yes                                                               |
++-------------------------------------+----------------+-----------+-------------------------------------------------------------------+
+| Gradient-free                       | Yes            | Yes       | Yes                                                               |
++-------------------------------------+----------------+-----------+-------------------------------------------------------------------+
+| Single-shot                         | Yes            | Yes       | No (re-infer neck + head H*W times, where HxW – feature map size) |
++-------------------------------------+----------------+-----------+-------------------------------------------------------------------+
+| Per-class explanation support       | No             | No        | Yes                                                               |
++-------------------------------------+----------------+-----------+-------------------------------------------------------------------+
+| Execution speed                     | Fast           | Fast      | Medium                                                            |
++-------------------------------------+----------------+-----------+-------------------------------------------------------------------+
+
+
+****************************
+XAI algorithms for detection
+****************************
+
+For detection networks, the following algorithms are used to generate saliency maps:
+
+- **Activation Map** - the same approach as for classification networks, which uses the outputs of the feature extractor. This algorithm is used to generate saliency maps for two-stage detectors.
+
+- **DetClassProbabilityMap** - this approach takes the raw classification head output and uses class probability maps to calculate regions of interest for each class. So, it creates a different saliency map for each class. This algorithm is implemented for single-stage detectors only.
+
+.. image:: ../../../../utils/images/xai_det.jpg
+   :width: 600
+   :align: center
+   :alt: this image shows the detailed description of the XAI detection algorithm
+
+
+The main limitation of this method is that, due to the training loss design of most single-stage detectors, activation values drift towards the center of the object while propagating through the network.
+This prevents getting a clear explanation in the input image space using intermediate activations.
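+
+As a rough, framework-agnostic sketch of the ``DetClassProbabilityMap`` idea (this is **not** the actual OpenVINO™ Training Extensions implementation, and the anchor and class counts below are made up for the example), the per-class map can be obtained from the raw classification logits like this:
+
+.. code-block:: python
+
+   import torch
+
+   def det_class_probability_map(cls_logits: torch.Tensor, num_classes: int) -> torch.Tensor:
+       """Collapse raw (num_anchors * num_classes, H, W) logits of a single-stage
+       classification head into a normalized (num_classes, H, W) saliency map."""
+       _, h, w = cls_logits.shape
+       per_anchor = cls_logits.reshape(-1, num_classes, h, w).sigmoid()
+       saliency = per_anchor.max(dim=0).values                      # best anchor per location
+       saliency = saliency - saliency.amin(dim=(1, 2), keepdim=True)
+       return saliency / (saliency.amax(dim=(1, 2), keepdim=True) + 1e-12)
+
+   # e.g. 9 anchors and 3 classes on a 32x32 score map (made-up numbers)
+   maps = det_class_probability_map(torch.randn(9 * 3, 32, 32), num_classes=3)
+   print(maps.shape)  # torch.Size([3, 32, 32])
+
+A real single-stage detector produces such logits at several feature-pyramid levels, so the per-level maps would still have to be resized and combined; the sketch above deliberately skips that step.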
+
+Below we show the comparison of the described algorithms. ``Access to the model internal state`` means the necessity to modify the model's outputs and dump inner features.
+``Per-class explanation support`` means generating different saliency maps for different classes. ``Per-box explanation support`` means generating standalone saliency maps for each detected prediction.
+
++-------------------------------------+----------------+------------------------+
+| Detection algorithm                 | Activation Map | DetClassProbabilityMap |
++=====================================+================+========================+
+| Need access to model internal state | Yes            | Yes                    |
++-------------------------------------+----------------+------------------------+
+| Gradient-free                       | Yes            | Yes                    |
++-------------------------------------+----------------+------------------------+
+| Single-shot                         | Yes            | Yes                    |
++-------------------------------------+----------------+------------------------+
+| Per-class explanation support       | No             | Yes                    |
++-------------------------------------+----------------+------------------------+
+| Per-box explanation support         | No             | No                     |
++-------------------------------------+----------------+------------------------+
+| Execution speed                     | Fast           | Fast                   |
++-------------------------------------+----------------+------------------------+
diff --git a/docs/source/guide/explanation/algorithms/object_detection/object_detection.rst b/docs/source/guide/explanation/algorithms/object_detection/object_detection.rst
index 7cbe49852b3..e2528c20541 100644
--- a/docs/source/guide/explanation/algorithms/object_detection/object_detection.rst
+++ b/docs/source/guide/explanation/algorithms/object_detection/object_detection.rst
@@ -95,20 +95,25 @@ To see which public backbones are available for the task, the following command

     $ otx find --backbone {torchvision, pytorchcv, mmcls, omz.mmcls}

-.. In the table below the test mAP on some academic datasets using our :ref:`supervised pipeline ` is presented.
-.. The results were obtained on our templates without any changes.
-.. For hyperparameters, please, refer to the related template.
-.. We trained each model with a single Nvidia GeForce RTX3090.
-.. +-----------+------------+-----------+-----------+
-.. | Model name| COCO | PASCAL VOC| MinneApple|
-.. +===========+============+===========+===========+
-.. | YOLOX | N/A | N/A | 24.5 |
-.. +-----------+------------+-----------+-----------+
-.. | SSD | N/A | N/A | 31.2 |
-.. +-----------+------------+-----------+-----------+
-.. | ATSS | N/A | N/A | 42.5 |
-.. +-----------+------------+-----------+-----------+
+In the table below, the test mAP on some academic datasets using our :ref:`supervised pipeline ` is presented.
+
+For the `COCO `__ dataset, the accuracy of the pretrained weights is shown. This means that the weights are undertrained for the COCO dataset and do not achieve the best possible result.
+That is because the purpose of the pretrained models is to learn basic features from such a large and diverse dataset as COCO and to use these weights to get good results on other custom datasets right from the start.
+
+The results on `Pascal VOC `_, `BCCD `_, `MinneApple `_ and `WGISD `_ were obtained on our templates without any changes.
+BCCD is an easy dataset with focused large objects, while MinneApple and WGISD have small objects that are hard to distinguish from the background.
+For hyperparameters, please refer to the related template.
+We trained each model with a single Nvidia GeForce RTX3090.
+ ++-----------+------------+-----------+-----------+-----------+-----------+ +| Model name| COCO | PASCAL VOC| BCCD | MinneApple| WGISD | ++===========+============+===========+===========+===========+===========+ +| YOLOX | 32.0 | 66.6 | 60.3 | 24.5 | 44.1 | ++-----------+------------+-----------+-----------+-----------+-----------+ +| SSD | 13.5 | 50.0 | 54.2 | 31.2 | 45.9 | ++-----------+------------+-----------+-----------+-----------+-----------+ +| ATSS | 32.5 | 68.7 | 61.5 | 42.5 | 57.5 | ++-----------+------------+-----------+-----------+-----------+-----------+ @@ -133,7 +138,7 @@ Overall, OpenVINO™ Training Extensions utilizes powerful techniques for improv Please, refer to the :doc:`tutorial <../../../tutorials/advanced/semi_sl>` how to train semi supervised learning. -In the table below the mAP on toy data sample from `COCO `_ dataset using our pipeline is presented. +In the table below the mAP on toy data sample from `COCO `__ dataset using our pipeline is presented. We sample 400 images that contain one of [person, car, bus] for labeled train images. And 4000 images for unlabeled images. For validation 100 images are selected from val2017. diff --git a/docs/source/guide/get_started/quick_start_guide/cli_commands.rst b/docs/source/guide/get_started/quick_start_guide/cli_commands.rst index de32e70d1ab..f7878dce927 100644 --- a/docs/source/guide/get_started/quick_start_guide/cli_commands.rst +++ b/docs/source/guide/get_started/quick_start_guide/cli_commands.rst @@ -399,7 +399,7 @@ The command below will evaluate the trained model on the provided dataset: Explanation *********** -``otx explain`` runs the explanation algorithm of a model on the specific dataset. It helps explain the model's decision-making process in a way that is easily understood by humans. +``otx explain`` runs the explainable AI (XAI) algorithm of a model on the specific dataset. It helps explain the model's decision-making process in a way that is easily understood by humans. With the ``--help`` command, you can list additional information, such as its parameters common to all model templates: diff --git a/docs/source/guide/tutorials/advanced/self_sl.rst b/docs/source/guide/tutorials/advanced/self_sl.rst index 6de99a97d50..e474fe0c0bc 100644 --- a/docs/source/guide/tutorials/advanced/self_sl.rst +++ b/docs/source/guide/tutorials/advanced/self_sl.rst @@ -21,7 +21,7 @@ The process has been tested on the following configuration: Setup virtual environment ************************* -1. You can follow the installation process from a :doc:`quick start guide <../../../get_started/quick_start_guide/installation>` +1. You can follow the installation process from a :doc:`quick start guide <../../get_started/quick_start_guide/installation>` to create a universal virtual environment for OpenVINO™ Training Extensions. 2. Activate your virtual diff --git a/docs/source/guide/tutorials/advanced/semi_sl.rst b/docs/source/guide/tutorials/advanced/semi_sl.rst index ef81cf598d5..fa866675526 100644 --- a/docs/source/guide/tutorials/advanced/semi_sl.rst +++ b/docs/source/guide/tutorials/advanced/semi_sl.rst @@ -44,7 +44,7 @@ This tutorial explains how to train a model in semi-supervised learning mode and Setup virtual environment ************************* -1. You can follow the installation process from a :doc:`quick start guide <../../../get_started/quick_start_guide/installation>` +1. 
You can follow the installation process from a :doc:`quick start guide <../../get_started/quick_start_guide/installation>` to create a universal virtual environment for OpenVINO™ Training Extensions. 2. Activate your virtual @@ -128,7 +128,7 @@ Enable via ``otx train`` *************************** 1. To enable semi-supervised learning directly via ``otx train``, we need to add arguments ``--unlabeled-data-roots`` and ``--algo_backend.train_type`` -which is one of template-specific parameters (details are provided in `quick start guide <../../get_started/quick_start_guide/cli_commands.html#training>`__.) +which is one of template-specific parameters (details are provided in `quick start guide <../../get_started/quick_start_guide/cli_commands.html#training>`__). .. code-block:: diff --git a/docs/source/guide/tutorials/base/demo.rst b/docs/source/guide/tutorials/base/demo.rst index 2ca856df0ae..735cc664515 100644 --- a/docs/source/guide/tutorials/base/demo.rst +++ b/docs/source/guide/tutorials/base/demo.rst @@ -8,7 +8,7 @@ It allows you to apply the model on the custom data or the online footage from a This tutorial uses an object detection model for example, however for other tasks the functionality remains the same - you just need to replace the input dataset with your own. -For visualization you use images from WGISD dataset from the :doc: `object detection tutorial `. +For visualization you use images from WGISD dataset from the :doc:`object detection tutorial `. 1. Activate the virtual environment created in the previous step. @@ -69,8 +69,8 @@ You can check a list of camera devices by running the command line below on Linu .. code-block:: - sudo apt-get install v4l-utils - v4l2-ctl --list-devices + (demo) ...$ sudo apt-get install v4l-utils + (demo) ...$ v4l2-ctl --list-devices The output will look like this: diff --git a/docs/source/guide/tutorials/base/explain.rst b/docs/source/guide/tutorials/base/explain.rst index dba54b63d14..a9367f19887 100644 --- a/docs/source/guide/tutorials/base/explain.rst +++ b/docs/source/guide/tutorials/base/explain.rst @@ -26,9 +26,28 @@ at the path specified by ``--save-explanation-to``. .. code-block:: - otx explain --explain-data-roots otx-workspace-DETECTION/splitted_dataset/val/ --save-explanation-to outputs/explanation --load-weights outputs/weights.pth + otx explain --explain-data-roots otx-workspace-DETECTION/splitted_dataset/val/ \ + --save-explanation-to outputs/explanation \ + --load-weights outputs/weights.pth -3. As a result we will get a folder with a pair of generated +3. To specify the algorithm of saliency map creation for classification, +we can define the ``--explain-algorithm`` parameter. + +- ``activationmap`` - for activation map classification algorithm +- ``eigencam`` - for Eigen-Cam classification algorithm +- ``classwisesaliencymap`` - for Recipro-CAM classification algorithm, this is a default method + +For detection task, we can choose between the following methods: + +- ``activationmap`` - for activation map detection algorithm +- ``classwisesaliencymap`` - for DetClassProbabilityMap algorithm (works for single-stage detectors only) + +.. note:: + + Learn more about Explainable AI and its algorithms in :doc:`XAI explanation section <../../explanation/additional_features/xai>` + + +4. 
As a result we will get a folder with a pair of generated images for each image in ``--explain-data-roots``: - saliency map - where red color means more attention of the model diff --git a/docs/source/guide/tutorials/base/how_to_train/classification.rst b/docs/source/guide/tutorials/base/how_to_train/classification.rst index ff66d4b7f39..d645d9ec0ee 100644 --- a/docs/source/guide/tutorials/base/how_to_train/classification.rst +++ b/docs/source/guide/tutorials/base/how_to_train/classification.rst @@ -56,6 +56,7 @@ with the following command: cd .. | + .. image:: ../../../../../utils/images/flowers_example.jpg :width: 600 @@ -120,7 +121,7 @@ Let's prepare an OpenVINO™ Training Extensions classification workspace runnin (otx) ...$ cd ./otx-workspace-CLASSIFICATION -It will create **otx-workspace-CLASSIFICATION** with all necessery configs for MobileNet-V3-large-1x, prepared ``data.yaml`` to simplify CLI commands launch and splitted dataset named ``splitted_dataset``. +It will create **otx-workspace-CLASSIFICATION** with all necessary configs for MobileNet-V3-large-1x, prepared ``data.yaml`` to simplify CLI commands launch and splitted dataset named ``splitted_dataset``. 3. To start training you need to call ``otx train`` command in our workspace: diff --git a/docs/source/guide/tutorials/base/how_to_train/detection.rst b/docs/source/guide/tutorials/base/how_to_train/detection.rst index 1e6a82c693e..f55d1419af8 100644 --- a/docs/source/guide/tutorials/base/how_to_train/detection.rst +++ b/docs/source/guide/tutorials/base/how_to_train/detection.rst @@ -60,7 +60,7 @@ Dataset preparation .. code-block:: - cd data + mkdir data ; cd data git clone https://github.com/thsant/wgisd.git cd wgisd git checkout 6910edc5ae3aae8c20062941b1641821f0c30127 @@ -107,7 +107,7 @@ We can do that by running these commands: .. code-block:: # format images folder - mkdir data images + mv data images # format annotations folder mv coco_annotations annotations @@ -116,6 +116,8 @@ We can do that by running these commands: mv annotations/train_bbox_instances.json annotations/instances_train.json mv annotations/test_bbox_instances.json annotations/instances_val.json + cd ../.. + ********* Training ********* @@ -183,9 +185,9 @@ Let's prepare the object detection workspace running the following command: -.. note:: +.. warning:: - If you want to update your current workspace by running ``otx build`` with other parameters, it's better to delete the original workplace before that to prevent mistakes. + If you want to rebuild your current workspace by running ``otx build`` with other parameters, it's better to delete the original workplace before that to prevent mistakes. Check ``otx-workspace-DETECTION/data.yaml`` to ensure, which data subsets will be used for training and validation, and update it if necessary. diff --git a/docs/source/guide/tutorials/index.rst b/docs/source/guide/tutorials/index.rst index 7e679cbe38e..582efdd9193 100644 --- a/docs/source/guide/tutorials/index.rst +++ b/docs/source/guide/tutorials/index.rst @@ -6,6 +6,7 @@ This section reveals how to use ``CLI``, both base and advanced features. It provides the end-to-end solution from installation to model deployment and demo visualization on specific example for each of the supported tasks. .. 
toctree:: + :titlesonly: :maxdepth: 3 base/index diff --git a/docs/utils/images/xai_cls.jpg b/docs/utils/images/xai_cls.jpg new file mode 100644 index 0000000000000000000000000000000000000000..602d77b2eb2e8cd8418479c4a89d0dc787ddf3f1 GIT binary patch literal 251865 zcmeFYcT`hf*Dkv0odD8XM35o^(xpg5rHM3=UIKztX#x@y5{iIQq^Sr zh$u*J0YsD{32F$WaQ5%LXPi5}@2~gVd&c?e-DHoEG00wf&AH~9nLEJ(T-Oav z3;`M%8XyDw15Or!s{kD>?LQyzq6a?=%nS_l^bD*_OpMIztnBP;tZZx?oIG3{oZOsj zY+U?Y+^2Z?`1sf%0)qU!f;_x@y#EYBLkB)X&%naKz{1PH#=-l)eVz0G+{|=BEIo8I zX8>Am8ai&8lK}t<05lBXZ2y_?|Ga2u!8tNAF|)9;fj?;B0%&RI=xFKb{+TuS>uB)* z06jMYkA$*5<0&h5rZXYDDzTX*%#v3cd--mD!AYs!z8A;B$}b=&Bz#s{Mpo{en!1MO zMJ?^C*9;7ej7_dv-?Fi_vv+Xx@Vw*Y?c?hg8Ww&(0ulKj{!v0=(&Hz|S=rB?zsPx+ zn^#&^UQt<9UGw^FQ*%pe+q?FTzW#y1q2ZAaqmxr#r)Os8<`>r1H@=$8-MWo z&tJkJ@#y#;xo7~o|3|F=9G zl~`upE14yYy)2TdH*tKo?|otAmr`3hi~mQoe@XW52^RPNNwWVX*#9fn0>Dm311=sN zHvj`D)}_g^z<=)l=-@v#@E;raj}83C2L5l_0J`7=2tqwAJWDA+dA*Mq(&N#nK^uko z`_s>6M%D=A%i2cg_MY|@+K|b3VE2U&Uqt5Xq~5@|o&awui%x*DK_JQ>w?$~CbapEu zJP|3qGZC(*zp$fOR~-=#MutAUT;hE>6!I!$RkFGvsUMEBK|e*&lSBPWqC$y?be|qw zoJr+ag`Yonba$BkM9!HsbZj$|r14ub;1;s=hP&icc3?1H>_(=5}g1C?fOhn|@U*noC;y_TbUYp2_^3$CA0MnQSYc zMlN4^DTY1dWX8^lki1WT-s@o}!2IvQB3uKRp;#%6Dy274o-mg0B^Jr!o3o%bFYMc_#5jbBN^RkM;JarMU?( zby~SXNsh>wLT*YXRUG|T_lk(s^nOZz*$<5a`XbBR%L>1_?i=Pm>w3gh^2|N=xf7NR zN|cM(^T6{9Md@QvaX2{9rW|#~AIiS%n5us(!{(P{hCE*{eYI;+>}6>E-y<~XX4Dl* zLo<5OarQd#VHG0FBB?f_O0j?-2K_X-^<*PukMVnO!sXxWy=@)OESVQB49{WQP5`1T zK&ev~^X?Ou!hObJ(wm*EBSSc9m0Q{qTqHB6;ag8$m~h{?>$Fez?tn>#WK3Nxyhzj} z`5~^ZiD#A|ko*0@ua0jCe#HEw@6A!6;?7#iHIp0oq%Z=Au zs(+eOCN)EVddmr!lq*gu-`?GpykdHn${u$acFPAn;sU3#J|S6=6}qNcf*!+J&WEK# zmd$w$Y7i*PDTbb*^^A%0j~Ev?Ssi>n6TvFmI-%K1z}34cZ|yKR|OU3 zstKVnD9f+{+cB5P-M;!JTWNBpU*v-% zy#YgR`ei2T%g_tRMOa%B=mR{(_-Dp;Q`X^P+g3-d+N@@qb-};R1uBZt-qcNG1;*ZH zT%c1SE3Kgy9uf{IC8a1HBAaPPBaU2lD0$=a-IBL=d03L22EDQiIi=i`T?0s$en3tD z1{QER;#_yS&pr7SrK2!Dgo*ahIp~ODxVbP}!fc6-u^wmd~U$y>i3Rmi-c6Kzo{= z+htX`-a!PKe2R(i8^1by0zeTdJ@8ZEi17CoyU>LP=EbeSj_us<97^1IN*m8yyVK{% z{O!X}S0$2;tElfPu{+|{;|A({RN zq4vRWR?BQd+6w47T_yG9Rt!CVO(%sj;4}P6`$ov%2;G}ZSvtZ(w*jb}9ORU^8s%j- z>yHPsv0hEhGwNmmYFnM1Z4Tt>t|#SZxw~BkhRV(}43hmGD>1h|{>LzHY5aW`6CxM|N?SdQ%k*rJg+AxOXv3 zZ`+5i$&0n(C6?1foruQoP+2$d{+7gwb>?nSzdY9@?3n3?faboAs0bC8zEqn)smNy; zo^;61ys~ax8TQmzRPXg}t)kcy;BCgN-HOOK8S?NKJO4X7dl1H6uZ1}WBCv46}oOt4g?YxcTgu60rL z@5IM$`p+nAi5?7hKW%*X_T->>t`cACl4?#$0^9!HCqFQ%@Ei( z3!>Qx06$Kmro?s6QPMb3o~!Z1EvlT04(QA#V_;fTD5}?pdI|Xv&M%4c-V-RFc>R-< z+0Z?8hg~?KS*a(FMS@oAvwJ!|@N~?EAbQqJkGguRvGsjZfntE7heJu8^iq~2U=#a` z`Sk1tn#}LNE|bs|f{*W65y9;ZcGs>eu#T=P*qSQ5<@GI3?Z5l6U1!!a)w(TDCma46 z*0h=s4xx#n1NZQIcY5uH^tF2JzJp8iENZrmWew4Ea?k|e0SDg^3ZFt{y^6Qn@E#n) zB*NZ)j}V1@vO(ofpc(1 zjlA$q9=>m^`jLXQ0jU>yj$Ur~1VdV?(Iaz6DNxzpKF<#MR&3w%SwzmM?e9j*e^QR# zKks;#WQE-bTs{HP)d}gl5R&5#yv4hJ4;jRVc%TG9wDmuTnQ9TNe{n7GrkzBbo}&i6 zcC%+IYkX0@UrZDU*M>#!xl~eDLy2VrjxV}b*)eVEEGv;+SF(H$e0QY}B%Q33H#h`u z9Z;{v$F@oSd5Tf*=B6&e`J&uGAAWusdCE_?BAdt%{@%kl#ng%Ud8w&!gQFI|n{MA` z@y*4TXer?0@Dvnh8T3^A9HsCC@G9^aTw!mI8-trJI@t#Xo_2U-Dixzj?{@1ItrmUg zWvKBNc&4iuXyGN~aN;-?!;YX^ONu`MT9NW*{z`=IPO;?cG(0NhZ!W{a1)1Fyq%fy% z_qz$ooEm_)Che7w&LS|zoy7jC0&Byr9)9hHBG1+~`F8^Lzv`aRzm&bmkROP35WF1& z^OS9XVv7bTbQ`mMs}RKReih>d!^WWR<)JZe`?d$Ap7e|?J*}&vPi@dY9S_eh5;_5} zRjb_H^wfEAjnC?Wd*Q1|9SQ@45Fc(Q1Kx(d)eD&+Y1=mjHBY@`P)LyBr8#ahKwc%o zC^e|SpvU6!I82&Cl+J4l>?QwOZMWVu;sy+R>kGb$YNb`#U(bsAo-KBWhX*2;;H`uw zByogZPYTUCL}$_kj6QC~#t1#5`#CCqAkz;Q1a~$@>`ya3 zJ-$wQs!JQC)OVHFi&?nu%+SA53cOY!2opBV2|;yK6wJPE~DN(>@h_xV}X zR`?>+Kh%MZt(HY_?asT@+nSFy&Hvc+74#zp0c5q6F|!-97H>0Pw*nVP!+RWfarh$`E&1Vw 
z4oYb+PcbDt((kO0;XC_@ERnenZ;bJ(PVqpMTfP+$5+3>;>RG!Xseqx!yyG&a|*kU?wnzo6dCJn*< zoBhV6j?HOU*G;aq2r5 z-4XSv$eqK=ie_0MRh!$WVy29cG@%|hx1#ReOdo4IcpZ6S; zA6{q&!U3oms#gZ(4>=X}0EbG|RoZxeFZ$uY0>gma{n^dOg@}#WxKwVvd-(>z;P8& zfknbmuK4dQxY_kU45hQ$#4G5JDXBSGZtR*-!QJZ#Dm?Yy74=UWDVhk z1yOU-`(RnA5Q*8-U&}9=Dem&nzxhJS#>A~bS|5EnVGf76)OMIu(>yh_Ry*_$ct?Aex)+njTUxKSUQ)y1O z5(SRB$4>FZY23K?*6f8i;VTX9Qwyk=)0EP`={#n2=z;t_t@2mnV$Yv<$@wxbX9^Ek z#+6k~yvxwL{b2SFNBQz4LiU}|)3heVCL%#Mp3`sqM)yoC))G+hgrF%Z8_@=9-_pM( z5ng93sOWBmVQQ|1Dba;{b&+ z?m>2Cqk~deK^xX$fDzy$%8*2m|CqE9$z*4hpDO?JLEDpX`H_ZS0p{7h?2|^-&NswO z{q?gjR%Nrg2c zN5wQvrs_AEo15BP8vA)iQvgG}$oKmX}Yfdhbsx!l-6S>)@VC zUPz)oen9>OW|5l6&xIT~zQdGmabotZB!g+Peq=ys`FP~%ry}jN&0fH8ti)N38knB3 zXbdycPxzh!?$hvwKo3kDVR+13CHyoqxOMOR!vVW<3D{?lThFbMNG(U;wCQ2^@Q-@( z#Im*fd4|pJ-Zp(LOL+Or=&4cK9B0g%=QNkh2mkOL(~xdQ1yAbHASd)x@Zu%ai}QXe zzQ5;va3A}lr~hQW?H885H+Ng8@eAKe919vO53rA6kihsTbNt7dUoUPbggKd}roA=R zG}Aub)Nh61w1IAGB6TJ#r>1Ub4H+Q2qGX76 zYZGxONLUaf;%2WJMZolt0S}-#vh3=>3{~yCliQi)f8ar7Eja;X@K_}hgD>pZ=%|a>h3g`Sf_ou;Ge{+HVu)^XB@~?XSa=dVF|*T zJtsf{QW$3#A0)~zeCgT!%}m-B@Je#}k0RC@R-yb{wiF${rFKd-WcL4*;=1Chu30cE5+&G#4| z$`_Z7$LW*L*BNhB_p`^b07t4+HVIsn1>RzJ)qTUAUU|n=U|NFq2YO$82K%y*E1Erx z?f0v@7q|UQ2{e>q1+oaZazB%XT;(-qk5O?3f15Ghv`$Y>C!67i1-*c^JH}sm09MA! ze4}7?`r6rK<3Q&lU#Y+gSM@jV7*o%10FiLAUK6&8!iv0f z0`!O|QBBeJiI+(BL=P|%lG&gH^}G|v&PrO*EDsubB*ye{b&vhDSwXBD) zJhUKsrpK=SG13K5m`Bq&CxCFnVlRq=EQ4D$$v4I2B&wmkqLFWS&tI4(h2yr?(;tDS5VvOs^f>Sg zNspT!K4;I-P-R*OX0d~JJylTwA+=uMGn2J*(2&IwTNslbFR^LpI0NEa{Ih9QFoi)? zMM92ChFH*AeCX~PO}b27C}Kf*b*h4P?)_w~$nN~>4$iC(;nkvB-<27~9kI~d(fCwa zbA~81=vVWJOWjbS`dU!ZeV(=jU55w)>Fs;H;k!1^?Zc10BVM9r`3c1kPEcr5XT*c8 z4VeLp6gU^zsr={h$m7LD)4J8g#g^&ur8I@F+hVaxCA^-1vJRx`HE06x>>d;!k`v)Q zSSX55N(xWsLU>qRc;E6A9dsV;eUG-CyNv(d+hL~9szr|9k&NIIqJJQdp-g0nb<`>1 zqVuHsxW8GF)w`*^fHN#X-_b8KGd6UOqOR-kY#TFXPFK~Bu${*Wh zqI&OQzO6)Yp8y`$Q9sWPMuF)yeH!sO3Wp-R!rS$sxFvlLh;ecO;*(Y{Qmh+QG+!8> zwL3)lF-<>beBk{9Jl^>)@E#*E2pdFJV&{etV5Nby|4y8AXMWY_nvg&$t)jDCDXKT%1o-xZtak!5CY%6&@~#8N zDFn6?AX}SgdIGpzMCIGPpN&KIq7J;_ET~;3$Zx;H4nVdv`QN(9U{Hx$M~xKW@~N1= z5Q+c;0v!t~=Os)?Pb!KIH=C@hg_p6&n{b|V|0;po++6<3U8iz7JSCVt;f)c)m91_q zwBLUpBD#?zd;)x^B!i*4(GJA)Cc}y$qfnxBG$y8thGaK}ok$VqOp6N6F`T;AH4T-k z^s{X8l72)^G?YIu`Cwu;}*Uv>}Qia+tYn7{&=rr$+T&)D?$n)dJ{CK zq8w*e7`yNqEod`9^9@{zRvz)Zb%;(f+*kCK$3?h4csPnrqO!59p^{fw$R5~vJ$^r~ zw1>tWQ?|Y>_RF_@Z+kj0A7Bz52VRm_qOzPK86mva6k9fw5>~Xk+i-(3H4nvR@-?a% zIw#s3lZi1?M)R8_= zt>l(8zwh@2r5W z)XSML){n&*bnMz0W6w{=v7m>QlJlMwHF}fzRCb26-*-*=yhX&hgpHrcavipEU>L|u zAg#cQf{P=3f_>-?X{tA4;tA02(QhW0;g|mU${Xw-hLqqX?&jZ_sou$(FmLF>Mh~h*eKuvbC%xKv z0TK9JBr-@z;Rg3Ii%qf#m(2aC&3FB;+eDul07!S_1neN!%#J#&#Ebfoa!djb{I)=^ zaHkG6=L&8B!`VS-(*(KF+6J`Ou|eqZ8EW?|Xw>bX5$F0t2?B4xN~W2{0$e++0#iTc zt!=@&4u^)I`YP9|OLqmTO=aTSgp`#;U?-nl!VfmBA^YqFRIhm#hyMm@u5-B3A91$x z%gm~6Q{El=UMcHJ!j%Wg36(dV27ikfi~=h(`Z7wBE?pV<{9EFs^?LKgz6C;2gj!y} zOqupzKy~>+uEzP)%MS7FE=l{eo%bSWmcRwKgZaUk5hwz38Y%o!_-|p&`u-_XdKHJC zhivlmj>G3nrLy*9y!KKAZF>WD4X?ZcK-uEtcCuHfpO8vZXCtMOhX)RHbv{V&oBFWW z{1JJcFJ5Qy;7E0_i1Uv{3KVC@Vbr%G+2;SAhcuWvYugheV~_nE)j9%`6zqGmed`rh zBoxFl&U6dF$(JbEx2MBp^MuHDwH9^yqM5wmt@EPXp5i54FETBzyplKL%TxCtX9*1~qX-7lwD=^X!;ZXB$@?u#Ho0^z?^G zYwaCI-0iE2u5tt@rHQ<4PtH8yzxd_K>(f1!7Xi*b%y|^HZH4Zw6Dm<$Em8LNSBHJI zl-z3`AM>%VO^m2npV+M_l+HJOX&Usf+C|rDko~Tnoh%mz z_W8$;s?XJE2sc(+re}BmiJBy$4PMg{x3`+6R5clv&*CQFMf>Sa5vVDR{vfD3znS|;MX~dC{Y==i4-6U;q2mMeD*n$N*8>aOxnKX8JFCO=-vIu zGW<)eBbe@(TJVo{b;8E8U6cdc*8xs4)4V;7ID9p2~1(2WV&pBOGA zl$g|e94GjvAjs_9o8y zx|x2Aq3#6t+m=c-7IE%rew;rXj__kt9!oAK4LwbHv8>izz7Bz7(3=P+17 zGTa046nqll1rFA`${8COQdVwz0(jdke)qH67oS^v6(ZQydr5Z|U0H|1rWMh;H?}?f 
zP8w{}loPIN;|96FBUaZAQ~w0<_>K80X20&O zS`r9P6ZzN{{ccEAYTG>Ykm(Y}3g!t-UBzKwUMB!8and1b+3=v(?W_3(r^0bYs|k5QeHgwK;u-j<59>Q-8V?H;MbfTldE~8ebzHL>6*45G3{p8#iN){-c3fnbJm zZ}tlccT*{L6$e6?x5Z=z_-zxTLwjvsbPPzVGmO_f;n}TGpME$r8+EJH0A9T6*{XtI`ZIqp!V!IQff9Tl*AR~NF2s(8NGioyGNMS6K#*o|DHE$ zXqWhn{vwYWJ8R)<=|Gwh^wb1V3f7a28zzJn4LK&Xr#RvVN4i8}*;|zZP4ApO&Q^6;5k zCD@r!q5zhRX}ngK8UByn);A3@?g-h3jlTL!SN(4$ivMmx`JdnY*Th?X5aYi(4+eQZ zi#Xw;nA@`g{Y@^;i%S)x#pse-gQue_=oIhK2u+Cp-BYta5dlI;{+m9xGB(6-*R2ibxA%X$XvR@)s}La8oXJ9GtSY%# z;6l>_-JZ=JQi0vIAWB!oVf@k%#D;#GoL+IwF#BKO@(+p24qpzDNdEQ9KfE=T=gj6cE84+(o?7Sk3d#hxV3t;w#WhJ*$kA?AN?jL| zK$5PD*kH@xtI7?t>0Nt;>C|#y(1c#)0-$o?bPR<3_1Mv#CYd{F-7q~;ub_kspF*vP zb&3S$?)?fVv>7&x70RS0?EGy7b>UA?ro?oeB|$YxY8SR=U1jl!6@KS`g+F}!h*d&^Qs`uqMEDFB4)$X zrt&v0T(%$b-Eq6DC3z0zeggQU(`>_ek#ywCxWuQQ8|oW;BJT4iZjKs#|8VMDpc^Nn z@fP3j3Gyi;!Yw>z7mZaC{JEkf92K$9UBx%RZSQ=9HOvfH8joQ#(M2^~hN}C3h~^0( zNQ@u2zqc`wlHua>qOLk|h^0ZX5QsET<$CifSdS>4RDpa!mL!_3{X{$%X=L+{rl+K9i5;8f1#z-~d!>ar+wXg^Mmn^K}D4kxM(bwm&n)5B~Hd{)Qu zR_>6puWdXJ|FBg;U&CM@f7Dxt>J=tOVpqAGSEE#j{d~_<1g`myY`tkWHd)umVhmDE z^<2}J)G@kFaRTj7n+kf_Uhyb(EJjFwy(rmLKIn0dZFQtA04ejh!Q8j<28(XX_WQkuiYnSoh&E6#TkgjvPj1N9XYn!i3V}_ zX`CdL`3n|ByF+*|I{@Q#M*Q9LbSV3>x5rSBashh(ZfDnPPEMN3A&KlPOf8fe3^qTT z`n9Op@cG8F_i)HO(Q9Vq)2JCk*o+G0g^ZGv+=XWzAC)AQ+cuRxuXkfhV?;##dHo)j za2xX70#j(~>e3hx;{unDe!R$qg0An@Ai{?m5w~G`wPyFCCBpjlwhXPR*iJv!Kc%94 z7fo)t`^~O_x`cUaHoO;)n=-%+qMvx+x{kf-Ix^nneO#)GF_5}*LCp9BaCr5NvgGk9 zy{x(zqDgtx4LJe&;)#-LN`ROJh?44DrGrQ68RzEC&prCEpz@6-y9dop>4dqZSS+5g zVDr-`luGY=KuBzIYWeyC=4)CT$@<2S=4)JPIr<`H13x0U<0nfu2J$qDVzs?@=W%xZ zg}{^|az$+}w;|{w(xLO+eWmji_wL7i3|0v)F*KL|gb7TaXWSqR5K+YlDZd>E4hCOFjES^cr zPsarYlu6;U|27=|`+YfdTbM;Uf;w)0FR6qWYB3wWY^op4x$c0?x3FJCWZJkmI75Fa zGE0j7MLQE+uxv^RFb=X3c3gY?VJ8QjS=*G`yY}JyGX{c}!Ch*;^ol*y#ekRQ+iLUH z?&=z5p=ixzI8N70eefCMclkvp(uK;0CL1H1`LUw4+TvPvfk~N01Pe-T*#10hEvlP! zl6_;k|jii^?2*Vgz`+J^O_er3$L>G!pc3O+@my+O0kr&Rs}>zn{=FrR%M7es}0 zH_k5kSG@W-;c4Sb#c!~K8HAvF(R5};tNFTxWRI>UXhk}o1Hu)+1ZuF})EY!twi5G8 zjl7j8R{)ZHiZO;B%)*@OJ|_es<=bhCW#2@sGz zzhy(I)}z@*GJH`ty(DQ}_-kQXKILyHfWxH*2Esy`m6Y!wtLjub&3=eX8SBa94{(X94@~jOWlkyh= zHS(8zt~RopW8R!E)c27-OQ#_?xS@Smi$3`{o1ji)tD%mA)U>21n!0Vx$ zHO}?%+$q?ki2%&$VzZdE*59|rNHjIIP2I&dH^=EXIU#0b-4bY}e9FLui2*wZwQqMj z|D)|_g!u&FT~NB|x{FJTS69{vKoTv7d;iWmC9pcQ_fBe~A>@-b#o7Si+k%Vf0SHOL zQ{0V0e*DbSj>-ber#R10vBC0UuCs~yqiwJ&7v1#Fyypj3vJVd-bM5Q5Pp9Sy4AvW& zPick(Rr_QvkEdIxOF#Ce(!kBAV;G$IY69x*O0%v(#o)vBAc1pl4eEF9J|)EF_@^Mg z*mx|dXu21%k9GQUAxr-!X%VlqU_{U%dXR4UHX3ZW2<6zeSt!oBPKoMQJ2jZhNiAHJ zc6j=M?-{=o5Qz=|qlAhQ44c#~6m9vzA8%)yWVW0cq>;&+9q{Kj{cF|zCLFo~Ar3fs60Tonlkj`3|PCpCa49q+9+Q6wC~8 zW@x4;AxeeF`c{&zb-iQT0$S0!e>+nFB*14nioTA!4OiWC_PpzAB7{bRzV{`_0Gd;`I~!rL9c>erS{6eQoiH&c+1zyH|2jA#4$2wlMP z$ii#oVVE^BwEr0Os0&u<^T43)zP4??KVLybo#~@w!Ov$-i^W(ntvQ0(>ln0Ui+l?7 z1DKY>6x~bcox-!z%_0TTU*~zJCS@E-RRpLP?zcL$e+!*TK2H2Jef}=CHC@kHxz4|{ z0qB|7wEo{HbfS((Zpe8mgpv^@8St=fP`Yj1#rWbR8Rmc5L{Z~!meJ`WE85FSarc5> z&s{v`#Bk_3_`>!;C&r+i-`_DlrkS(|FS=nrJ@@*jBO)`GE0tHJTt)P0x|Z%|rm~;V zR?R&O$=unrE=mFSEf1EIS>O=>z1HS$*YDxm{+?58R!~Rq9`eTspDl?%pn&+`o#p$p zRQ?kn3D(lQ_{8SQ&H~5^9W8rdtXR({xf=7}=;ijUWD4qYn3!hYvBJ~LvN@-p4{TE9 zgZ%IKay^xmv`V;Ud7&5pXd96-WL4bZkP^*~9w*+mNW7|K`?K1q{xfk#;bE}8?H-jk zn-2P-$p98}2enYt2X6sSz=_v&FG#;_Z_U~5C`?Bf6?y2XHfQ#iaoP;Ct1#Qz+7SH6 zU=WYH*ei*R3miz56imVi|83q{d{BTFVmkODk}@g z8f;uJA*xm&(EY-;xwU>Bi7oukYHC@8nWu{jZ*F>odBNK0M3PpDAyx9Z#Dd6mF`c!x z6h*RRfY??tb~12tQsKMH2Zi;%@AU4O8J?`nBRlZC?n~fI`Vdu4ljO5yQF1ti!uAcF zaXpbVwAb#W*coVoC{Wp{#4n7Nn zq}Q&#lTP+V=eZ`n_3{?)X69ymzxgOow?x<52f$;9(yId}z(VLSid)wy0ADyYK-ZV? 
zj{q^BO1sXP~d5JnRkrh-PcbdODv2Y|KQ_9SH4@zgP|8|b}B)8MXlR#is#+V;8)6RkPniD7z)nF)U+v#JAnnIQRIZ|C& z%5~|XUUlnu1hzRn0X;xaBWl?wmM}Wsb^o-(YJ`5G?5(Gp>2Ve_`ZVeM9Dcb9ql@zG z{A&K{+mx&>p}Fd9|HwSFWACq#`cydLdQ~7rA zSw86O!a*d=wrhgI-*ZCRI6q;+i@`x!h<_@rZ^vjLM3MwQ(lA6ebDLE!JkT6qsLK3% zcBzsb7EZFVKyC}Y>J|+h%HRG03?&;~b^|;PPbwsuiju#jv)^WZ>@GR5Lsg@m8!=1)Z+}=pG)W2cYpn7O~^!aUzdP_zVw-pJv3V(bF83PV(+iV=F*0N zB7VcXUQM8Ty6q1M-CwNOTtY=d8?sNt-Bl*YA;Ov0ymDZ zIfAjcU+IkD4N>x}NSst{F)U`h7*t(TZEw)pA6SpT3L$gdYb?wuPD?^fEwwE(fy&V| z&{`N*@%J4kz7A8vla9~~NFuCjB`;t{ju|!8Ui!0#F0aV&amS3jQT>*DP^3hSotpyc zB89#N@q+Y)KaAgd*2?0kY!tNbb}+H`wZZ*Cbe}t`;}fg|i4^Uz(kzMX_?Igl;Wm2h z4|q%+0maM~c*Hq(3eZ9}y0kY;Qp^wdTZa#&zeLJd(|QVI@GE%>mUt9AN-ZueWX0N< zwOg)KgI35o+c)rjSs5!qoBaNE-FYK-2$LN=f!oNzVNw+srQ@A^Ouot4Mhf&Xp(QS@Gvy zm-|(ifT+M<)b_VM|J)%8Obx#6pii6%Y;P1WN7=MEczo~~+pxYSK`L0DH$3wa7hAiX zgZ1>r1r!Qb4IRmy)tK;I8UYJPbMk))f__~9Ve^4Rz|-})Q~D6#jJONiz+c4P5_4vKZe8ErfIbh8xci8g|5M`Hbg#^Yr`MMa23I zSH%V#lW`NZ^N_59Sn8J{ zXZjnaTUYHdO5$I&nZB)|`jy$I)xHauq_0vq4Ae9&3iBw;ym5TZ&vRdJOTv~r=gA8) z>E?Q(7CHj@tv!lo3&eg8?H{Vd)@h!dOPT?EFR7tAB42c;k!i1ax)(;zj~*^KXMvo9 z-47OBqKBRuaWs2sYg{E9a>36^BS!1+Oi*i*JcIM3Lp6T|lddm=;^jNM;~QOjnJYda zUtZOe={_b~Ab;$?nfI02p!}CB-O!LIR;~dzA-qEWrkLYKipeR-e|NFIrp@ei$=aAH zd@v7ZEmo|D%clJLoFn4yY-Xu7zB*FnAmW}Kt(|g(esdpwajNL+WU0l0ZmY=SF8a=A zuzomNQEnQ^EdY@Kmm@v^7veA8ERTu!;#JGcH}gr`4!*gxHhn445Rz5Ri%yO)szu71 zQ@&q6Jpky85ah+KmH)IGCMD9C0yAC0KYiEm=LlGIMAhSfy)orm2)RT^EgPhn9vYbq8(qnNKU9de|!b9^_%qrYJ~Qv+2=HG`nByNB{Y2LE+Zu)#xGY{LT`vf&@imB>OOq^ zAxwX&+%!e(xTsQiLGNYP8K~a&*--=}77C@8Fqmb*MFll=2vx!C`^6~eaJ2EE^Ed}+ z2Tmovdn^Q+Z{C6->&&kFTV35NFI-Ogt)cH?mEaBSUdrP-?d%%l)Xrb31nT$S>_JVP zYNb_Q_Vo1(7g{d}SBLxMe())6LK82f3Gvcx+@*cXcx{InE$AccV9K{U{s3d}ZL?nx z*afh_S}0Z33^8$C>VTW*LjQ$Y3RByJ6~d^~;J`F3{Qlxpd(1^)Dc-NSoW-1nVrei& zA+bl7vp+j)wW5ycFz*`PrNr|yOvRTpS9)b zbjES?0B$&|9^B~E(>M(z5rbu71ZBj57%~C<|51VouTkx=6Ckq;_(Ooq^1#1obxPQ- z^pmT6;*Q&wAFt;;>)2mvh~(UNOPh?p{F=ceeGtWks^2UoSxW@|9Hf^Na;?89{F&0#}`>B{nrHpTo& zAnLTvWq)RlIIsgtd%g@URkO##)eR(L+Z-@a?{ykY|G8sTP3t7kb@}HD=f012P!!HF zTsl>|p<_TL!%<`OsYa<^9YVK!Egw`@8IYCA*+k18@H5mamS{V0d#Isn#~{P}G!ygI z&>TcJL$pO80Jjq>PDwGUDJj_;!&n$XPZsfoqY?#+&%m2glVv4b_yZ${wsgD1)Axy5 z_K`#{*Expr4OG*22&em);79t2Ob`2PZ5p1|3^P6ZU6r_nIBtCAC+smLHFz+5-#F^W zxt&@b=_e++^HQi}^Q89hmHi{Stx28n!=2~NY(t7va^J-djHJ%KH2exrB99&>JVyIJ znp1zkPTYYu%Zhnk7g0knT9uDT+jHGEve89+0V_Vw|2uYSK7#k8XzZmjjGI1a0QB3A zN9e|&XDDN8cA+w@uyML0C)J2B;i`ya#0IKyWQUD z7xvFU`+SO~JXy{1sW{Mtfs}Ch>v}EM#Z}F?V;iw#Q;z+sxk%!G!pVIRMbzzmr z%r=Mkp-EEFKKo6}!zNl7_ebORZmSJQUY(`xsL(>NZr&P$VE^_l5R=^N+R=z^_=xV4 zW|OUAWXU{L2*{B(5Sf+q(f(%YGuzmewjSmXaY1ALHFx@wPFpSRu`q#vO|u`SLc+k z6X!Ep)}#ztCcS|(lFim)b>3}vW%9KrudTkHO9b9fw^{x&mAQrl}yCg_!EHK8w`JJYns=f7bU>BmXa&Z0jG z*9fF5!p990I{R%EY3M4(Z*^I5wB8^i`C&t(qW1f0!>Lrl@BVW>4($Q`OOrWd5d+56 zea01?1C+({V*TD2B4NUO$zg>t5uetwns!C)Hi@)cA3|IIj{L#Ye2&;*a4?lUVgCK! 
z+Lgg;q?j++sZ?seY!j)!MCKf|5sxvX)G2kaq^vqDVct&r#NO=Ivp>lttaDWvd`dT> zu1sW={uS4`a*;%$L@uM^BvD_amUCR_=_Grs|9zC$kt zRusgD!ciC2zoZEh!O7OuTHk?7NLr2)@>9EpDyMHy^Ql9JtjA5KqF4z*-JjL&TH=Oh zWtJ)!TO*@v{;_#IBM+r*MkF;zUDGGxu5tVV_Q}M#$Fw4iYVCPSy~aE(3?|h!z~kI% zT}1#Y^ghiU$#E&->|6rv#{*rPXVNF&g2j#5u4j=7t5HSianZ75vQG-LC!OWR7x?Fq z9QPk|k16T;qmvGpU_#l7uey2LIST6}%vWo;ysxL(Hnf~HT-Qt}ygaXDzzMwh<}^r- z?+t&VK5$GoFx%E*c6JjaJy_|XM_aW*HZ9L^$D+19@pJ*~5tc=wzOW~mjzo8%#IJBw zN7nwKGNGdtlks^eM1teiG!O4Hp`zaJ=Ufi6LPi<9j!!N!aUVQo#_HYi$T7)DJ+V<# zzJ`ZPvoY~VT|kVXeo)yEc9+QLh2kj$Dz6$!LVKQmvG%VNl zvYcriPxsG`cEtuh#a1r5{(ewRg{6h_6Uwd<@a$XB}zn8gx28-a+F7{6P?j z-o7+0jETE6qVnw~Z0MUMkB^ZC6-H-AJ$re) z2fbWw2eId$g?wU9NX!M@Y-jUVZI~!S)vu$aAEm{4Cruf#c10-j^xoJl^OpM+)lt>yTUfqWCt65&^O2~k;~PS&;* z*yig=2>Nv@^!(}2bkcC6T$({ZHO(btUE}(N&^@p0_JBrnfcUi-T%1pIJ^tpC-5{-X)~-&emj;CKf(?P6KZBmY~V zpka423V3Z21G{4VfD_*TOE1i8w4HCfAlJACs7}gZze3NyJ1z|(*mg_;EnEnttw6PT zSfa^NMx$Pi%)!MC!;?8qb;U0D1ekWgp1eMLJ`m+;YrmJi$~R9Qr$Rl0*FAFHot|nIh=I*c$OC z!!~i@l&;8jR-aXfAFJYn73~1+yYu;ESU%+rmwio7^Jm2euI;dzm^H>(nr?0AoFmH@ zVU=3=Rb#5%cPOETIYOK|&|!Oa@ofRW z8rzcAoB1hUGC!frG*@lJIBh0U!1yG~kFRWSTb+^kwMm1#v-HE?L#o4=yd=L?#F^@b zKG%;YX9l2SN)LYiDE{dqc#{609ji>_NbuG!qcs!R<}&}te0?J zM6|dkRF>ux3kJpd(12`eDQq2kw%(&GBfs3mE6lTDh{C_JyslC^tfCdywg7^$e(j-s zZZnuT)g(U(hc8QNl~8e1l^!F{Ks=GEzF-()Ptqz{07`GmBopiq^?>- zwxFc}Yp&CdPURoEc;VTvLKLP3mLxPvK8SuKJvYBn4`3%@b+x7xqS@bx7;9fDO&>Lk zo}h<($y=hcb^K`ef-XPTR1Lpr$a4?pF4@PL5VG|?xK;6!s8FPEOs$kWx3$FwVc3h~ zu1p80x;QxEyMyE~_m$T@oVJts?=O`rPUW18Qfl&yj_WdnBNaoIb)tm82jRJ5D~|az zJApAvncmh~8SnfuuEv1=`PSbSC=u9*Aw}wPL@fy7bE0QesF>LZ>Mi>oFW2r~!|*Rp zHD)}8BT^Fz2Jp2pW=eZH4gG`nlvr@tnj;X#4K3ms;itN;Odj&Cj6~Zx?v%M#(lQ~4 zNw39zYJ@qII46@BoJSL^7{k{VuoBP@w1GLzJ#F098nK8+%(*zE6jb8 zc?!gGsH|^e6ip~wcaLp>28q32*YA(1*s>ZWr9S)@?6#(Hngc8kssH1Z#s1qXd#rEL zkKIuo0D6>sZ##&A`zci`Z2bbW&xt?zsw;Sc&X7&8dQ16Q_tNQgJKcDpsi8-oV#OLF zf!FmD4E>x)Wzqik&(Ly+MqbMh0AgR5hd}=In_f}K=ZHdB3YC9@GoKMxTE^}r*fH7R z%&VlNd&W4+`192YNmd-u2tDcGk%MiU% zjl&yuhaa&UZw;wd(nWgLkx!#~STT%jx`hXtB(*|4z~$2j4b`@8wB4g9EXb?lLYs6E+fH=bAi9MGW_=Z!xL+FaA9$=|yTo-yhr ztpPaZX`q4raVSdwupz`bO|v zd`*eVk=p#cS-nXcm8tLsn{ldD^;8dUPqYzIIRg?@vW+BDWi?JNN+h{}$E@9-#iOz8 zGkVpOv}h_d7S$D4N{|K4SoX|rE0bJ|t@V;nNI%>Uut$M7N`-&1`B;k+?iF94>VLg>vUaUS)k^BS0N*Eo46ydEilKV9pII z)DM^GIVrALwq3_;f1b|x}KR)r1+2ShLQx;ZkXJ1{Rh>F%K zMoF!XqXl2WpFa;$%^Icp4%oQ{ze@QrlOHHy-ML3zfw_jS;tH956%j+r3eI$}U|73FH3nIH zYtomuO8vl4VfyckeC+sPxyc!OL&(||U3y$?b>0<_c7yBbT?a449kZb3gjo02NQHUT zy^1u4)n&yl6z#&wM^h&{^OR3-A42=9Qi+2%8|tI{x80%E?G|gwX1FVRVgmZu7;B{ z)*DbCx0^_>^J0c&9}1Y-^lTX3s_L32hc-~ZvJ^~`tjE1HS#u?7><#+M(F&`vPZvuU zUbHa=A-Wn6v;d|dy-AWa&6w%SUSeex$)3!u2KsaBe49!~GNBj~&fp|u2&g9eoYoX6 zwu|T%@vBvA1i9+qZMNr;j6w0k{_ikX)W5jX_Zb%D4ey@2WA9UkxQ%3I9w$i}<~mvl zVYEBP?|jIP0JCJuS`r)vx06 zmT~L63I@J1T3(=lQwp+vz9$SWVj#`$AL1R4&zTI5I2ksUqh(J=d2I z4EK-yP4dV4&98MY#}Y-+61DU)#@Y47Hg=vb)~i_99Cfmy=KEX|Ay}538cIo26%$(x z56dQ!^PUPz0B%YppI`Y{8r-mRA|TyA_%xn%dMKdx?Xop$YKHagCk(%QxG21`UT=8a zn?Kk8eIToo{Dumn=sbKDr-toG8s|MOilWqdMF>)!{mpMQ$cZ zA9dLC#}MU?xjB@eDS4;}i+bmu@`y?fkAAxNG;4a5bxu=)J7W*M(;u;9y7W&U04bk2 z5fx5{OE`4&^q)I2Di^)-ax0~v9hbHs?93uwLgM`nMG?vY%ZS*t-_g~1!LbV0%T=Qn z2rXLPM)a7yB_KVpa7_hg{n$Xh>HeC>v1W<17AcP%@3|c~Aw9^y1V<$*{L!y-^#?HR z0=F$1WafG73-4XkDK5cIG*?mMO1Y=Z&|uXMuFw}coVbiJ6PJdPUV{|wI6{~KWGl`Bh?te+ym(2>*(B^58CagXH|ES!5?xKFUFl5X!-n=`nO0vD<7Z+Jang_BLrdx4b z6}4>i17eG2gYoX?o!$Fs72-nNgR4ah#>?%nEm{hexSSU>i>21z zu1B`D3Vf@5@_F0=X27R|-~<+J{-|W1vAj72a!-uQ*l3nT9B;q8k6V#``i$5NoxD)V zLepxIHxIS5nmaj}DYH8xwVI}luE|r;6RfSYg{3IRx>ft2^Ao{|63H@h=4JCB-DMQc zoA-!}TJL&dY6zzRb9ff=3Z*L|a^oLU&0&ic%nnZTc4_j@4|2@YO}Nx8532da$Wr_k 
z&z<5-{>VO%T6YG6)vt(s50$6k_JMGN*_;TwYwjEw)9TY}2KShPd{E^-h9wov%9uP3 z5?QBmEw-p)D|%>b<4re|Ptl@#asXj*M;jK~LYwNMAl%uK>#ATBjZ?8pQMQSYn3yoX zFNC^_x19DtMO5rDI*F0aiMAw2`sfzaC#*3myBbfvc6Sy$c|yJ0XXZ$hAc;b)g45;F zU3FKfA@mskXJjGeQ35j=Pmrzn!4(6B_}b#U5za4^@l z6@GyvX*8%LgE(T7=ko_|no*<*;i|0|e=aihtjytql(#6_6-J!@i@9WkHmpb_mSRsI zLmMydUi4C7&~H#$?{c_%dwXo{)+G|NETAkrMAdE`>D4Vdf9~+ zub|ZwNF`Ykh_Ov^cMN%yoiOwLV~Meb8p> zV$uBqKH8W$_gUuf?*KZWVXrnna6{02U($eDD|@s5ntOZK2521neJtCUYBKqQ{@jiy z9LO?d{(pF!h4uk}zxfU@G5wElJlu3RUt_Amc(<_Vmwt7w6^#{!?E5W&v6|2)0%y^J zjt6VGVv2OQR}wai7&gYir0lOKm+P6$Xbxv{0G0WI+cb8gje8(vLoL3F138{^@`kco zf*x9HduB){uFH+-GXSFa`^5>XK-mVq`tZ<8EQkNPdS?*b#cDd3+_<_mmI{vOD3s{h zdw(>wrF%V7Ly>8ip2kjT`;xOY7Flt7!;UZ9g4hdAVH7lJE7UlE=#{)mO~BtTlr`hc zk>X>eelj;ne%4DDOJsP9KkYL{af=P<`_Hdmf-e`$6ZLDSrb}clN3>#)i&N(F%vDai zv2(fZ>BRd{r}2Y5F2b_U0_<)c99VCwE;K%+hDmsVZ6{oMbs(v#`%VggE?I`xc=dGt z(?fY0BPw_D3T{0g+#6$p$bj8Z(3iO)B_`+LeWSXVf{nbj6R9~7(ZDk8}ER>F0E3ix$iJw!=j$r84 z20g)^R74f*wb|fq5@Pp+Tsg|+mpo(Oso*$bptJ9t^5UtaV&v9j zqi~uu#FjpD5JPyd8dh%YO+2_tDW@BZXLp`Kf636uXyIRBy_s?o!$iuoYHmI*OdEOz z3=6UN(ptMSg_Gm9?5E;js!2spKQi!mnhQ+J#^Q?mLRxb}->2f|0IZU+@^4B`j^XB& z!xZg;3O=s_#ThZDaB^_z#=)dQ_r0q7qjhK<n7@S*(ArMyFh&oh}5Dw-8y_3&}7XVHtI&I5Nu#H4iwK z3XJzzyJnuWL}tKA6Pj%+9U|e)siOk6m6wp9;#bI1;m2PI^BW#g`4}jhbHZCrQ}T=r z2J5~Cs>c@8#7T{ARzWk^)X=T!(W)(diMp$ab=>?cf4*cQKB~ucx@S|= z+6id@dR?{IfsrdT;YzD9^X|A;JZ;E~t5ts8P5}CRv>EGNBI6;wIkv$cXVWH2oe}Ok zl0U#7U(DsrXs9+I`34iZb6Q~D{D*4g%Y%VDj@;jVJlr_`W<$lfM>WTJHF?kY3ogu% zaC6_UIMS*GDBdZAxti4{hCs9=cx*eJeuSHg^^?knkD|1jOrvzaR-zx@{pU`++mL#l zcXnx7rrPa6qFWeOop3r_-g-M9_t6X6k%)?ww6Z$9Vo2v;)fqu8D!eX7wK()Hr`xLV z<>zgyXD7B~k-H&Rs+NT4mKYrfSVzNX3Q6*rPB2Cy$^v(q{9$bE?v@soEZJ!0A;Epp zmtSDAs&JR|5WWxfgG1OJ3ueq z`gmf+wOkTHWdD%8#vlER<@Vp&`{&L%vsyt&>nxAEJsJuJI6G}78BOfbv3>qGK zedy=NA2eU*NXD7W5^f15T|yqQL+On9!C|Aen#Uy_s^OxGHU2CzwQU~w_c9(0^pp~Y z*L(EZu9-t0Cx48Fvod6sNA&=7!*OfS1=u!pBvq8%aIpHEd=*scSkECm;|;<%^5QP& zU}1CG$Guh0eP1pOCT=NLvxybVek5y@NoTIy_8*aCtZW&;JgiTfIe0mj*9Mb6xsnWt3r%zd(sPd1D{FU)@NeYK6)TC z3tERdZC{z2Kf9{&YbH?1l`k(fHN(^0B@&EzcxSaX-+SdC*JJ721QG*n`#fCsS^fGHv5@=D82BpkPye*d?Bom!)+MQyFs5~D`!(NfgjLaf>&_TGx3V%4r)B?uxlVz1geMF_EX?Y+n6 zpU?Mu{eJ)J#&si)9^)IlqNXMndMYdz|_ko8T}X?Jt3Ep>Pn{X9+dEv z0*6A`Trj%zi}D-3gSrKxz5MUIUzSsi`}YK08JR5`FmaE713n?Cb9@vQ{H+8ayCnp- zhd+j3Hv#K*9=J5{9Z zdCV`Xk?TR-sJukcyZP4Hp3biHbdMI!ko9@GJ%)4ZgG(HZlWmPCyH~o8h-ncBFfoMs zj}U#FoFh2D&zm-+9nV}u6~rxV*~d4$g)|hhmCE-dE^beCE7J8adR=D^$-2|d>=4!6 zMeTmCHh*r*Ca{kNE}=PcIAKJD)P_QQ3Q z`OWv0Mv{uPCO^|bG%QxV>N&f(>Srn%_qiyR!^?^IR7-~k>l1~M%FpBvU))F9AHRf2 z#$CE;f~?6y6&s##>L1UixszN?l!1sQwsTA-h6Qw;cw*ir&vrS}$cyJ>Jx{C?%Dc(E zqrXpva2HF>;~vYZw>W|C@sOkwOt`-WZ^UF`{;G0AE1#7P=n#{qH;l(d*zw`2xsR9e zmCZ+QvMI)rID4$`E~RsRJp#MQt|BaF`DX08YS5}+K7qB1?@6QSG@jBNa9aKu?L?Ls ztb#0EO&`m&Fv;6M$Z5}MuyO01TRNnX4@8ye0>BXTnun^@A=qZ`;$o#~S_&8PU ze3IQ9M2Lri~qX1DeV*H!lo#YlD8pH$)A2iy$`2 z4J+A?#TYp0LyUvc$kYItqvi{)#yLq3T}{~e3)g#=OTcTlY0iy~o;b&%-k_AIf@wSr zy(Lo9?XzQgFWp>}86DE8j8p4<6@M4SH5SU5r406pp24ov7xF5=j}5%2PI*4$^4Og( zQ^VeONjn2CvICr3(0^?I)vu}MI7BJgXEa`n)>*|KiBpl7$Bd;)j7{*{4>iZsa^1Hb z3=DTy#xu?ek5_c83J#;n=ii-Mly1fS6t~fp>;-PzGTmg_Bm-vqYN!9zwJqRg+xPCiF}Mw~PjWXItM=p+`v|UEO_4*(Y{T0Bs zClpghQ!yOlg$oH-V{l(l)k`KGH&RKUoaGeneZ4Yaa${Lkujrl*uONQw_jWZE;&K*}w{qGU{i?ty(1m)lBI(i$X|pM=1x*}-l@BOnc{KKC z5aW1jk}`xYjM^n|qH5fzK22m8DsBVKd8m}?o$ z$~a5 zUB-2W!#r zt|rT*A$8VhdaKMBy$>8&3_ZT7pmHjQPv6RF8b^k;APAXECfOe#O#pK-gY+({lxWpc({5K22ALWnP zUhL5+%!6bMI1K_;Y;dKU4J(yavOA``P=v34eH?&0%9{1apkCDR=flJk^7Qno3<#ff zx0|gSC`502J0@#_zj8b=DT%?xE=Sea_Co;X0Cxe=Fsd1Wt|T>zB~xfBZK@pTq@=6& zyV(1FPukNg2)LSPWO=yTm;b%Ce;Ifwi{H2e#^4K0h;5OwHwpg`)S@n$2ux6ycwX?n 
zKi-?tKHYO8BYk?>G}v*2Gvyt6uDxLM_XOW2#^}2!N1<`&-u@YW(JfbS72XVN^9RPA+F#nK16mdY)rH+u`C{RvP6M zwq|Z#5_S9u2p@HrinY$UCb(u)u?nub?NTE8gM;)dE$u4)XZ|>?I z-_ns9ua5_Abmv4is--nJu}y8Naq2ewb=L=YcDrkH&#QyA3T(vKFXx2)Di}ss;!;8w z(9PgDiyk1FQ-Pv5^I+C-V@Rd;K_N60J9!UQ?05M@7YcURN5>*f26!WPqyt4uz^7A zC)A9f^U4htXFrUrfxei>%s;ks!M&^Ml~qxY#Wir)n8Rw#(o%}_Z5uqti*;?F%SCe{ z2);#;fKZc#DvKD#w>Ewgg;STGHl)KFI3oD_L_Ny&(|*P@HU_Wag@g)G@;{ZR8e( zo;SSiv=F5n@6B~G{=>7N00cWqIb7HUqhg zEYy0BTAy!Mpvc={yRb#7?1GKuj#Ex+4XpE#IC+1(Qmx%595RL_aY?R5IXGSTSYkZZ zYeEo1=h>Df?SoW0t2`k}YKqMhwg8x5^O`E+a1$g#ZijEzO2Fq%-MfQLX;MbnehU+E zFzN=>dq?Nh&Z37h0HVV=W#6}Pj4d@Wj!*ReddnACh=cwl&hsn0&aOrJ4L8*$Dt}-! z*`O@TYjRtoe9ZI5JElLkO?OX)A0#)-OHIg4osTSUuhlKUQgcF8_RrXvjGL*Czay%2yES6XgE<%U$2{x3>)0-O=JSl zTm$5XClt~Bt+6I!$J(y+NrKRTitGaDpE0LY$aAKIRV8&&qj7C$Q?=--rs83qgGUo* zBZre~BRxnNQXFhOUF}LaEQLQ(w?qsWPG1Z`*AzzbF37mP~54!9+Y$;tH6-{rf zhbmB7e+)pKBH+pui)y)UU%p4a%n3?K7x0bgq1WAlOIh?RXZ+a2hh<|;28HTZFmt_@ z4x3Z5&{{gKo@XRXdpdkQxtdx4&J4!{3UouOb?us}_pON+HGRL$c{p&0&8DHQZNo4BDBpG3D%m3yet z^8Y`bB~{q^s7-B{Wn$l-7+z(EKfh}qV~`!EBtC90zEhDoW}igAs8oN}uNSX$Im z7Y9&fxkofwt=dTKXNm7LY8P3mI6n6SsTWP>h#(2cH7@9Nm1#2q>zAR^HK>aoN<;NX z0$0P+)Os`1*HFZO7dozU9Mf2c`;2c-2zt|_Hm)?v4SitOjk=fK`(J5VCrS*E_Q`uj-$$dH z(pM~8bj4UwN1Y1BPNMz8FW!BU2A?{D2WMoRgTh`VKfeHFs4j@-R|WLfv*gi(if!L! z^D0M-DmLxs(;t~?N?7udD)DOPKG4!Pokd&RH{~MH?diAA?(gAaqXwy#+J<`Cgx?`7 zE|6AZ}6$hZ&P+*~-$T$Agul80FBDLr(^wjAX zxU1=GvGMCf%~>PbYxgZgvU(i)F#Q4#63MzcHD(Sj)g$C_cIiK2R)h(xThfS?&>0F< zBr>QZi8CG4vw77&?znm;a=zMxmtXcWWIsRfYJkB~3RR5;&vZE>Rq)r5}La(o(!9ImSBrM&+n_ zeZK_zO<|oA_p0YC6S|7<7tghc$Rkpd$Y0){#b`W!nQ1HeZcm>!BIC4{u7GaHKCoof z^6*go)16}A22mL4C*`D%Xot5BAci37}=tRtF@RskL z7yXY0?L}G5e*w`vj!LR!2JgFU9t`A5%owZcsPwLw+}jt=7{U}9tasWFao!?)%A#pM z^KBU9hWqPFNDG`MO34az`2)A}S~WugT7FjT?begO8lXk_y1cporP-I#zv3Sj2Ksq2 zO+_7%8|zQ|yD||3+zhGSB3nqMC}%o8u}(yuN+cL&)?+SA+UuREog^&>$72helP$-$ zj0M~|oSQ+G5UZN#?U@Ruw|F7h7j6yIxa&yWyJz^O9+2l>aURRG^Yyqf@EdcKf3rI{ zwTSz;EBy>8>*`{;%>JV%Akc1uA$)LhkFWi3l0rUKz@xXY=u+0@?ayj_#jq>1POdM# z@zn_Kx`~@*CstgyP@DFoOEW#py5T}cGqI7gFXAGkOG-S_=pkL|_kKhAgX5wSM&G4h zAcwIZ{}8aT4=9{911`zWsfn(ip9heco-5$ves9?0T60d?L*IA*dx_RN0DmcX(**y! zI?fQg#>e_q=PQige%$*#cPQ?Mhnct~HIVDiPfMK7`^loOJ@7dt|7DoGO^1bdGzhPO zWZfiZ2meh0)K*$Xp}BDGIo)Za@ryk{m~GPhI608&Jq>9OS5+c~(d?{Gw9?T@U5{)b z^gLo?DfmA6foZ6Gufu7{3r%6quGcoOg%u*rGefEGa1Jj4x_bDlK*YA3ynno(^)_t} zyb4uja3mHs@q?f0nxpP@M#>A@w!*9Q(A79q{ZMuH=<#3uUe)#LEhD6n>iiD z9Gyf!bUw&i0lnr8vBwyz2xlRdYc7blMpfLLLrRJ83-b1&yY=;HILurYjO@$W^?QU7 zgvy`CTb4gg=(y;b5xR(UqFqd%5$-KS8tIq+=peAz(7P75l8OvSdm5jj_OZ*4+J2;n z&BMEg2B@Zy(ToNiP?i+Ngv3pKx1sEiwK=U%d=$bpNO6V9d3m6N8^^pU$S1f%i3^b4 zpH_s+vJPaC&cgy#U+h>6)<{6d^%!5M>$xz|-?eKn$<2RFOz-QuHx$r!0MNK1i=29X z=Zq9F5pX1>GF@4}T%Ls61JrAp0yYVR^amitevvbdWsFxFK9S1`@;z~a{Pa_vdX zTR|O``s;)5%beEwk$V1*UBw~KKi5A+*c%w2FwrFBsXJ{5X_9lHb-e`U2zH#02<7zj zhN};qekk1+`5A%UhbcxnZ(0Md(3UP@0vS!9hB$w`nEGdL#Pd=58JSvg?YXhsdZkw! 
ze~{)FbnM{4v0uuOlBTY|3m_c>Wc!*Nb7V5hsgVs7D;YW#Vf*S)uq?MRo@jSh5Ktzw>vk7IJOeod~%81FGZE|~4z(m-?Q-%uaJyr;8g zh`1Jo6P=_zJj!2`R*LzSp}7B^atqv*SM{QL8kqIqZpq(MLyHY??2EJ`>*g#;HZ@_J zY%QbF%9l-zd~opUp0$(({Q=l7yTi#Gkl9qnFC?J9#6DWdlcmrN2K8-d33eIYO(*Nm zXRx#S)a-}NWL%dGz>%h^gyfJ0Ug2llEXO&vgUB1de?|NCEycCB1eVypudknZ&E_4v zjv*$o#;>b3(7LLJbF}j3{+w{8f`7F&#ohmr?*Cu+O#JwHSTW6e8+1SsJSqO)<}tfd zH-FG>{56H61bPy^^lVxB@ClyFh(!gl1g1@A+(d@04}8wJ#eb(!b0440tdwVmd|e3M zsS8AKr3H$2@}_h*uwwnnT?mKuz(eGW=-stSgrob*WlHSd(^hZj(SdHYM44l-$IPgs zuL+=b)X^)X%gm1IMx?w${R=b62@>rtp}=KuvCEkc7iYc(=qc@U{3?>g--#f51XT3a z|E$&phEgJ^GwpRhCc%^ibBuhDz4-S?l*Ny`HA&)F>YH#gy7CQDC?(%p_n~)K;Ie!y z`j44R-wPZ*&Zeri4pXTABeuZJHJsABJ&3aL0EjEF{SY`!d7RsUj1XdEs5OlG`Al`% zWyniD3Ii|BlKy;4C^_2W4$uB^_^i41{k}nut0qgmlueo*URh5GyL-EBU33^?w1gN_ zh}Yu!87m&xM?^k~h*j47La;ulsRGiUjM46Dvw))K;(90&Qc7ufs6WR2j#>fzJm=x( zHrG6QCS_sdH+3{KBKSmxWUvK#QW!>+hk3iU|~r-w$$mxe7uk=1&qw=dqB@ z77ZEDn%v0$U08WO+W4X7cIZ!eVb4%?{Ag1bkAYjEi+N4`N)S)|l}Uv_(zl{}`$&m# z9$I5-YRnGJ`ol(5&A5$K_nC~N>I5XPFsTHNY#CS_T&4&RV%}BFmV#SLsYcc!G!yN< z$j^%$mmUE&sahMCK2UUsY(i%T;k_sHPZ=sGkN@Dy&6KXb&Qim?>;otYuZFE(XsxV+SjmANp}4ojNUBUWIW z$((VW1cfPNR5ur(!9tOZ&q!M=BMd~*@73eR%c-x6oDmSPEs{UYHT<67fTJSI(E?Fp zT1c0_&W2DutU#^Alfsm9JlQO9=u_&m$ieN&vWPG3oe|=yc1B_f^_etfbX!YKI zVN((>sniq^Sh2AQ*QXHuT48pTwQK<@)}#!W09I9G+Pw;tN0JyYuH{QC^vCWV$7bPckh;wVT#vra4N z=+0LLRJU3zxSE!myw2SrSFoK1i&?>K9;Tm~yz6qZqY8lb!{n&{y6ctY-?LGy7FVF0 zwxbI6hIWta1nmuE>z5;(*JvtzkTjqd5HYMwd4f&S>rSkE- zi|Wq`Gl?1X9Y8gBl-qO;oce=eBkzX-KF*P>b(!(v-OVH8KSPR8XS4Z zSzjh}DYE_Gj@QkgL)5spb(-qi4 z{B*o5z>7U99B!=WeMh1HZLtI$Nv^}U{;sbQ)+nBrFBsrl+;ja4AH!gQ!_TQN9MD-M z$=(W}VwsG@`){Pv`Hvyj@{eJ=_=?ikJ&D#IYGf1DftQST z129X&Vp>ixsGd-v;{)T#s*{xo)*S1cIs1i(@@Vx=p;D)rhty?;(*};~sS{@3e6{h3 zYR&z2LV-Mt0XV5hZa7zttF<)1yUc)x9c8U^@eONLCVQXO?}r&n#dT>~oo@U~i;T-b zHfOu%fA_JK_2I0%fqs<_+V2fn_@w?dHRW>MuZW@DKyf@wD(YRi9+mAF41>Ohv&6_;hV746Xa0EI1F06b}Cg8 zqf0&M*tvG9TH5{uCGj>a{GFJirUnz zu8kP=xROcAFs!?0O!LaT6jwgmfEHT%@nk&P(}U##gb3L2%SyB-Ip*jRDSE}4U~%z% z<#NJvXwhLA#(F3i+gGv^jz;*_IL@Z7!V*a0^mUxNKyz6;7ux3WJJ0-eO=X!4>D&R9 z&^c6Cza}(Hy%yIDcvn6Li6|%^E^)17&T#y^R>=dqsM6bG?E@ybatw2tqbL>3gsGhE zzqO2oXe#`n^?xdgFc=KC>0N78XkCayG~QC}-L&JC{8nMSnfnG$8ia~s2_c)Qz|0Ueb@h=a{Zhh!sdPRn(8Vaz$^1_-}VbmhNGXfh|9#T*Ihk7?FB8M`I+R#7K9bLbI3u^8+SZX-}4b zx#SU5>kBuL<04vc;^to=1zi1gDf_15pr7slJan1|Tc+)v~ zJ)@@~Q@msxbKc8d$x|rlj%&$?6`06Z!58=*hGb1pVrFt6vRav|3|kL39o93>-i-Y@ zD;}e9cR!qPpE=4J<0&VTEP?i=u3(WwevBuaK^t)T^@>u>0bC_ZCd#vgbU%9?illm@ z`GsY#Z&J&nFGWz!%vtidkA%EBk*`gca!;B}>Q0|8J;-m&LGMoLory-WaQwlEAesml z4PP`USD)n(MQT@QHhS^x==UeAV~ojPgkMv_eB5TA;YI}ke-^A8>Q)%_??KV=YkwzM zGEj_5^<``-3n@Yd%m&N&HZnY=r&0Dzsata8n;fKr1z(*rH@%~o6w})TUCCILxIE^4 zj%FQ6QXl_9;G4DX>6O}`qk8vqHsFOurWb-y2M`iwY@K!_|6?E0)}y%|j*jj$Nl_(A zz4Qt1c=OU9IUhdnr`XwI_BY7SUec3Wk4-maVhMOc zKVedhcA>R3kxPjOWQ<>FCBC>AOGOW_w?K}z9oVRX!qS%N4W@i}o;f%!%QjVBZ`;ki zLPhS-g$yeGCZ3rHjyw@P)}JJ=r~c%xb1HtKT!fp_6YUD}jEd)0kc`K;P!C%s(*3k8 z4TQh4y1~5bb)UW`J=(P1ofvHA6MDFi75_H;64Zd3A_a6@h7U_Qh6hVP_ZtTS$`bp= z_Gkx)4Rh2~&axpeiy`3tnaRtlkRTHmJQ(eXU<<(Lr^x&wxT5WkeUi6CSO@sN1RR|) zXsIwCz;_RS{Om-%sQ|u`sGoA5*)1|m?b{x9Qm?+?wODf`?qx(pvsFy?u}~!VfsCD< zseX(ox_zKi>y}!E!pb@*aA9BDFYMpa1VdZW$i8akGK+8Y&G)X|PW!GTXP$*bjC11( zM_&a#p)0-mCZa{IYRiqp&6?f6H>n4TE~m@@Jj!(I6yRV)jeRZ4C!~Z8PwF@xW^J~Q z`qq;6eD7)%P^+$GtF7(l>-NWNA_cg7@3PS>!RoSK%J?081{?*T={Kd#)GhtL>*KR` z$!FD4U01T=+H}PYgZVAod1vtXdlG;G=^iy1L;=Io{KWOcG0q48r4c)Z{u5-2Y;^lZRZ+h$l#s0!d3pkJx?FL;X?DV}{^yB`Pa=pmF+jqWb;2%3Z1A>Ge`=K$1b3KWQ@{$RL ziWt`fT+?P^W2FQ%W3z`x(}hn5$s?sjrlMP)IY5XopHuQXW;~gW3|TQ}v((Cw?d=27 zL%P-MIjEbmayYGC|3iQ<*^Etzztew6q2eE?3Y=d%&MP2lBeDKopYW?RNjZh=`&GxO 
zwdgs^V(S$?7y+-?V*vP=C~GUeYw~?uLfCB*UK!)Z7K8TcZLVU;_7x5lSZi(8O|Plk z|Lwj*{-^sc!8EBE1ii7wFNgyQYXJi1-gr#j|38>~{C1rrk+$6!2-&(B%f&H2TE_2(=f)h-16>+bsEh6gp0tidvuI$S$xmpd#I>#! zg{@spvlv#l2%N(ik!jP`R$i)Fwb27S^eI_yU*@Ar6Wo$69wt=##(Poax-VmP(A>~V&|>XxW9g2G4R5bd z!}1>qS_gfefK8huAEwiwmWfpH>~|OBj9zEJJ*gMx3VA+3Zw_oLm0CL&m^O@?v%kj5 z6Ec*XO-+~c1762D7jwPJW=Phn2!{gY7?RCPH=ioHJN4hWTXQ??kB}Z<5LzqRQZts_ zocQZ&-MpC2%VqGie05vS7iSwG-u<=e?vp2^{@tjeaP2oV3|+p>UDd%jH%gO}O|t{~ zk~5J~&9#{|o+c@wuta<6ZIc}{H_DK{yVI@Go^ZF<5Uw}`5x2kd<`wr55TX-M?3vL7H(x%X zEwsEqY5%IcD8y68_O9TEqP#1j#vPLO6U_vp*L<(;kz9i?W2tG9`V-?a>(H&(^?6ls z_qr7}`%Iq3s$1Ldt5$^K<6UItfROkhyQn*J&2Bx-gLTd_2F@ zp}-0>$)6_IHkG^Tl@=(AT&)4qGX9vIODVWf%_koZw^^(ZHmlx$s7U$Ek$9p>?7-C~ zZ@3`2pWSNgI0}~T%0j|xVPu(zPdZ|is0G$8?I@$0p8~8{wob~mE-%Wn*k_AT2omv< zQ7IcHsE)IiVM;Vz6wTLVl&rhkFQf#ma`cyQMM;125>|E3m)m!`16ofNZx=;1jy*;5p)t%rOH*(ktPA6FL{Gvn0~>1;>RMs zotm)m8?pW{;IOWkba2OUMfnvU+2RnRjtl%#o_+FJWQryZmGY)Alkc>$mP=tkOs17v zBg2`m#olhfO^uXaM%?X9lCummqHeawm0=@haFD_yFVk7A2&P|MdlACIih&JoM>|y6 zIkt%`p(tyLm(nQ=LHF)P84kq6#9)ga73Zh4SikobT5#>MQRL`>U+Ro0p8z%ZY_>x5 zF$o18Y^KT2a{@Ue24gHJn&nA_T};xuWd0Ntr>`@VUQ(UY;M1Da7td~@alyAq3Z`gb z(y@%&h#-9AT;KEaz<&tLPjm5`uo3Kfmc=!v6!K6ZFKkZ16(3x;T2xGVy&QzWJD)w6 zs0Hx($qNN1emhiMv_|m>sc<5m8ks|OzZ6$qH^s&mV64-`$Z@Y_d_<* zyUUOK{t;x+vP=3}B7;UfxKX(w5&SuDb-_F>JSM5Hi=cdLLt{p@hUz50w>-ijX7CtF zCRKe9?j%i%Ki{0V8?7P!)-G6n6$(I4p?fv;EGtxAupfqjDrFt3M2t zsWEE*#eYjIJT(m3SaHI;9d^b2+ZiUi#GpscHaN-0^5pZ`t?t=+M3W?00&tzu?34u5L}ipt9RxG&=1$nP>_iWkCj4OS~T0qJtU;Z=v`=r z{4RCLcUt<%42%O%nQbfvPUfNn% zF$wvUd)PG=zKrw3O5`KrzSdaaHWu~Gd?y9*LPspC6SAtq!>rGY5zoNC8j}EUFLl;vsraO;UKeKbg|T&xy&1JGUPoB zrJnudCH|cVy&oKT{13sCztIp@gEUd=6X2|#$x!1iy1Mu6)(cG~Y>eVD1J&A<4s`CV z&?_DzF9wKv*9%4Xfc>w8(?(yS7q|O&OKrVPZJO&6vE=35Ch8T%9&`hWG(VfggMIW> z#6b*-?3huhwE|8FS!Mj>!?-`w&|Uw=byYrW-5uA^EZr^??_z0?p&N5~(4~zfva8u) zOKHndj-$AT-Y~!wbw=|N*hNgN>^3Lz|5i7`lT26y3k@hS1Rd3_Oeo{IbiD2B-2D&1 zb83{==iEfTa8srI3r|WLyT^U6VIESP>y08K}25^b4|Xy_=~ed8^NO=WZjh=m%cDS zPD>%o`l%J>TieN?tX$t&*~{L@@B|WZ0E*x9RPO$mohoq({5&T4dmfNDck(-RWiov! 
zRe(->prS@vu%kPy!%Q!HYe;uDNdNq9vE%fSG?yKVlh1}WpE$oLawK!j+ood*ppWF4 z=UO>3I83M|ILN5y5y~FV^cBI8`?5~(dkMxjeb~fm-8TUNF&koTzV-yaOK90Mr1V75 z&0cy`_H`nNeT?UtwMf+T?0X7%kxO&<<_mf1gN(6;{v%v{EZCwKO}Wu`*trs8Xl()u zTn+WM)N}m)T-1wR$E_Dz`+(y5#}Q~}TX}(Ow8_9-Hznb3K^|RYr}E>N+@&3jo7mk< zIyHb6R>3_!Il-7Q`)3)Tz8g}WL(`Y#jlVCgFx05KZ%LghyAibA z6g=+=>M_ORgiy!#rVzaPE@&Owq`FiP{{w$uI<-{h4z{PJnZCq~5sE4`IR)O}SIV{5 zyut$`5ALfAoPIdiyu1nf?-lIn zI}zmAxk(ZG6UY_%FzR4xNKNGfC~b4l3U!*l-7;Qvs*&#QbzezN2V?#6Dd%x)C1)GP$wvaa z{>%wwm|ME)cm*rK)_Wwjf^7_Vu|KAdi@pG@C+%Vp zYEh>ZFe~>7i5Clvj!)*O^WPuFr&nOGF03SNPbeWw>Ph`E9HQ)qvT1pDiRmyI{xkPj!uR~^G2U^vn@RPSAlK!PfLJF@Y29@ z3wY_B!-_5Aow96^a_=6#$1Jg**9GON(o>n!>IMxxk60+h;8)Ehl41fYM|uI}3lo*G z@tN?C2+z9~EMXec-87Z!kgQTJo2qdDHu;jySg1br&AF!numvYdux zI42@C&F9jilr>NsYg1+)?YbyzDJ|f61NmAdGsS9+xbT)$&XGtZ?FBFBj20K5*%6B3 zok!}lo52>o+$LVU3&PxC643283avjj=Rc#JrNBD7TZ&!(fiBV|sck z@7M4{5xD1U+N@yl730)202g*2(oS7rHhZY5pE#41Dr#LcWqbIst@cj*eQc6e zRVQZ6A)Hay(gKmMa+SL#l~P;u_b2eE@B%bfW73UQj1erORcix@muT+u|m_*OK;H@cm_2VMet;f7|RvVg(`?muiM zQR4*uglmHtTa3d}IQ$hGE5!$j>MVxul=Xe*dAFs<>fSe`U5|%c7U`JG)Fk_{BzSbj z&NaurbDNNL!Xb8Du^QyS;tv0y&my3jG+M*Gs2Td;y^RwE(P*p;k`kYedu%(#l2C($ zS(zGp4Rh{b!rzyBYu0xw0%0P?IpJkcZ)`RQ0g9w^Qvg2Y7D$y8*ZuA|KyDISugMpd z#OC05p&%${Jyy$%Fq}$tl7q33ZfZ}_*ndFFo_ z4FDH6H-+4_L8DBReGCBxrA*gZsjWXaub|HT`Q%)-ht19ld*?YN$5(-$zC#TmD5n zEaba%V7VNjED+wle7scQktuL^(1l;UZ;i}bp7VFP$>lzdD&EwovGqiy;Ry*cUBDBdc|e5uAzbQDunR=8h(WOs);0t#@RSd!aAx0lXkP@y?@gpt?LetuWxK<+iRwZ2 zu~oel2kHJ_aHURA-fosucDj4AnS^)zM%V6l`sD|!;F;z~ptd-g1cF;njH+ZpUz zC&v#7i5>VXDQ>S`1^aE@SA6ijj81>!VK)I&_0Ucx(&)VUxSAht+hDfsp{`(6Cku-% zod%84p}>|W&D#MHoN;R^9S)g28jzCxoL{dK5>*Iwm+F1LRgq{-7wpfobRC9Fw6w=0 z=lkahXdR@)8TOx% zx+ZM`NlT%gL^?-Zq;$KQ+=$KfKe5X-h)7CFSicpi)oV%h zTCfSxUsg`^ZPD#T64J!_q$YVO%zWB7WX{1hB#l5I$|K;#i19XtEkot~YnfMEgC+TM zg-w{HYEzy`f@Wd!`d1OB32T|vY7LW-s)u8sZ>NWXy4f5$Ny<4sMb8f}vA@Ulq>|ho z_5h8iSApVZLpnL{N<-%Ag&*R(k((TU)^IrzX!ZcL)loCDjHo~0Hp=&9oCC_t8$E{6 z6#k$>PVp!qc&UJ;p7C~Mk^M4i7?B{q8(1$|c8<<`WuYt)VJAe^T@*3zeYf^puZX4T z`}sIgE?E6#uPd~kG-?@&sjeX6Z~v8|)}2r|m1zfiaFGiOio{`TZqc$bw{CMzrAPuF&Y3`B3nye+b|aWH;>fWHy62% z&Uj-`5{VJK^|DBbg3cLYSGMA}W}61nSu4-etz=aI~s2Dc{@U zjPF!v`z)qs+!UEt3VEr$m9df{@K_s|=N{vQK#1`P`ka*e;mmpt5jWfi`2JIM*;ItI z$6A1@#Z4yS!$LrOFJe~sECGH}a4-mAgH?qb5#A;@&|7vSZJ4Mk1z^x2gLYttnOOaw z#T#?!z7Co1>1lH&UXJX~j2dwyt#%Wv1NTKP#j0Hv?cEwFK7A3ej7xz-7{@mzeumDw zJ!`g~F(A5w9Ihgw%XJS$RPcQ&g(miBBq`TD>%4O^q+`u!&2f6c)p>Bhd<^OPzQK+K`LaMvCX^5&09cV&%VV;dpH@RL9|E{dV zH~7(dHD^eQ>1LN{-gd^TqB7R_N2TV|T)F{+ysSsB(|qxIzBTgW0dMrrB-PaZUTc+Y z>z?;UuTQK1j@TEOI<1sJBmWBVl*B@Xm1pGWYM6w_+;FY$BbrY0;L zwY9-)FCv`5(c0q$=>~>hduDa1%WVczm&^y%1pa{X>E)U3BAEOX5=55x5TaKV+qNnW zJ5IuF{O1?(@RB5nG}3P_q@rfz@P7zS#+GBUl15vc5T+1eVa(aa?QTZs)YCFEpL>MO z`uf<_3wW2hitCoTu4bs;4Io2{xuE(^59?{EUtoR4g7_28N?siMBQbC0hCuc~=CB(8 zw4h#d1rspXY-a!P5oO^#XZ&b3xJae_+&=DBLv6 zgpm+=sc@UDY?5T+c_mqwwD>TzqtHqE_$T-njQ3A-;nO$sZXex#JX${xI9k6YCSyw9 zpI`j@4?!`0hPz}1tp-mH1tEfeo3I>7(VyR&*2PO;AE4LGSJJ4Us}7*3TQ9HhMb#?b zi7y28*CN_PO!HKG1N;wx)hX;Ci$mSUBC0Z+K+` zhP(9s9J;0)0|lqm8@oKg#@|FPAkr$;8(p4WLQ6nGdTfU^HPVI(&)1Z~rF0BxBk_}& zcsaFpzHw1b3j6GkY0{eh?1^O4L#oqfKnYinc|0zi!XgeJ^Qe0Sk3#6rqDL#Q8FA!X zuSXl4ADr_qchMHOd`LIC?yh{syKXa39b;T0 z&U@!tvgVIj955yzf6c+O{FcsT=(CijT1AxJQ;{K)h_`iRsX|wkCZy$I&*lq7@aVmm ztDe^yOuzPtoFlzWdd#mH&aVUBMUT9_x(uF%Ut&hu>59wimYRlr&0?};Rj=e4IS&#{ zG`%fORZVoP)`7bw@1{+hmsKUr#uNSo?xOZIJMnQ9da(r~gG2US`+Pdx*52E3Zuq8|y;1wG24$LnS~cnuctF6W z$9Hl+Vd?&#O|`Sng`~_zE3V~yB{F0PwG0HHm)ktAhUtsy6=6VjpY=}SMG|SN_`O!j zl)??`4)vJI)y$+W`@Z~v5=ewU4Dc)~W+EaFLJv4p%=ztVf)It}T}tT^BW-Wxh*`-e z`%Xi?vi9BycR%C&(|y2UMbru4g?Wf|{q{j5L|_X3yd` 
zb6ts+r?nmn)&9N`IEx(~z?SgY`Fj=~0iAI)x|b?gCDUn{n^C?YCXrV8;Bl%HYGbqz zF5i0|plkCrGs%k1Uno6Hn)MjGbuGCF%6z8MXt6Z}vAdE5{Mzt8!Mo)G{`5FGE^;O;lgw^?X|OG`@=y%u8q&=*s%x4N zrhEwhB|1Q#k@bYi!*Wx{+o6NzYq(C{gP%+idwf0bH5WVxrH)Sod)oI2Cz?flbId5B zQH|z#*3(JQ3>6y$h$`d8Z<)2I5S;+k*;zH-GORf)(ib zz1nN-%l}B}AwTaM51w4D@2lT_M1OAGzGL=(gmgznupsyB?Z;!6Cx;0*d~omu$98zI z$UI7G`Sizm3w{FZGAj$-a2UU##~%4#>6vie;uCOmESb(#P@jYM#q1%j`9tS8bw9aJ zMojjh;st$ZzpS^`{kh5G+-)9Kz7#uo?qNgnlW5dE>fh9P;pwkmy7%U%if*Vf@blZw zHX;OxHHW@pa!^XE=SzqhnvY$YbsBWgbO*385&jScMo8~X5q;KxN$5GUocNSf53F|k z*mkcEcnQ2B@gcC4A`Ub|)S8-nE1v5NJ}F(4j>x%>3GiHsW)M&hSx+?3VaOMuLE`p4 z|3g4%zONSc!|u5}MGx)nKLq~lmBsaCjTy$QcDOC3{iqFB;>>_2u*hZyyBrbPhzGy8 zb8@+z0-P#Fj+q_~ur6xW{oXLK+h1NH4BtLJ|DD0dD?@x25}_aL*Ym-~=dWK*CUCHm z3cYGfS^E8SR}#Xjejt?+NYupUwT!TDr#g85b;XMrHl{gdCN#I&+*UI(ft>!~;6y*J zEGZ;+5zmf#Rm+(#D9_E$5M5Imz8s=5UcsK}N^{G~atue@{-s`uJ|?x}QP|I2H8$)@ z_mq%1QjAwmhsNLmA(LM?(P96I9w_hW%r;>?bvbzR#1fa;;E7BwWg8roiX++(5ZC$+ zcp@+)L~Sl&fx&3=DoH^$YUF4IFWz%7Pu&eU8y2MbNK=1eb01i z4p^E}|1c`9+hg2^wQzrQE6R5TWxrZh9pGg4-<}4>ko^(2p}}q0OixV*h4n|f9Dml- zW*zN?JyCNI4W&OryZddL+J!rdvPj-pD9?PxY*T&vrZ&5cRU&i_pH&vCy65LVOrlxN zZVyV`vh-kPrA_;>-orVwS$NB_sl630i>{_AH&3LC)$n7B43w9Psx<&E-0QX~=RDou zeXbS|<$jdtnV{iwngIDX{V9S}K z8;&h4aujd zRg=UD4h|JRP3n?8asBi8u@<0vD>*bElDGzOCi7KrYZKN_YZN|~xLWb1)}Hardcp}> zXK&|aY)ee7I8d^kE?Ol&(mz$T=hui{pVF#Y_xcB7oN@_yuoiyT04qSzLTf`N^$)~R z9FmPaOSmuE3||a6-g}Ce9V|@rj$|GatTUbjaCc&y9Sm)}?}(V*={Q_? z#1m3i5G8;2){9-XH*(=}ypH}b*Q!6EriT2CoPnvBppMGCE6mV$szOv#QNzqFDy-tQ zoZ(4Bo`h=KFK$mAisPcF zEuK%WRbD>&`5m$=soMu|-|-Btg8=GJr{*^qJm;JrBmGJ%SQF30t{=y5@3Qh9B#G-d zbljW)d?jj=N7npP#BWbQaw>!y>viYl``-~Si2gr5h}ZY0XQlfm#X8jfB{Gw{GN$vkG`}X{n*Q0c6W$ z4wLO^>@2bg(V|D*{BR;<$IKlAe`D+*lb@IVOUoy=Sx-UMlu@{8M4hFJi2QlIJFSjocIHEHV(WdD9+0vj>#+pv0xJuC^(2P0s^^dpDw(n97d` z9aH0p&e;sR>bOd--(z`)3#7#>WpM=cz1kXmzaF~u1CO`H{S{_L#5v#nx!IR`nE2a4 z;YGZcL*?KpA~8R} zX?nBL{`sQ=o)tBT&!;qx@ZMA9-zgI;<4Nv5?IIk2kesO0X{_TIcXX{jSzTQX+A0() zMa5vr>s}PYRpzI~e~>sojWIssLB6gfTV`oWfX9+JZ~am6@a~<7#%XuJqe@;I*SYha zrk5acHI%XrDkl%?9@ex)i=Vb%jYM<=w3k}83G*uH!7Fs+rgc%qwq#@Tu|I)T%{Id7 zFk|#QPSh#$5 z47f6@vTYV6p3x>gQRL?qO#*6XPCrAnh^Rrv+$e=iGapQ=GWV@c`<_NCDC5Tw;U(zE zQ9q6gE%5^7lPso3uqPHOd=V=G{ETbMmd8aD9@;!}x+@6=$2W2xJ3a@&tJD1qCcWLg z-<6l!*%$E!XbZ*ZzI7?H@=_kUUtcEH+yCaL)N>oMUzEprtIzJ@^c_Ac8mA$;D{CF5 z$=qMQFejwT&1y%Icnc%pyhdpKToF%&L=k8=4_HG8Ufz1{|JrsW!L-4H-T@8va^d*xr6@uZ+sX{? 
z3TaLNP?Wv0x}{rhE&Z2BPnPf}a^1lbt}_=rBZSJ&L+D$*$)UXKex9pG+Yz$u z&8wx-7h|(Ow|3YL_^o{x0ky6(4VpP?)1Wwa zLL-*ZYHt?~JaM+ihoO)!U2nU#wOd^89J*6TssvHd=eLY_u39Y$mA#ti=^S^dvznbg z7{XpOznKIY+B&5508Y<&rrD^zXIG^X>U6}J$#y)=$zLG?`kyaOSlg!lw(oI!en;>g zSH-n_VcYLXM-qdB5WuMs?683IekM?l%1P_hL0j`%=hNa4L-X3vrK#3nXw;1Vc(!l# zp_8s+pwmnMn8nYInJ4h%6-Dr3Ea#gC{K)R5@zS_<=v%hfp|F;JiM%U!jWyS8pnI4z zNJUAsa5w%z2Lo1i3?rRCYvt6MfxCPsr0<}(XJ~&Se4&bb`&a`qi2?7jRV#OB=|RQ5 zVVI0Z5e95E%$zP}Ms>YuiqHhSJZGe#Fk6{e(v~pIcp6wF@x!d%#0?#>ihO{Ir_ifw z&|$r)CWR7fu1(e!#l!Ne$`0`YLdMyml9QBu*QokSQ1wFhvVdBWDBsJPih;|^CqGpn zY*g-b$9Om2-G^V_bUfoJTKW9qV6N!kaRx8#tg8Urx zLiq-!@w?W43c$JoJ-nYs{n_1>$MRiZ28=wsc5Yh1*B~nH?vNEXQ1=DCbyAwlBwP$M z2rlI7V8yiwcp+u+SBV7k zFC*8LZRKy5%W@6c1#cR5%h%H z+f>{bCd}lT$iHf)AribMHw#caJhL^>0!Cr*!8FFzBX9UyH|H8&H!Q?G@xvcjFC$M0 z^85I@m|XDNPv*bP|0-R49SVd_4}7rxu)FKN48D@CH%CagQJtyZF#jjNQ_=t5^1Iym zOJb*UwygB!Tb>tk$B3=^LTo$%>yR7&Etq73<6okIE+WF1GC}P(qL<#+_CJvX_n-yX zkC1qljiZlvo0~sk{&CKUHB0qZlbNAcB5(0bivD^wv5uXn;150Hh-j5ty*UIjf0R!V z+8w~yU0u1Y2UV7OVk`z`fe+;pkF^Z*&UB3v#P@WMO+#n9v%-MQdt7xW(f;4r>iGKf z;zD>qOcA7(z`lv|5_n^dzU%hrWaLS%h29yYa_TQ7FcEtZ#eKLfXuNhJM` zMY<8iJ!YS-uc{4 zF>D9Z_bdI7H{u{@F3oBw$MES3=9q5Wca04T9+fSxQ$%5fDu|g137Ia9SAUp+>)n*) z8fKzy-*S~Qyq5eb{D|hGe?+)K`p~bWyd65Wmv@iRu27p#fbaCef=^GjA%DyXH0hn`LZ8b&FlDX|y~?$}Y66Xgu}WRKp1}dq?P6m3LL- zp=`2JN6m0*jWe|t>kkl1y?J{{{p4ypr3a#drEw~niW?W~5#6#E1fr~OSusU&coxa9 zYsPQse|^dKlA*(bu>W*B>B`bHLE$eJB6^IQ^_&l?)D9pc;1#u!PG!%C^$;yM)5jFg zyI<-E#|iXONi5ZdwbOOz*ZTSJyqS3g6N)&NWA<%)<>KVreWl59+7DQfjH*qh>#tl9 za2{>@)F|FH8b5NmU)Yk(DB=RlTw}{}Jq{*;$lsE-SJJ?P{n;80RPL-Yz+dP%$bk-O`$L`4tS7C8 zbcl!L_AfihK*0o{b5H2}<0R5XCS>6RVuRUK5xq6imJ! z6U+ZqrTgW|SX5E?l=uZqMog`dVIg6zVs{sm&s2r=r{J{qb=TBNjWwBCW}pZ9X+MvF zJa~k49C6}KX7F(o!h8o=6^msuOaGDfA9GsEAF@6%YevnO$+VE@DxaZ_l*{wmqRny8 z_ev6H0y}eU_2lQC8OU9>OFce+Q@;;8_-6?VF`*MDat{Chfcsft#ZCAB4@qNEy0CxF z`%ck3Ecgy9f?)9wRP{BjiEV;;eoIzXL+Dj_a@$PEz;9%kBkGm0Q^I8FRe^MA*Zut- z{xd060$QEjS6nZyxK+om1fA0dPt6F@D;vzWegl3jpTBGi?RN{H{A9S=VP493# z^?Q*`)$%fW1HPG*9nI@~-`@Z({tm{g2*CCnL2ag`cUb6j1AE=`UT&oyXc=wk$K(dT zMOcZA+;9stJN-J>KhUbIIs_99mHN3IwOPUFl+qcmmx@#@c}%h{qm92zmX@CT*n0I- z%M-RUC*^(y1uS0tE>#|&nr;0>c0s@<{+$_CKRbMOU;J*wWsK6W)onj#w6KVdv=}IC zOj=lZu;PHFLgBrJeM^zw#FRKz$mF|`*Ce~cM{R%Gnjp|^=dTCP*SDcsWOlFQ;zpTqabUF%FSH9cd8tU zZU>+09HyUP@-7*;Q@=l{0H%55P75bJ0r+o6>F0U|*L`tAl2C@R7q)JZ6?bVSe!3}4 zDx6M@jr;UvuHvMCcJ`YC;Nuxq@_@GBG~C##Q2}^dY1EBNjIs4a={wA)OdpUIQ-p^0 zF#lFdy3srEebLD-WQwyD$Y})fDo4#v5kVVGHoxwZm>uYh6skd@q)V0jr z^El>|$D}<0`WOxIPaMtt8TrH^{zJ}-wd}EL1=eijD^_jQ^=BIB=+PfE+s=hCAWcyw zE$~bgsK6^AIpd9n#k{hOr3iR0ms;`lQb~!mrQ5rL=J?eJiMkvVoQtf+O;b)x9M6N) z9NF;)ONC2PxFp}zv)=INDp3CxmmZ1JPb~i&FfJYX&DnZO>J&{&yYkN*sx{9+B!kzs z?@1)tOQqBqj@>NpF+n=n6Nyf}O&xv%Hx-UBmwOS%R$0A(R0ceZS?#Dq^j20oq@czyonNt4mWSsm8A20eH`nKEgmgK$k?mAqlQ04_SU$*l z{D~G+=@~z!`wOi24<{28C_}g*5uorFVay*$3AGP#JMV*EveXflkmE8M!ADkCMu81D zfp*ysYaU}Cj;mp}3FrLau9My#FSvJI>ghRdes2nDOrCP8SKJbE}N z))l#qC;)b(Ox}wKY4D34NqciOFyTctX2u73F7@Zb>$PjP9j$6Ag~2=DmtHSf_`8-01+~eD(yPOftN)Kc%~~R)z7S-w8>hiyVq^Vt$ocRrK# zJXr0XnCa`-Cj~Y}S;|SMv6@8e%JRMCOa8_gs{ahS$L#6>eLbw}N3+bN5g7ek8pkM$ z`l5M1!`q~&9KT1 z;?%iavs*d&-}Pg3{o}uBvmkXPVl*}6mTRQ&n$3Fu)u;O`DsBD~j=aeYGK9kE^vqHu z+M03wyC^oST0KFFCrhV8I$^3_<)EUqXa0$A&D9%|PbwO1@NHT0O6P(H$r@I2{Pk@Fu7e@v*}k7_D_v8rvc3U+3`$3p&x(oZ&ce3^XM{8$ zz8wWe^m`zLbA~yKR9z3_SgBW$GFN>ie@loiOopxL%#61-nv2)Pc^0p==vUT0KE(H0 zdRl&l+Wj8D@&MCU=jD}`T@+QrmdxRn{sFSnVfkk=h+B!uhCi&=Lhw9&$i3AvZ`VVu z1;I0Yz9mD}dT;z9~unwrAhT&Y8`}56-JSIl1lN*{EC|UwT=a;x71jl$Wm| zXqHW(BloS7BfDw;r+tmGu-;gqbsAW5>~u$2J=0KwU9-htzKkyOJ)(UimLMQ(wiS4S2wF2Jh{poe!bo4G&CUG$!1obE2t 
zKcZB{m@?gS1c$unUX0{Yz%(e;S9p}muU0MCmU!0KhS`g}yhf|(tJzeT^Hr#g^FnlV zvspcR*Lx(~%RH%f4HqkOs%lMqyDZ}LMLz%wM|#F_cD|@{$Yed!n#D)n3B5X5Z#Va= zQ$@;L_o=ml)fYEY8KdL&Q!1fACf~OA=iKU6S6y_R(f(j3(a&Sj$4l81n*S2jCG8-F z#}glm#!Yt&-!*uEMR>35M!|)TbmI*<%12nXCDkTfn7Y4R*W<=tPR7y0y-h~E))Q+} zBt1C{XpMHPgu5qWk90-dHV^3Bj>aYYf;e|_Jr2QqD(+inporaK&bFC~!`+pl;LjhL zxE-hFTUiBc`$0edC7O?@Cm#~s|G|p8WFcT}>b`do++VrPaRR+2y7pD~<0MKCJ6pyN zb&)!kbpI)4VT1oiGW&l4(z8-sc?hIly*i%^3MR-Jxnn06LZ^@$rHb~Gn*q{Zc(+$HWZcy*SV8>XyZ8zCx z1~zah^Epc1?D81cCu8k*O=!(WgsZ`ky6Li2HWi(rfATT|I~Vrtb1XyQIanP12g7@? zGyJ6LQqd%AqPFBqBMR~L%1U}P#1y4cXsWx;0JsQos2+6;E=pWHwFTvRjU&2-^kI^t zb;TWh3m%F{kFzR_t%lBy^Ku60*`t{m>)t=iPNejP6bu8fN9^m&r*xmJ6COIb5e zK4bcHrcbt!_<+|TYhLxIl+;FlAqMy@Z}gX%*aeiZ#6y52PCHe*IHrR`u33hGq!lK0 zuZtZ6Yih_&mG>XIEdwh`huSHHs?eN`I!*bqgGt*Fz43pmw(&q`NRbZrtRy-7XWb@Z zre^18s&tpTYh^S_FAx#1yMCu?T`Nhs9S3}|;pPKglJNdw=y-BB_@6?q1g{oS=Jh&< ziDtT3Vf%E0n3z_*Qri{b7cF*0_EU?|%1fn=J1Z%kMK^MqPTlwWhid)1JNga3h#pUC z;ROG2<*gqS{gxeNTFs|DQ1&`DoW(mLrnR0sMy0}j(d)?VV-+@|aMsxU^--?sRLNOn ztrzKc+p+SYe_UILh=*#C8eFssohZ;?D^@8_P1k26lGCoZR(BD;9steiFP z3I7)Zmem#C%1G%g)j=gKR2j&1FUeJ>iiAM+HJewFx?N47+Gi|%CEgYH{YFC<@#a_D z*Ka=9HyJ*V-z&chWA2obrUNrp87#CX@Adu|KuCU@Tk->dB0s|13LLhr_#Z2#l{$$R zvT>cRzo1U*m3D`3yU%cJD@_@==q-CNzlm7QIT0&kU|o{1Hu-vZkHo_mk~5a!ao=i+ z@b=rWuNX2{iu@v$ET_lBOsvn(6%bCjLzNqZ=U11J!O4jgF-}BkzVA$4 zPFr>&_38=hU6kubB>g47%=90N|Jxfe59Gq->*3a zX2(Tt(UHF0Hu>AKn%}vEb^zUKRCcZd_*Tc`YYqF2jal>^4$;NKruT1mwUhRptv${S zq`r?qOF=TEJSlzx^zp%vDZ#D2RxRL4GDhDu@!|LbSc+a!o~bEcVF37xafTJN>xgmk zUAVn^;Tc~7;%5BjCzwzS@we6A*IbS5;w2!M*|??R?)V;f=ZtX1rTQPt4x!?;!<@QJ z>rAxHd`0zpbm1`>fr=^wcuNgEX`6Iy;b^glbrG6d^dW>aD0-493D!r9UBTU>uYjG zB5j27vbmumB~lc;kA$Cux<2>JbAsVq{c6 z@(3FJ-`rbo8r%vlNM78v+K8F#XdZx3ccGP0)0%_ncGpK{yZJM@k#VxFZ^2z`rQ>bH zi?ab&JZn(MOgX7P@vTA-8y5EYaQ9hM3E>rM$|DI2sujHCivF;CTIY5X8at&@uGED6 zDcV2{qkVEuMC6B@a6EkamY3>8xOfg#$$p5<+&{U$(2%(@6&b{fo=|zSv7fCdRu+>h zEw!x1V%5y+9@bM8X^||mUGX=lH<@;EW>u>wu=wwo%v=4@ceGa_agG(aL3&vor%d9* zc{GL*KeH|RKCcxImo+IE!ZE&u7Xo4GBj%3lMf&z7LE#lHZlMeW9k_0gdKVg1H(|Gp zcbe!8f@k3PC><53JljUGv=7n~`e3{=I)QB*hj5YBA!*6)yi|$N3QCq}b^)B*2y!p; zpXjU}F7MNd=PpZa4vI@Q#C0C!WP6P-8Zq(B?V6QX$}&He{)l#q3k>&UlT(%-pSYK9Li|Nx3Csx4AJ}pW57Xi?BIJ@4ZKPAb{(di|# zCO5w%cRPA>B36Lg8-F*+{Zda00;|f2On>+jrVBgS{V4a@?H$4EcS{$pofqkfM)f=d z|Mbn7Dx>#iBV-M8TwWjh^dELPMl7buGtviInnKiQRZond=} zgTV&P)^HZ9va^Mm$xVWx8m~``@5Og!L!a-my$!(b{bc$JsbB-kW%-|blvzUd`hPKS za%km;%4z7oL?6cLr>W>ImVE|}H+!fdISv(7J##_3=%;$&Tf`plMHCO5xR%? 
z8gAof!Q+VsXV-uS<=6iVP~E5#HYYFDe9V$eZ_l(w4n@2INeotkZT;qs{g7h?;UTPE z!Kt0WVmo+cJ5CkS^x?otye;1t-CQbg*lGLCCsoFH(>}lTTuW1^vsKci?ktpT-%w#> za^EbG(XQxC3UrR7lRKiLwvPFu!#J3}kN5eOW)*OEFfWMYK470~HUrM(pi+RzLm79i z)kV0KRPF1AB;vKG8U=6xYF06G$^{8~ud(&g_XsR|vBes=je+5P@<+T!Q)7#}cG32I z<56FT_b3TK*w1p0pC5#MYC>3?P8&=HVU?&|%bLFb&*wLa~18P)oTwNE}dL1wYLrdPzK%Pbfz_%v#?x-HZ9{RAJ2GO^Lg}*l4^t z12uMQOeCueSwsUo&gc@1Z^aIdB`Lm7LAt(okrJ^r%fQ$G=O{kL!68yTS-}Ft!S`Ji z%sxjzp0Jd=)-m8fr)OE$Y^t;j>~RdGshfT`I^dN@m%!H=`c{Z@vhHfE@AtsO8|kY= zA;Aldw+ZLtRF74MqQ1bQqvioFUF{G>fj-xch`%1M`a&3hVc-v+PL0&1Q5@^VmO-ix zD|*DjiT@HMebr}GZNIpOPpuyE>lG7Dj0S9JXuwep#hX6rj}@zyKb@R z3Ag@Zn9S_@5Vpj1KQZ``D1k=Kc*L`M@cW{IEM=;XQh(LPtr|H|dzdtM;V$sp3~vl~ z91_~SM%}bV`GMB{arHG|CM$L!diP2Ra^dI{&>t_wzT^I5Q9;$kP{&>ua8U{D-%94; zcpBL8r)XAnQLxE~_D)shcyDUOL!k|LQl@u+2&P>9GefEgA1hs*{d6lQG&&Jb<~UTV zg;znMU&R+lXu};u)p^KQ{=&)zL+-ABg6KX;C%`J}V!4zwd7kg}h3POP6vusp3=bP; zHTn9UpbX3VIkJCdV>CRcJb8ZIOH_@?l}u1EYDr6-87M|DWRoE&HX%ITSm+o@^Y7Vg zlc2qH&!ViKA1eq+cy2B{vtHrK)U$fR1=_^NqRqowusqje{g3RZ+&9d_$fSexQs#8S z@+jD|3MlreKNzhlT`cY2pJBM6MbeN|WwW5SR%!ItFwMhlrC*IZ||Qw1<4K*F08`i$nmt>zHFDK z1}dt{bAm$>4ju{g0)%!ejX37oEPjY78Zmx=$X6Fy8(vO;Bx`^Ry>uk#^_fEK~|w@;r9*kzofD(H0kvrPex&ZKpI7CZgF zljt;J00#mwIE4d1&#RCpejUKg5BINwSc_bpTx@O9&Q*ZR^@5KSFRZQxX2uK?i&T*F zEx_zGZPzepT;BwGyh!t$p&Z!+{8(__co!)*51=zkzhCL334gO)~zU(L-;HvNcno2tc- zcZ_7e->rtOCuoiB8!j2!Ak^#Jd>|v`hhlMY`{sTjNW-$CyFs^%zK-=#D787@2{O7H z#NQn$`5BHs>8EBnwVUwvyhxplyB^svZ`!nYs}D`rUDG5ZncxOa5(TUP8Q$u%XqqmW3o&2OUj-D-WoV z_Jc~ukZi*;zb!QnC8Z&Sx%hUY#OJ|Lh?ddpieke@aw+%jdBKeFUb&kE53y@+NXgdY zpygR^r{6<)1Z8;f5k--$!3P^Ls)gB#ls&ph!y=wbxi-{DNW6G6GS_xXFVV0GA8=^a z#~q2~(ygy1`9?UO`z6-ZnoWJbO?09eYLo)MPy3(zG%Lp5D|YD^YASNU%ReG&@Yii*cE zDgS6hD26Uu{_KYTW~(W&)f}1_xn+grWqHQF>6IM)(wqS&)`{)+EEZs|^HMo%?dDl7 zbm~1~Q0{{VPd)neq)o}nxw!d03(9!!F7GyEIE?4HZcVX>`m@eHSSi$zWDbSv6UJ)` z1z@zuO+36G$G8ofs`yUmh2-0&KnjRfYlddj2?_RXeX!@D^>LvMFRehol=FJV=*`1x z3Z~!(*RpIU0tB1(KMO_k~!)#MXAGEe_6(a!%`FxwZ!H-&O5oAp9R{}PELl@bQtur6`8>pdDf zhn$U(RkOMc`X5;iWC%_Bf40PIf1bVG*I-~KV^ER(L6v_q+5Qrl{z)bJ#dPCIj!<{z zPZd^=ZJjSQ0O!8hJuYQS-9w2+;yRKXFE)zxs`7)RM813%8C%^N{}b%A*5_eUShZKk z?y`vHbKSN&_|fvI!VLnlbUAM}_N%GWe%WoPK}VW6M;^IJpcK8g2mo`c)=@k7hy>a! 
zR9>4f$Ek~5H%4B8IsXg!En*)jG9Gq4tgmESG;3kDmBP=Rj7>V)H?g=G4`+%1({t(b z)4?s~UnB)uroAGXXW7Z5$c{U$jFJJWAe%sg&XE&o#DHOv1Y%ZRpd=0Gun)A?~QZ}smv@Ae8>02`LAojudUY(!8eDf7rPFN-_+SkHt zY&=qPbXx~xtfuM)HOL<>LMz$u4CfKN!+~Jd&bTk)WI}Tl<1PMyuV|Fk#P=bL(I~%n zQhH-0V11@;ELWnmG+lF;Zk-CO=ZO|&0`U&n7uLjsj$N^XH^w;Y;iAk{y(8xFnrrie zifi`jS9_df)0z_4Pk|Oo;a+&i(wZTETl52LakubYgZp4BBMU6X`G8hk1=5Oq zN23XL5|lAs!Pu7%^S>jia?V>62aeP7_xK`{m3XMucMWURmX)d1K|17~?L*)?zr%?x z)U0TP}tl~f|=zfhNhhy<}2?KZw% zW;_9z@}GOT7GKKD&uE{|ACUF5UWU|vCon#rw>r*cGZoLDU|DLS^ot*hDAtKFkA;km zh=Cs_#nSH{jSx~$J9y9Mn~af&v(3eVH>$PZcWU(dZB7 zZ$1g7BESa^hlnH%>UdS)(I%_rFtpthGw2@6KbX)trbMSWR~e0W&-yOr@^uqpDk}u0 zHp&n3XFfcJb?1hZzTdUdPgAZ<4fgAkAyFa?r*RJ4^f`yzOso|%XU@nr|85((~rFzTll_wJ77DeqWxN-q~F zq8y)dL=v6{xR)HJ*&pE&8oJQlNq#%)w*{~#W@>*x3P!`8<#C^EP*VzNY9Nya|J~y> z!5llpDv}NY?M20}3{nj5yyGDzyK1?85Yck`SOAc}OThJ%5;Qhb!5DEUfl*l6u#j}* zm%(%J`AgiX`oBcW2ZjvII!ziMOc3w)wz+~M{#(s*JNcjDx&JMo`+t1B7alwXy~VQ% z5gvzfH+-KP_@#@-8g%6x_deJ*(0`cDyiJlfPp;wp{nN#^Fc{pXUzZ1>x^0`}2NuoMMAmE;LF~|9%4y8p7M!>t?pO^Iq`wz?Ob%hd1Ig0Bv6b241i_ zgwNb4@wGTA`S5pcCyw$<9+^n~Y!|`1Gv*RL2j0?Viemal4=-s|=%;eqXO5srxn5(q z6RGIhF<^Ou8^-LFVAj(NlcD}hhwwO-O&9ltM5+$l012Hh6BJ%80g0Tm8{@iIl77~7Yqo1(L%SwgH=Dtj=?K*cy7 zW=0*o^%mIB-4S2bBk-_~3(Ca{8{5rkrvemFG=|G_BMxDzI)>sx`c~WlPaI^~lw%qG z@Mag6l*f>;UuywBQEXb0i?Jq_!u_gcDT%O|t=*?f>eC{) zm*umHkT1&#hz2xFaoPO^fmU-*75_zR_NY@BzUK-Au)Ggs1^qzPlv_H8>1AI$ZdrRK zo47H>|I;Z4@>B5`MPfeNuBi#5L#Rq(r-4ePB-ZWx< zcas#>xRgH>Wq3tj|FoKMEt{`5RVOpS1rX(z@=Qt-Qnq5*ZEo8%VW`mh$wzfPy171v zf?EfjDfcb%{j@3JT~hJnN%miOVwJmd+7*t_#pzNh&dk^uFSM4#jfGvk5~VTdmz6z< zQ{88M-?&yVOO)C%2lPG|wZ4d+!dE9+I5oOhnc+qKe^^G;P*fLhY1o<0Jbco>4iQnU z@ewmuZ5rVw&&Uk%7tQYNcaJX-#!PfQTNXVRofjl^4^Bn4FgR9S4j68|{rZf=U#GtF zWJ7POhs~lwKIuk7q^~S2CJCjH*Uv4iskN;Aot~Y+r>kh0=FF~$9#%Z|-)%$-0x$~=o z?isQ6{%_J=2b5X=-9f=SfZ(@*t}(X);F$g!b;hvUA6=$2uq&>~zfcpcDeauv`4zvj z$?>7&o$XD-!0Nlfxbc&Ynd^?NouU;i5(vA$DP5gGgJCukI1Bjc>nK-m6e?NA=$Zcv zQp4}>rqb0q-cw5a}kMXf8 z3f=8}56D#Rx*7aOeM;vQI7!;F9{JxKswHK!sCLfcV4Ov_YW`iO#};2Rp;UbCKIv5n z%9g;U)ie8&Ue4)tS z291$qkvBDUZYF4Yu#4429~>`hOcj9Irv2oR-P%;XDZ?(aX|?Z)P4% z6x^-rDmb+``DzmHZTm|z2-c9Mvi9%e#U>ZtkVBe>wl<;Rp!#sj{#aBtqojP}BbEt29dCZ}Rde;~DFDD6R3Rxv6D z=kKg+1I$iGr1$E)r-Cz`SV!62FqSN8zY7|RisZEa`FM}3-iofbKeE+gXp+T|kEM+J z{hivdrKHMXD7k(ASdEhh^ok;lWkMXwGUzGmsw2zJxkD^UqkrQr)zwBj0W}q-R_hXKgZrbOKGSeckOB8y_(PJh7#&gexS6O_q z6D5(3xtih zTfmQMk*&=hHHM*l`wQj~w|2%yS}>ezmeq3Z8&Jw2)t#E*Gp#oqybt1Eum38rPN+~d z$~~gXBT?{NW(Fsk?M`e)w=s2zxxOs-wzWN`QOsL{4H~V)kke!DJ-P@Q(FcoEiAq|- zY7uE&}JN6}HuPNBTTPeWzuzbKbZ)&RStF^wNn`pfC89}3+ z;pfS@@D>86RVJ0!Ll(w28m=G95#Q4OVLN&79{}k3G=Y2?_g~E8xw&U1ZKy}gPUcM= zmX`^`sx^Eh|IIL`e^8ac7&AI}^M4@yk^}-!E@*!8s%8B)FW|!sM-8?ZzN~f5ljd}j zaV_l;L(pmoEphRCb?R8ClYm|i9`M%-9CzWf?vvez;xhM_o;kygNl)cIu$f_G<`UnL%w210gZ%40Nc)4kOyV1&qJFeI&G$bBQtQ;hL7aU zj`!(?QE`(vAxv`3`&R)Qy0i2;D_6S4sxI&ZQW1v^M0FHw9{d=({^M3@JqBF=%y+`u zo}}DT_){J*?8Ihw0VW36X*d;yZcgElCW+AYbw0blF7F~X`3+~^5<)sz4%LB%piLU9 ze;fdRvYj-TXm!3sY({xs{xEh1-$_B)YbE7DEo-uj!l>B+-hFS1b`3-#+`G!Fe8Mwt|abrq6c~%4CUbVJxYo|T05;-59tVf2r6E} z$x1H`Jj#!>3L!k8oAWt@I=ZA{(E0*?H-qu*KbfE459|F_$gUe$c8edE`g_YMHfz#Y z7HWc#nH04)r5Y_09YD0norGT?6Aj%bSSNJjO8;k5r^OC_$Xs0ELyQt%#z|t4j^*J)3^%0q zU!t$y4Jzv^yG6s!EW{i8Rco+t1m89Km4JtP_b(33N#dAXn+|s6JVT9)@0;dwDXM|J zmk*g84qrSgeR7wXxyF?Q)OUQr$DA%XBGf4hPxU8sOa3@hz=X531x9X|mJJ^N!CE7p zx*#q({$~0I4pIyz(o5uZFostojN`P}fA?n9Yu^L>DA>|Pas2r<5!X{elQh;oX_T3g zp&_yINYJTfwwg_~g!#P?n^>*|$B^Wr5t9AmgTksGJP6r*ou|Jqqh`)tk9G$* z3u@qNh-qQiHjiAJK%48GNS9KSf31CP2M9x%d99UUG18Ttcz4O7Juwe+a!r1mrfqOl zl7(YjGTNLvgRuGS@~t13g);YA`?D*073F{N{*y}A$|?3w)8gsCjnqG(B~`~-jdHZm 
znT@=z94mTfVfcKK<&=LbGJmaBYEdhc{qIdW{5l9y=Pfl*{K*WbDx;lXtnl*S)T_KFc#@T1C`y6nPGr&&^2nFR$vm)7gp4 zSF1Oma>qHzL8}-JjGdyxrR+s)6?0k{zdT_0K#+}r1Qcfx)qHi1xYGy3ixT|s<@6^# zh2bI#HBu1S+0B}DCt`$g*cRhfn2v^c?i2KoQ?0wnKZ=YeH86GEFyPmr-2>Q$$JMqa zd>^eda3tDIq>#Np%WK6;Ge;{`2lnt~LkJ5$1`b$OAC29!(s@b}`@-=j-D=CEpfm*| zt-|`UUTd)4%R`v$>Dxz)%~n^VxRDkojep8t@_AvEHp~iE+zj=vhxj8guOtnM74tR* z`RDJn`}Z$41e(a&r9J6dY>-&A{RUSX?e^I-6)UXfb`gM_F~B0}v{NpyKuSqTcz(8+ z#Cc?-K_4Wefg$ltf{96cS{3o`yslrA_?TK$#=LcBC&6lyP8RqP7{;Ha<;l3(N3u8r zwtA|Dra40fVaq0YRFWL)7+p=6zr7?JARtd8)qBa2TE}-iR;>P{m~GE;vlNraixKpg zD)78-8Jyi(xWu?sc-xQp<6t`ix~FUL117xr@sCHk06nWW?_ew6HlgSV(g<7iKyr6* zE;W&X4Kj3$oaSRnvp*6J=wk-3E4!uHWO?$o9rcv~4CtjzER8$=91X(1SaV82*P9Y&Vqp70_}M=-RG#%=%a>2sV|sehbOxqcA371ke+|sY zfZJ7)tQ8DZ)}vEdM8FPceBJp>G=x?8OTUED^zs21WbnBat*+_{u7Cu@p>iPy?_$C@ zBBM(?nmYiAJahaul3vBWU$SWc=@s4M9X%!#ezaLyG8lf#W~vP&VeJmXV3+uB0!^H? z@w*}A;J3Nm^*t){y{yrls8l>AXNvF~9_h$&-5)yU7fM(ZO$urBD5e;S(Z;NhjYT@gI}F!7 zFh{J$`!5tW##1B^YJM82@!V&O@^47Ek;;YHB))XJpKS%vY)}Sxasf`R`X$w#$VxlJ zQXu`fq#jmOG!(G9m{(u@c7F72RlHED#fI?+o$wcfK)b9e%|63&6rXi!W^JOtR2Y5Q z@<~zh5KBkAuSYvOxkV~Erog|aD%W)uc;K1&rQ*USt_d2VanK);pn!>Y-t|N!N}3-O z)+;Z{5Ai)`8I-HSbug#fZz`l2!)q0-Ndf*^^^_3@ii;i@l8s5TFPgl*#NRBDe^90m7IYkjt=qaxI z^&=PAk38nVCT*dyJM*gz0Kk(T=adNN0(xkAH<2io%tipw*4z8j>bOwMRDAegimYyn zI6VKwl`^mDX}ro99lGd{(A_ZVw{!LDARl>;F#(>3d$0x16&X8$tXA)m1IiWH$HB-N zWM80y4cU{=_r-AHZL}gU2z0{(o3<}00?P%*?Lw5@PM?bmN8^?+BEy$=bdhX!G>J0LB!_mLQnYP) zwGlgKMhqEh7B?$Bk5B#asr6{lH6D7g9TTbggKYnKSX)(9*r;ev;k`3N=UBI{7yY92 z$Xxhp{*$D9o>Y#SCfKZGwv$sc*a}*k;NRgC6o%+`cXsC3<0?H}Ew_xztB$u2#uC#a zRyaA3wloC@(jZFp2;t6455ykP1SNxG72}aH=FgSxdcPM+VSk!~=4m7sRq23R$?_@b z+xXBpzAAp$V~H^-a#pPl+ATvx>XWP1!Nz= z`bG}Zt*>e{v4-^EhU*><=JvTDF!a|DT_X+@8O84IYN zD7!ut!oWaxvcx2xwN}?Pom*@QApJBpnYph5=3R`TN%%`3op~jxyN}FwQSe!9#G|)9 zYoe|->W$kzr2b3Q*)I=U2`#+wlp^K8?_Faw@j_D8aq$03+x~y*-o&umiL_f>R?|Nxe->%VEpn@Sfo?RRx0@>9 z0VZeI_8z`|8?%Zt3FXdl>?|%VJwZL;PrJ}CyK~?D>2tipOL?W^E$`mDhR9l*CLV0J zR-qkp;fhP>01gkofy9g;YmZFS3!SFN1U+*b+J3a$9j)7UjSkB*Dqf^#wV;m zKD%R@xWG&l#cAnyiKf~k9-kcN((22cKI86()c~S1}Q;#W+7qksmT3bG7UK%jvR8Hr%c7KheuozTKe@l_-oyE2wiD3LCZg z>_e+px)1Gcz0EZdmHZl4VkadxgMSI2j^W>Ywv+~UOzVgIxL}w9G_~ zh8BD9Kwc8iRoRi2zhWe`)f$z0$1HU!E+O&f=RGk{u`^JF7iVpEX%}hRV4uSR=!2{s zJdTkYLqJU6kE(6DL;>2y2Vdxn*(tGm2>04E7*PWK2PdcB)$1p02l5R)-Feh+;`)Fs zC|UQ2cxmS+T2wcy$f$@0#hqR!D~DnxVBQ~(YPKQHT6Sw|RlE83J_|$J$mFqLZ z-fD{u6yW4sRJo!*u3eqoFNutj2qb-?BpY%4OgGOF2eVXoiU^pcfOoI8TOHdb`oaHo&GUte*{R`>DY&@}Gc~rh_%FS!F!BBbK zk7^M%)T_l%vo=_Y)UWejBk|SU>PZm16RE5E<4dZd;mHfdWcSg0qSMDSqoSRCo3SQu zeRxCmG#5pe$MYe6!&+s|tuXu4s8HQnb-aQ{GZGk25bbCVJH6hHnTa`Bl`^~4r17A@Y)sT4k> zFsw$@D4}}X#WalY3N$(_%qG{p;t`pfcWpu-S`>PR<>I1Y26-!l&@>BK95OgA>tbASisGZ7(LBUib9HT)iQ%iFrHtQNj7(HPWV#7I3Zu!BAH+?;S6{vijb!iI zFqY0;k_#wcFO*wBV|Pc6bPH;m9vq4`zD?cstnG-1F6R$rUg<|@4`zjH)#~?r66{lR zf3z;Bw#fc@q-$ER{oSPFPFRjPVO;Rl$e}LR8=d-$g8eO>P!+vzRfy@zgd$MIqrI=u zhUk|_m3a3tjQyP1A^_|x@PQxRZ6x=6j??#SA7c&VDrQ}D)mfn0-Rkw- zmaW#r*{N0XcK{MRIq;pqKUb~~(fdDk<=urZLafSmn>UPYvTf&(`XB+Yx0k7+1~8CJ`k{t^U;U5+VdcdTGV7k?F3IB?q^)i_;h z@JV{NTXjSxf&6s76V8uHCexS#<9NwaKrZ#Qz%;t99u3Y)yQ(oGeWW}1IMw|X8Emwh zd%L9v4i*x+GED>ba`>#${ z2^8D%*m}%0<=*bKp(r+}Ea>#K`e1H%@FMWzN?QwmxMk^uNP?{$}_Z8zaZ2 z@}_W&8#Q6P3rQJj_EE3BX=Ww4Ns^yg>~}Sq{z%UbspRTvsGX580D3 z*V)-#u8HN|7aVN}S{N16?dFg6HwraWj z)XFDd+91!a=Y72o)C2oIYq0I(EB$$0qIV7cJ5Zsv1nXftTGz}ZL`$tOQ5NLu;)U{R zx|6CBxwo&Ty4I6FqhS`~fItp<1^dK$1KJHe@T+>8+F#3B?9;v^afaDC`O%mWTBTwN z8$3zGAlI?YCf{bUNL8Rv3g`R7TiVK+H|imEOAp;7rcu9dc==L~*KT!MIi@zf_!k;; zSc5Fu&_6ife>a1v_WEGFHd(Q=LfXptP~Ua&nL2b&AC5nsu~GCp^r^HDx6H&0Gb_Bl 
z9gcrhv@9LHc6H8~`my9WE439`KT5W`b)o;=@{=FQDv#%nYfHi-^+q|_hlE`?!PZ;G zVdr`g(6M=5RpeS>=f-5CC1Z&4WOYx?=Q~pja|#!|aERG&Uho zJP5kEV99ZjBj@Td7`Q5!+z+xJ?c|^{OKGrIb(C@(%TU_d(vk-FtW6CzKBc7UFKq3K zv8xPdzYgFQ{sBTqVoPY&I1zFjtLA-#;xZ}t0_Zj18$}zx;hW>{-=2TdOCudFFO8l- zZ;YlhNVsawC`p!8FE%>WYuo(I+#)RsLqmxsbbauYQs6MN0o&P z*RjrG#Ge0QewDP%Eg#!KZajafV;pl~o9X@@$6x)Sx&;ZSu!`uwYST) z{2IA;R-}gCBVz?OfeI7m(aya3*z={xF?OZA=prremo1WH*lm#pMQFkO!#Kw6fEUe0 z{K~P^9b`iwL)C)kdjAMh93$;L#z5Pi!-YUHeNA0Bc-HqyPe$+ova+Cg=1%~`ASD~^ z<>Gcz6dL&)b!Q9D4rGg4(Z#oHXym-rk_@h}%+(!X=lT7r!FMzfX<2pAAzC7_fp2;U z!q#$O4{PwmFJU5ADt#J2RFKb-$TtZrUngp) zEb>Ur1MF4z{V{&FRV#N+{L}%cD8^_I92_IR)otV3dgOXZ>Ro^wm-D5pAU9=ER!Oj0 zTW%_WpH`&G_w<_4O$&p_O#zXTrH zw=}m(81x~Qdmwb&6r_6luo@<3@$)^hxSNHK`&|h2gGb5YW9DIzk z(+5Uk5IMEo#3pF-!c~1nwOy|w({i;T&F>o>!^oA-W|}R$iOhc#0``&);mxN{HrLz$ zDiPb|k4b|ff z50*zh6t>;C^W`#!HV{UWz$H_~ahO&9gxW96vu2c=e*?FKc^ zKNHjMx>nv&5__RAzuTo^>Tk&&woMJ{7%e%22IPDP3HXEceFG9YA}b+=uK+IhalV6j zB~U6?z)N(wwbwu?0&nI>v#EiAyqw`xJUdoWz{D28iF~lz45Jic-I@lJOE&gSPs|ze zl3@{p*jc9OLK+NxtH`Fth-B4n;EaqK8n&ou%xa1Ue$QgohQDYC=dd_$ob`@(fJW@x zomw~ntrMu^5ctM%ld!{`W)!>~%Zamm{-mnBLH9i|N4f`A-4LG!!O^WX7}lnI_`2%= zzqqu+A_gv2t!c))Yu}-HT$&6Q{jC=}g~koJ#;LL5_^6%Fh;CqwvKq(0o$VH2V7>QWPjjSK0 z6lO5+8xJRA`mVpF7q4HThwm(74Yh0!+)Gj^8I078=@lfQvhzDk?B!L}hHXmi%P)I9 z8v~R~E4!Q%yCO5e)lVN+xFDYngYgXsB=J7dn9E8MUU}n-T2G_oxNcm2(RKeQ`h=_~ z=T5sv?U$ZTP_O-CCky_=v61(GE_sB~cRy^-YReWB$OS-BYcOU%mN;sGy<)7O7_Th% z%dx7k?E9N>CqBOM1A-~T!`vSt^<)awD&I2fAZ~0QKfjA`*A^`0nqC^MLqD8BAAb*| z(y`&bAP&(+b-$tIh!@q2!3D!g^|{(0rgG7>=`TA=29&I3Me}<5>Hm4nksp^z%R~6u z&M~fK&#bq*nr+UK?-Nd&PaYe_3Z#+c>pXWqR;%n+V`;CEk z_)N*uTY=Oi=KO08-~QWVHM7CcAYn7i=f4D4Eq_ay6ETR)r3vVHhURaW=Igr+RAI!7 zg>*5wv_g-$SHGi$2H>$Td)~;lK8QdYW?30#PDA&*4JV+$=aNJfh1>S zoZy)(`Jj8S?lhVs+&h$|uW{8`1YOVc9u%mvdz-n6(>H*Y1C8wuTZpuL@zc(LGax}ZrkgllJ}eqvdQ&u1HggSDV|3H`&4)ee;|ZM~XI3F5afmOiiG;nBC!&w_Wae}wQe zUg_2>-ww$(P^VC*04hm-5|37@n%rNvYk%fBjkn4N#$wf7HE|xgUWFx$`dva+90r9t zQ?l8s!fU!m*1yw+PX)LOr=g%YrWBrMN~^!BheLnCjsjNudWt_fnA-otHMKfP=uU_F zswFxd^WzjiHHL$(DzjRE?k(1Dy+3$W-x$Xd^-1nCgw#oxhD1GAv2fta@1CdHNfHdi z-|GFSQ7HRK#w#cVOTq)EJ4=5FP^PLu&N-9k&YG2Vx+dxzHlg9#@vf#32YM~Sw=D9* zq)isp5jiu!uIC3&!zHC$S!eGSn_ypwxIJJLr#m|Qh;INf8FCapkuK6oGEi>s;tFKF ze!8GIB+2Tx7+~n`>{`xGEowP*4e7#|xNRBMyOFDkyqw0nz$HBqK8Jf;muuu6Mo_14wQMMKKFfsoWyGebK$3rt25y1C)>&VPThG+SmHSM zCyE{&{Q40i70UZ~m~_T;!_tN`4;WROW}@Ef1Mo8*ZkveG03};A`hU3_3+!$K*R3de zav8V_wCTqS5cEmevG(jrJ@U?(w2aqfaMjLXFCj3))`$$G+A*e>xmL4lf07t^SPad! 
zz-lC6{CoUF{Tr5`C0iu|v52_Qyl$Lv!8D4&(Jy%18=)qn#M4a8=Gkuh2Kc&(Wm!)z z8@WF6<3pU`!bYLWAWOwGcsU|pQJ?^r2L7H@RGGyeT9_RZQwL-7&jK$tploR}Yy}&T z?g=Guze;V~*W;3_t^De63(lm5t^5S%B;7E1_scp$<6ziII#Xf#qk>dSO$x zM>vNEhG2{bSg&{wbR|k5NvVl1ct!X$qm+6&xA^k7#YctCnEAz4b8EksdoB$YXgs^$ z0oYZ}@QrZ`oeo>{)MeZ{rnr4zOA}#wV0d=LdD{>0e~>%1`=c%`dgN^F`tj=PS*`;Y zMug0%PjLs1)qzZL<05_BY$^L#0e_P9Qsm;%1fTxDUXt3UefX^us;fp8ZQlS6kJtqA*0&#t3ANXvAK2nbWTp zyENFcx^YkQnsQi%p5evbuV+z8W&p^I#bWtFJnHQX@X(>3WMtyf%Ls;nHBAuhIyiO~ zUB-k>eE&M;=qavj_6qr$cQEO8?@HK$?e%Qp%KRo*o#4qkB90Bg(4pgtZNBf1-t<9+ zJ)kB#I_TyOP>=GIF=UnHFEun!>G7Mf&OrplXhi{$|8>k4|lt|drkiu~Nnuecy zRb09$08FW*L7oc41xQ$jx*d&U%q^jr+|Iz{KKg~e9rsUR8S$~`cArIz<8erETyrs* zeVmJ!kfiLc%J1eiBy04Hw^%!MN%F%(QrM*9Bjd(3lh(mpHFop$I(yUSRHdZ?@9ya3 z+M1M_0<$yPFxJNFgK8WEAWhU`_ONfsj+E7b_;{Y(sr}b{h7lu1wyB#1oWm4yngEzg z`3;Qey~vg~pk`I8?piE{7=9`*vyo?mRe5w-J2%Q@+IQI9PL`y)MYa1JsMj^gtsL=m z3$YdtucDSrZ0;kq$ttdUG|kr0I8P_I&qeOSi2O&9FkDot65qR5=oR3vp}mS5x@k~i z)udCu^8F1{QwfIsgOyap5OwFr-#L9G(SpKF^``DR0*e6XT=69p>jsS=J8o zk+m;9Ui&Fkv>dNU@=5$sZ|x*s(%LBDYuh1^<1MScB7-jE8^wD&+aGc5+DU2?DE9|n z#xHo(xx(y3)PK;({s1?~8ux-J+hhN^aR~E&O zw&thOiU4EW2nsAZl>v#WuO*Ug8=0}yI-;3B(03O;@9{(pxX?H$&14EMZ}zHwOQ!#d zvzb<_aB`9&7Dk^)|smFLNKw3z8)<4tlHO+3r*jQASNbM4v8Rl zwE}AH!uIm^vYN`vs%Lmr)!^%F-M<49qp}O}xV^@Np70P@A%wLo(nG9gL*$*j#isTS zJ=HRNpz+5GThQ0iE~tyqudx8p4Hw^!8hMjBj;l%vkiyb*D`Q^suH4Y7xjJ}VPUZO5 zBDhu*&TdvfQ^+SfNycoZ&NP{kVOA#BnOV`l*?%MxF#zb9~q&@ zanAg)Kmb$X#5qq7(=cmv#f2wf9T6brE;E0-{=mpj=b30D*rh)xprn7WlqNB2z=X?6 zN3xD-#A$TkwpaR{w#Sq+SICRGmzpM0h^ik=+)ko>z|H5dGtCp~d{A$%Q2=vqtP2!T z93Wybk8sxCaQtMze-4fv!ejE?4R0J`tX0-{MxX1keb9ZIBqQ;mOGagj1N{_O!AGb7 zCv}Z|Ybn%N20PW62;&WLlLyzlhxyTMETbo@9~$VvAp&Wn@ajcMfh>(IoUm0YflVzL zbt|)v)v(tkeRh~RY__b5(sA)u;|sTScM-aWvNt*OdW=8?Vf`^FyCkpG56TO~AO*h@ zyGhZO3v~*>k$WnFPse}M{0BF`TY>=J31ArXFyaOIZ@rcNK%n@TLS-~|f%A^jqxkR| zM3e^J_b@5%^yoq$>uQGEM9gE0n%8%AQJ0Yj5<+R{Ing0n)n%MC(NQ-{oBO24%4P`!&pSM zSxSvi*YfXJ3aavHr#_&xU5xMLG|@xLCG-#632XI_V^E5VYXaED*|4jrdLk|ZcUfyU9{`e**k1~)lq=*G$4> z{hDFj6Q${WU{IT3tYbF?sNER%@BeAX#lry^V9F%H29DSDWJTDjT$K_3laqd3Lqme- zk##*E8;}(qzohf%-d>f%1c@VD86pxgxaSXdhVETs3#WxwF4Q3gDm`)tHa!w2(RPC> zv_-Qm42iU0AQ>CF^(6H{(Ld45!4 zrLGjUx`3GtU7NHrfc%uvJ@PMB;DC8ut=H}DFOJze(9x{CF2shL$LF12Y+YkZY z>nN=BYcc|7Grr-M-QwV?HPNwbvLu!dR9WS(H?)-p#@6f7vaM#aeOb!jE-!5nMZS$* zY-oi>5}*e5&e>QTVirsBipsw&qigIXe#>mf|M}d=e~y~?zv$=W66cTc0`V%c$&>TO z>Azqi;gSE7T{y_R255TU_M0*>p9oFiM-KLxfD|(ZyfUMe(0h|vFDl zRXI3vu0m5%E6KADWnR?7B;ifVS6UZLBbWVR7iJhPD2UL)U3lN^)iN+AUt%6@-Rpeq)U8d$IEC&i26`@GW~jj4h<<7DKmVH??O zF9Pp?b~Fp8FG&EQw`~G^<5*l{B5I)rT5(nVy_l#^j$w%^-k$82sigOP20w3v5##H* z3b$$!<=+L2gkIo$y104`tu$2OP6p*LJG@VzgDK}yPtRG%`{|nng;-msc^|;aEq!%$ zbrhx51BQTr`>1yVPL}s6MPq2E>)p)+jVKBm!}eo(-Ae9sZp>DA+(_xlMc$CgFJ8p5`~kCeU5Z@vdX-9r-}`FSH3 zj})aThqe)e3rTk0X&qaZm=?>0aArz>9OX8=)cgvnlrw3}{3*32TKtNQL91EMp>r92 zetK)r(T_FkR43wG<|~Z!H5T_*8qKR6Vy?1-4daYTfJ2T_Z(Xoiin!?S(Y>v#xq*T( zUt8RG!XbQTgpPV#Zsr;dxN$zT&TzS_5FqviT{-C z&T5DiPSJZOGL2AC(mv-=a90r+XrH!)AK60EV;hjlXF3uG>lLY1>8dXtu5&}aN_IWs z|7|JtqFnU}+1IPr^NTeZwdwqSvR|e;wWJ^IXeru{k=03PyirSoMkRC?Eap|E)VHan z(P8x}!v^NGZla>+X8EV}U<1)u;UCh|aq1Y2nc&EgM0$nS?ND8Mr?ve1#Vqu7t^!LE zMi?8Zfr#ZK#x^DzlChNfz6SUAYE{mx9EnmioAw_}mCnejqhQ4Uq-J9c9hcJJfjM5q zckqu>O9>r|2K{T!6d4$H>V>{WATX&uV2sf37zsbAulA9It?;do3@S~#r;S+?7WSL0 z+uCbyo4^8X7Nhw3M3s`jw8upUMm+^4Lg|k@5YURBm5p{mKlEzuC1$&~;#{5Chv5Eha#>O4I%O*+1VqYdi=`X?0$hk}ICYiwT06ep` zT2CuJ?mYjAa!H>TUQC8SLq?Om8>|9KF9%Lrw@iXGZF}51C!#Kj|8(hty^W@bFy;xl z{(W{KgB|6`iwxE_kXCk+s47yhj(`nHj)TUci5}OsgkQ8WKA$0o6s`K;#Mf2aL`k>;jGtvUH4%f+8$6z@u)&Zp=A*|#`DdA z1zhvoAaj4QGHcM#^3014lRb`w0SIVz6@m;mD;lI|OAFf0I2vF-5-kTX+%{f`3yal! 
zBOyZgmM@ug|GkPt$T0H?)~865Lw@aUx!CKwmS*=Mk|Zec0=1nc=vPnj@5KrPI9;^v zcTmMVtB8!e=@4(73pcm?@LT4Z@lbeL4+`2!Hm>I$XR&LiGfQ z43_^llO10q_ch3 z(EXhLa=IBh{{#qBC&^7vP=tz^en_IhkGup56fW2zuIu3BX1L+>AZhv%=l9`e#H=Ov z#gfcF-cP3ufdxeur9{6ImW_TRmu_>gsmQPl z9t!b^>ol%Ij_Ax2D?8vFR~efhQcKpS3<{m3RR{dZt?>1wtD!fxr@Dq=6^*v^`mW z+Wh-o*N66m<0ZxNuB_At@3XDTPp{riG`hQ-0M{m>x2gEtF`QEml$pdmGIFlWZ+u}? z<+7vq2N}d(OitQfb4CWSis8He-$UX9YP5+IMT**DWJ7?t{h$~^63~GM{we;EqWMcZ z@l)wuoN4UuGSPn#LyUKP>cHq`dYYz*FD_giW8FfYnZ855np5jySGm~t3YKtFan(3| z+Gu7Cq@e%hl`9F*9?5y-D}=W6gAyKp1Ll~ue>SrGvcK$&g&y9CkjWNg6}Vv2mo0uu zL`3^I*a8qdIwFgMdklh?65X(tdyWTH7ve zCRT-xGde})pv`01_VqjYaJ$J*ymOXqQe#}lZWo=6)|X3li5q>I{dMRI!^xUSnYOv0wquFC(qo#d{E=>pC{wE30rKNET?-I95-JX7}n z*Lv>ORxv5*jXOML`2>gMyD$O(m4c^0X0&Gt}%ZVXI|h_6Tjmtt8~kyDee1M%%6I z*S3tqd*1>2LAH5=+`C_wepmNb0L~f#l%vR$%CLzjKE*P;vIMrksRZcbIVlW>-FQ!h0fcxWd?y+$`