Clean up and refactor the output of the OTX CLI #1946

Merged · 22 commits · Mar 30, 2023
56 changes: 27 additions & 29 deletions docs/source/guide/get_started/quick_start_guide/cli_commands.rst
@@ -71,7 +71,7 @@ Building workspace folder

(otx) ...$ otx build --help
usage: otx build [-h] [--train-data-roots TRAIN_DATA_ROOTS] [--val-data-roots VAL_DATA_ROOTS] [--test-data-roots TEST_DATA_ROOTS] [--unlabeled-data-roots UNLABELED_DATA_ROOTS]
- [--unlabeled-file-list UNLABELED_FILE_LIST] [--task TASK] [--train-type TRAIN_TYPE] [--work-dir WORK_DIR] [--model MODEL] [--backbone BACKBONE]
+ [--unlabeled-file-list UNLABELED_FILE_LIST] [--task TASK] [--train-type TRAIN_TYPE] [--workspace WORKSPACE] [--model MODEL] [--backbone BACKBONE]
[template]

positional arguments:
@@ -93,7 +93,7 @@ Building workspace folder
--task TASK The currently supported options: ('CLASSIFICATION', 'DETECTION', 'INSTANCE_SEGMENTATION', 'SEGMENTATION', 'ACTION_CLASSIFICATION', 'ACTION_DETECTION', 'ANOMALY_CLASSIFICATION', 'ANOMALY_DETECTION', 'ANOMALY_SEGMENTATION').
--train-type TRAIN_TYPE
The currently supported options: dict_keys(['Incremental', 'Semisupervised', 'Selfsupervised']).
- --work-dir WORK_DIR   Location where the workspace.
+ --workspace WORKSPACE Location where the workspace will be created.
--model MODEL Enter the name of the model you want to use. (Ex. EfficientNet-B0).
--backbone BACKBONE Available Backbone Type can be found using 'otx find --backbone {framework}'.
If there is an already created backbone configuration yaml file, enter the corresponding path.
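
As an illustrative sketch (the task, data paths, and workspace name here are placeholders, not values taken from this page), a detection workspace could be built with:

.. code-block::

    (otx) ...$ otx build --task DETECTION \
                         --train-data-roots <path/to/train/root> \
                         --val-data-roots <path/to/val/root> \
                         --workspace otx-workspace-DETECTION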
@@ -150,7 +150,7 @@ Training
- ``weights.pth`` - a model snapshot
- ``label_schema.json`` - a label schema used in training, created from a dataset

- The results will be saved in ``./model`` folder by default. The output folder can be modified by ``--save-model-to`` option. These files are used by other commands: ``export``, ``eval``, ``demo``, etc.
+ The results will be saved in the ``./outputs/`` folder by default. The output folder can be changed with the ``--output`` option. These files are used by other commands: ``export``, ``eval``, ``demo``, etc.

``otx train`` receives ``template`` as a positional argument. ``template`` can be a path to a specific ``template.yaml`` file, a template name, or a template ID. Also, the paths to the train and validation data roots should be passed to the CLI to start training.
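
For example, a minimal training run might look like the sketch below (the ``SSD`` template name and the paths are illustrative):

.. code-block::

    (otx) ...$ otx train SSD --train-data-roots <path/to/train/root> \
                             --val-data-roots <path/to/val/root> \
                             --output outputs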

@@ -160,7 +160,7 @@ However, if you created a workspace with ``otx build``, the training process can

otx train --help
usage: otx train [-h] [--train-data-roots TRAIN_DATA_ROOTS] [--val-data-roots VAL_DATA_ROOTS] [--unlabeled-data-roots UNLABELED_DATA_ROOTS] [--unlabeled-file-list UNLABELED_FILE_LIST]
- [--load-weights LOAD_WEIGHTS] [--resume-from RESUME_FROM] [--save-model-to SAVE_MODEL_TO] [--work-dir WORK_DIR] [--enable-hpo] [--hpo-time-ratio HPO_TIME_RATIO] [--gpus GPUS]
+ [--load-weights LOAD_WEIGHTS] [--resume-from RESUME_FROM] [-o OUTPUT] [--workspace WORKSPACE] [--enable-hpo] [--hpo-time-ratio HPO_TIME_RATIO] [--gpus GPUS]
[--rdzv-endpoint RDZV_ENDPOINT] [--base-rank BASE_RANK] [--world-size WORLD_SIZE] [--mem-cache-size PARAMS.ALGO_BACKEND.MEM_CACHE_SIZE] [--data DATA]
[template] {params} ...

@@ -186,9 +186,9 @@ However, if you created a workspace with ``otx build``, the training process can
Load model weights from previously saved checkpoint.
--resume-from RESUME_FROM
Resume training from previously saved checkpoint
- --save-model-to SAVE_MODEL_TO
+ -o OUTPUT, --output OUTPUT
Location where trained model will be stored.
- --work-dir WORK_DIR   Location where the intermediate output of the training will be stored.
+ --workspace WORKSPACE Location where the intermediate output of the training will be stored.
--enable-hpo Execute hyper parameters optimization (HPO) before training.
--hpo-time-ratio HPO_TIME_RATIO
Expected ratio of total time to run HPO to time taken for full fine-tuning.
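
As a sketch of how these flags compose (the template name, paths, and ratio value are illustrative), HPO can be enabled alongside a regular training run:

.. code-block::

    (otx) ...$ otx train SSD --train-data-roots <path/to/train/root> \
                             --val-data-roots <path/to/val/root> \
                             --output outputs \
                             --enable-hpo --hpo-time-ratio 4

The distributed-training options listed in the usage above (``--gpus``, ``--rdzv-endpoint``, ``--base-rank``, ``--world-size``) compose with the same command in the same way.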
@@ -261,7 +261,7 @@ With the ``--help`` command, you can list additional information, such as its pa
.. code-block::

(otx) ...$ otx export --help
- usage: otx export [-h] [--load-weights LOAD_WEIGHTS] [--save-model-to SAVE_MODEL_TO] [--work-dir WORK_DIR] [--dump-features] [--half-precision] [template]
+ usage: otx export [-h] [--load-weights LOAD_WEIGHTS] [-o OUTPUT] [--workspace WORKSPACE] [--dump-features] [--half-precision] [template]

positional arguments:
template Enter the path or ID or name of the template file.
@@ -271,9 +271,9 @@ With the ``--help`` command, you can list additional information, such as its pa
-h, --help show this help message and exit
--load-weights LOAD_WEIGHTS
Load model weights from previously saved checkpoint.
- --save-model-to SAVE_MODEL_TO
+ -o OUTPUT, --output OUTPUT
Location where exported model will be stored.
- --work-dir WORK_DIR   Location where the intermediate output of the export will be stored.
+ --workspace WORKSPACE Location where the intermediate output of the export will be stored.
--dump-features Whether to return feature vector and saliency map for explanation purposes.
--half-precision This flag indicates if the model is exported in half precision (FP16).
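
For instance, a half-precision export might look like this sketch (the weights path and output folder are placeholders):

.. code-block::

    (otx) ...$ otx export Custom_Object_Detection_Gen3_SSD --load-weights <path/to/trained/weights.pth> \
                          --output outputs/openvino_fp16 \
                          --half-precision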

@@ -282,15 +282,15 @@ The command below performs exporting to the ``outputs/openvino`` path.

.. code-block::

- (otx) ...$ otx export Custom_Object_Detection_Gen3_SSD --load-weights <path/to/trained/weights.pth> --save-model-to outputs/openvino
+ (otx) ...$ otx export Custom_Object_Detection_Gen3_SSD --load-weights <path/to/trained/weights.pth> --output outputs/openvino

The command results in ``openvino.xml``, ``openvino.bin``, and ``label_schema.json``.
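
Assuming the export above succeeded, listing the output folder should show these three artifacts:

.. code-block::

    (otx) ...$ ls outputs/openvino
    label_schema.json  openvino.bin  openvino.xml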

To use the exported model as an input for ``otx explain``, please dump additional outputs with internal information, using ``--dump-features``:

.. code-block::

- (otx) ...$ otx export Custom_Object_Detection_Gen3_SSD --load-weights <path/to/trained/weights.pth> --save-model-to outputs/openvino/with_features --dump-features
+ (otx) ...$ otx export Custom_Object_Detection_Gen3_SSD --load-weights <path/to/trained/weights.pth> --output outputs/openvino/with_features --dump-features


************
@@ -306,8 +306,8 @@ With the ``--help`` command, you can list additional information:

.. code-block::

- usage: otx optimize [-h] [--train-data-roots TRAIN_DATA_ROOTS] [--val-data-roots VAL_DATA_ROOTS] [--load-weights LOAD_WEIGHTS] [--save-model-to SAVE_MODEL_TO] [--save-performance SAVE_PERFORMANCE]
-                     [--work-dir WORK_DIR]
+ usage: otx optimize [-h] [--train-data-roots TRAIN_DATA_ROOTS] [--val-data-roots VAL_DATA_ROOTS] [--load-weights LOAD_WEIGHTS] [-o OUTPUT]
+                     [--workspace WORKSPACE]
[template] {params} ...

positional arguments:
@@ -324,11 +324,9 @@ With the ``--help`` command, you can list additional information:
Comma-separated paths to validation data folders.
--load-weights LOAD_WEIGHTS
Load weights of trained model
- --save-model-to SAVE_MODEL_TO
-                       Location where trained model will be stored.
- --save-performance SAVE_PERFORMANCE
-                       Path to a json file where computed performance will be stored.
- --work-dir WORK_DIR   Location where the intermediate output of the task will be stored.
+ -o OUTPUT, --output OUTPUT
+                       Location where optimized model will be stored.
+ --workspace WORKSPACE Location where the intermediate output of the task will be stored.

Command example for optimizing a PyTorch model (.pth) with OpenVINO™ NNCF:

@@ -337,7 +335,7 @@ Command example for optimizing a PyTorch model (.pth) with OpenVINO™ NNCF:
(otx) ...$ otx optimize SSD --load-weights <path/to/trained/weights.pth> \
--train-data-roots <path/to/train/root> \
--val-data-roots <path/to/val/root> \
- --save-model-to outputs/nncf
+ --output outputs/nncf


Command example for optimizing OpenVINO™ model (.xml) with OpenVINO™ POT:
@@ -346,7 +344,7 @@ Command example for optimizing OpenVINO™ model (.xml) with OpenVINO™ POT:

(otx) ...$ otx optimize SSD --load-weights <path/to/openvino.xml> \
--val-data-roots <path/to/val/root> \
- --save-model-to outputs/pot
+ --output outputs/pot


Thus, to use POT, pass the path to the exported IR (.xml) model; to use NNCF, pass the path to the PyTorch (.pth) weights.
@@ -363,7 +361,7 @@ With the ``--help`` command, you can list additional information, such as its pa
.. code-block::

(otx) ...$ otx eval --help
- usage: otx eval [-h] [--test-data-roots TEST_DATA_ROOTS] [--load-weights LOAD_WEIGHTS] [--save-performance SAVE_PERFORMANCE] [--work-dir WORK_DIR] [template] {params} ...
+ usage: otx eval [-h] [--test-data-roots TEST_DATA_ROOTS] [--load-weights LOAD_WEIGHTS] [-o OUTPUT] [--workspace WORKSPACE] [template] {params} ...

positional arguments:
template Enter the path or ID or name of the template file.
@@ -377,9 +375,9 @@ With the ``--help`` command, you can list additional information, such as its pa
Comma-separated paths to test data folders.
--load-weights LOAD_WEIGHTS
Load model weights from previously saved checkpoint. It could be a trained/optimized model (POT only) or an exported model.
- --save-performance SAVE_PERFORMANCE
-                       Path to a json file where computed performance will be stored.
- --work-dir WORK_DIR   Location where the intermediate output of the task will be stored.
+ -o OUTPUT, --output OUTPUT
+                       Location where the intermediate output of the task will be stored.
+ --workspace WORKSPACE Path to the workspace where the command will run.


The command below will evaluate the trained model on the provided dataset:
@@ -388,7 +386,7 @@ The command below will evaluate the trained model on the provided dataset:

(otx) ...$ otx eval SSD --test-data-roots <path/to/test/root> \
--load-weights <path/to/model_weights> \
- --save-performance outputs/performance.json
+ --output <path/to/outputs>
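
Assuming the run completes, the computed metrics are written to a ``performance.json`` file under the given output folder, alongside any other run artifacts:

.. code-block::

    (otx) ...$ ls <path/to/outputs>
    performance.json  ...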

.. note::

@@ -447,7 +445,7 @@ By default, the model is exported to the OpenVINO™ IR format without extra fea
.. code-block::

(otx) ...$ otx export SSD --load-weights <path/to/trained/weights.pth> \
- --save-model-to outputs/openvino/with_features \
+ --output outputs/openvino/with_features \
--dump-features
(otx) ...$ otx explain SSD --explain-data-roots <path/to/explain/root> \
--load-weights outputs/openvino/with_features \
@@ -521,7 +519,7 @@ With the ``--help`` command, you can list additional information, such as its pa
.. code-block::

(otx) ...$ otx deploy --help
- usage: otx deploy [-h] [--load-weights LOAD_WEIGHTS] [--save-model-to SAVE_MODEL_TO] [template]
+ usage: otx deploy [-h] [--load-weights LOAD_WEIGHTS] [-o OUTPUT] [template]

positional arguments:
template Enter the path or ID or name of the template file.
Expand All @@ -531,7 +529,7 @@ With the ``--help`` command, you can list additional information, such as its pa
-h, --help show this help message and exit
--load-weights LOAD_WEIGHTS
Load model weights from previously saved checkpoint.
- --save-model-to SAVE_MODEL_TO
+ -o OUTPUT, --output OUTPUT
Location where openvino.zip will be stored.


@@ -540,5 +538,5 @@ Command example:
.. code-block::

(otx) ...$ otx deploy SSD --load-weights <path/to/openvino.xml> \
- --save-model-to outputs/deploy
+ --output outputs/deploy

4 changes: 2 additions & 2 deletions docs/source/guide/tutorials/advanced/self_sl.rst
@@ -64,7 +64,7 @@ for **self-supervised learning** by running the following command:

.. code-block::

- (otx) ...$ otx build --train-data-roots data/flower_photos --model MobileNet-V3-large-1x --train-type Selfsupervised --work-dir otx-workspace-CLASSIFICATION-Selfsupervised
+ (otx) ...$ otx build --train-data-roots data/flower_photos --model MobileNet-V3-large-1x --train-type SELFSUPERVISED --workspace otx-workspace-CLASSIFICATION-Selfsupervised

[*] Workspace Path: otx-workspace-CLASSIFICATION-Selfsupervised
[*] Load Model Template ID: Custom_Image_Classification_MobileNet-V3-large-1x
@@ -82,7 +82,7 @@

1. add ``--train-type Selfsupervised`` in the command to get the training components for self-supervised learning,
2. update the path set as ``train-data-roots``,
- 3. and add ``--work-dir`` to distinguish self-supervised learning workspace from supervised learning workspace.
+ 3. and add ``--workspace`` to distinguish the self-supervised learning workspace from the supervised learning workspace.

After the workspace creation, the workspace structure is as follows:

6 changes: 3 additions & 3 deletions docs/source/guide/tutorials/advanced/semi_sl.rst
@@ -150,7 +150,7 @@ In the train log, you can check that the train type is set to **Semisupervised**
...


After training ends, a trained model is saved in the ``models`` sub-directory in the workspace named ``otx-workspace-CLASSIFICATION`` by default.
After training ends, a trained model is saved in the ``latest_trained_model`` sub-directory in the workspace named ``otx-workspace-CLASSIFICATION`` by default.


***************************
@@ -159,12 +159,12 @@ Validation

In the same manner as `the normal validation <../base/how_to_train/classification.html#validation>`__,
we can evaluate the trained model with the auto-split validation dataset in the workspace and
- save results to ``performance.json`` by the following command:
+ save results to ``outputs/performance.json`` by the following command:


.. code-block::

(otx) ...$ otx eval otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml \
--test-data-roots splitted_dataset/val \
--load-weights models/weights.pth \
- --save-performance performance.json
+ --output outputs
2 changes: 1 addition & 1 deletion docs/source/guide/tutorials/base/deploy.rst
@@ -45,7 +45,7 @@ using the command below:

(otx) ...$ otx deploy otx/algorithms/detection/configs/detection/mobilenetv2_atss/template.yaml \
--load-weights outputs/openvino/openvino.xml \
- --save-model-to outputs/deploy
+ --output outputs/deploy

2023-01-20 09:30:40,938 | INFO : Loading OpenVINO OTXDetectionTask
2023-01-20 09:30:41,736 | INFO : OpenVINO task initialization completed
@@ -186,13 +186,13 @@ Keep in mind that ``label_schema.json`` file contains meta information about the
``otx eval`` will output a frame-wise accuracy for action classification. Note that the top-1 accuracy during training is video-wise accuracy.

2. The command below will run validation on the dataset
- and save performance results in ``performance.json`` file:
+ and save performance results in the ``outputs/performance.json`` file:

.. code-block::

(otx) ...$ otx eval --test-data-roots ../data/hmdb51/CVAT/valid \
--load-weights models/weights.pth \
- --save-performance performance.json
+ --output outputs

You will get a similar validation output:

@@ -215,12 +215,12 @@ Export
It allows running the model on Intel hardware much more efficiently, especially on the CPU. Also, the resulting IR model is required to run POT optimization. The IR model consists of two files: ``openvino.xml`` for the network topology and ``openvino.bin`` for the weights.

2. Run the command line below to export the trained model
- and save the exported model to the ``openvino_models`` folder.
+ and save the exported model to the ``openvino`` folder.

.. code-block::

(otx) ...$ otx export --load-weights models/weights.pth \
- --save-model-to openvino_models
+ --output openvino

...
2023-02-21 22:54:32,518 - mmaction - INFO - Model architecture: X3D
@@ -241,8 +241,8 @@ using ``otx eval`` and passing the IR model path to the ``--load-weights`` param
.. code-block::

(otx) ...$ otx eval --test-data-roots ../data/hmdb51/CVAT/valid \
- --load-weights openvino_models/openvino.xml \
- --save-performance openvino_models/performance.json
+ --load-weights openvino/openvino.xml \
+ --output outputs/openvino

...

@@ -262,8 +262,8 @@ OpenVINO™ model (.xml) with OpenVINO™ POT.

.. code-block::

- (otx) ...$ otx optimize --load-weights openvino_models/openvino.xml \
- --save-model-to pot_model
+ (otx) ...$ otx optimize --load-weights openvino/openvino.xml \
+ --output pot_model

...

@@ -128,13 +128,13 @@ Please note, ``label_schema.json`` file contains meta information about the data
``otx eval`` will output a mAP score for spatio-temporal action detection.

2. The command below will run validation on our dataset
- and save performance results in ``performance.json`` file:
+ and save performance results in the ``outputs/performance.json`` file:

.. code-block::

(otx) ...$ otx eval --test-data-roots ../data/JHMDB_5%/test \
--load-weights models/weights.pth \
- --save-performance performance.json
+ --output outputs

We will get validation output similar to this after some validation time (about 2 minutes):

@@ -163,7 +163,7 @@ Export
It allows running the model on Intel hardware much more efficiently, especially on the CPU. Also, the resulting IR model is required to run POT optimization. The IR model consists of two files: ``openvino.xml`` for the network topology and ``openvino.bin`` for the weights.

2. Run the command line below to export the trained model
- and save the exported model to the ``openvino_models`` folder.
+ and save the exported model to the ``openvino`` folder.

.. code-block::

@@ -213,7 +213,7 @@ OpenVINO™ model (.xml) with OpenVINO™ POT.

.. code-block::

(otx) ...$ otx optimize --load-weights openvino_models/openvino.xml \
(otx) ...$ otx optimize --load-weights openvino/openvino.xml \
--save-model-to pot_model

...