
[Enhance] Add in-memory caching in dataloader #1694

Merged: 22 commits into develop on Mar 7, 2023

Conversation

vinnamkim

Signed-off-by: Kim, Vinnam <[email protected]>
 - Did some refactoring
 - Add docstrings

Signed-off-by: Kim, Vinnam <[email protected]>
Signed-off-by: Kim, Vinnam <[email protected]>
Signed-off-by: Kim, Vinnam <[email protected]>
@vinnamkim vinnamkim requested a review from a team as a code owner February 14, 2023 08:34
@github-actions github-actions bot added ALGO Any changes in OTX Algo Tasks implementation API Any changes in OTX API CLI Any changes in OTE CLI DOC Improvements or additions to documentation TEST Any changes in tests labels Feb 14, 2023
@goodsong81 left a comment

Strictly speaking, the features had been frozen for OTX 1.0.
But as I see it, this could count as just an enhancement, so I think we could include it before the code freeze. Could you change the title? [Feature] -> [Enhance]
Sorry for nitpicking.
Finally, could you let us know the impact of this change? How much faster is it?

@codecov-commenter commented Feb 14, 2023

Codecov Report

Base: 56.17% // Head: 59.23% // Increases project coverage by +3.05% 🎉

Coverage data is based on head (30c9f0d) compared to base (c4de542).
Patch coverage: 91.79% of modified lines in pull request are covered.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #1694      +/-   ##
===========================================
+ Coverage    56.17%   59.23%   +3.05%     
===========================================
  Files          494      487       -7     
  Lines        34699    35151     +452     
===========================================
+ Hits         19493    20821    +1328     
+ Misses       15206    14330     -876     
Impacted Files Coverage Δ
otx/cli/tools/train.py 0.00% <0.00%> (ø)
otx/mpa/modules/hooks/eval_hook.py 22.22% <0.00%> (-0.25%) ⬇️
otx/core/data/caching/mem_cache_hook.py 61.53% <61.53%> (ø)
otx/api/entities/dataset_item.py 98.88% <80.00%> (-0.56%) ⬇️
otx/cli/utils/parser.py 92.56% <90.90%> (-0.37%) ⬇️
otx/core/data/caching/mem_cache_handler.py 93.25% <93.25%> (ø)
...ms/classification/adapters/mmcls/data/pipelines.py 100.00% <100.00%> (ø)
otx/algorithms/common/configs/training_base.py 100.00% <100.00%> (ø)
otx/algorithms/common/tasks/training_base.py 54.16% <100.00%> (+17.61%) ⬆️
...orithms/detection/adapters/mmdet/data/pipelines.py 89.79% <100.00%> (-3.26%) ⬇️
... and 66 more

☔ View full report at Codecov.

@vinnamkim vinnamkim changed the title [Feature] Add in-memory caching in dataloader [Enhance] Add in-memory caching in dataloader Feb 14, 2023
@vinnamkim commented Feb 14, 2023

> Strictly speaking, the features had been frozen for OTX1.0. But as I see, this could be just an enhancement, so I think we could include this before code freeze. Could you change the tile? [Feature] -> [Enhance] Sorry for nitpicking. Finally, could you let us know the impact of this change? How much faster?

I cannot say much in general about the performance gain. It depends heavily on the dataset.

The main factors are how many data samples can be cached under the memory constraint and the cost saved by avoiding an image decode for each sample. There is usually a trade-off between them: if the decode cost is high, the decoded images are normally huge, so few of them (or none) will fit in the cache. That said, decode cost is not strictly proportional to image size. Overall, I expect this feature to be useful mainly for smaller datasets.
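The trade-off above can be sketched as a size-bounded cache that simply stops accepting new items once the pool is full (a minimal illustration; `SimpleMemCache` and its method names are hypothetical, not the PR's actual `MemCacheHandler` implementation, which additionally supports multiprocessing):

```python
from typing import Optional


class SimpleMemCache:
    """Size-bounded store for decoded images; stops caching once the pool is full."""

    def __init__(self, mem_size_bytes: int) -> None:
        self._limit = mem_size_bytes
        self._used = 0
        self._items: dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        """Return the cached decoded image, or None on a cache miss."""
        return self._items.get(key)

    def put(self, key: str, data: bytes) -> bool:
        """Try to cache a decoded image; return False if it does not fit."""
        if key in self._items:
            return True
        if self._used + len(data) > self._limit:
            return False  # pool exhausted: caller falls back to decoding every time
        self._items[key] = data
        self._used += len(data)
        return True
```

With large decoded images, `put` starts returning False early, which is exactly why the gain shrinks for datasets with expensive-to-decode (and therefore large) images.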

This is my experiment for a classification task with the Chest Xray dataset.
The Chest Xray dataset has 5238 images (5215 for training and 23 for validation). I assigned --mem-cache-size 8GB, which was enough to cache 2074 images (2074 / 5238 = 39.6%). Then I compared 5 epochs between No cache (--mem-cache-size=0) and 8GB cache (--mem-cache-size=8GB).

8GB cache (--mem-cache-size=8GB)

2023-02-14 18:35:46,567 - mmcls - INFO - workflow: [('train', 1)], max: 90 epochs
2023-02-14 18:35:46,568 | INFO : cancel hook is initialized
2023-02-14 18:35:46,568 | INFO : logger in the runner is replaced to the MPA logger
[                                                  ] 0/20, elapsed: 0s, ETA:WARNING:nncf:You are using DataParallel, which may cause significant performance issues with dynamic graph building. Consider using distributed training (DistributedDataParallel) instead.
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 9.3 task/s, elapsed: 2s, ETA:     0s
2023-02-14 18:36:24,825 | WARNING : training progress 1%
2023-02-14 18:36:28,458 | INFO : Epoch [1][82/82]       lr: 4.900e-03, eta: 1:11:52, time: 0.485, data_time: 0.338, memory: 3614, current_iters: 81, loss: 0.1812, sharpness: 0.1133, max_loss: 0.2937
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 117.5 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:36:28,630 | INFO : Saving best checkpoint at 1 epochs
2023-02-14 18:36:28,784 | INFO : MemCacheHandlerForMP uses 8589885783 / 8589934592 (100.0%) memory pool and store 2074 items.
2023-02-14 18:36:28,784 | INFO : Epoch(val) [1][82]     accuracy_top-1: 0.9500, NORMAL accuracy: 0.8750, PNEUMONIO accuracy: 1.0000, mean accuracy: 0.9375, accuracy: 0.9500, current_iters: 82
2023-02-14 18:36:56,375 | INFO : Epoch [2][82/82]       lr: 4.899e-03, eta: 1:00:12, time: 0.336, data_time: 0.195, memory: 3614, current_iters: 163, loss: 0.0719, sharpness: 0.0594, max_loss: 0.1311
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 118.5 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:36:56,546 | INFO : Saving best checkpoint at 2 epochs
2023-02-14 18:36:56,707 | INFO : MemCacheHandlerForMP uses 8589885783 / 8589934592 (100.0%) memory pool and store 2074 items.
2023-02-14 18:36:56,707 | INFO : Epoch(val) [2][82]     accuracy_top-1: 0.9500, NORMAL accuracy: 0.8750, PNEUMONIO accuracy: 1.0000, mean accuracy: 0.9375, accuracy: 0.9500, current_iters: 164
2023-02-14 18:37:24,405 | INFO : Epoch [3][82/82]       lr: 4.894e-03, eta: 0:56:00, time: 0.338, data_time: 0.195, memory: 3614, current_iters: 245, loss: 0.0659, sharpness: 0.0531, max_loss: 0.1191
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 117.8 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:37:24,654 | INFO : MemCacheHandlerForMP uses 8589885783 / 8589934592 (100.0%) memory pool and store 2074 items.
2023-02-14 18:37:24,655 | INFO : Epoch(val) [3][82]     accuracy_top-1: 0.9000, NORMAL accuracy: 0.7500, PNEUMONIO accuracy: 1.0000, mean accuracy: 0.8750, accuracy: 0.9000, current_iters: 246
2023-02-14 18:37:52,373 | INFO : Epoch [4][82/82]       lr: 4.887e-03, eta: 0:53:38, time: 0.338, data_time: 0.196, memory: 3614, current_iters: 327, loss: 0.0503, sharpness: 0.0478, max_loss: 0.0979
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 117.8 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:37:52,545 | INFO : Saving best checkpoint at 4 epochs
2023-02-14 18:37:52,706 | INFO : MemCacheHandlerForMP uses 8589885783 / 8589934592 (100.0%) memory pool and store 2074 items.
2023-02-14 18:37:52,706 | INFO : Epoch(val) [4][82]     accuracy_top-1: 1.0000, NORMAL accuracy: 1.0000, PNEUMONIO accuracy: 1.0000, mean accuracy: 1.0000, accuracy: 1.0000, current_iters: 328
2023-02-14 18:38:20,217 | INFO : Epoch [5][82/82]       lr: 4.876e-03, eta: 0:51:54, time: 0.335, data_time: 0.194, memory: 3614, current_iters: 409, loss: 0.0458, sharpness: 0.0447, max_loss: 0.0909
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 106.1 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:38:20,408 | INFO : Saving best checkpoint at 5 epochs
2023-02-14 18:38:20,569 | INFO : MemCacheHandlerForMP uses 8589885783 / 8589934592 (100.0%) memory pool and store 2074 items.
2023-02-14 18:38:20,569 | INFO : Epoch(val) [5][82]     accuracy_top-1: 1.0000, NORMAL accuracy: 1.0000, PNEUMONIO accuracy: 1.0000, mean accuracy: 1.0000, accuracy: 1.0000, current_iters: 410

No cache (--mem-cache-size=0)

2023-02-14 18:27:05,239 - mmcls - INFO - workflow: [('train', 1)], max: 90 epochs
2023-02-14 18:27:05,240 | INFO : cancel hook is initialized
2023-02-14 18:27:05,240 | INFO : logger in the runner is replaced to the MPA logger
[                                                  ] 0/20, elapsed: 0s, ETA:WARNING:nncf:You are using DataParallel, which may cause significant performance issues with dynamic graph building. Consider using distributed training (DistributedDataParallel) instead.
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 10.1 task/s, elapsed: 2s, ETA:     0s
2023-02-14 18:27:40,626 | WARNING : training progress 1%
2023-02-14 18:27:43,980 | INFO : Epoch [1][82/82]       lr: 4.900e-03, eta: 1:06:29, time: 0.448, data_time: 0.303, memory: 3614, current_iters: 81, loss: 0.1798, sharpness: 0.1126, max_loss: 0.2915
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 117.7 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:27:44,152 | INFO : Saving best checkpoint at 1 epochs
2023-02-14 18:27:44,307 | INFO : MemCacheHandlerBase uses 0 / 0 (0.0%) memory pool and store 0 items.
2023-02-14 18:27:44,307 | INFO : Epoch(val) [1][82]     accuracy_top-1: 0.9500, NORMAL accuracy: 0.8750, PNEUMONIO accuracy: 1.0000, mean accuracy: 0.9375, accuracy: 0.9500, current_iters: 82
2023-02-14 18:28:20,808 | INFO : Epoch [2][82/82]       lr: 4.899e-03, eta: 1:05:30, time: 0.445, data_time: 0.302, memory: 3614, current_iters: 163, loss: 0.0717, sharpness: 0.0586, max_loss: 0.1300
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 107.7 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:28:20,995 | INFO : Saving best checkpoint at 2 epochs
2023-02-14 18:28:21,158 | INFO : MemCacheHandlerBase uses 0 / 0 (0.0%) memory pool and store 0 items.
2023-02-14 18:28:21,159 | INFO : Epoch(val) [2][82]     accuracy_top-1: 1.0000, NORMAL accuracy: 1.0000, PNEUMONIO accuracy: 1.0000, mean accuracy: 1.0000, accuracy: 1.0000, current_iters: 164
2023-02-14 18:28:57,415 | INFO : Epoch [3][82/82]       lr: 4.894e-03, eta: 1:04:32, time: 0.442, data_time: 0.300, memory: 3614, current_iters: 245, loss: 0.0660, sharpness: 0.0533, max_loss: 0.1193
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 119.4 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:28:57,663 | INFO : MemCacheHandlerBase uses 0 / 0 (0.0%) memory pool and store 0 items.
2023-02-14 18:28:57,663 | INFO : Epoch(val) [3][82]     accuracy_top-1: 0.9500, NORMAL accuracy: 0.8750, PNEUMONIO accuracy: 1.0000, mean accuracy: 0.9375, accuracy: 0.9500, current_iters: 246
2023-02-14 18:29:33,845 | INFO : Epoch [4][82/82]       lr: 4.887e-03, eta: 1:03:39, time: 0.441, data_time: 0.300, memory: 3614, current_iters: 327, loss: 0.0509, sharpness: 0.0471, max_loss: 0.0978
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 120.0 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:29:34,013 | INFO : Saving best checkpoint at 4 epochs
2023-02-14 18:29:34,175 | INFO : MemCacheHandlerBase uses 0 / 0 (0.0%) memory pool and store 0 items.
2023-02-14 18:29:34,175 | INFO : Epoch(val) [4][82]     accuracy_top-1: 1.0000, NORMAL accuracy: 1.0000, PNEUMONIO accuracy: 1.0000, mean accuracy: 1.0000, accuracy: 1.0000, current_iters: 328
2023-02-14 18:30:11,069 | INFO : Epoch [5][82/82]       lr: 4.876e-03, eta: 1:03:05, time: 0.450, data_time: 0.307, memory: 3614, current_iters: 409, loss: 0.0477, sharpness: 0.0445, max_loss: 0.0927
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 20/20, 120.1 task/s, elapsed: 0s, ETA:     0s
2023-02-14 18:30:11,238 | INFO : Saving best checkpoint at 5 epochs
2023-02-14 18:30:11,398 | INFO : MemCacheHandlerBase uses 0 / 0 (0.0%) memory pool and store 0 items.
2023-02-14 18:30:11,398 | INFO : Epoch(val) [5][82]     accuracy_top-1: 1.0000, NORMAL accuracy: 1.0000, PNEUMONIO accuracy: 1.0000, mean accuracy: 1.0000, accuracy: 1.0000, current_iters: 410

From these logs, you can see that data_time improves from ~0.3 s (No cache) to ~0.2 s (8GB cache), i.e. roughly a 1.5x speedup in data loading (0.3 / 0.2 = 150%). Note that the first epoch spends time filling the cache, so there is little difference between the two runs there. I also compared the elapsed time from training start to the end of epoch 5: it shows a 120.8% gain, and with further training the gain will increase as the influence of the first epoch decreases.

|                | 8GB cache | No cache |
|----------------|-----------|----------|
| Training start | 18:35:46  | 18:27:05 |
| 5 epoch end    | 18:38:20  | 18:30:11 |
| Elapsed time   | 0:02:34   | 0:03:06  |
| Gain           | 120.8%    |          |
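The gain figure can be reproduced from the logged timestamps (a quick sanity check on the numbers above, not part of the PR):

```python
from datetime import datetime

FMT = "%H:%M:%S"


def elapsed_seconds(start: str, end: str) -> float:
    """Seconds between two same-day wall-clock timestamps from the logs."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds()


# Timestamps taken from the two training logs above.
no_cache = elapsed_seconds("18:27:05", "18:30:11")  # 186 s (0:03:06)
cached = elapsed_seconds("18:35:46", "18:38:20")    # 154 s (0:02:34)
gain = 100.0 * no_cache / cached                    # 120.8%
```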

@goodsong81 left a comment

Great implementation, indeed!
Thank you for this nice enhancement (feature :)) with a thorough documentation update and unit tests. LGTM!
BTW, it's quite a huge change, so we need more reviews.
Finally, could you run the e2e test locally and post the captured result? There might be some failures, though. (`tox -e pre-merge -- tests/e2e` would work, I suppose)

@goodsong81 commented Feb 14, 2023

@vinnamkim BTW, this is a mistake, right? (the cached version takes more time :p)

@JihwanEom could you check for a possible conflict w/ video frame caching? (I suppose not, but it needs a double check)
@eunwoosh could you check compatibility w/ the multi-GPU use-case and HPO?
@jaegukhyun there are changes in BaseTask that might have an impact on action tasks, but without changes in the data pipeline or config params. Could you check the possibility of adoption?

@vinnamkim

> @vinnamkim BTW, it is mistake, right? (cached version takes more time :p)

Sorry, I made a mistake. The two runs had switched positions in both tables. I fixed it.

@jaegukhyun

> @vinnamkim BTW, it is mistake, right? (cached version takes more time :p)
>
> @JihwanEom could you check possible conflict w/ video frame caching? (I suppose not, but need double check) @eunwoosh could you check compatibility w/ multi-GPU use-case or HPO? @jaegukhyun there are changes in BaseTask that might have impact to action tasks, but without changes in data pipeline or config param. Could you check possibility of adption?

The video data dict for video classification contains "dataset_items", not "dataset_item". Therefore I think we need to add methods for multi-image input and output.
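For illustration, such multi-image support could be a thin wrapper over the single-image interface, keyed per frame (all names here are hypothetical; the PR's actual handler API may differ, and `TinyStore` is only a stand-in for the real cache):

```python
from typing import Optional


class TinyStore:
    """Minimal unbounded key-value store standing in for the real cache handler."""

    def __init__(self) -> None:
        self._items: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> bool:
        self._items[key] = data
        return True

    def get(self, key: str) -> Optional[bytes]:
        return self._items.get(key)


class MultiImgStore(TinyStore):
    """Hypothetical extension caching the list of decoded frames of a video sample."""

    def put_imgs(self, key: str, imgs: list) -> bool:
        # One entry per frame; True only if every frame was cached.
        return all(self.put(f"{key}-{i}", img) for i, img in enumerate(imgs))

    def get_imgs(self, key: str, n: int) -> Optional[list]:
        # Treat a partially cached sample as a miss.
        imgs = [self.get(f"{key}-{i}") for i in range(n)]
        return None if any(img is None for img in imgs) else imgs
```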

@jaegukhyun previously approved these changes Feb 15, 2023
@jaegukhyun left a comment

LGTM, anyway.

@vinnamkim commented Feb 15, 2023

I tried to run `tox -e pre-merge -- tests/e2e`, but it did not work. Please see the following log.

2023-02-15 10:36:39,641 | INFO : train()
2023-02-15 10:36:39,641 | INFO : init data cfg.
2023-02-15 10:36:39,641 | INFO : initializing....
2023-02-15 10:36:39,641 | INFO : called _init_recipe()
2023-02-15 10:36:39,641 | INFO : train type = INCREMENTAL
    raise FileNotFoundError(msg_tmpl.format(filename))
FileNotFoundError: file "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/recipes/stages/classification/incremental.yaml" does not exist
Process SpawnProcess-20214:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 313, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/hpo/hpo_runner.py", line 155, in _run_train
    train_func(hp_config, report_func)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/cli/utils/hpo.py", line 791, in run_trial
    trainer.run()
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/cli/utils/hpo.py", line 691, in run
    task.train(dataset=dataset, output_model=output_model, train_parameters=score_report_callback)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/api/utils/argument_checks.py", line 234, in validate
    return function(**input_parameters_values_map)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/algorithms/classification/tasks/train.py", line 119, in train
    results = self._run_task(stage_module, mode="train", dataset=dataset, parameters=train_parameters)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/algorithms/common/tasks/training_base.py", line 132, in _run_task
    self._initialize(kwargs)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/algorithms/common/tasks/training_base.py", line 234, in _initialize
    self._init_recipe()
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/algorithms/classification/tasks/inference.py", line 396, in _init_recipe
    self._recipe_cfg = MPAConfig.fromfile(recipe)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/mpa/utils/config_utils.py", line 119, in fromfile
    cfg_dict, cfg_text = MPAConfig._file2dict(filename, use_predefined_variables)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/mpa/utils/config_utils.py", line 28, in _file2dict
    check_file_exist(filename)
  File "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/mmcv/utils/path.py", line 23, in check_file_exist
    raise FileNotFoundError(msg_tmpl.format(filename))
FileNotFoundError: file "/home/vinnamki/otx/training_extensions/.tox/pre-merge/lib/python3.8/site-packages/otx/recipes/stages/classification/incremental.yaml" does not exist
Loop <_UnixSelectorEventLoop running=False closed=True debug=False> that handles pid 24986 is closed

@eunwoosh previously approved these changes Feb 15, 2023
@eunwoosh left a comment

Thanks for the nice work!
I checked that it works well with multi-GPU and HPO.

@goodsong81 left a comment

[DO NOT MERGE YET]
@vinnamkim will fix issues in evaluation, nncf, etc.

@wonjuleee left a comment

Thank you, Vinnam, but please hold this until we have more experimental results.

 - Change parsing destination from args.mem_cache_size to
args.params.algo_backend.mem_cache_size

Signed-off-by: Kim, Vinnam <[email protected]>
@goodsong81 goodsong81 added this to the 1.1.0 milestone Feb 17, 2023
@vinnamkim

@goodsong81
Because many things keep changing in preparation for OTX 1.0.0, many conflicts and e2e testing issues arise in real time, which makes it hard to merge this PR. I'll revisit it after OTX 1.0.0 is done.
@wonjuleee
This is a OneNote page showing the performance gains from this PR for the classification task. When I revisit this PR, I'll update it for the detection and segmentation tasks too.

@wonjuleee

> @goodsong81 Because many things are keeping changing for preparing OTX 1.0.0, there occur many conflicts and e2e testing issues in real-time and it makes hard to merge this PR. I'll revisit it after OTX 1.0.0 is done. @wonjuleee This is a one-note page to show performance gains by this PR, Classification task. Until I revisit this PR, I'll update this for detection task and segmentation task too.

That sounds feasible. Let's postpone this until the 1.0.0 release is done. As for the performance, I don't have any doubt about the gain from memory caching. Thanks.

@wonjuleee left a comment

Could you please resolve the conflicts with the latest develop?

@vinnamkim

> Could you please to resolve some conflicts with the latest develop?

Hi @goodsong81,
I wonder if changes to e2e testing, CLI arguments, or configuration file parsing are planned in the develop branch in the near future.

@goodsong81

>> Could you please to resolve some conflicts with the latest develop?
>
> Hi @goodsong81, I wonder if changes about e2e testing, CLI argument, or configuration file parsing in the develop branch are planned in the near future.

We are planning MPA refactoring (dismantling :p) and input/output handling refinement for the next update. There might be some changes, but no abrupt ones, I suppose.
Could you let us know any concerns or suggestions we could take into consideration for planning?
@harimkang Thoughts?

@harimkang

We can't be sure, but we don't expect any major changes to the CLI arguments for a while, and the same goes for e2e testing. On the configuration side, the code flow may change as MPA is decommissioned, but the config values themselves are unlikely to change.

@vinnamkim commented Feb 28, 2023

Thanks for letting me know, @goodsong81 @harimkang. I'll revisit this PR early in the next sprint, @wonjuleee.

@github-actions github-actions bot removed the DOC Improvements or additions to documentation label Mar 6, 2023
Signed-off-by: Kim, Vinnam <[email protected]>
@wonjuleee left a comment

Looks good to me, thanks.

@goodsong81 left a comment

Nice feature!

@wonjuleee wonjuleee merged commit d810fbc into develop Mar 7, 2023
@wonjuleee wonjuleee deleted the vinnamki/add-mem-cache-feature branch March 7, 2023 02:12
@wonjuleee wonjuleee restored the vinnamki/add-mem-cache-feature branch March 7, 2023 02:15
@vinnamkim vinnamkim deleted the vinnamki/add-mem-cache-feature branch April 5, 2023 07:17