
Implementation of TRT wrapping via inference.json #620

Merged (38 commits) on Sep 10, 2024
Commits:
- 85efcc3 POC implementation of TRT wrapping via inference.json (borisfom, Aug 16, 2024)
- 98a615c Bumped bundle version (borisfom, Aug 16, 2024)
- d3275e9 Adding pathology_nuclei_classification and swin_unetr_btcv_segmentation (borisfom, Aug 20, 2024)
- 18c580f [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Aug 20, 2024)
- 936103d Adjusting arguments (borisfom, Aug 22, 2024)
- 58d76de Merge branch 'trt_wrapper' of github.com:borisfom/model-zoo into trt_… (borisfom, Aug 22, 2024)
- 9e4bac4 Merge remote-tracking branch 'origin/dev' into trt_wrapper (borisfom, Aug 22, 2024)
- 8461033 Added vista3 inference_trt.json (borisfom, Aug 22, 2024)
- 8588674 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Aug 22, 2024)
- f7da0ec Renamed trt_wrap -> trt_compile (borisfom, Aug 23, 2024)
- c8d64bf Merge branch 'trt_wrapper' of github.com:borisfom/model-zoo into trt_… (borisfom, Aug 23, 2024)
- aab37c5 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Aug 23, 2024)
- 7b64083 Added TRT for pathology_nuclei_segmentation_classification (borisfom, Aug 26, 2024)
- 43da1f5 Merge branch 'trt_wrapper' of github.com:borisfom/model-zoo into trt_… (borisfom, Aug 26, 2024)
- 3d8a738 Merge remote-tracking branch 'origin/dev' into trt_wrapper (borisfom, Aug 26, 2024)
- 4a9f2ba [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Aug 26, 2024)
- 7496dfb Merge remote-tracking branch 'origin/dev' into trt_wrapper (borisfom, Aug 26, 2024)
- 337fcf3 Added Vista2D TRT config (borisfom, Aug 26, 2024)
- 16f720f Cleaned up using new config syntax, batch size added (borisfom, Aug 26, 2024)
- 6cc423d Merge branch 'trt_wrapper' of github.com:borisfom/model-zoo into trt_… (borisfom, Aug 26, 2024)
- a205bf4 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Aug 26, 2024)
- a38f85d Stash (borisfom, Aug 28, 2024)
- a099cd5 Merge remote-tracking branch 'origin/dev' into trt_wrapper (borisfom, Sep 6, 2024)
- f6507c8 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Sep 6, 2024)
- c6744e9 Fixed models/brats_mri_axial_slices_generative_diffusion (borisfom, Sep 6, 2024)
- 1b634d2 Merge branch 'trt_wrapper' of github.com:borisfom/model-zoo into trt_… (borisfom, Sep 6, 2024)
- 956d931 Merge remote-tracking branch 'origin/dev' into trt_wrapper (borisfom, Sep 6, 2024)
- affbf12 Fixing TRT configs for brats_mri_xx (borisfom, Sep 6, 2024)
- ee06c91 [pre-commit.ci] auto fixes from pre-commit.com hooks (pre-commit-ci[bot], Sep 6, 2024)
- f0a9329 update metadata (yiheng-wang-nv, Sep 9, 2024)
- bde5d52 update mgpu test script (yiheng-wang-nv, Sep 9, 2024)
- c7c4b50 add benchmark for non generative models (binliunls, Sep 9, 2024)
- 626ba3c add benchmark results for generative bundles (binliunls, Sep 9, 2024)
- 7f47fb4 add version for trt benchmark (binliunls, Sep 10, 2024)
- 5d58414 update vista2d version (binliunls, Sep 10, 2024)
- 19ba83c update format (binliunls, Sep 10, 2024)
- 3e80c93 use nvidia host large files (yiheng-wang-nv, Sep 10, 2024)
- 35e697c update vista2d (yiheng-wang-nv, Sep 10, 2024)
2 changes: 1 addition & 1 deletion ci/run_premerge_gpu.sh
@@ -117,7 +117,7 @@ verify_bundle() {
fi
test_cmd="python $(pwd)/ci/unit_tests/runner.py --b \"$bundle\""
if [ "$dist_flag" = "True" ]; then
test_cmd="$test_cmd --dist True"
test_cmd="torchrun $(pwd)/ci/unit_tests/runner.py --b \"$bundle\" --dist True"
fi
eval $test_cmd
# if not maisi_ct_generative, remove venv
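The change above swaps the distributed path from merely appending a flag to re-launching the runner under `torchrun`. A minimal sketch of what `test_cmd` ends up as (the bundle name below is hypothetical):

```shell
# Illustrates the resulting test command after this change (hypothetical
# bundle name): when dist_flag is "True", the runner is re-invoked under
# torchrun rather than just receiving an extra flag.
bundle="spleen_ct_segmentation"
dist_flag="True"
test_cmd="python $(pwd)/ci/unit_tests/runner.py --b \"$bundle\""
if [ "$dist_flag" = "True" ]; then
    test_cmd="torchrun $(pwd)/ci/unit_tests/runner.py --b \"$bundle\" --dist True"
fi
echo "$test_cmd"
```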
@@ -0,0 +1,7 @@
{
"+imports": [
"$from monai.networks import trt_compile"
],
"diffusion": "$trt_compile(@network_def.to(@device), @load_diffusion_path)",
"autoencoder": "$trt_compile(@autoencoder_def.to(@device), @load_autoencoder_path)"
}
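The `+imports` key in this override file appends to the base config's `imports` list when the two config files are merged, while plain keys such as `diffusion` replace the base definitions. A simplified sketch of that merge rule (an illustration only, not MONAI's actual `ConfigParser` implementation):

```python
# Simplified sketch of bundle-config merging with the "+" prefix:
# keys starting with "+" extend the base list (or dict) instead of
# replacing it; plain keys override the base entry outright.
def merge_configs(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if key.startswith("+"):
            target = key[1:]
            if isinstance(merged.get(target), list):
                merged[target] = merged[target] + list(value)   # append
            elif isinstance(merged.get(target), dict):
                merged[target] = {**merged[target], **value}    # union
            else:
                merged[target] = value
        else:
            merged[key] = value  # plain keys replace the base definition
    return merged

base = {"imports": ["$import glob"], "diffusion": "@network_def"}
trt_override = {
    "+imports": ["$from monai.networks import trt_compile"],
    "diffusion": "$trt_compile(@network_def.to(@device), @load_diffusion_path)",
}
merged = merge_configs(base, trt_override)
print(merged["imports"])
```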
Original file line number Diff line number Diff line change
@@ -1,7 +1,8 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json",
"version": "1.1.0",
"version": "1.1.1",
"changelog": {
"1.1.1": "enable tensorrt",
"1.1.0": "update to use monai 1.4, model ckpt not changed, rm GenerativeAI repo",
"1.0.9": "update to use monai 1.3.1",
"1.0.8": "define arg for output file and put infer logic into a function",
Expand All @@ -15,7 +16,7 @@
"1.0.0": "Initial release"
},
"monai_version": "1.4.0",
"pytorch_version": "2.2.2",
"pytorch_version": "2.4.0",
"numpy_version": "1.24.4",
"required_packages_version": {
"nibabel": "5.2.1",
Expand Down
31 changes: 31 additions & 0 deletions models/brats_mri_axial_slices_generative_diffusion/docs/README.md
Original file line number Diff line number Diff line change
Expand Up @@ -85,6 +85,31 @@ If you face memory issues with data loading, you can lower the caching rate `cac

![A graph showing the latent diffusion training curve](https://developer.download.nvidia.com/assets/Clara/Images/monai_brain_image_gen_ldm2d_train_diffusion_loss_v3.png)

#### TensorRT speedup
This bundle supports acceleration with TensorRT. The table below displays the speedup ratios observed on an A100 80G GPU. Note that 32-bit precision models are benchmarked in the tf32 weight format.

| method | torch_tf32(ms) | torch_amp(ms) | trt_tf32(ms) | trt_fp16(ms) | speedup amp | speedup tf32 | speedup fp16 | amp vs fp16|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| model computation (diffusion) | 32.11 | 32.45 | 2.58 | 2.11 | 0.99 | 12.45 | 15.22 | 15.38 |
| model computation (autoencoder) | 17.74 | 18.15 | 5.47 | 3.66 | 0.98 | 3.24 | 4.85 | 4.96 |
| end2end | 1389 | 1973 | 332 | 314 | 0.70 | 4.18 | 4.42 | 6.28 |

Where:
- `model computation` measures the model's inference latency on a random input, excluding preprocessing and postprocessing.
- `end2end` measures running the bundle end-to-end with the TensorRT-based model.
- `torch_tf32` and `torch_amp` are the PyTorch models without and with `amp` mode, respectively.
- `trt_tf32` and `trt_fp16` are the TensorRT-based models converted in the corresponding precision.
- `speedup amp`, `speedup tf32` and `speedup fp16` are the speedup ratios of the corresponding models versus the PyTorch float32 model.
- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16-based model.

This result is benchmarked under:
- TensorRT: 10.3.0+cuda12.6
- Torch-TensorRT Version: 2.4.0
- CPU Architecture: x86-64
- OS: Ubuntu 20.04
- Python version: 3.10.12
- CUDA version: 12.6
- GPU models and configuration: A100 80G
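The speedup columns above are plain latency ratios against the corresponding PyTorch latency. For example, for the diffusion row:

```python
# Verify the diffusion-row speedup ratios from the table above: each
# speedup is a latency divided by the TensorRT (or amp) latency,
# rounded to two decimals.
torch_tf32, torch_amp = 32.11, 32.45   # ms
trt_tf32, trt_fp16 = 2.58, 2.11        # ms

speedup_amp = round(torch_tf32 / torch_amp, 2)    # torch tf32 vs torch amp
speedup_tf32 = round(torch_tf32 / trt_tf32, 2)    # torch tf32 vs TRT tf32
speedup_fp16 = round(torch_tf32 / trt_fp16, 2)    # torch tf32 vs TRT fp16
amp_vs_fp16 = round(torch_amp / trt_fp16, 2)      # torch amp vs TRT fp16

print(speedup_amp, speedup_tf32, speedup_fp16, amp_vs_fp16)
# → 0.99 12.45 15.22 15.38, matching the table row
```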

## MONAI Bundle Commands
In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.
@@ -143,6 +168,12 @@ The following code generates a synthetic image from a random sampled noise.
python -m monai.bundle run --config_file configs/inference.json
```

#### Execute inference with the TensorRT model:

```
python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
```

# References
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf

@@ -1,9 +1,9 @@
large_files:
- path: "models/model_autoencoder.pt"
url: "https://drive.google.com/uc?id=1x4JEfWwCnR0wvS9v5TBWX1n9xl51xZj9"
url: "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_autoencoder_brats_mri_axial_slices_generative_diffusion_v1.pt"
hash_val: "847a61ad13a68ebfca9c0a8fa6d0d6bd"
hash_type: "md5"
- path: "models/model.pt"
url: "https://drive.google.com/uc?id=1CJmlrLY4SYHl4swtnY1EJmuiNt1H7Jzu"
url: "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_brats_mri_axial_slices_generative_diffusion_v1.pt"
hash_val: "93a19ea3eaafd9781b4140286b121f37"
hash_type: "md5"
@@ -0,0 +1,7 @@
{
"+imports": [
"$from monai.networks import trt_compile"
],
"diffusion": "$trt_compile(@network_def.to(@device), @load_diffusion_path)",
"autoencoder": "$trt_compile(@autoencoder_def.to(@device), @load_autoencoder_path)"
}
5 changes: 3 additions & 2 deletions models/brats_mri_generative_diffusion/configs/metadata.json
@@ -1,7 +1,8 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json",
"version": "1.1.0",
"version": "1.1.1",
"changelog": {
"1.1.1": "enable tensorrt",
"1.1.0": "update to use monai 1.4, model ckpt not changed, rm GenerativeAI repo",
"1.0.9": "update to use monai 1.3.1",
"1.0.8": "update run section",
@@ -15,7 +16,7 @@
"1.0.0": "Initial release"
},
"monai_version": "1.4.0",
"pytorch_version": "2.2.2",
"pytorch_version": "2.4.0",
"numpy_version": "1.24.4",
"required_packages_version": {
"nibabel": "5.2.1",
33 changes: 33 additions & 0 deletions models/brats_mri_generative_diffusion/docs/README.md
@@ -82,6 +82,32 @@ If you face memory issues with data loading, you can lower the caching rate `cac

![A graph showing the latent diffusion training curve](https://developer.download.nvidia.com/assets/Clara/Images/monai_brain_image_gen_ldm3d_train_diffusion_loss_v2.png)

#### TensorRT speedup
This bundle supports acceleration with TensorRT. The table below displays the speedup ratios observed on an A100 80G GPU. Note that 32-bit precision models are benchmarked in the tf32 weight format.

| method | torch_tf32(ms) | torch_amp(ms) | trt_tf32(ms) | trt_fp16(ms) | speedup amp | speedup tf32 | speedup fp16 | amp vs fp16|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| model computation (diffusion) | 44.57 | 44.59 | 40.89 | 18.79 | 1.00 | 1.09 | 2.37 | 2.37 |
| model computation (autoencoder) | 96.29 | 97.01 | 78.51 | 44.03 | 0.99 | 1.23 | 2.19 | 2.20 |
| end2end | 2826 | 2538 | 2759 | 1472 | 1.11 | 1.02 | 1.92 | 1.72 |

Where:
- `model computation` measures the model's inference latency on a random input, excluding preprocessing and postprocessing.
- `end2end` measures running the bundle end-to-end with the TensorRT-based model.
- `torch_tf32` and `torch_amp` are the PyTorch models without and with `amp` mode, respectively.
- `trt_tf32` and `trt_fp16` are the TensorRT-based models converted in the corresponding precision.
- `speedup amp`, `speedup tf32` and `speedup fp16` are the speedup ratios of the corresponding models versus the PyTorch float32 model.
- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16-based model.

This result is benchmarked under:
- TensorRT: 10.3.0+cuda12.6
- Torch-TensorRT Version: 2.4.0
- CPU Architecture: x86-64
- OS: Ubuntu 20.04
- Python version: 3.10.12
- CUDA version: 12.6
- GPU models and configuration: A100 80G

## MONAI Bundle Commands

In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.
@@ -143,6 +169,13 @@ The following code generates a synthetic image from a random sampled noise.
python -m monai.bundle run --config_file configs/inference.json
```

#### Execute inference with the TensorRT model:

```
python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
```


# References
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. https://openaccess.thecvf.com/content/CVPR2022/papers/Rombach_High-Resolution_Image_Synthesis_With_Latent_Diffusion_Models_CVPR_2022_paper.pdf

4 changes: 2 additions & 2 deletions models/brats_mri_generative_diffusion/large_files.yml
@@ -1,9 +1,9 @@
large_files:
- path: "models/model_autoencoder.pt"
url: "https://drive.google.com/uc?id=1arp3w8glsQw2h7mQBbk71krqmaG_std6"
url: "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_autoencoder_brats_mri_generative_diffusion_v1.pt"
hash_val: "9e6df4cc9a2decf49ab3332606b32c55"
hash_type: "md5"
- path: "models/model.pt"
url: "https://drive.google.com/uc?id=1m2pcbj8NMoxEIAOmD9dgYBN4gNcrMx6e"
url: "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_brats_mri_generative_diffusion_v1.pt"
hash_val: "35258b1112f701f3d485676d33141a55"
hash_type: "md5"
@@ -6,6 +6,7 @@
"$import os"
],
"bundle_root": ".",
"checkpoint": "$@bundle_root + '/models/model.pt'",
"output_dir": "$@bundle_root + '/eval'",
"dataset_dir": "/workspace/data/CoNSePNuclei",
"images": "$list(sorted(glob.glob(@dataset_dir + '/Test/Images/*.png')))[:1]",
@@ -88,7 +89,7 @@
"handlers": [
{
"_target_": "CheckpointLoader",
"load_path": "$@bundle_root + '/models/model.pt'",
"load_path": "$@checkpoint",
"load_dict": {
"model": "@network"
}
@@ -1,12 +1,6 @@
{
"imports": [
"$import glob",
"$import os",
"$import pathlib",
"$import json",
"$import torch_tensorrt"
"+imports": [
"$from monai.networks import trt_compile"
],
"handlers#0#_disabled_": true,
"network_def": "$torch.jit.load(@bundle_root + '/models/model_trt.ts')",
"evaluator#amp": false
"network": "$trt_compile(@network_def.to(@device), @checkpoint)"
}
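The diff above replaces the old flow (load a pre-exported `model_trt.ts` TorchScript engine) with `trt_compile`, which wraps the module and builds or loads the engine on demand. A toy illustration of that lazy-wrapping pattern (an assumption-laden sketch: MONAI's real `trt_compile` performs ONNX export and TensorRT engine caching, none of which is shown here):

```python
# Toy sketch of the lazy-compile pattern behind trt_compile-style wrappers:
# the wrapped object behaves like the original model, and the expensive
# "compilation" happens once, on the first forward call. Illustration only;
# the real implementation does ONNX export and TRT engine build/caching.
class LazyCompiledModel:
    def __init__(self, model, checkpoint_path):
        self.model = model
        self.checkpoint_path = checkpoint_path  # engine would be cached near this
        self.engine = None                      # built lazily
        self.compile_count = 0

    def _compile(self):
        # stand-in for: export to ONNX, build or load a cached TRT engine
        self.compile_count += 1
        self.engine = lambda x: self.model(x)

    def __call__(self, x):
        if self.engine is None:
            self._compile()
        return self.engine(x)

double = LazyCompiledModel(lambda x: 2 * x, "models/model.pt")
print(double(3), double(4), double.compile_count)
# → 6 8 1  (compiled exactly once despite two calls)
```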
9 changes: 5 additions & 4 deletions models/pathology_nuclei_classification/configs/metadata.json
@@ -1,7 +1,8 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
"version": "0.1.7",
"version": "0.1.8",
"changelog": {
"0.1.8": "enable tensorrt",
"0.1.7": "update to use monai 1.3.1",
"0.1.6": "set image_only to False",
"0.1.5": "add support for TensorRT conversion and inference",
@@ -20,13 +21,13 @@
"0.0.2": "Update The Torch Vision Transform",
"0.0.1": "initialize the model package structure"
},
"monai_version": "1.3.1",
"pytorch_version": "2.2.2",
"monai_version": "1.4.0",
"pytorch_version": "2.4.0",
"numpy_version": "1.24.4",
"optional_packages_version": {
"nibabel": "5.2.1",
"pytorch-ignite": "0.4.11",
"torchvision": "0.17.2"
"torchvision": "0.19.0"
},
"name": "Pathology nuclei classification",
"task": "Pathology Nuclei classification",
28 changes: 11 additions & 17 deletions models/pathology_nuclei_classification/docs/README.md
@@ -140,28 +140,28 @@ A graph showing the validation F1-score over 100 epochs.
![](https://developer.download.nvidia.com/assets/Clara/Images/monai_pathology_classification_val_f1_v3.png) <br>

#### TensorRT speedup
This bundle supports acceleration with TensorRT. The table below displays the speedup ratios observed on an A100 80G GPU.
This bundle supports acceleration with TensorRT. The table below displays the speedup ratios observed on an A100 80G GPU. Please note that 32-bit precision models are benchmarked with tf32 weight format.

| method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16|
| method | torch_tf32(ms) | torch_amp(ms) | trt_tf32(ms) | trt_fp16(ms) | speedup amp | speedup tf32 | speedup fp16 | amp vs fp16|
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| model computation | 9.99 | 14.14 | 4.62 | 2.37 | 0.71 | 2.16 | 4.22 | 5.97 |
| end2end | 412.95 | 408.88 | 351.64 | 286.85 | 1.01 | 1.17 | 1.44 | 1.43 |
| model computation | 12.06 | 20.57 | 3.23 | 1.48 | 0.59 | 3.73 | 8.15 | 13.90 |
| end2end | 45 | 49 | 18 | 18 | 0.92 | 2.50 | 2.50 | 2.72 |

Where:
- `model computation` means the speedup ratio of model's inference with a random input without preprocessing and postprocessing
- `end2end` means run the bundle end-to-end with the TensorRT based model.
- `torch_fp32` and `torch_amp` are for the PyTorch models with or without `amp` mode.
- `trt_fp32` and `trt_fp16` are for the TensorRT based models converted in corresponding precision.
- `speedup amp`, `speedup fp32` and `speedup fp16` are the speedup ratios of corresponding models versus the PyTorch float32 model
- `torch_tf32` and `torch_amp` are for the PyTorch models with or without `amp` mode.
- `trt_tf32` and `trt_fp16` are for the TensorRT based models converted in corresponding precision.
- `speedup amp`, `speedup tf32` and `speedup fp16` are the speedup ratios of corresponding models versus the PyTorch float32 model
- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16 based model.

This result is benchmarked under:
- TensorRT: 8.6.1+cuda12.0
- Torch-TensorRT Version: 1.4.0
- TensorRT: 10.3.0+cuda12.6
- Torch-TensorRT Version: 2.4.0
- CPU Architecture: x86-64
- OS: ubuntu 20.04
- Python version:3.8.10
- CUDA version: 12.1
- Python version:3.10.12
- CUDA version: 12.6
- GPU models and configuration: A100 80G

## MONAI Bundle Commands
@@ -207,12 +207,6 @@ torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run --config
python -m monai.bundle run --config_file configs/inference.json
```

#### Export checkpoint to TensorRT based models with fp32 or fp16 precision:

```
python -m monai.bundle trt_export --net_id network_def --filepath models/model_trt.ts --ckpt_file models/model.pt --meta_file configs/metadata.json --config_file configs/inference.json --precision <fp32/fp16>
```

#### Execute inference with the TensorRT model:

```
4 changes: 2 additions & 2 deletions models/pathology_nuclei_classification/large_files.yml
@@ -1,9 +1,9 @@
large_files:
- path: "models/model.pt"
url: "https://drive.google.com/file/d/1YSjgn11zAlQ7OLBepk7YZj23BmxEYmwW/view?usp=sharing"
url: "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_pathology_nuclei_classification.pt"
hash_val: "066c6ef8739c4d86e167561b9ad8524d"
hash_type: "md5"
- path: "models/model.ts"
url: "https://drive.google.com/file/d/1exN1REo0V5EXlfrBOooD3fdrQmlPKYFG/view?usp=sharing"
url: "https://developer.download.nvidia.com/assets/Clara/monai/tutorials/model_zoo/model_pathology_nuclei_classification.ts"
hash_val: "e6aceee58f55abafd0125b3dd6a6c1b8"
hash_type: "md5"
@@ -74,17 +74,18 @@
"progress": true,
"extra_input_padding": "$((@patch_size - @out_size) // 2,) * 4"
},
"sub_keys": [
"horizontal_vertical",
"nucleus_prediction",
"type_prediction"
],
"postprocessing": {
"_target_": "Compose",
"transforms": [
{
"_target_": "FlattenSubKeysd",
"keys": "pred",
"sub_keys": [
"horizontal_vertical",
"nucleus_prediction",
"type_prediction"
],
"sub_keys": "$@sub_keys",
"delete_keys": true
},
{
@@ -0,0 +1,10 @@
{
"+imports": [
"$from monai.networks import trt_compile"
],
"trt_args": {
"output_names": "$@sub_keys",
"dynamic_batchsize": "$[1, @sw_batch_size, @sw_batch_size]"
},
"network": "$trt_compile(@network_def.to(@device), @bundle_root + '/models/model.pt', args=@trt_args)"
}
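The `dynamic_batchsize` triple appears to be a `[min, opt, max]` dynamic-shape profile in the TensorRT sense: the engine accepts any batch size in `[min, max]`, with `opt` as the tuning point. A small sanity-check sketch (the helper below is hypothetical, not a MONAI or TensorRT API):

```python
# Hypothetical helper illustrating a [min, opt, max] dynamic batch profile,
# as assumed for "dynamic_batchsize": [1, sw_batch_size, sw_batch_size] above.
def validate_batch_profile(profile, batch):
    lo, opt, hi = profile
    if not (lo <= opt <= hi):
        raise ValueError(f"invalid profile {profile}: need min <= opt <= max")
    # an engine built with this profile only accepts batches within [lo, hi]
    return lo <= batch <= hi

sw_batch_size = 16  # example value; the bundle reads it from the config
profile = [1, sw_batch_size, sw_batch_size]
print(validate_batch_profile(profile, 8), validate_batch_profile(profile, 32))
# → True False  (8 is inside [1, 16]; 32 exceeds the max)
```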
@@ -1,7 +1,8 @@
{
"schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_hovernet_20221124.json",
"version": "0.2.4",
"version": "0.2.5",
"changelog": {
"0.2.5": "enable tensorrt",
"0.2.4": "update to use monai 1.3.1",
"0.2.3": "remove meta_dict usage",
"0.2.2": "add requiremnts for torchvision",
@@ -18,15 +19,15 @@
"0.1.1": "update to use monai 1.1.0",
"0.1.0": "complete the model package"
},
"monai_version": "1.3.1",
"pytorch_version": "2.2.2",
"monai_version": "1.4.0",
"pytorch_version": "2.4.0",
"numpy_version": "1.24.4",
"optional_packages_version": {
"scikit-image": "0.22.0",
"torchvision": "0.17.2",
"scipy": "1.12.0",
"tqdm": "4.66.2",
"pillow": "10.2.0"
"scikit-image": "0.23.2",
"torchvision": "0.19.0",
"scipy": "1.13.1",
"tqdm": "4.66.4",
"pillow": "10.4.0"
},
"name": "Nuclear segmentation and classification",
"task": "Nuclear segmentation and classification",