docs: renew readmes, add ms/step data to forms and remove development docs
ChongWei905 committed Oct 22, 2024
1 parent 009aaeb commit a8b12e3
Showing 59 changed files with 710 additions and 1,228 deletions.
1 change: 0 additions & 1 deletion README.md
@@ -217,7 +217,6 @@ We provide the following jupyter notebook tutorials to help users learn to use M
- [Finetune a pretrained model on custom datasets](docs/en/tutorials/finetune.md)
- [Customize your model]() //coming soon
- [Optimizing performance for vision transformer]() //coming soon
- [Deployment demo](docs/en/tutorials/deployment.md)
## Model List
2 changes: 1 addition & 1 deletion README_CN.md
@@ -121,7 +121,7 @@ python infer.py --model=swin_tiny --image_path='./dog.jpg'

```shell
# distributed training
# assuming you have 4 GPUs or NPUs
# assuming you have 4 NPUs
msrun --bind_core=True --worker_num 4 python train.py --distribute \
--model densenet121 --dataset imagenet --data_dir ./datasets/imagenet
```
200 changes: 101 additions & 99 deletions benchmark_results.md

Large diffs are not rendered by default.

15 changes: 9 additions & 6 deletions configs/README.md
@@ -33,17 +33,20 @@ Please follow the outline structure and **table format** shown in [densenet/READ

<div align="center">

| Model | Context | Top-1 (%) | Top-5 (%) | Params (M) | Recipe | Download |
|--------------|----------|-----------|-----------|------------|-----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------------------------------------|
| densenet_121 | D910x8-G | 75.64 | 92.84 | 8.06 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | download |
| ----------- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
| densenet121 | 75.67 | 92.77 | 8.06 | 32 | 8 | 47.34 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |

</div>

Illustration:
- Model: model name in lowercase with `_` separator.
- Context: Training context denoted as {device}x{pieces}-{MS mode}, where the MindSpore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G denotes training on 8 Ascend 910 NPUs in graph mode.
- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K. Keep 2 digits after the decimal point.
- Params (M): # of model parameters in millions (10^6). Keep **2 digits** after the decimal point.
- Batch Size: Training batch size.
- Cards: # of cards used for training.
- Ms/step: Training time per step, in milliseconds.
- Jit_level: JIT compilation level of the MindSpore context, one of three levels: O0/O1/O2.
- Recipe: Training recipe/configuration linked to a yaml config file.
- Download: URL of the pretrained model weights.
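For a new model README, a minimal copy of the required table may help as a starting point (a sketch; all values are placeholders to be replaced with measured results):

```text
| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe        | download          |
| ----- | --------- | --------- | ---------- | ---------- | ----- | ------- | --------- | ------------- | ----------------- |
| name  | 00.00     | 00.00     | 0.00       | 32         | 8     | 0.00    | O2        | [yaml](<url>) | [weights](<url>)  |
```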

@@ -62,10 +65,10 @@ Illustration:
For consistency, it is recommended to provide distributed training commands based on `msrun --bind_core=True --worker_num {num_devices} python train.py`, instead of using a shell script such as `distributed_train.sh`.

```shell
# standalone training on a gpu or ascend device
# standalone training on a single NPU device
python train.py --config configs/densenet/densenet_121_gpu.yaml --data_dir /path/to/dataset --distribute False
# distributed training on gpu or ascend devices
# distributed training on NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/imagenet
```
24 changes: 9 additions & 15 deletions configs/bit/README.md
@@ -17,25 +17,24 @@ too low. 5) With BiT fine-tuning, good performance can be achieved even if there

Our reproduced model performance on ImageNet-1K is reported as follows.

performance tested on ascend 910*(8p) with graph mode
- ascend 910* with graph mode

*coming soon*

performance tested on ascend 910(8p) with graph mode
- ascend 910 with graph mode


<div align="center">

| Model | Top-1 (%) | Top-5 (%) | Params(M) | Batch Size | Recipe | Download |
| ------------ | --------- | --------- | --------- | ---------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| bit_resnet50 | 76.81 | 93.17 | 25.55 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |

| model | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download |
| ------------ | --------- | --------- | --------- | ---------- | ----- |---------| --------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
| bit_resnet50 | 76.81 | 93.17 | 25.55 | 32 | 8 | 74.52 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |


</div>

#### Notes

- Context: Training context denoted as {device}x{pieces}-{MS mode}, where the MindSpore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G denotes training on 8 Ascend 910 NPUs in graph mode.
- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

## Quick Start
@@ -44,7 +43,7 @@

#### Installation

Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.
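Before following the linked guide, a minimal sketch of a typical setup (assuming a MindSpore environment is already available; `mindcv` as the PyPI package name is an assumption based on the MindCV docs):

```shell
# install the released package from PyPI
pip install mindcv

# or install the latest version from source
pip install git+https://github.com/mindspore-lab/mindcv.git
```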

#### Dataset Preparation

@@ -57,11 +56,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201
It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run

```shell
# distributed training on multiple GPU/Ascend devices
# distributed training on multiple NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet
```

Similarly, you can train the model on multiple GPU devices with the above `msrun` command.

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).
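Recipe values can typically be overridden from the command line as well; a hypothetical example (the flag names below are assumptions — check `config.py` for the authoritative list):

```shell
# override selected hyper-parameters of the recipe (illustrative flags)
python train.py --config configs/bit/bit_resnet50_ascend.yaml \
    --data_dir /path/to/imagenet --batch_size 64 --epoch_size 100 --lr 0.1
```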

@@ -72,7 +70,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
# standalone training on a CPU/GPU/Ascend device
# standalone training on a single NPU device
python train.py --config configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -84,10 +82,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

### Deployment

Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/) in MindCV.

## References

<!--- Guideline: Citation format should follow GB/T 7714. -->
24 changes: 9 additions & 15 deletions configs/cmt/README.md
@@ -14,24 +14,23 @@ on ImageNet-1K dataset.

Our reproduced model performance on ImageNet-1K is reported as follows.

performance tested on ascend 910*(8p) with graph mode
- ascend 910* with graph mode

*coming soon*

performance tested on ascend 910(8p) with graph mode
- ascend 910 with graph mode

<div align="center">

| Model | Top-1 (%) | Top-5 (%) | Params(M) | Batch Size | Recipe | Download |
| --------- | --------- | --------- | --------- | ---------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
| cmt_small | 83.24 | 96.41 | 26.09 | 128 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |

| model | top-1 (%) | top-5 (%) | params(M) | batch size | cards | ms/step | jit_level | recipe | download |
| --------- | --------- | --------- | --------- | ---------- | ----- |---------| --------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
| cmt_small | 83.24 | 96.41 | 26.09 | 128 | 8 | 500.64 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |


</div>

#### Notes

- Context: Training context denoted as {device}x{pieces}-{MS mode}, where the MindSpore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G denotes training on 8 Ascend 910 NPUs in graph mode.
- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.

## Quick Start
@@ -40,7 +39,7 @@

#### Installation

Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.

#### Dataset Preparation

@@ -53,11 +52,10 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201
It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run

```shell
# distributed training on multiple GPU/Ascend devices
# distributed training on multiple NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet
```

Similarly, you can train the model on multiple GPU devices with the above `msrun` command.

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).

@@ -68,7 +66,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
# standalone training on a CPU/GPU/Ascend device
# standalone training on a single NPU device
python train.py --config configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -80,10 +78,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

### Deployment

Please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).

## References

<!--- Guideline: Citation format should follow GB/T 7714. -->
23 changes: 9 additions & 14 deletions configs/coat/README.md
@@ -10,23 +10,23 @@ Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image

Our reproduced model performance on ImageNet-1K is reported as follows.

performance tested on ascend 910*(8p) with graph mode
- ascend 910* with graph mode

*coming soon*


performance tested on ascend 910(8p) with graph mode
- ascend 910 with graph mode

<div align="center">

| Model | Top-1 (%) | Top-5 (%) | Params (M) | Batch Size | Recipe | Weight |
| --------- | --------- | --------- | ---------- | ---------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| coat_tiny | 79.67 | 94.88 | 5.50 | 32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |

| model | top-1 (%) | top-5 (%) | params (M) | batch size | cards | ms/step | jit_level | recipe | Weight |
| --------- | --------- | --------- | ---------- | ---------- | ----- |---------| --------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| coat_tiny | 79.67 | 94.88 | 5.50 | 32 | 8 | 254.95 | O2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |

</div>

#### Notes
- Context: Training context denoted as {device}x{pieces}-{MS mode}, where the MindSpore mode can be G - graph mode or F - pynative mode with ms function. For example, D910x8-G denotes training on 8 Ascend 910 NPUs in graph mode.
- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.


@@ -35,7 +35,7 @@
### Preparation

#### Installation
Please refer to the [installation instruction](https://github.com/mindspore-lab/mindcv#installation) in MindCV.
Please refer to the [installation instruction](https://mindspore-lab.github.io/mindcv/installation/) in MindCV.

#### Dataset Preparation
Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/2012/index.php) dataset for model training and validation.
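A sketch of the expected directory layout after extraction (the standard ImageNet folder convention, with `--data_dir` pointing at the root; class-folder names are illustrative):

```text
imagenet/
├── train/
│   ├── n01440764/   # one subfolder per class, containing JPEG images
│   └── ...
└── val/
    ├── n01440764/   # same class-subfolder structure as train/
    └── ...
```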
@@ -47,12 +47,11 @@ Please download the [ImageNet-1K](https://www.image-net.org/challenges/LSVRC/201
It is easy to reproduce the reported results with the pre-defined training recipe. For distributed training on multiple Ascend 910 devices, please run

```shell
# distributed training on multiple GPU/Ascend devices
# distributed training on multiple NPU devices
msrun --bind_core=True --worker_num 8 python train.py --config configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet
```


Similarly, you can train the model on multiple GPU devices with the above `msrun` command.

For detailed illustration of all hyper-parameters, please refer to [config.py](https://github.com/mindspore-lab/mindcv/blob/main/config.py).

@@ -63,7 +62,7 @@ For detailed illustration of all hyper-parameters, please refer to [config.py](h
If you want to train or finetune the model on a smaller dataset without distributed training, please run:

```shell
# standalone training on a CPU/GPU/Ascend device
# standalone training on a single NPU device
python train.py --config configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/dataset --distribute False
```

@@ -75,10 +74,6 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/coat/coat_lite_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```

### Deployment

To deploy online inference services with the trained model efficiently, please refer to the [deployment tutorial](https://mindspore-lab.github.io/mindcv/tutorials/deployment/).

## References

[1] Xu W, Xu Y, Chang T, et al. Co-scale conv-attentional image transformers[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021: 9981-9990.