Merge pull request #548 from zifeng-radxa/main
docs: add zh/en yolov4 demo in sirider s1
peterwang2050 authored Nov 5, 2024
2 parents 07f8704 + 10a2595 commit 961f21e
Showing 5 changed files with 240 additions and 18 deletions.
19 changes: 10 additions & 9 deletions docs/sirider/s1/app-development/zhouyi_model_zoo.md
@@ -1,18 +1,19 @@
---
sidebar_position: 3
---

# Zhouyi Model Zoo

The [Zhouyi Model Zoo](https://github.com/Arm-China/Model_zoo) repository provides a set of AI models for reference use with the Zhouyi SDK.

#### **FTP Model Download (recommended FTP tool: [FileZilla](https://filezilla-project.org/))**
- `Host`: sftp://sftp01.armchina.com
- `Account`: zhouyi.armchina
- `Password`: 114r3cJd

| Model | Framework | Input Shape | Model Source | Quant Model |
| ------------------------- | --------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| mobilenet_v2 | caffe | [1,3,224,224] | https://github.com/shicai/MobileNet-Caffe | No |
| inception_v4 | tf1 | [1,299,299,3] | https://github.com/tensorflow/models/tree/archive/research/slim/nets | No |
| deeplab_v2 | onnx | [1, 3, 513, 513] | https://github.com/kazuto1011/deeplab-pytorch | No |
@@ -88,7 +89,7 @@ sidebar_position: 2
| squeezenet | onnx | [1, 3, 224, 224] | https://github.com/onnx/models/tree/master/vision/classification/squeezenet | No |
| resnet_v2_101 | caffe | [1, 3, 448, 448] | https://github.com/soeaver/caffe-model | No |
| densenet_121 | caffe | [1, 3, 224, 224] | https://github.com/soeaver/caffe-model | No |
| yolo_v2 | onnx | [1, 3, 416, 416] | https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov2-coco | No |
| yolo_v2_tiny | onnx | [1, 3, 416, 416] | https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny-yolov2 | No |
| inception_v2 | tf1 | [1, 224, 224, 3] | https://github.com/tensorflow/models/tree/master/research/slim#Pretrained | No |
| lightface | onnx | [1, 3, 240, 320] | https://hailo.ai/devzone-model-zoo/face-detection/ | No |
@@ -186,7 +187,7 @@ sidebar_position: 2
| codeformer_256 | onnx | [1,3,256,256] | https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0 | No |
| resnet_18 | onnx | [1, 3, 224, 224] | https://github.com/onnx/models/tree/main/vision/classification/resnet/model | No |
| resnext_101 | onnx | [1, 3, 224, 224] | https://github.com/Cadene/pretrained-models.pytorch | No |
| unet_3d | onnx | [1, 3, 224, 224, 32] | https://zenodo.org/record/3904138#.YbBtatDP1PY | No |
| sne_roadseg | onnx | [1,3,384,1248] | https://github.com/hlwang1124/SNE-RoadSeg | No |
| maskrcnn | pytorch | [0] | https://pytorch.org/vision/main/models/mask_rcnn.html | No |
| yolo_v6s | onnx | [1,3,640,640] | https://github.com/DefTruth/lite.ai.toolkit | No |
@@ -195,7 +196,7 @@ sidebar_position: 2
| xception | tf2 | [1, 299, 299, 3] | https://www.tensorflow.org/versions/r2.6/api_docs/python/tf/keras/applications | No |
| efficientnet_b5 | tf2 | [1,456,456,3] | https://www.tensorflow.org/versions/r2.6/api_docs/python/tf/keras/applications | No |
| mobilenet_v1_ssd | tflite | [300, 300] | https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2 | No |
| transformer_official | tf1 | [1, 32] | https://github.com/Kyubyong/transformer | No |
| ViT_B_16 | pytorch | [1, 3, 224, 224] | https://pytorch.org/vision/stable/models/vision_transformer.html | No |
| efficientnet_b4_quant | tflite | [1,380,380,3] | https://ai-benchmark.com/download.html | Yes |
| dped_quant | tflite | [1,1536,2048,3] | https://ai-benchmark.com/download.html | Yes |
110 changes: 110 additions & 0 deletions docs/sirider/s1/app-development/zhouyi_yolov4.md
@@ -0,0 +1,110 @@
---
sidebar_position: 2
---

# YOLOv4 Object Detection

This document explains in detail how to use the NPU on the Sirider S1 to accelerate inference of the [YOLOv4](https://github.com/hunglc007/tensorflow-yolov4-tflite) model.

The document has two parts:
[Quick Start](#quick-start) and [Detailed Tutorial](#detailed-tutorial)

## Quick Start

Radxa provides an out-of-the-box YOLOv4 object detection example that lets users run the yolov4_tiny model on the Sirider S1's AIPU directly,
skipping the model compilation and executable build steps. This is the best choice for users who want to use the AIPU quickly without compiling the model from scratch.
If you are interested in the complete workflow, see the [Detailed Tutorial](#detailed-tutorial) section.

- Clone the repository

```bash
git clone https://github.com/zifeng-radxa/siriders1_NPU_yolov4_tiny_demo.git
```

- Install dependencies
  :::tip
  Using a virtualenv is recommended
  :::

```bash
cd siriders1_NPU_yolov4_tiny_demo/demo
pip3 install -r requirements.txt
```

- Run the yolov4 demo

```bash
python3 yolov4_aipu.py -m [mode] -i [your_input_path] -r
# python3 yolov4_aipu.py -m camera -r
```

Parameters:

`-h`, `--help`: print the help message

`-m`, `--mode`: input mode; one of ['camera', 'video', 'image']

`-i`, `--input`: path to the input file; required when mode is 'video' or 'image'

`-r`, `--real_time`: show a real-time preview

`-s`, `--save`: save the output to the `output` folder

![input.webp](/img/sirider/s1/yolov4_1.webp)

## Detailed Tutorial

### Model Conversion

:::tip
This step is performed on an x86 host. Before converting the model, install the Zhouyi SDK and complete **configuring the nn-compiler environment** as described in the [**Zhouyi Z2 AIPU User Guide**](./zhouyi_npu#周易-z2-aipu-使用教程).
:::

- Generate the quantization data
```bash
cd siriders1_NPU_yolov4_tiny_demo/convert
python3 preprocess.py
```
- Generate the aipu model
```bash
aipubuild tflite_yolo_v4_tinybuild.cfg
```
The target model is generated at `./aipu_yolov4_tiny.bin`

### Build the Executable for AIPU Model Inference

Build the executable used to run inference with the Zhouyi Z2 AIPU model.

- Copy the `compile` folder from the [example repository](https://github.com/zifeng-radxa/siriders1_NPU_yolov4_tiny_demo) into the Zhouyi SDK

  Copy the `compile` folder from the [siriders1_NPU_yolov4_tiny_demo](https://github.com/zifeng-radxa/siriders1_NPU_yolov4_tiny_demo) repository into `YOUR_SDK_PATH/siengine`. **Replace YOUR_SDK_PATH with your actual path.**

```bash
cp -r compile YOUR_SDK_PATH/siengine
```

- Cross-compile

:::tip
Adjust `Linux_Tool_ROOT` in `CMakeLists.txt` to match the path of your cross-compilation toolchain; the default is `/opt/gcc-linaro-7.5.0-2019.12-x86_64_aarch64-linux-gnu/bin`
:::

```bash
cd YOUR_SDK_PATH/siengine/compile
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release .. && make
cd ..
```

The build output is placed in the `out` folder

- Transfer to the board and test

Copy the generated `aipu_yolov4_tiny.bin` model file and the files under `out/linux` to the Sirider S1.

Use [yolov4_aipu.py](https://github.com/zifeng-radxa/siriders1_NPU_yolov4_tiny_demo/blob/main/demo/yolov4_aipu.py) to test the result

```bash
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:linux/libs
python3 yolov4_aipu.py -m image -i YOUR_IMAGE_PATH -r
```
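
As a quick sanity check that the runtime libraries will be found by the dynamic loader, you can inspect the path after exporting it. This sketch assumes you placed the copied libraries under `linux/libs` as in the step above (`mkdir -p` here only stands in for the directory that already exists on the board):

```bash
mkdir -p linux/libs   # on the board, this directory holds the copied runtime libraries
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:linux/libs
echo "$LD_LIBRARY_PATH"   # the path should now end with linux/libs
```

If `yolov4_aipu.py` fails with a shared-library loading error, this path is the first thing to re-check.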
@@ -1,18 +1,19 @@
---
sidebar_position: 3
---

# Zhouyi Model Zoo

The [Zhouyi Model Zoo](https://github.com/Arm-China/Model_zoo) repository provides a set of AI models for reference use with the Zhouyi SDK.

#### **FTP Model Download (Recommended FTP Tool: [FileZilla](https://filezilla-project.org/))**
- `Host`: sftp://sftp01.armchina.com
- `Account`: zhouyi.armchina
- `Password`: 114r3cJd


| Model | Framework | Input Shape | Model Source | Quant Model |
| ------------------------- | --------- | ------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------- | ----------- |
| mobilenet_v2 | caffe | [1,3,224,224] | https://github.com/shicai/MobileNet-Caffe | No |
| inception_v4 | tf1 | [1,299,299,3] | https://github.com/tensorflow/models/tree/archive/research/slim/nets | No |
| deeplab_v2 | onnx | [1, 3, 513, 513] | https://github.com/kazuto1011/deeplab-pytorch | No |
@@ -88,7 +89,7 @@ The [Zhouyi Model Zoo](https://github.com/Arm-China/Model_zoo) repository provid
| squeezenet | onnx | [1, 3, 224, 224] | https://github.com/onnx/models/tree/master/vision/classification/squeezenet | No |
| resnet_v2_101 | caffe | [1, 3, 448, 448] | https://github.com/soeaver/caffe-model | No |
| densenet_121 | caffe | [1, 3, 224, 224] | https://github.com/soeaver/caffe-model | No |
| yolo_v2 | onnx | [1, 3, 416, 416] | https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/yolov2-coco | No |
| yolo_v2_tiny | onnx | [1, 3, 416, 416] | https://github.com/onnx/models/tree/master/vision/object_detection_segmentation/tiny-yolov2 | No |
| inception_v2 | tf1 | [1, 224, 224, 3] | https://github.com/tensorflow/models/tree/master/research/slim#Pretrained | No |
| lightface | onnx | [1, 3, 240, 320] | https://hailo.ai/devzone-model-zoo/face-detection/ | No |
@@ -186,7 +187,7 @@ The [Zhouyi Model Zoo](https://github.com/Arm-China/Model_zoo) repository provid
| codeformer_256 | onnx | [1,3,256,256] | https://github.com/sczhou/CodeFormer/releases/tag/v0.1.0 | No |
| resnet_18 | onnx | [1, 3, 224, 224] | https://github.com/onnx/models/tree/main/vision/classification/resnet/model | No |
| resnext_101 | onnx | [1, 3, 224, 224] | https://github.com/Cadene/pretrained-models.pytorch | No |
| unet_3d | onnx | [1, 3, 224, 224, 32] | https://zenodo.org/record/3904138#.YbBtatDP1PY | No |
| sne_roadseg | onnx | [1,3,384,1248] | https://github.com/hlwang1124/SNE-RoadSeg | No |
| maskrcnn | pytorch | [0] | https://pytorch.org/vision/main/models/mask_rcnn.html | No |
| yolo_v6s | onnx | [1,3,640,640] | https://github.com/DefTruth/lite.ai.toolkit | No |
@@ -195,7 +196,7 @@ The [Zhouyi Model Zoo](https://github.com/Arm-China/Model_zoo) repository provid
| xception | tf2 | [1, 299, 299, 3] | https://www.tensorflow.org/versions/r2.6/api_docs/python/tf/keras/applications | No |
| efficientnet_b5 | tf2 | [1,456,456,3] | https://www.tensorflow.org/versions/r2.6/api_docs/python/tf/keras/applications | No |
| mobilenet_v1_ssd | tflite | [300, 300] | https://tfhub.dev/tensorflow/lite-model/ssd_mobilenet_v1/1/metadata/2 | No |
| transformer_official | tf1 | [1, 32] | https://github.com/Kyubyong/transformer | No |
| ViT_B_16 | pytorch | [1, 3, 224, 224] | https://pytorch.org/vision/stable/models/vision_transformer.html | No |
| efficientnet_b4_quant | tflite | [1,380,380,3] | https://ai-benchmark.com/download.html | Yes |
| dped_quant | tflite | [1,1536,2048,3] | https://ai-benchmark.com/download.html | Yes |