
The converted YOLOv5 model no longer works after the YOLOv5 v4.0 release (#1837). Might be related to the change from BottleneckCSP to C3. #3041

Open
VsevolodKzn opened this issue Jun 29, 2021 · 6 comments

@VsevolodKzn

error log

context

how to reproduce

more

@yinghuang

+1. The same problem occurred here.

@yinghuang

It seems to be solved by making the following change in models/yolo.py:

import torch
import torch.nn as nn


class Detect(nn.Module):
    stride = None  # strides computed during build
    onnx_dynamic = False  # ONNX export parameter

    def __init__(self, nc=80, anchors=(), ch=(), inplace=True):  # detection layer
        super(Detect, self).__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        a = torch.tensor(anchors).float().view(self.nl, -1, 2)
        self.register_buffer('anchors', a)  # shape(nl,na,2)
        self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv
        self.inplace = inplace  # use in-place ops (e.g. slice assignment)

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            # ============== comment the code here to exclude post-processing ==============
            # if not self.training:  # inference
            #     if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
            #         self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
            #
            #     y = x[i].sigmoid()
            #     if self.inplace:
            #         y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i]  # xy
            #         y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
            #     else:  # for YOLOv5 on AWS Inferentia https://github.com/ultralytics/yolov5/pull/2953
            #         xy = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i]  # xy
            #         wh = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i].view(1, self.na, 1, 1, 2)  # wh
            #         y = torch.cat((xy, wh, y[..., 4:]), -1)
            #     z.append(y.view(bs, -1, self.no))
            # ==============================================================================

        # return x if self.training else (torch.cat(z, 1), x)  # original return
        return x  # export only the raw head outputs
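
For reference, the export itself can then be a plain torch.onnx.export call. This is only a rough sketch (the weights path, the attempt_load helper from YOLOv5's models/experimental.py, the input size, opset, and tensor names are illustrative, not taken from this thread):

import torch
from models.experimental import attempt_load  # YOLOv5 helper, assumed importable from the yolov5 repo

model = attempt_load('yolov5s.pt', map_location='cpu')  # illustrative weights path
model.eval()

img = torch.zeros(1, 3, 640, 640)  # dummy input at the export resolution

# With the post-processing commented out, only the three raw head tensors are exported.
torch.onnx.export(model, img, 'yolov5s.onnx',
                  opset_version=11,
                  input_names=['images'],
                  output_names=['output0', 'output1', 'output2'])

The decoding that was removed (sigmoid, grid offsets, anchor scaling) then has to be done on the ncnn side, as the examples/yolov5.cpp sample in this repo does.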

@VsevolodKzn (Author)

Unfortunately, training no longer works after this modification; could you please check?
python train.py --img 640 --batch 16 --epochs 5 --data coco128.yaml --weights yolov5s.pt

Traceback (most recent call last):
  File "train.py", line 657, in <module>
    main(opt)
  File "train.py", line 558, in main
    train(opt.hyp, opt, device)
  File "train.py", line 387, in train
    results, maps, _ = test.run(data_dict,
  File "/home/pc3/miniconda3/envs/yolov5/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/home/pc3/yolov5/test.py", line 127, in run
    out, train_out = model(img, augment=augment)  # inference and training outputs
ValueError: too many values to unpack (expected 2)

@yinghuang


The modification is only for ONNX model export, not for model training; its purpose is to exclude the post-processing from the exported ONNX graph.
Maybe my code is not consistent with yours. The key point is the section between the markers:
#============== comment the code here to exclude post process
...
#==============
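
If you want to keep a single yolo.py that still trains and validates correctly, one option is to gate the early return behind a switch that you only enable right before export. This is just a sketch, not part of the original suggestion; the export_raw attribute is hypothetical, and only the in-place post-processing branch is shown for brevity:

class Detect(nn.Module):
    stride = None  # strides computed during build
    onnx_dynamic = False  # ONNX export parameter
    export_raw = False  # hypothetical switch: return only the raw conv outputs

    # __init__ unchanged from the stock Detect above

    def forward(self, x):
        z = []  # inference output
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()

            if self.training or self.export_raw:
                continue  # no post-processing for the training loss or for raw export

            # original post-processing, unchanged
            if self.grid[i].shape[2:4] != x[i].shape[2:4] or self.onnx_dynamic:
                self.grid[i] = self._make_grid(nx, ny).to(x[i].device)
            y = x[i].sigmoid()
            y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i]  # xy
            y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
            z.append(y.view(bs, -1, self.no))

        if self.training or self.export_raw:
            return x
        return torch.cat(z, 1), x

Before exporting you would then set the flag on the detection head, e.g. model.model[-1].export_raw = True (assuming the Detect layer is the last module, as in the stock YOLOv5 models), and leave it False for train.py and test.py so they still receive the (torch.cat(z, 1), x) tuple they unpack.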

@VsevolodKzn (Author)


Works fine, I just checked with the m-size model. Thanks a lot!
Should a link to your comment be added to https://github.com/Tencent/ncnn/blob/master/examples/yolov5.cpp before this issue is closed?

@nihui (Member)

nihui commented Aug 5, 2024

For the various problems with ONNX model conversion, it is recommended to use the latest pnnx tool to convert your model to ncnn:

pip install pnnx
pnnx model.onnx inputshape=[1,3,224,224]

Detailed reference documentation:
https://github.com/pnnx/pnnx
https://github.com/Tencent/ncnn/wiki/use-ncnn-with-pytorch-or-onnx#how-to-use-pnnx
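
For the TorchScript route that pnnx also supports, a rough sketch (the checkpoint layout, paths, and input shape below are assumptions, not from this thread):

import torch

# Standard Ultralytics YOLOv5 checkpoint layout assumed: a dict holding the module under 'model'.
# Run this from inside the yolov5 repo so the pickled model classes can be resolved.
ckpt = torch.load('yolov5s.pt', map_location='cpu')
model = ckpt['model'].float().eval()

# Trace to TorchScript; strict=False because the Detect head returns nested containers.
x = torch.rand(1, 3, 640, 640)
traced = torch.jit.trace(model, x, strict=False)
traced.save('yolov5s.torchscript.pt')

Running pnnx yolov5s.torchscript.pt inputshape=[1,3,640,640] should then produce the ncnn .param/.bin files, analogous to the ONNX command above.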
