
How to get the final result by feature graph? #2353

Closed
alter-xp opened this issue Mar 3, 2021 · 9 comments
Labels
question Further information is requested

Comments

@alter-xp

alter-xp commented Mar 3, 2021

❔Question

Hi, I am trying to run inference with yolov3.onnx on a special device that does not support some of the operators produced when I export ONNX with model.model[-1].export = False, so I have to use the feature maps to get the final result. I'm not sure how the three outputs [1,3,13,13,85], [1,3,26,26,85], [1,3,52,52,85] are converted into [10647,85]. Is there any example or tutorial for this?
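For the shape bookkeeping alone, the conversion is a reshape-and-concatenate: each [1,3,N,N,85] head contributes 3·N·N rows, so 3·(13² + 26² + 52²) = 10647. A minimal NumPy sketch with random stand-in data (this shows only the flattening, not the actual YOLOv5 decode, which also applies sigmoid and grid/anchor transforms first):

```python
import numpy as np

# Hypothetical stand-ins for the three exported feature maps
heads = [np.random.rand(1, 3, n, n, 85).astype(np.float32) for n in (13, 26, 52)]

# Flatten each head to (3*n*n, 85) and stack the rows of all heads
flat = np.concatenate([h.reshape(-1, 85) for h in heads], axis=0)
print(flat.shape)  # (10647, 85) = 3*(13*13 + 26*26 + 52*52)
```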

@alter-xp alter-xp added the question Further information is requested label Mar 3, 2021
@github-actions
Contributor

github-actions bot commented Mar 3, 2021

👋 Hello @alter-xp, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at [email protected].

Requirements

Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:

$ pip install -r requirements.txt

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@alter-xp
Author

alter-xp commented Mar 3, 2021

Oh, I see

@alter-xp alter-xp closed this as completed Mar 3, 2021
@glenn-jocher
Member

@alter-xp the Detect() layer takes in the 3 feature maps and outputs detections here:

class Detect(nn.Module):

You'll probably need to include it in your ONNX export by setting export = False.
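A rough NumPy sketch of what that decode does for a single head, matching the sigmoid and grid/anchor transforms in Detect(); the stride and anchor values below are illustrative, not taken from a specific model config:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_head(x, stride, anchors):
    """x: (1, na, ny, nx, 85) raw head output; returns (na*ny*nx, 85) decoded rows."""
    _, na, ny, nx, no = x.shape
    # cell-offset grid, shape (1, 1, ny, nx, 2)
    xv, yv = np.meshgrid(np.arange(nx), np.arange(ny))
    grid = np.stack((xv, yv), axis=2).reshape(1, 1, ny, nx, 2).astype(np.float32)
    anchor_grid = np.array(anchors, dtype=np.float32).reshape(1, na, 1, 1, 2)

    y = sigmoid(x)
    y[..., 0:2] = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride  # xy in pixels
    y[..., 2:4] = (y[..., 2:4] * 2.0) ** 2 * anchor_grid     # wh in pixels
    return y.reshape(-1, no)

raw = np.random.randn(1, 3, 13, 13, 85).astype(np.float32)
out = decode_head(raw, stride=32, anchors=[[116, 90], [156, 198], [373, 326]])
print(out.shape)  # (507, 85)
```

Repeating this for all three heads and concatenating the rows yields the single flattened output.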

@alter-xp
Author

alter-xp commented Mar 4, 2021

thanks a lot.

@CE-Noob

CE-Noob commented Mar 15, 2021

"Oh, I see"

I have also encountered this problem. May I ask how did you solve this problem?

@glenn-jocher
Member

@alter-xp @CE-Noob you can use the new --grid argument to export ONNX models with grid included in a single output:

python export.py --weights yolov5s.pt --grid
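The single decoded output still needs confidence filtering and NMS on the consumer side. A sketch of the filtering step in plain NumPy (filter_detections is a hypothetical helper, not part of the repo); the resulting boxes and scores can then be passed to cv2.dnn.NMSBoxes or any NMS routine:

```python
import numpy as np

def filter_detections(pred, conf_thres=0.25):
    """pred: (N, 85) decoded rows [x, y, w, h, obj, 80 class scores].
    Returns (boxes_xyxy, scores, class_ids) above the confidence threshold."""
    obj = pred[:, 4]
    cls_scores = pred[:, 5:] * obj[:, None]   # class confidence = obj * class prob
    class_ids = cls_scores.argmax(axis=1)
    scores = cls_scores.max(axis=1)
    keep = scores > conf_thres
    xywh = pred[keep, :4]
    boxes = np.empty_like(xywh)
    boxes[:, 0] = xywh[:, 0] - xywh[:, 2] / 2  # x1
    boxes[:, 1] = xywh[:, 1] - xywh[:, 3] / 2  # y1
    boxes[:, 2] = xywh[:, 0] + xywh[:, 2] / 2  # x2
    boxes[:, 3] = xywh[:, 1] + xywh[:, 3] / 2  # y2
    return boxes, scores[keep], class_ids[keep]

# toy example: one confident detection, two empty rows
pred = np.zeros((3, 85), dtype=np.float32)
pred[0, :5] = [50, 50, 20, 10, 0.9]  # center 50,50, size 20x10, obj 0.9
pred[0, 5] = 0.8                     # class 0 probability
boxes, scores, class_ids = filter_detections(pred)
```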

@CE-Noob

CE-Noob commented Mar 15, 2021

@glenn-jocher This command sets model.model[-1].export = False. However, cv2.dnn.readNetFromONNX(model.onnx) fails when I export ONNX with model.model[-1].export = False, and in the actual deployment I have to use OpenCV. Previously I used this (https://github.com/hpc203/yolov5-dnn-cpp-python-v2) to transform the model so that the ONNX model would load in OpenCV. But now I want to modify your code so that the three outputs (out0: [1,3,80,80,85], out1: [1,3,40,40,85], out2: [1,3,20,20,85]) are merged into a single output of shape [25200,85]. Because OpenCV does not support the slice layer, I modified the code in common.py at line 106:
```python
class Focus(nn.Module):
    # Focus wh information into c-space
    def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True):  # ch_in, ch_out, kernel, stride, padding, groups
        super(Focus, self).__init__()
        self.conv = Conv(c1 * 4, c2, k, s, p, g, act)
        # self.contract = Contract(gain=2)

    def forward(self, x):  # x(b,c,w,h) -> y(b,4c,w/2,h/2)
        # slice-based original, unsupported by OpenCV:
        # return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1))
        N, C, H, W = x.size()
        s = 2
        x = x.view(N, C, H // s, s, W // s, s)        # x(1,64,40,2,40,2)
        x = x.permute(0, 3, 5, 1, 2, 4).contiguous()  # x(1,2,2,64,40,40)
        yy = x.view(N, C * s * s, H // s, W // s)     # x(1,256,40,40)
        return self.conv(yy)
```
And in yolo.py at line 22:

```python
class Detect(nn.Module):
    stride = None   # strides computed during build
    export = False  # onnx export

    def __init__(self, nc=80, anchors=(), ch=()):  # detection layer
        super(Detect, self).__init__()
        self.nc = nc  # number of classes
        self.no = nc + 5  # number of outputs per anchor
        self.nl = len(anchors)  # number of detection layers
        self.na = len(anchors[0]) // 2  # number of anchors
        self.grid = [torch.zeros(1)] * self.nl  # init grid
        a = torch.tensor(anchors).float().view(self.nl, -1, 2)
        self.register_buffer('anchors', a)  # shape(nl,na,2)
        self.register_buffer('anchor_grid', a.clone().view(self.nl, 1, -1, 1, 1, 2))  # shape(nl,1,na,1,1,2)
        self.m = nn.ModuleList(nn.Conv2d(x, self.no * self.na, 1) for x in ch)  # output conv

    def forward(self, x):
        # x = x.copy()  # for profiling
        z = []  # inference output
        self.training |= self.export
        for i in range(self.nl):
            x[i] = self.m[i](x[i])  # conv
            bs, _, ny, nx = x[i].shape  # x(bs,255,20,20) to x(bs,3,20,20,85)
            x[i] = x[i].view(bs, self.na, self.no, ny, nx).permute(0, 1, 3, 4, 2).contiguous()  # (bs,3,20,20,85)

            # decode in both branches so the export always produces
            # a single flattened output
            if self.grid[i].shape[2:4] != x[i].shape[2:4]:
                self.grid[i] = self._make_grid(nx, ny).to(x[i].device)

            y = x[i].sigmoid()
            y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i]  # xy
            y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i]  # wh
            z.append(y.view(-1, self.no).contiguous())
        return torch.cat(z, 0)
```
But it didn't work the way I wanted, even though the output did come out with shape [25200,85]. The slice layer is still in the model.

@CE-Noob

CE-Noob commented Mar 15, 2021

@glenn-jocher thanks! I already know how to modify it, and I can get the correct result. Thank you very much!

@anas-899

anas-899 commented Mar 19, 2021

@CE-Noob can you please share the steps you took to get correct inference with OpenCV DNN?

I am also getting the output (1x25200x85) as expected from OpenCV, but only if I disable the following lines in yolo.py (otherwise an exception is raised):
#y[..., 0:2] = (y[..., 0:2] * 2. - 0.5 + self.grid[i]) * self.stride[i] # xy
#y[..., 2:4] = (y[..., 2:4] * 2) ** 2 * self.anchor_grid[i] # wh
How did you handle the slice operation within these lines? One of the main problems in OpenCV is the slice operation.
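One workaround often used for this, sketched here under the assumption that the exception comes from the in-place slice assignments (which export as Slice/ScatterND ops that OpenCV DNN rejects): build the decoded tensor with torch.cat instead of writing into slices. decode_no_inplace is a hypothetical helper for illustration, not code from the repo:

```python
import torch

def decode_no_inplace(y, grid, anchor_grid, stride):
    """y: sigmoid-activated head (..., 85). Returns a decoded tensor of the
    same shape, built with torch.cat instead of in-place slice writes, so the
    traced ONNX graph contains no scatter-style assignment ops."""
    xy = (y[..., 0:2] * 2.0 - 0.5 + grid) * stride   # xy in pixels
    wh = (y[..., 2:4] * 2.0) ** 2 * anchor_grid      # wh in pixels
    return torch.cat((xy, wh, y[..., 4:]), dim=-1)   # reassemble without writes

# sanity check against the in-place formulation
y = torch.rand(1, 3, 4, 4, 85)
grid = torch.zeros(1, 1, 4, 4, 2)
anchor_grid = torch.ones(1, 3, 1, 1, 2)
out = decode_no_inplace(y, grid, anchor_grid, 8.0)
```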

@glenn-jocher glenn-jocher linked a pull request Jun 28, 2021 that will close this issue
@glenn-jocher glenn-jocher removed a link to a pull request Jun 28, 2021