My goal is to get a PyTorch (Faster RCNN) model running on my OAK D. I have a program that can easily run a .blob file on the OAK D; however, my trouble has been in getting my PyTorch model converted into a .blob file. I have been trying to use the luxonis blob converter tool online, but it has been throwing errors.

As I understand it, I am doing the following file conversion:

Torch (.pth) -> .onnx -> openVINO (.xml & .bin) -> .blob
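To make the steps concrete, here is roughly what that pipeline looks like when driven from Python. This is only a sketch: the file names, the input size, and the exact blobconverter arguments are placeholders rather than my actual code.

import torch
import blobconverter

# Stage 1: PyTorch (.pth) -> .onnx, exporting with a fixed-size dummy input on CPU.
model = torch.load('faster_rcnn.pth', map_location='cpu')  # placeholder path
model.eval()
dummy_input = torch.zeros(1, 3, 300, 300)  # placeholder input size
torch.onnx.export(model, dummy_input, 'faster_rcnn.onnx', opset_version=11)

# Stages 2 and 3: .onnx -> openVINO IR (.xml & .bin) -> .blob. The blobconverter
# package can run both steps in one call; the shaves/version values here are just
# what I have been selecting in the online tool.
blob_path = blobconverter.from_onnx(
    model='faster_rcnn.onnx',
    data_type='FP16',
    shaves=6,
    version='2022.1',
)
print(blob_path)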

Here is what I think my issue is:

In the luxonis blob converter, I have been choosing openVINO version 2022.1 since it is the depthAI default. I am trying to use this tool to convert my .onnx file to a .blob, but I keep hitting the following error:
"""Cannot create Interpolate layer Resize_62 id:61 from unsupported opset: opset11"""

My assumption is that this error is a result of trying to convert an .onnx file that was created with opset version 11 using openVINO v2022.1. I learned afterwards that openVINO version 2022.1 is associated with opset version 8.
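(To sanity-check which opset the exported file actually declares, I have been inspecting it directly with the onnx package; a minimal sketch, with a placeholder path:)

import onnx

# Print the opset(s) the exported model declares for each operator domain.
model = onnx.load('faster_rcnn.onnx')  # placeholder path
for opset in model.opset_import:
    print(opset.domain or 'ai.onnx', opset.version)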

Accordingly, I believe the solution to the problem above is to export the .onnx file with opset version 8, but trying this is where I run into another issue, this time from my python runtime.

"""RuntimeError: Unsupported: ONNX export of Pad in opset 9. The sizes of the padding must be constant. Please try opset version 11."""

My best guess is that this is occurring because of my torch version, so here is what my Python environment looks like:

Package                 Version
----------------------- -----------------
absl-py                 2.1.0
blobconverter           1.4.3
certifi                 2024.7.4
charset-normalizer      3.3.2
contourpy               1.2.1
cycler                  0.12.1
filelock                3.15.4
filterpy                1.4.5
fonttools               4.53.1
fsspec                  2024.6.1
grpcio                  1.65.1
idna                    3.7
importlib_metadata      8.1.0
importlib_resources     6.4.0
intel-openmp            2021.4.0
Jinja2                  3.1.4
kiwisolver              1.4.5
Markdown                3.6
markdown-it-py          3.0.0
MarkupSafe              2.1.5
matplotlib              3.9.1
mdurl                   0.1.2
mkl                     2021.4.0
ml-dtypes               0.4.0
mpmath                  1.3.0
networkx                3.2.1
numpy                   1.24.0
onnx                    1.16.1
onnx-simplifier         0.4.36
onnxscript              0.1.0.dev20240725
onnxsim                 0.4.36
opencv-python           4.10.0.84
packaging               24.1
pillow                  10.4.0
pip                     24.1.2
protobuf                4.25.3
Pygments                2.18.0
pyparsing               3.1.2
python-dateutil         2.9.0.post0
PyYAML                  6.0.1
requests                2.32.3
rich                    13.7.1
scipy                   1.13.1
setuptools              58.1.0
six                     1.16.0
sympy                   1.13.1
tbb                     2021.13.0
tensorboard             2.17.0
tensorboard-data-server 0.7.2
torch                   1.8.1+cu111
torchaudio              0.8.1
torchvision             0.9.1+cu111
typing_extensions       4.12.2
urllib3                 2.2.2
Werkzeug                3.0.3
zipp                    3.19.2

And here is the function that is throwing the error demanding that I use opset version 11:

def Create(self):
    # Export the loaded PyTorch model to ONNX next to the original .pth file.
    if self.Width and self.Height and self._Model_Path:
        self.Device = torch.device('cpu')
        self.Model = self.Get_Model()
        Save_Path = self.Extension(self._Model_Path, '.onnx')
        # Dummy input with the model's fixed input size, on CPU.
        Input_Tensor = torch.zeros(1, 3, self.Height, self.Width).to(self.Device)
        # opset_version=8 is the call that raises the "Please try opset version 11" error.
        torch.onnx.export(self.Model, Input_Tensor, Save_Path, opset_version=8)
        self._Save_Path = Save_Path
        return 'Success'
    else:
        return 'Error'

Any help or insight is super appreciated. I am still guessing it's a problem with my torch version, but if anyone knows an entirely different way to accomplish my goal of using torch with the OAK, then I'm open to that also.

Best,

Andy

Thanks a lot for your help, I hadn't seen this. I am now at the point where I have a simplified onnx file; however, it still won't work in the conversion tool.

The document you shared indicates that I should use torch.onnx.export with opset_version=12, so I tried creating my .onnx with that. Then I simplified my onnx file, then used openvino-dev to convert to openVINO IR. Then when I put my .bin and .xml into the converter I get an error telling me:
Cannot create Interpolate layer /transform/Resize id:14 from unsupported opset: opset11

This brings me back to the idea that openVINO 2022.1 doesn't support opset11.
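For reference, the simplify step in that flow is just the standard onnxsim call, roughly like this (paths are placeholders):

import onnx
from onnxsim import simplify

# Load the opset-12 export, simplify it, and save the result that then goes
# through openvino-dev's model optimizer.
model = onnx.load('faster_rcnn.onnx')
simplified_model, check = simplify(model)
assert check, 'simplified ONNX model failed validation'
onnx.save(simplified_model, 'faster_rcnn_simplified.onnx')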

Any suggestions on how to get around this or fix this? Thanks again.

Andrew.

Also, it may be worth mentioning: when I change the opset version I wish to export as, I get an error whenever I use a version lower than 11.

UPDATE:

I'm pretty sure the reason I can't use a version lower than 11 is that my model is using operations that are not supported before version 11. I'm not sure which operations these are just yet; a quick way to check is sketched after the error below.

torch.onnx.errors.OnnxExporterError: Unsupported: ONNX export of operator upsample_bilinear2d, torch._C.Value (output_size) indexing. Please feel free to request support or submit a pull request on PyTorch GitHub:
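To track those operations down, I figure I can dump the op types that actually appear in the exported graph and see which ones (Resize, Pad, etc.) are forcing the newer opset; a quick sketch with a placeholder path:

import onnx
from collections import Counter

# Count every op type in the exported graph. Resize with dynamic sizes and Pad
# with non-constant padding are the usual suspects that require opset >= 11.
model = onnx.load('faster_rcnn.onnx')
op_counts = Counter(node.op_type for node in model.graph.node)
for op_type, count in sorted(op_counts.items()):
    print(op_type, count)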