I checked the problem against the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
There is reproducer code and related data files: images, videos, models, etc.
I have also encountered this issue. There are exceptions, but the code still runs; however, once I build it into a DLL, it no longer runs.
OpenVINO version: 2022.3
Could you please provide your model and exception message?
The model is yolov8m.pt converted to yolov8m.onnx.
When I built the DLL again, there was an exception while reading the model, but inference still ran normally. The init function is:
bool ModelInit(const char* modelpath) {
    ov::Core core;
    std::string model = modelpath;
    compiled_model = core.compile_model(model, "CPU");
    return true;
}
Subsequently, once the DLL is generated, it cannot run in the test demo. When the model is read with compiled_model = core.compile_model(model, "CPU");, the following exception occurs.
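For capturing the actual exception text, here is a minimal, self-contained sketch (not part of the original report; the .onnx path is a placeholder) that wraps compile_model in a try/catch and prints the message:

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    try {
        ov::Core core;
        // compile_model() can load an .onnx file directly, without a separate read_model() call
        ov::CompiledModel compiled_model = core.compile_model("yolov8m.onnx", "CPU");
        std::cout << "Model compiled successfully" << std::endl;
    } catch (const std::exception& e) {
        // ov::Exception derives from std::runtime_error, so this also catches OpenVINO errors
        std::cerr << "compile_model failed: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}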
Downgraded to 2022.1. No more InferenceEngine::NotFound error when calling compile_model().
Is 2022.1 free of the problem? I'll give it a try. I am using version 2022.3; even when I load the .onnx with only compile_model(), without calling read_model(), the error still occurs.
I am not facing any problems with inference in 2022.1, other than a difference in confidences compared to Python.
I use OpenVINO 2022.1. At first there were exceptions, but they were resolved.
I think the cause may be the conversion of the .pt model. Initially the conversion used opset=12, and there were problems with both OpenVINO 2022.3 and OpenVINO 2022.1. When I converted with opset=15, the problems remained in 2022.3, but 2022.1 worked without issues.
I don't remember the opset I used for the conversion to ONNX. But the same model works fine in the latest OpenVINO Python package, with no issue reading it or running inference. It's only the C++ versions 2022.2 and above that have issues.
Yeah, you are right. Version 2022.3 has the issue, I think. I didn't test version 2022.2.
I also encountered this problem. I tested both the 2023.0.1 and 2022.3 versions for this issue!
You can use version 2022.1; it works.
Thank you, but this is not a long-term solution. OpenVINO developers should focus on addressing this issue.
@andrei-kochin Is IR the .bin file? I generated it using MO. Is there any difference between that and generating it through Python?
The error occurs even when I use the .bin file or the .xml file.
@Y-T-G the difference is in the original file you feed to convert_model. If it is .onnx, the result will be the same as described here. But what I'm actually asking is to try passing the original torch.nn module to convert_model and then save the OV model as IR (.bin + .xml). Since the input model would be in PyTorch format, the PyTorch FE would be used instead of the ONNX FE, and the IR might be slightly different.
I just wanted to make sure whether the issue is specific to torch.onnx.export or whether something is wrong with the conversion to IR.
Also, if 2022.1 works fine, it would be great to feed the 2022.1 IR to the 2023.0.1 runtime if possible, to narrow down the faulty area.
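As a rough sketch of that cross-version check (the IR file name is a placeholder; read_model should pick up the matching .bin automatically), the 2022.1-generated IR could be loaded with the newer C++ runtime like this:

#include <openvino/openvino.hpp>
#include <iostream>

int main() {
    try {
        ov::Core core;
        // Load the IR produced by the 2022.1 tooling; the .bin next to the .xml is found automatically
        std::shared_ptr<ov::Model> model = core.read_model("yolov8m_2022_1.xml");
        ov::CompiledModel compiled = core.compile_model(model, "CPU");
        std::cout << "2022.1 IR loaded and compiled on CPU" << std::endl;
    } catch (const std::exception& e) {
        std::cerr << "Failed to load IR: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}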
Okay, I used the serialize function to save the model to .xml and .bin. It still throws the same error.
I converted by going from PyTorch > TorchScript > IR through convert_model using the latest Python package 2022.3.1, and tested in C++ using the latest 2022.3.1 DLL files downloaded from the archive.
I couldn't go from PyTorch > IR directly because the scripting function used by convert_model throws an error, so I had to trace the model instead.
@Y-T-G I'm a bit confused with the last messages. Could you please help me to clarify?
2023.0.1 PyTorch tracing IR + 2023.0.1 runtime - original issue
2023.1 PyTorch tracing IR + 2023.1 runtime - issue is present
2022.1 seems not to have the issue, according to the comments. Have you managed to confirm this?
Have you managed to try 2022.1 IR on 2023.1 runtime?
Those performance results you've posted above are based on which runtime version?
In the last comment, as per your suggestion, I tried doing the conversion with convert_model from PyTorch directly. I couldn't use the PyTorch model itself because scripting fails, so I used tracing instead, got the TorchScript model, passed that to convert_model, and saved it using serialize. The generated IR still fails to load with read_model in C++, showing the same error as others.
I haven't tried the 2022.1 IR on 2023.0.1. But I have tried converting the same ONNX model to IR with 2023.0.1 on Ubuntu 22.04, and read_model has no issues there.
For reference, this is the project I am trying to run.
@UsersNGT How is the inference speed for you in 2022.1? It's very slow for me. Takes 3.7 seconds just for 1 image.
@Y-T-G It may be related to the size of the model. With YOLOv8m, loading the model took about 0.2 s, and inference was about 60 ms.
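To compare numbers like these fairly, a small sketch (model path, device, and input shape are placeholder assumptions) that times compile_model and a single synchronous infer() with std::chrono might look like this:

#include <openvino/openvino.hpp>
#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;
    ov::Core core;

    auto t0 = clock::now();
    ov::CompiledModel compiled = core.compile_model("yolov8m.onnx", "CPU");
    auto t1 = clock::now();

    ov::InferRequest request = compiled.create_infer_request();
    // Placeholder input tensor matching the assumed 1x3x640x640 f32 input; data is left uninitialized
    ov::Tensor input(ov::element::f32, {1, 3, 640, 640});
    request.set_input_tensor(input);

    auto t2 = clock::now();
    request.infer();  // single synchronous inference
    auto t3 = clock::now();

    auto ms = [](auto a, auto b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::cout << "Model load/compile: " << ms(t0, t1) << " ms" << std::endl;
    std::cout << "Single inference:   " << ms(t2, t3) << " ms" << std::endl;
    return 0;
}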
For reference, this is the project I am trying to run.
The model definition provided in this repository does not seem to have issues when running with Python or C++ on Linux or Windows. Both the converted IR model and the ONNX model load and execute normally with benchmark_app on OpenVINO 2023.0.1, so I don't think the model itself is the root of your issue.
$ benchmark_app.exe -m yolo_nas_s.onnx -d CPU -t 5
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2023.0.1-11005-fa1c41994f3-releases/2023/0
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2023.0.1-11005-fa1c41994f3-releases/2023/0
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] Performance hint was not explicitly specified in command line. Device(CPU) performance hint will be set to THROUGHPUT.
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 111.71 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ] input.1 (node: input.1) : f32 / [...] / [1,3,640,640]
[ INFO ] Network outputs:
[ INFO ] 913 (node: 913) : f32 / [...] / [1,8400,4]
[ INFO ] 904 (node: 904) : f32 / [...] / [1,8400,80]
[Step 5/11] Resizing model to match image sizes and given batch
[ WARNING ] input.1: layout is not set explicitly, so it is defaulted to NCHW. It is STRONGLY recommended to set layout manually to avoid further issues.
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ] input.1 (node: input.1) : u8 / [N,C,H,W] / [1,3,640,640]
[ INFO ] Network outputs:
[ INFO ] 913 (node: 913) : f32 / [...] / [1,8400,4]
[ INFO ] 904 (node: 904) : f32 / [...] / [1,8400,80]
[Step 7/11] Loading the model to the device
[ INFO ] Compile model took 400.56 ms
[Step 8/11] Querying optimal runtime parameters
[ INFO ] Model:
[ INFO ] NETWORK_NAME: torch_jit
[ INFO ] OPTIMAL_NUMBER_OF_INFER_REQUESTS: 4
[ INFO ] NUM_STREAMS: 4
[ INFO ] AFFINITY: NONE
[ INFO ] INFERENCE_NUM_THREADS: 8
[ INFO ] PERF_COUNT: NO
[ INFO ] INFERENCE_PRECISION_HINT: f32
[ INFO ] PERFORMANCE_HINT: THROUGHPUT
[ INFO ] EXECUTION_MODE_HINT: PERFORMANCE
[ INFO ] PERFORMANCE_HINT_NUM_REQUESTS: 0
[ INFO ] ENABLE_CPU_PINNING: NO
[ INFO ] SCHEDULING_CORE_TYPE: ANY_CORE
[ INFO ] ENABLE_HYPER_THREADING: YES
[ INFO ] EXECUTION_DEVICES: CPU
[Step 9/11] Creating infer requests and preparing input tensors
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] input.1 ([N,C,H,W], u8, [1,3,640,640], static): random (image/numpy array is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 5000 ms duration)
[ INFO ] Benchmarking in inference only mode (inputs filling are not included in measurement loop).
[ INFO ] First inference took 170.81 ms
[Step 11/11] Dumping statistics report
[ INFO ] Execution Devices: [ CPU ]
[ INFO ] Count: 44 iterations
[ INFO ] Duration: 6073.64 ms
[ INFO ] Latency:
[ INFO ] Median: 557.16 ms
[ INFO ] Average: 548.92 ms
[ INFO ] Min: 452.70 ms
[ INFO ] Max: 637.40 ms
[ INFO ] Throughput: 7.24 FPS
@Y-T-G I apologize if I misunderstand the problem; I still see no issues when loading the ONNX model directly on Windows with 2023.0.1. Can you explain a bit more precisely the issue you are still running into? Is it related to the performance of the model when executing with the C++ API vs the Python API? Which OpenVINO version are you using when facing this issue?
@avitial I get a std::runtime_error exception in the core.read_model call when I try to do that in versions 2022.3, 2023.0, and 2023.0.1. This only happens in C++. It's not related to performance; the model doesn't load at all, it just throws that exception when I try to load it.
The same code, however, has no issues running in version 2022.1.
@Y-T-G tried compiling the code from the provided project and it seems to compile and execute normally. It's unclear what differs between our environments to cause the error on your system. I ran this on the following config: i7-8665U CPU, Windows 10, CMake 3.14.7, OpenCV 4.7.0, OpenVINO 2023.0.1, Python 3.7.9, Visual Studio 16 2019.
git clone https://github.com/Y-T-G/YOLO-NAS-OpenVino-cpp.git
cd YOLO-NAS-OpenVino-cpp
mkdir build
cd build
cmake -G "Visual Studio 16 2019" -A x64 ..
cmake --build . --config Release --verbose -j8
mo --input_model yolo_nas_s.onnx -s 255 --reverse_input_channels
build\Release\yolo-nas-openvino-cpp.exe --model yolo_nas_s.xml -i 694232.png
@avitial You're right, it does work. I was debugging using Visual Studio and had exception breakpoints turned on, so it would stop whenever an exception was thrown; these appear to be first-chance exceptions that OpenVINO throws and catches internally. If I disable the breakpoints and choose to ignore the exceptions, it still works. Tested on 2023.0.1.
These are the exceptions I get:
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000EC5091DAF8.
'yolo-nas-openvino-cpp.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_3.0\runtime\bin\intel64\Debug\openvino_ir_frontendd.dll'.
'yolo-nas-openvino-cpp.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_3.0\runtime\bin\intel64\Debug\openvino_auto_batch_plugind.dll'.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC5091C530.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::NotFound at memory location 0x000000EC50919D80.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC509174F0.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC509174F0.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC5091C340.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000EC5091ABC0.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: std::runtime_error at memory location 0x000000EC5091ABC0.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: [rethrow] at memory location 0x0000000000000000.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC50915B60.
'yolo-nas-openvino-cpp.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_3.0\runtime\bin\intel64\Debug\openvino_intel_cpu_plugind.dll'.
'yolo-nas-openvino-cpp.exe' (Win32): Loaded 'C:\Program Files (x86)\Intel\openvino_3.0\runtime\3rdparty\tbb\bin\tbbbind_2_5_debug.dll'.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC5091C490.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC5091C2A0.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::NotImplemented at memory location 0x000000EC5091C910.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: [rethrow] at memory location 0x0000000000000000.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::NotImplemented at memory location 0x000000EC5091A0C0.
Exception thrown at 0x00007FF82FD32BDC in yolo-nas-openvino-cpp.exe: Microsoft C++ exception: InferenceEngine::GeneralError at memory location 0x000000EC5091D0D0.
I also encountered the same issue with core.read_model(). However, it seems to have no impact on the results.
Exception thrown at 0x00007FFAD3674C3C in test_runtime_config.exe: Microsoft C++ exception: std::runtime_error at memory location 0x00000036250FC9B8.
[Bug]: std::runtime_error when trying to read_model in C++
Sep 16, 2023