
Hi all,

I just successfully converted my saved model (.pb), trained in a TF2 framework, to both ONNX (.onnx) and IR (.xml and .bin). I tried converting the ONNX one directly to a .blob file using this conversion web app, so I could deploy my custom object detection model on my OAK camera, but it kept failing:

Now I really have no idea whether I should adjust my model architecture (this is a custom efficientdet-d0 model for object detection) or find a better way of converting it to a .blob file. I have uploaded some model files here so you can have a look: link

Many thanks,

Austin

YWei Are you using the TF Object Detection API? If yes, which version?

Your best bet to compile it would be to follow this: https://docs.openvino.ai/2022.1/openvino_docs_MO_DG_prepare_model_convert_model_tf_specific_Convert_EfficientDet_Models.html . Please use the 2022.1 OpenVINO version. If you are using TF1, then you will have to use Python 3.6 or 3.7, I believe. Also, the model needs to be frozen, or the full directory needs to be provided.

File with a pretrained model (binary or text .pb file after freezing) OR saved_model_dir <path_to_saved_model> for the TensorFlow 2 models

Hi @YWei,

could you please provide us with the arguments that you've used to convert the model into ONNX and IR?

Thanks,

(csl-load-count-ml) csl@CSL-MacBook-Pro ~ % mo --saved_model_dir /Users/csl/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model/
[ INFO ] The model was converted to IR v11, the latest model format that corresponds to the source DL framework input/output format. While IR v11 is backwards compatible with OpenVINO Inference Engine API v1.0, please use API v2.0 (as of 2022.1) to take advantage of the latest improvements in IR v11.
Find more information about API v2.0 and IR v11 at https://docs.openvino.ai/latest/openvino_2_0_transition_guide.html
[ INFO ] IR generated by new TensorFlow Frontend is compatible only with API v2.0. Please make sure to use API v2.0.
Find more information about new TensorFlow Frontend at https://docs.openvino.ai/latest/openvino_docs_MO_DG_TensorFlow_Frontend.html
[ SUCCESS ] Generated IR version 11 model.
[ SUCCESS ] XML file: /Users/csl/saved_model.xml
[ SUCCESS ] BIN file: /Users/csl/saved_model.bin

Regards,

Austin

Thank you for your reply; it seems really helpful. Just two things I want to double-check with you, please:

  • Do I have to freeze my .pb model regardless of whether I am using TF1 or TF2?
  • There is no content about converting to a .blob in the documentation you shared. Does that mean the final conversion to a .blob file should be straightforward by any means (such as the web converter, or a few lines of Python)?

Regards,

Austin

Hey @YWei,

You can try using saved_model directly. Though, according to the docs, you should be able to freeze the saved model using:
mo --runmode=saved_model --model_name=efficientdet-d4 --ckpt_path=efficientdet-d4 --saved_model_dir=savedmodeldir

Then I would expand your command for generating IR with the following flags:

  • --transformations_config=<path to venv>/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json (please use the correct path to your env)
  • --input_shape "[1,512,512,3]" (or whatever your actual input shape is)

Matija

Thanks. I reckon we should replace the model_name 'efficientdet-d4' with 'efficientdet-d0' in your recommended command line, since efficientdet-d0 was my config during training? Or does it not matter at all?

I will have a try anyways 🙂

Cheers

Sorry to bother you again. Another issue came up when trying this:

(tf-openvino) cloudscapespare@Cloudscapes-MacBook-Pro ~ % mo --runmode=saved_model --model_name=efficientdet-d0 --ckpt_path=efficientdet-d0 --saved_model_dir=savedmodeldir --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.7/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
Traceback (most recent call last):
  File "/Users/cloudscapespare/anaconda3/envs/tf-openvino/bin/mo", line 5, in <module>
    from openvino.tools.mo.__main__ import main
ModuleNotFoundError: No module named 'openvino'
(tf-openvino) cloudscapespare@Cloudscapes-MacBook-Pro ~ % mo --runmode=saved_model --model_name=efficientdet-d0 --ckpt_path=efficientdet-d0 --saved_model_dir=savedmodeldir --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.7/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
Traceback (most recent call last):
  File "/Users/cloudscapespare/anaconda3/envs/tf-openvino/bin/mo", line 5, in <module>
    from openvino.tools.mo.__main__ import main
ModuleNotFoundError: No module named 'openvino' 

In terms of the OpenVINO 2022.1 version, do you mean I should install openvino-dev 2022.1, as this is the only option I can download (for macOS):

That just confused me. I have no idea how I can get the openvino module. Any suggestions, please?

Cheers,

Austin

Matija

It seems like progress, as the terminal did show me something different, but it still failed somehow.

As you suggested, I installed openvino-dev==2022.1.0 and found that only tensorflow==2.5.3 would be compatible with this version of OpenVINO, so I installed that version of TF. But my object detection model was trained with tensorflow==2.13.0; do you think this could be the main reason for the failure (i.e., the TF version used to train my model differs from the version used to compile/convert it)?

The details are as follows:

(tf-openvino) cloudscapespare@Cloudscapes-MacBook-Pro ~ % mo --model_name=efficientdet-d0 --saved_model_dir=/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	None
	- Path for generated IR: 	/Users/cloudscapespare/.
	- IR output name: 	efficientdet-d0
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	Not specified, inherited from the model
	- Output layers: 	Not specified, inherited from the model
	- Input shapes: 	[1,512,512,3]
	- Source layout: 	Not specified
	- Target layout: 	Not specified
	- Layout: 	Not specified
	- Mean values: 	Not specified
	- Scale values: 	Not specified
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- User transformations: 	Not specified
	- Reverse input channels: 	False
	- Enable IR generation for fixed input shape: 	False
	- Use the transformations config file: 	/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json
Advanced parameters:
	- Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: 	False
	- Force the usage of new Frontend of Model Optimizer for model conversion into IR: 	False
TensorFlow specific parameters:
	- Input model in text protobuf format: 	False
	- Path to model dump for TensorBoard: 	None
	- List of shared libraries with TensorFlow custom layers implementation: 	None
	- Update the configuration file with input/output node names: 	None
	- Use configuration file used to generate the model with Object Detection API: 	None
	- Use the config file: 	None
OpenVINO runtime found in: 	/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino
OpenVINO runtime version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
Model Optimizer version: 	2022.1.0-7019-cdb9bec7210-releases/2022/1
2023-08-11 10:19:04.513927: E tensorflow/core/framework/node_def_util.cc:623] NodeDef mentions attribute validate_shape which is not in the op definition: Op<name=AssignVariableOp; signature=resource:resource, value:dtype -> ; attr=dtype:type; is_stateful=true> This may be expected if your graph generating binary is newer  than this binary. Unknown attributes will be ignored. NodeDef: {{node AssignNewValue}}
[ FRAMEWORK ERROR ]  Cannot load input model: Converting GraphDef to Graph has failed with an error: 'Op type not registered 'DisableCopyOnRead' in binary running on Cloudscapes-MacBook-Pro.local. Make sure the Op and Kernel are registered in the binary running in this process. Note that if you are loading a saved graph which used ops from tf.contrib, accessing (e.g.) `tf.contrib.resampler` should be done before importing the graph, as contrib ops are lazily registered when the module is first accessed.' The binary trying to import the GraphDef was built when GraphDef version was 716. The GraphDef was produced by a binary built when GraphDef version was 1482. The difference between these versions is larger than TensorFlow's forward compatibility guarantee, and might be the root cause for failing to import the GraphDef.
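The two version numbers in that error can be read as a simple compatibility check. Below is a heavily simplified sketch for illustration only; the function and its logic are hypothetical simplifications, not TensorFlow's exact rule:

```python
# Simplified sketch of the GraphDef version check behind the error above.
# TensorFlow stamps each GraphDef with the "producer" version of the binary
# that wrote it; an importing binary built at an older version generally
# cannot read graphs from a much newer producer (hypothetical simplification).
def can_import_graphdef(producer: int, importer: int, min_consumer: int = 0) -> bool:
    # The importer must satisfy the graph's declared minimum consumer version,
    # and the graph must not come from a newer producer than the importer knows.
    return importer >= min_consumer and producer <= importer

# TF 2.5.3's Model Optimizer (GraphDef version 716) cannot read a graph
# written by TF 2.13 (producer version 1482):
print(can_import_graphdef(producer=1482, importer=716))  # False
```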

I would like to know some of your thoughts 🙂

Regards,

Austin

Hi @YWei,

you could also try to convert the model using OpenVINO version 2022.3 (to install it, use pip install openvino-dev==2022.3.0) and TensorFlow version 2.8.0 (to install it, use pip install tensorflow==2.8.0).

I have used this setup and was able to export the efficientdet-d0-tf .pb into a blob. To compile the model from .pb, I used this command:
mo --input_model=efficientdet-d0_frozen.pb --reverse_input_channels --transformations_config=/home/honza/.local/lib/python3.10/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"

For converting the .xml and .bin files into a blob, you can use the online blobconverter. Please note that even when the IR was generated with OpenVINO 2022.3.0, the conversion to a blob will only work when choosing OpenVINO version 2022.1.0 in the converter.
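As a sketch, the same IR-to-blob step can also be scripted with the blobconverter pip package, which wraps the same online service. The helper function below is my own illustration (assumes pip install blobconverter); it is defined but deliberately not called here, since the real call performs a network request:

```python
# Sketch: compiling OpenVINO IR (.xml/.bin) to a .blob via the blobconverter
# package, which wraps the online BlobConverter service. Hypothetical helper;
# not executed here because it contacts the conversion server.
def compile_ir_to_blob(xml_path: str, bin_path: str) -> str:
    import blobconverter  # pip install blobconverter
    return blobconverter.from_openvino(
        xml=xml_path,
        bin=bin_path,
        data_type="FP16",
        shaves=6,
        version="2022.1",  # per this thread, blob compilation expects 2022.1
    )
```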

Hope this will help you!

Best,

Hi JanCuhel,

Thank you for your suggestion! That seems to bring me closer to my milestone, but I still encountered a few problems.

As my first try, I converted my saved model directly, and it failed:

(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % cd /Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % ls
assets				lim_det_model.onnx		saved_model.pb
efficientdet-d0.bin		model.onnx			tf_model_inference.ipynb
efficientdet-d0.xml		modified_model.onnx		variables
fingerprint.pb			saved_model.onnx
(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % mo --input_model=saved_model.pb --reverse_input_channels --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
[ FRAMEWORK ERROR ]  Cannot load input model: TensorFlow cannot read the model file: "/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model/saved_model.pb" is incorrect TensorFlow model file. 
The file should contain one of the following TensorFlow graphs:
1. frozen graph in text or binary format
2. inference graph for freezing with checkpoint (--input_checkpoint) in text or binary format
3. meta graph
Make sure that --input_model_is_text is provided for a model in text format. By default, a model is interpreted in binary format. Framework error details: Error parsing message with type 'tensorflow.GraphDef'. 
 For more information please refer to Model Optimizer FAQ, question #43. (https://docs.openvino.ai/latest/openvino_docs_MO_DG_prepare_model_Model_Optimizer_FAQ.html?question=43#question-43)

Then I guessed I needed to turn my saved model into a frozen one, according to the error message I received. Following the instructions from OpenVINO's official documentation did not seem to work, so I ran a Python script to generate a frozen graph:

import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2
# Load the saved model
model = tf.saved_model.load("/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model")
infer = model.signatures["serving_default"]
# Determine input shape and dtype from the loaded model
input_name = list(infer.structured_input_signature[1].keys())[0]
input_shape = infer.structured_input_signature[1][input_name].shape
input_dtype = infer.structured_input_signature[1][input_name].dtype
# Convert the model to a concrete function
concrete_func = tf.function(infer).get_concrete_function(
    tf.TensorSpec(input_shape, input_dtype)
)
# Convert the concrete function to a frozen ConcreteFunction
frozen_concrete_func = convert_variables_to_constants_v2(concrete_func)
# Extract the GraphDef from the ConcreteFunction
frozen_graph_def = frozen_concrete_func.graph.as_graph_def()
# Save the frozen graph
with open("/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model/frozen_graph.pb", "wb") as f:
    f.write(frozen_graph_def.SerializeToString())
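One quick sanity check at this point is to try parsing the resulting .pb as a GraphDef directly; a SavedModel's saved_model.pb is a different protobuf message and will typically not yield a sensible node list this way, which matches the "incorrect TensorFlow model file" error from Model Optimizer above. A small sketch, assuming TensorFlow is installed; the helper name is mine:

```python
# Sketch: parse a .pb as a frozen GraphDef and list its node names.
# If the file is a frozen graph, this returns the graph's node names;
# for a SavedModel protobuf it will usually fail or return garbage.
def frozen_graph_nodes(pb_path):
    import tensorflow as tf  # assumed installed
    graph_def = tf.compat.v1.GraphDef()
    with open(pb_path, "rb") as f:
        graph_def.ParseFromString(f.read())
    return [node.name for node in graph_def.node]
```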

I then replicated your command line and expected it to succeed, but both of my tries failed.

The first one:

(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % mo --input_model=frozen_graph.pb --reverse_input_channels --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"
2023-08-15 01:03:29.274017: E tensorflow/core/framework/node_def_util.cc:630] NodeDef mentions attribute resize_if_index_out_of_bounds which is not in the op definition: Op<name=TensorListSetItem; signature=input_handle:variant, index:int32, item:element_dtype -> output_handle:variant; attr=element_dtype:type> This may be expected if your graph generating binary is newer  than this binary. Unknown attributes will be ignored. NodeDef: {{node StatefulPartitionedCall/StatefulPartitionedCall/map/while/body/_1405/map/while/TensorArrayV2Write/TensorListSetItem}}
[ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'openvino.tools.mo.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node StatefulPartitionedCall/StatefulPartitionedCall/map/while/body/_1405/map/while/Preprocessor/ResizeToRange/strided_slice_2/stack.
Original exception message: index -1 is out of bounds for axis 0 with size 0
[ INFO ] You can also try to use new TensorFlow Frontend (preview feature as of 2022.3) by adding `--use_new_frontend` option into Model Optimizer command-line.
Find more information about new TensorFlow Frontend at https://docs.openvino.ai/latest/openvino_docs_MO_DG_TensorFlow_Frontend.html

Second one (this time I just added the flag suggested by the above error message):

(basic-ml-env) cloudscapespare@Cloudscapes-MacBook-Pro saved_model % mo --input_model=frozen_graph.pb --reverse_input_channels --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]" --use_new_frontend
[ ERROR ]  Legacy extensions are not supported for the new frontend

Since the versions of TF and OpenVINO are the same as you recommended, I wonder if there are any hidden factors making my situation so complicated. Could you please try doing this with my saved_model file if possible: link . I just want to confirm whether I am working in the right direction; otherwise, I might go train a new model.

Regards,

Austin

Hi @YWei,

I apologize for the delay in my reply. I tried your script to freeze the model, but I cannot load the model. I tried several approaches, but none of them worked. Could you please share the saved_dir with us?

Best,

Thank you so much for still making efforts on my problem!

For saved_dir , do you mean saved_model_dir from my previous command line:

(tf-openvino) cloudscapespare@Cloudscapes-MacBook-Pro ~ % mo --model_name=efficientdet-d0 --saved_model_dir=/Users/cloudscapespare/Documents/TF-dirt-overflowing-detection/fine_tuned_model/saved_model --transformations_config=/Users/cloudscapespare/anaconda3/envs/tf-openvino/lib/python3.8/site-packages/openvino/tools/mo/front/tf/automl_efficientdet.json --input_shape "[1,512,512,3]"

If so, that would be pretty much the saved_model.pb in this link . Unfortunately, I didn't keep the original directory, as there were not many useful files in it.

In addition, I tried training a YOLOv5s model with PyTorch, and it could be converted with this tool with no problem (it's running well on my OAK camera now). So I guess it could be that OpenVINO doesn't really support some layers, like Non-Maximum Suppression, in the EfficientDet model I trained before.
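One way to check that suspicion is to scan the exported ONNX graph for op types that converters commonly struggle with. A sketch, assuming pip install onnx and one of the .onnx files from earlier in the thread; the helper name and the list of suspect ops are my own choices:

```python
# Sketch: list op types present in an ONNX model that are commonly
# troublesome for conversion pipelines (e.g. NonMaxSuppression, Loop, If).
def troublesome_ops(onnx_path, suspects=("NonMaxSuppression", "Loop", "If")):
    import onnx  # pip install onnx
    model = onnx.load(onnx_path)
    present = {node.op_type for node in model.graph.node}
    return sorted(present.intersection(suspects))
```

If this returns a non-empty list for the EfficientDet export but an empty one for the working YOLOv5s export, that would support the unsupported-layer theory.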

What are your thoughts? 🙂

Thanks again,

Austin

Hi Austin,

I am happy to hear that your YOLOv5s model is working! Yeah, with the EfficientDet model, there must be some troublesome layers that are causing the export to fail (maybe it could be because of the Loop layer, which object detection models usually don't have).

Best,