Python – How to Fix: “google.protobuf.message.DecodeError: Error parsing message” when creating tensorflow text summary

coreml firebase-mlkit parsing python tensorflow

I am trying to run a script to get a text summary out of a TensorFlow .pb model, such as this:

    OPS counts:
    Squeeze : 1
    Softmax : 1
    BiasAdd : 1
    Placeholder : 1
    AvgPool : 1
    Reshape : 2
    ConcatV2 : 9
    MaxPool : 13
    Sub : 57
    Rsqrt : 57
    Relu : 57
    Conv2D : 58
    Add : 114
    Mul : 114
    Identity : 231
    Const : 298

Overall, I am trying to convert a .pb model to a .coremlmodel and am following this article:

https://hackernoon.com/integrating-tensorflow-model-in-an-ios-app-cecf30b9068d

Getting a text summary from the .pb model is a step towards that. The code I am trying to run to create the text summary follows:

    import tensorflow as tf
    from tensorflow.core.framework import graph_pb2
    import time
    import operator
    import sys

    def inspect(model_pb, output_txt_file):
        # Parse the frozen graph from the binary .pb file.
        graph_def = graph_pb2.GraphDef()
        with open(model_pb, "rb") as f:
            graph_def.ParseFromString(f.read())

        tf.import_graph_def(graph_def)

        sess = tf.Session()
        OPS = sess.graph.get_operations()

        ops_dict = {}

        # Redirect stdout so every print below goes into the summary file.
        sys.stdout = open(output_txt_file, 'w')
        for i, op in enumerate(OPS):
            print('---------------------------------------------------------------------------------------------------------------------------------------------')
            print("{}: op name = {}, op type = ( {} ), inputs = {}, outputs = {}".format(
                i, op.name, op.type,
                ", ".join([x.name for x in op.inputs]),
                ", ".join([x.name for x in op.outputs])))
            print('@input shapes:')
            for x in op.inputs:
                print("name = {} : {}".format(x.name, x.get_shape()))
            print('@output shapes:')
            for x in op.outputs:
                print("name = {} : {}".format(x.name, x.get_shape()))
            # Count how many times each op type occurs in the graph.
            if op.type in ops_dict:
                ops_dict[op.type] += 1
            else:
                ops_dict[op.type] = 1

        print('---------------------------------------------------------------------------------------------------------------------------------------------')
        sorted_ops_count = sorted(ops_dict.items(), key=operator.itemgetter(1))
        print('OPS counts:')
        for i in sorted_ops_count:
            print("{} : {}".format(i[0], i[1]))

    if __name__ == "__main__":
        """
        Write a summary of the frozen TF graph to a text file.
        Summary includes op name, type, input and output names and shapes.

        Arguments
        ----------
        - path to the frozen .pb graph
        - path to the output .txt file where the summary is written

        Usage
        ----------
        python inspect_pb.py frozen.pb text_file.txt
        """
        if len(sys.argv) != 3:
            raise ValueError("Script expects two arguments. " +
                             "Usage: python inspect_pb.py /path/to/the/frozen.pb /path/to/the/output/text/file.txt")
        inspect(sys.argv[1], sys.argv[2])

I ran this command:

    python inspect_pb.py /Users/nikhil.c/Desktop/tensorflowModel.pb text_summary.txt

But instead of receiving the expected output, I receive this error message:

    Traceback (most recent call last):
      File "inspect_pb.py", line 58, in <module>
        inspect(sys.argv[1], sys.argv[2])
      File "inspect_pb.py", line 10, in inspect
        graph_def.ParseFromString(f.read())
    google.protobuf.message.DecodeError: Error parsing message

I do not really know where to start. Other questions that report the same error message have not helped me make sense of it. What should I do?

  • TensorBoard
  • Protocol Buffers (aka Protobuf): a replacement for XML and JSON data transfer, used for better performance and to protect against data corruption (via some kind of hashing, which is why a corrupted or mismatched file surfaces as this parsing/decode error)
  • Bazel
  • I can reproduce the issue when moving a model from a GCP (cloud) TensorFlow v1.14 platform to a local TensorFlow v1.13 (JitTeam Docker): if I retrain the model locally it works; if I import the model, every script crashes with this error. A quick way to check what the .pb file actually contains is sketched right after this list.
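
If you suspect the file itself rather than the versions, a small check such as the one below tells you whether the .pb parses as a binary GraphDef, as a text-format graph, or as neither. This is only a minimal diagnostic sketch, assuming a TensorFlow 1.x install and a frozen-graph path passed on the command line; it is not part of the original question or answer.

    import sys

    from google.protobuf import text_format
    from tensorflow.core.framework import graph_pb2

    # Minimal diagnostic sketch: what does the .pb file actually contain?
    path = sys.argv[1] if len(sys.argv) > 1 else "tensorflowModel.pb"
    data = open(path, "rb").read()
    print("file size: {} bytes".format(len(data)))

    try:
        graph_def = graph_pb2.GraphDef()
        graph_def.ParseFromString(data)  # binary protobuf, as inspect_pb.py expects
        print("OK: binary GraphDef with {} nodes".format(len(graph_def.node)))
    except Exception as exc:
        print("not a binary GraphDef: {}".format(exc))
        try:
            text_format.Merge(data.decode("utf-8"), graph_pb2.GraphDef())  # text .pbtxt
            print("this is a text-format GraphDef; re-export it as binary, or parse it "
                  "with text_format.Merge instead of ParseFromString")
        except Exception as exc2:
            print("not a text-format GraphDef either: {}".format(exc2))
            print("the file may be a SavedModel, a checkpoint, or truncated/corrupted")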

You have other export options, assuming you still have access to the original model and to the system it was trained on.
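
For example, if you can still load the model in its original TensorFlow 1.x environment, you can re-freeze and re-export the binary GraphDef yourself with the same TensorFlow version that will later read it. The snippet below is only a sketch: the tiny placeholder graph and the output node name "output" stand in for your real model and its real output op.

    import tensorflow as tf

    with tf.Session() as sess:
        # Placeholder graph -- restore or rebuild your own model here instead.
        x = tf.placeholder(tf.float32, shape=[None, 4], name="input")
        w = tf.Variable(tf.ones([4, 2]), name="weights")
        y = tf.nn.softmax(tf.matmul(x, w), name="output")
        sess.run(tf.global_variables_initializer())

        # Freeze variables into constants and write a binary GraphDef (.pb).
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ["output"])
        tf.train.write_graph(frozen, ".", "refrozen.pb", as_text=False)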

You can try installing different component versions with this guide; the author recommends using his scripts instead of pip3 install (you can read the comments for reviews of the scripts). It is based on official TensorFlow builds.

You can also try:

    pip3 install protobuf==3.6.0
    

This is related to TensorFlow issue #21719.
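
A quick way to confirm the version mismatch described above is to print the installed TensorFlow and protobuf versions on both machines (cloud and local); this is just a convenience check, not something from the issue thread.

    import tensorflow as tf
    import google.protobuf

    # Run this on both the exporting and the importing machine and compare.
    print("tensorflow: {}".format(tf.__version__))
    print("protobuf: {}".format(google.protobuf.__version__))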