TensorFlow pb to tflite: a detailed look at the accuracy drop


Posted in Python on May 25, 2020

I wanted to run a deep-learning model for OCR on mobile, so I tried deploying a TensorFlow model on the phone for image classification.

The plan was to deploy to Android with tflite, but when using tflite I found that the model's accuracy dropped sharply, to the point where it could no longer meet the business requirements. In the end I moved the OCR model call to the server side. I still have not found the cause of the accuracy drop, so I am recording the issue here.

Workflow:

1. Train the image classification model; 2. Freeze the model into a pb file; 3. Convert the pb file into a tflite file.

However, the accuracy drop already appears when the tflite file is invoked from Python with the tf interpreter; deployment on the Android side behaves the same way.

1. Network structure

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf

slim = tf.contrib.slim


def ttnet(images, num_classes=10, is_training=False,
          dropout_keep_prob=0.5,
          prediction_fn=slim.softmax,
          scope='TtNet'):
    end_points = {}

    with tf.variable_scope(scope, 'TtNet', [images, num_classes]):
        net = slim.conv2d(images, 32, [3, 3], scope='conv1')
        # net = slim.conv2d(images, 64, [3, 3], scope='conv1_2')
        net = slim.max_pool2d(net, [2, 2], 2, scope='pool1')
        net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='bn1')
        # net = slim.conv2d(net, 128, [3, 3], scope='conv2_1')
        net = slim.conv2d(net, 64, [3, 3], scope='conv2')
        net = slim.max_pool2d(net, [2, 2], 2, scope='pool2')
        net = slim.conv2d(net, 128, [3, 3], scope='conv3')
        net = slim.max_pool2d(net, [2, 2], 2, scope='pool3')
        net = slim.conv2d(net, 256, [3, 3], scope='conv4')
        net = slim.max_pool2d(net, [2, 2], 2, scope='pool4')
        net = slim.batch_norm(net, activation_fn=tf.nn.relu, scope='bn2')
        # net = slim.conv2d(net, 512, [3, 3], scope='conv5')
        # net = slim.max_pool2d(net, [2, 2], 2, scope='pool5')
        net = slim.flatten(net)
        end_points['Flatten'] = net

        # net = slim.fully_connected(net, 1024, scope='fc3')
        net = slim.dropout(net, dropout_keep_prob, is_training=is_training,
                           scope='dropout3')
        logits = slim.fully_connected(net, num_classes, activation_fn=None,
                                      scope='fc4')
        end_points['Logits'] = logits
        end_points['Predictions'] = prediction_fn(logits, scope='Predictions')

    return logits, end_points


ttnet.default_image_size = 28


def ttnet_arg_scope(weight_decay=0.0):
    with slim.arg_scope(
            [slim.conv2d, slim.fully_connected],
            weights_regularizer=slim.l2_regularizer(weight_decay),
            weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
            activation_fn=tf.nn.relu) as sc:
        return sc

The network is built on slim; since this is a fairly simple classification problem, the architecture is simple as well: a few convolutions plus pooling layers.

Test results are excellent: the model reaches 99%+ accuracy on a test set of real samples.
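For reference, a minimal sketch of how this network would typically be instantiated for inference under the arg scope above; the 32x32 RGB placeholder and num_classes=37 are assumptions that mirror the export script in the next section:

import tensorflow as tf

slim = tf.contrib.slim

# Assumes ttnet and ttnet_arg_scope from the module above are in scope.
images = tf.placeholder(tf.float32, [None, 32, 32, 3], name='input_data')
with slim.arg_scope(ttnet_arg_scope(weight_decay=0.0)):
    logits, end_points = ttnet(images, num_classes=37, is_training=False)
probs = end_points['Predictions']  # softmax probabilities used at test time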

2. Freeze the model and generate the pb file

#coding:utf-8

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import tensorflow as tf
from nets import nets_factory
import cv2
import os
import numpy as np
from datasets import dataset_factory
from preprocessing import preprocessing_factory
from tensorflow.python.platform import gfile

slim = tf.contrib.slim

# TODO: support arbitrary image size and num_classes

tf.app.flags.DEFINE_string(
    'checkpoint_path', '/tmp/tfmodel/',
    'The directory where the model was written to or an absolute path to a '
    'checkpoint file.')
tf.app.flags.DEFINE_string(
    'model_name', 'inception_v3', 'The name of the architecture to evaluate.')
tf.app.flags.DEFINE_string(
    'preprocessing_name', None, 'The name of the preprocessing to use. If left '
    'as `None`, then the model_name flag is used.')
tf.app.flags.DEFINE_integer(
    'eval_image_size', None, 'Eval image size')
tf.app.flags.DEFINE_integer(
    'eval_image_height', None, 'Eval image height')
tf.app.flags.DEFINE_integer(
    'eval_image_width', None, 'Eval image width')
tf.app.flags.DEFINE_string(
    'export_path', './ttnet_1.0_37_32.pb', 'the export path of the pb file')
FLAGS = tf.app.flags.FLAGS
NUM_CLASSES = 37


def main(_):
    network_fn = nets_factory.get_network_fn(
        FLAGS.model_name,
        num_classes=NUM_CLASSES,
        is_training=False)
    # pre_image = tf.placeholder(tf.float32, [None, None, 3], name='input_data')
    # preprocessing_name = FLAGS.preprocessing_name or FLAGS.model_name
    # image_preprocessing_fn = preprocessing_factory.get_preprocessing(
    #     preprocessing_name,
    #     is_training=False)
    # image = image_preprocessing_fn(pre_image, FLAGS.eval_image_height, FLAGS.eval_image_width)
    # images2 = tf.expand_dims(image, 0)
    images2 = tf.placeholder(tf.float32, (None, 32, 32, 3), name='input_data')
    logits, endpoints = network_fn(images2)
    with tf.Session() as sess:
        output = tf.identity(endpoints['Predictions'], name="output_data")
        with gfile.GFile(FLAGS.export_path, 'wb') as f:
            f.write(sess.graph_def.SerializeToString())


if __name__ == '__main__':
    tf.app.run()
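One thing worth noting: as written, main() serializes sess.graph_def, which contains only the graph structure. A common freezing pattern (a sketch only, not necessarily the author's full pipeline) restores the checkpoint first and folds the variables into constants with graph_util.convert_variables_to_constants, so the exported pb carries the trained weights:

from tensorflow.python.framework import graph_util


def freeze_with_weights(checkpoint_path, export_path):
    # Sketch: build the inference graph, restore the trained weights, then bake
    # the variables into constants so the pb file is self-contained.
    network_fn = nets_factory.get_network_fn(
        FLAGS.model_name, num_classes=NUM_CLASSES, is_training=False)
    images2 = tf.placeholder(tf.float32, (None, 32, 32, 3), name='input_data')
    logits, endpoints = network_fn(images2)
    tf.identity(endpoints['Predictions'], name='output_data')
    saver = tf.train.Saver()
    with tf.Session() as sess:
        saver.restore(sess, checkpoint_path)
        frozen_graph_def = graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ['output_data'])
        with gfile.GFile(export_path, 'wb') as f:
            f.write(frozen_graph_def.SerializeToString())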

3. Generate the tflite file

import tensorflow as tf

graph_def_file = "/datastore1/Colonist_Lord/Colonist_Lord/workspace/models/model_files/passport_model_with_tflite/ocr_frozen.pb"
input_arrays = ["input_data"]
output_arrays = ["output_data"]

converter = tf.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file, input_arrays, output_arrays)
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)

Testing with the pb file gives normal results; testing with the tflite file shows a severe accuracy drop. The pb and tflite test code is attached below.

pb test code

import os

import cv2
import numpy as np
import tensorflow as tf

# graph_filename, image_folder, image_files, result_folder, dict_laebl and count
# are assumed to be defined earlier in the test script.
with tf.gfile.GFile(graph_filename, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def)
    input_node = graph.get_tensor_by_name('import/input_data:0')
    output_node = graph.get_tensor_by_name('import/output_data:0')
    with tf.Session() as sess:
        for image_file in image_files:
            abs_path = os.path.join(image_folder, image_file)
            img = cv2.imread(abs_path).astype(np.float32)
            img = cv2.resize(img, (int(input_node.shape[1]), int(input_node.shape[2])))
            output_data = sess.run(output_node, feed_dict={input_node: [img]})
            index = np.argmax(output_data)
            label = dict_laebl[index]
            dst_floder = os.path.join(result_folder, label)
            if not os.path.exists(dst_floder):
                os.mkdir(dst_floder)
            cv2.imwrite(os.path.join(dst_floder, image_file), img)
            count += 1

tflite test code

import os

import cv2
import numpy as np
import tensorflow as tf

# image_folder, image_files, result_folder, dict_laebl and count are assumed to
# be defined earlier, as in the pb test above.
model_path = "converted_model.tflite"  # "/datastore1/Colonist_Lord/Colonist_Lord/data/passport_char/ocr.tflite"
interpreter = tf.contrib.lite.Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

for image_file in image_files:
    abs_path = os.path.join(image_folder, image_file)
    img = cv2.imread(abs_path).astype(np.float32)
    # cv2.resize expects (width, height); the model input is square (32x32),
    # so the order does not matter here.
    img = cv2.resize(img, tuple(input_details[0]['shape'][1:3]))
    # input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
    interpreter.set_tensor(input_details[0]['index'],
                           np.expand_dims(img, axis=0).astype(np.float32))

    interpreter.invoke()
    output_data = interpreter.get_tensor(output_details[0]['index'])
    index = np.argmax(output_data)
    label = dict_laebl[index]
    dst_floder = os.path.join(result_folder, label)
    if not os.path.exists(dst_floder):
        os.mkdir(dst_floder)
    cv2.imwrite(os.path.join(dst_floder, image_file), img)
    count += 1
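To narrow down where the discrepancy comes from, a side-by-side comparison can feed one identically preprocessed image through both paths and diff the raw outputs. This is only a sketch: graph_filename, model_path and image_path are assumed to point at the pb file, the tflite file and a sample image from the test set.

import cv2
import numpy as np
import tensorflow as tf


def compare_pb_tflite(graph_filename, model_path, image_path):
    # Identical preprocessing for both paths.
    img = cv2.imread(image_path).astype(np.float32)
    img = cv2.resize(img, (32, 32))
    batch = np.expand_dims(img, axis=0)

    # pb path
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(graph_filename, 'rb') as f:
        graph_def.ParseFromString(f.read())
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def)
        with tf.Session(graph=graph) as sess:
            pb_out = sess.run(
                graph.get_tensor_by_name('import/output_data:0'),
                feed_dict={graph.get_tensor_by_name('import/input_data:0'): batch})

    # tflite path
    interpreter = tf.lite.Interpreter(model_path=model_path)
    interpreter.allocate_tensors()
    inp = interpreter.get_input_details()[0]
    out = interpreter.get_output_details()[0]
    interpreter.set_tensor(inp['index'], batch.astype(np.float32))
    interpreter.invoke()
    tflite_out = interpreter.get_tensor(out['index'])

    print('max abs diff:', np.max(np.abs(pb_out - tflite_out)))
    print('pb argmax:', np.argmax(pb_out), 'tflite argmax:', np.argmax(tflite_out))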

In the end I worked around the problem and met the business requirement; when I have time later, I still plan to spend some time digging into this issue.

If anyone knows the cause, please do share.

Bonus: pb-to-tflite conversion code with post-training quantization to reduce the file size (converter.post_training_quantize = True)

import tensorflow as tf

path = "/home/python/Downloads/a.pb"  # location and filename of the pb file
inputs = ["input_images"]  # input node names of the model
classes = ['feature_fusion/Conv_7/Sigmoid', 'feature_fusion/concat_3']  # output node names of the model
# converter = tf.contrib.lite.TocoConverter.from_frozen_graph(path, inputs, classes, input_shapes={'input_images': [1, 320, 320, 3]})
converter = tf.lite.TFLiteConverter.from_frozen_graph(path, inputs, classes,
                                                      input_shapes={'input_images': [1, 320, 320, 3]})
converter.post_training_quantize = True
tflite_model = converter.convert()
open("/home/python/Downloads/aNew.tflite", "wb").write(tflite_model)

That is all I have to share in this walkthrough of the accuracy drop when converting a TensorFlow pb to tflite; I hope it gives you a useful reference, and I hope you will keep supporting 三水点靠木.
