Implementing the Lightweight ShuffleNet Network in Keras: A Tutorial


Posted in Python on June 19, 2020

ShuffleNet is an extremely computation-efficient CNN architecture published by Megvii, designed specifically for mobile devices with very limited computing power (e.g., 10-150 MFLOPs). The architecture uses two new operations, group convolution and channel shuffle, to greatly reduce computational cost while preserving accuracy. Experiments on ImageNet classification and MS COCO object detection show that, under a computation budget of 40 MFLOPs, ShuffleNet outperforms other architectures; for example, on the ImageNet classification task its top-1 error is 7.8% (absolute) lower than that of the recent MobileNet. On an ARM-based mobile device, ShuffleNet achieves an actual speedup of roughly 13x over AlexNet while maintaining comparable accuracy.

Paper: ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices

GitHub: https://github.com/zjn-ai/ShuffleNet-keras

Network Architecture

Group Convolution

Group convolution actually dates back to AlexNet, where it was used to split training across two GPUs because a single GPU did not have enough memory. In short, group convolution splits the input feature map evenly into several equal-sized feature maps along the channel dimension (in the figure below, the left side is the input feature map and the right side shows the evenly split groups), applies an ordinary convolution to each group, and finally concatenates the output feature maps along the channel dimension.

[Figure: group convolution — the input feature map (left) is split evenly along the channel axis into groups (right)]

Many frameworks support group convolution natively, but TensorFlow still does not (at the time of writing), so it has to be written by hand, and the result is inevitably slower than a native implementation. The group convolution layer follows exactly the idea described above; the code is shown below.

def _group_conv(x, filters, kernel, stride, groups):
    """
    Group convolution

    # Arguments
        x: Tensor, input tensor with `channels_last` or `channels_first` data format
        filters: Integer, number of output channels
        kernel: An integer or tuple/list of 2 integers, specifying the
            width and height of the 2D convolution window.
        stride: An integer or tuple/list of 2 integers,
            specifying the strides of the convolution along the width and height.
            Can be a single integer to specify the same value for
            all spatial dimensions.
        groups: Integer, number of groups to split the channels into

    # Returns
        Output tensor
    """
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
    in_channels = K.int_shape(x)[channel_axis]

    # the number of output filters must be divisible by the number of groups
    assert filters % groups == 0

    # number of input channels per group
    nb_ig = in_channels // groups
    # number of output channels per group
    nb_og = filters // groups

    gc_list = []
    for i in range(groups):
        # slice out the i-th group of input channels
        # (i=i binds the loop variable at definition time to avoid late binding)
        if channel_axis == -1:
            x_group = Lambda(lambda z, i=i: z[:, :, :, i * nb_ig: (i + 1) * nb_ig])(x)
        else:
            x_group = Lambda(lambda z, i=i: z[:, i * nb_ig: (i + 1) * nb_ig, :, :])(x)
        # ordinary convolution on each group
        gc_list.append(Conv2D(filters=nb_og, kernel_size=kernel, strides=stride,
                              padding='same', use_bias=False)(x_group))

    # concatenate the group outputs along the channel axis
    return Concatenate(axis=channel_axis)(gc_list)
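
As a quick sanity check (a minimal sketch, not part of the original code; it assumes the imports from the complete script further below, plus Input and Model), the output of _group_conv should have the requested number of channels while each group is convolved independently:

from keras.layers import Input
from keras.models import Model

# 32 input channels split into 4 groups of 8, producing 4 groups of 16 output channels
inp = Input(shape=(56, 56, 32))
out = _group_conv(inp, filters=64, kernel=(1, 1), stride=1, groups=4)
print(Model(inp, out).output_shape)  # expected: (None, 56, 56, 64)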

Channel Shuffle

Channel shuffle is the key idea of this paper. Although group convolution greatly reduces the amount of computation and the number of parameters, it also restricts the flow of information between channel groups, which inevitably hurts model accuracy. The authors therefore introduce channel shuffle to strengthen the information exchange between channels without adding any parameters or computation, as illustrated below.

[Figure: channel shuffle — channels produced by different groups are interleaved so that the next group convolution sees inputs from every group]

The channel shuffle layer can be implemented with a neat reshape-transpose-reshape trick (borrowed from another implementation). In the snippet below, d represents the channel indices of a feature map, and x is the channel order after shuffling.

>>> d = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])
>>> x = np.reshape(d, (3, 3))
>>> x = np.transpose(x, [1, 0])  # transpose
>>> x = np.reshape(x, (9,))      # flatten back
# [0 1 2 3 4 5 6 7 8] --> [0 3 6 1 4 7 2 5 8]

Implementation using the Keras backend:

def _channel_shuffle(x, groups):
    """
    Channel shuffle layer

    # Arguments
        x: Tensor, input tensor with `channels_last` or `channels_first` data format
        groups: Integer, number of groups to split the channels into

    # Returns
        Shuffled tensor
    """
    if K.image_data_format() == 'channels_last':
        height, width, in_channels = K.int_shape(x)[1:]
        channels_per_group = in_channels // groups
        pre_shape = [-1, height, width, groups, channels_per_group]
        dim = (0, 1, 2, 4, 3)
        later_shape = [-1, height, width, in_channels]
    else:
        in_channels, height, width = K.int_shape(x)[1:]
        channels_per_group = in_channels // groups
        pre_shape = [-1, groups, channels_per_group, height, width]
        dim = (0, 2, 1, 3, 4)
        later_shape = [-1, in_channels, height, width]

    # reshape into (groups, channels_per_group), swap the two axes, flatten back
    x = Lambda(lambda z: K.reshape(z, pre_shape))(x)
    x = Lambda(lambda z: K.permute_dimensions(z, dim))(x)
    x = Lambda(lambda z: K.reshape(z, later_shape))(x)

    return x
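
To see the shuffle in action (again a small sketch under the same assumptions as the check above, with the default channels_last data format), a 1x1 feature map with 9 channels and 3 groups reproduces the numpy demo:

import numpy as np
from keras.layers import Input
from keras.models import Model

inp = Input(shape=(1, 1, 9))
out = _channel_shuffle(inp, groups=3)
m = Model(inp, out)

d = np.arange(9, dtype='float32').reshape(1, 1, 1, 9)
print(m.predict(d).reshape(-1))  # expected: [0. 3. 6. 1. 4. 7. 2. 5. 8.]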

ShuffleNet Unit

This is the main building block of ShuffleNet. In the figure below, (a) is the basic bottleneck unit built around a depthwise separable convolution, (b) is the unit used when the stride is 1, and (c) is the unit used when the stride is 2.

[Figure: ShuffleNet units — (a) bottleneck unit with a depthwise convolution, (b) stride-1 unit with group convolutions and channel shuffle, (c) stride-2 unit with a 3x3 average-pooling shortcut and channel concatenation]
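In the stride-2 unit, the shortcut path is a 3x3 average pooling of the input, and the two paths are concatenated rather than added. This explains the channel bookkeeping in the implementation below: the last group convolution of a stride-2 unit produces filters - in_channels channels, so that after concatenation with the in_channels channels of the pooled shortcut the unit outputs exactly filters channels. As a worked check using the numbers from this article's code: the first unit of Stage 2 receives 24 channels from the max-pooling layer and targets 384 output channels, so its group-convolution path produces 384 - 24 = 360 channels and the concatenation yields 360 + 24 = 384. A stride-1 unit instead produces filters channels directly and is added element-wise to its input, which therefore must already have filters channels.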

ShuffleNet Architecture

Note that for Stage 2, the authors do not apply group convolution to the first 1×1 convolution, because the number of input channels is relatively small.

[Table: ShuffleNet architecture — output size, repeat count, and output channels of each stage for different numbers of groups g]

Environment

Python 3.6

TensorFlow 1.13.1

Keras 2.2.4

Implementation

Both the channels_first and channels_last data formats are supported.

# -*- coding: utf-8 -*-
"""
Created on Thu Apr 25 18:26:41 2019
@author: zjn
"""
import numpy as np
from keras.callbacks import LearningRateScheduler
from keras.models import Model
from keras.layers import Input, Conv2D, Dropout, Dense, GlobalAveragePooling2D, Concatenate, AveragePooling2D
from keras.layers import Activation, BatchNormalization, add, Reshape, ReLU, DepthwiseConv2D, MaxPooling2D, Lambda
from keras.utils.vis_utils import plot_model
from keras import backend as K
from keras.optimizers import SGD
 
def _group_conv(x, filters, kernel, stride, groups):
    """
    Group convolution

    # Arguments
        x: Tensor, input tensor with `channels_last` or `channels_first` data format
        filters: Integer, number of output channels
        kernel: An integer or tuple/list of 2 integers, specifying the
            width and height of the 2D convolution window.
        stride: An integer or tuple/list of 2 integers,
            specifying the strides of the convolution along the width and height.
            Can be a single integer to specify the same value for
            all spatial dimensions.
        groups: Integer, number of groups to split the channels into

    # Returns
        Output tensor
    """
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
    in_channels = K.int_shape(x)[channel_axis]

    # the number of output filters must be divisible by the number of groups
    assert filters % groups == 0

    # number of input channels per group
    nb_ig = in_channels // groups
    # number of output channels per group
    nb_og = filters // groups

    gc_list = []
    for i in range(groups):
        # slice out the i-th group of input channels
        # (i=i binds the loop variable at definition time to avoid late binding)
        if channel_axis == -1:
            x_group = Lambda(lambda z, i=i: z[:, :, :, i * nb_ig: (i + 1) * nb_ig])(x)
        else:
            x_group = Lambda(lambda z, i=i: z[:, i * nb_ig: (i + 1) * nb_ig, :, :])(x)
        # ordinary convolution on each group
        gc_list.append(Conv2D(filters=nb_og, kernel_size=kernel, strides=stride,
                              padding='same', use_bias=False)(x_group))

    # concatenate the group outputs along the channel axis
    return Concatenate(axis=channel_axis)(gc_list)

def _channel_shuffle(x, groups):
    """
    Channel shuffle layer

    # Arguments
        x: Tensor, input tensor with `channels_last` or `channels_first` data format
        groups: Integer, number of groups to split the channels into

    # Returns
        Shuffled tensor
    """
    if K.image_data_format() == 'channels_last':
        height, width, in_channels = K.int_shape(x)[1:]
        channels_per_group = in_channels // groups
        pre_shape = [-1, height, width, groups, channels_per_group]
        dim = (0, 1, 2, 4, 3)
        later_shape = [-1, height, width, in_channels]
    else:
        in_channels, height, width = K.int_shape(x)[1:]
        channels_per_group = in_channels // groups
        pre_shape = [-1, groups, channels_per_group, height, width]
        dim = (0, 2, 1, 3, 4)
        later_shape = [-1, in_channels, height, width]

    # reshape into (groups, channels_per_group), swap the two axes, flatten back
    x = Lambda(lambda z: K.reshape(z, pre_shape))(x)
    x = Lambda(lambda z: K.permute_dimensions(z, dim))(x)
    x = Lambda(lambda z: K.reshape(z, later_shape))(x)

    return x
 
def _shufflenet_unit(inputs, filters, kernel, stride, groups, stage, bottleneck_ratio=0.25):
    """
    ShuffleNet unit

    # Arguments
        inputs: Tensor, input tensor with `channels_last` or `channels_first` data format
        filters: Integer, number of output channels
        kernel: An integer or tuple/list of 2 integers, specifying the
            width and height of the 2D convolution window.
        stride: An integer or tuple/list of 2 integers,
            specifying the strides of the convolution along the width and height.
            Can be a single integer to specify the same value for
            all spatial dimensions.
        groups: Integer, number of groups to split the channels into
        stage: Integer, stage number of ShuffleNet
        bottleneck_ratio: Float, ratio of bottleneck channels to output channels

    # Returns
        Output tensor

    # Note
        For Stage 2, the authors of ShuffleNet do not apply group convolution on the
        first pointwise layer because the number of input channels is relatively small.
    """
    channel_axis = 1 if K.image_data_format() == 'channels_first' else -1
    in_channels = K.int_shape(inputs)[channel_axis]
    bottleneck_channels = int(filters * bottleneck_ratio)

    # first pointwise (1x1) convolution: plain convolution for Stage 2,
    # group convolution for the other stages
    if stage == 2:
        x = Conv2D(filters=bottleneck_channels, kernel_size=(1, 1), strides=1,
                   padding='same', use_bias=False)(inputs)
    else:
        x = _group_conv(inputs, bottleneck_channels, (1, 1), 1, groups)
    x = BatchNormalization(axis=channel_axis)(x)
    x = ReLU()(x)

    # channel shuffle followed by the 3x3 depthwise convolution
    x = _channel_shuffle(x, groups)
    x = DepthwiseConv2D(kernel_size=kernel, strides=stride, depth_multiplier=1,
                        padding='same', use_bias=False)(x)
    x = BatchNormalization(axis=channel_axis)(x)

    if stride == 2:
        # stride-2 unit: produce (filters - in_channels) channels and concatenate
        # with the 3x3 average-pooled shortcut, giving `filters` output channels
        x = _group_conv(x, filters - in_channels, (1, 1), 1, groups)
        x = BatchNormalization(axis=channel_axis)(x)
        avg = AveragePooling2D(pool_size=(3, 3), strides=2, padding='same')(inputs)
        x = Concatenate(axis=channel_axis)([x, avg])
    else:
        # stride-1 unit: produce `filters` channels and add the identity shortcut
        x = _group_conv(x, filters, (1, 1), 1, groups)
        x = BatchNormalization(axis=channel_axis)(x)
        x = add([x, inputs])
    return x
 
def _stage(x, filters, kernel, groups, repeat, stage):
    """
    Stage of ShuffleNet

    # Arguments
        x: Tensor, input tensor with `channels_last` or `channels_first` data format
        filters: Integer, number of output channels
        kernel: An integer or tuple/list of 2 integers, specifying the
            width and height of the 2D convolution window.
        groups: Integer, number of groups to split the channels into
        repeat: Integer, number of ShuffleNet units in this stage
        stage: Integer, stage number of ShuffleNet

    # Returns
        Output tensor
    """
    # the first unit of each stage downsamples with stride 2;
    # the remaining units keep the spatial size with stride 1
    x = _shufflenet_unit(x, filters, kernel, 2, groups, stage)

    for i in range(1, repeat):
        x = _shufflenet_unit(x, filters, kernel, 1, groups, stage)
    return x
 
def ShuffleNet(input_shape, classes):
    """
    ShuffleNet architecture

    # Arguments
        input_shape: An integer or tuple/list of 3 integers, shape
            of the input tensor
        classes: Integer, number of classes to predict

    # Returns
        A Keras model
    """
    inputs = Input(shape=input_shape)

    # Stage 1: 3x3 convolution with stride 2 followed by 3x3 max pooling
    x = Conv2D(24, (3, 3), strides=2, padding='same', use_bias=True, activation='relu')(inputs)
    x = MaxPooling2D(pool_size=(3, 3), strides=2, padding='same')(x)

    # Stages 2-4 (g = 8 configuration)
    x = _stage(x, filters=384, kernel=(3, 3), groups=8, repeat=4, stage=2)
    x = _stage(x, filters=768, kernel=(3, 3), groups=8, repeat=8, stage=3)
    x = _stage(x, filters=1536, kernel=(3, 3), groups=8, repeat=4, stage=4)

    # global pooling and classifier
    x = GlobalAveragePooling2D()(x)
    x = Dense(classes)(x)
    predicts = Activation('softmax')(x)

    model = Model(inputs, predicts)
    return model
 
if __name__ == '__main__':
    model = ShuffleNet((224, 224, 3), 1000)
    # plot_model(model, to_file='ShuffleNet.png', show_shapes=True)
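
Once the model builds, a typical next step is to inspect and compile it. The sketch below is not from the original article; the optimizer settings are placeholders rather than the training schedule used in the paper:

model = ShuffleNet((224, 224, 3), 1000)
model.summary()

# placeholder optimizer settings -- adjust for your dataset and hardware
sgd = SGD(lr=0.01, momentum=0.9, decay=4e-5, nesterov=True)
model.compile(optimizer=sgd, loss='categorical_crossentropy', metrics=['accuracy'])

# training then follows the usual Keras pattern, e.g.:
# model.fit(x_train, y_train, batch_size=64, epochs=90,
#           validation_data=(x_val, y_val))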

That concludes this tutorial on implementing the lightweight ShuffleNet network in Keras. I hope it serves as a useful reference.
