Implementing the DenseNet Architecture in Keras


Posted in Python on July 06, 2020

The DenseNet architecture was proposed in 2016 by Gao Huang, Zhuang Liu, et al., and the paper received the Best Paper Award at CVPR 2017. The core of the network is the Dense block shown below. Each Dense block contains several Dense layers, labeled H1 through H4 in the figure, and every Dense layer is connected to all the layers before it: H1 takes x0 as input and produces x1, H2 takes [x0, x1] as input and produces x2, and so on. The final output of the Dense block is the concatenation [x0, x1, x2, x3, x4]. Personally, this wiring pattern strikes me as quite similar to how biological neurons connect, and it should make the reuse of feature information inside the network considerably more efficient.

[Figure: a Dense block from the DenseNet paper, showing Dense layers H1–H4 and the dense connections between them]
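To make the channel bookkeeping concrete, here is a minimal sketch (the values k0 = 24 and k = 12 are purely illustrative) of how the input width of each Dense layer grows under this connection scheme:

# Channel arithmetic inside one Dense block (illustrative numbers only):
# layer Hi receives k0 + (i-1)*k channels and adds k new ones to the stack.
k0, k, nb_layers = 24, 12, 4
for i in range(1, nb_layers + 1):
    print("H%d input channels: %d, output channels: %d" % (i, k0 + (i - 1) * k, k))
print("Dense block output channels: %d" % (k0 + nb_layers * k))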

The rest of DenseNet looks much like an ordinary convolutional network; see the architecture diagram provided in the paper (below). That said, I suspect this design still has room for optimization. For example, it may not be necessary to connect every pair of Dense layers within a block directly, which would shrink the network; it might also be possible to connect non-adjacent Dense blocks through simple downsampling operations, to make better use of features at different scales.

[Figure: the overall DenseNet architecture diagram from the paper]

Because of its dense connectivity, DenseNet needs far fewer parameters than earlier architectures such as ResNet to build a network of comparable capacity. Going further, I think a Dense block can be viewed as an efficient stand-in for a single convolutional layer with many parameters. It could therefore be combined with architectures such as U-Net to improve them: simply replacing every convolutional layer in a U-Net with a Dense block, for instance, should shrink the network size considerably; see the sketch below.
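As a rough sketch of that idea (hypothetical code; it relies on the DenseBlock and TransitionLayer functions defined later in this post), one U-Net encoder stage could then be written as:

def unet_encoder_stage(x, growth_rate=12, nb_layers=4):
    # Replace the usual pair of 3x3 convs in a U-Net stage with a Dense block;
    # the transition layer supplies channel compression plus 2x downsampling.
    skip = DenseBlock(x, nb_layers, growth_rate)
    down = TransitionLayer(skip)
    return skip, down  # 'skip' feeds the decoder's skip connection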

Below is a Keras implementation of the DenseNet-BC structure. First, the Dense layer, built as described in the paper:

def DenseLayer(x, nb_filter, bn_size=4, alpha=0.0, drop_rate=0.2):

    # Bottleneck layer: BN -> LeakyReLU -> 1x1 conv
    x = BatchNormalization(axis=3)(x)
    x = LeakyReLU(alpha=alpha)(x)
    x = Conv2D(bn_size*nb_filter, (1, 1), strides=(1, 1), padding='same')(x)

    # Composite function: BN -> LeakyReLU -> 3x3 conv
    x = BatchNormalization(axis=3)(x)
    x = LeakyReLU(alpha=alpha)(x)
    x = Conv2D(nb_filter, (3, 3), strides=(1, 1), padding='same')(x)

    if drop_rate: x = Dropout(drop_rate)(x)

    return x

The paper proposes 1×1 convolutions as bottleneck layers to improve computational efficiency. The original paper uses ReLU activations throughout; I habitually build with LeakyReLU instead, since it makes tuning easier.
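As a rough illustration of the saving (the numbers are just an example): with 256 input channels and growth rate k = 12, a direct 3×3 convolution needs about 9·256·12 ≈ 27.6k weights, whereas the bottleneck pair (a 1×1 convolution down to 4k = 48 channels, followed by the 3×3 convolution) needs about 256·48 + 9·48·12 ≈ 17.5k, and the gap widens as the concatenated input grows deeper in the block.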

Next, Dense layers are stacked into a Dense block:

def DenseBlock(x, nb_layers, growth_rate, drop_rate=0.2):

    # Each new Dense layer sees the concatenation of all previous outputs
    for ii in range(nb_layers):
        conv = DenseLayer(x, nb_filter=growth_rate, drop_rate=drop_rate)
        x = concatenate([x, conv], axis=3)
    return x

As described in the paper, each Dense layer's output is concatenated with its input to form the input of the next Dense layer, which realizes the dense connectivity.
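A quick sanity check of the resulting channel count (a standalone snippet, assuming the definitions above are in scope):

from keras.layers import Input
from keras.models import Model

inp = Input(shape=(32, 32, 24))
out = DenseBlock(inp, nb_layers=4, growth_rate=12)
# 24 input channels + 4 layers * 12 new channels each = 72
print(Model(inp, out).output_shape)  # expected: (None, 32, 32, 72)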

Finally, the transition layer placed between Dense blocks:

def TransitionLayer(x, compression=0.5, alpha=0.0, is_max=0):

    # Compress the channel count with a 1x1 conv, then halve the spatial size
    nb_filter = int(x.shape.as_list()[-1]*compression)
    x = BatchNormalization(axis=3)(x)
    x = LeakyReLU(alpha=alpha)(x)
    x = Conv2D(nb_filter, (1, 1), strides=(1, 1), padding='same')(x)
    if is_max != 0: x = MaxPooling2D(pool_size=(2, 2), strides=2)(x)
    else: x = AveragePooling2D(pool_size=(2, 2), strides=2)(x)

    return x

The paper uses average pooling for downsampling, but max pooling should work better for extracting edge features, so an option for it is added here.
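For example (again assuming the definitions above are in scope), the default compression=0.5 halves both the channel count and the spatial resolution:

from keras.layers import Input
from keras.models import Model

inp = Input(shape=(32, 32, 72))
out = TransitionLayer(inp)  # compression=0.5, average pooling by default
print(Model(inp, out).output_shape)  # expected: (None, 16, 16, 36)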

The pieces above are now assembled following the structure proposed in the paper, with growth rate k=12 as in the paper's DenseNet-BC experiments. (Strictly speaking, the paper's L=100, k=12 network uses 16 bottleneck layers per Dense block; with 12 per block as below, the depth works out a bit shallower.) The network is wired up as follows:

growth_rate = 12
inpt = Input(shape=(32,32,3))
 
x = Conv2D(growth_rate*2, (3, 3), strides=1, padding='same')(inpt)
x = BatchNormalization(axis=3)(x)
x = LeakyReLU(alpha=0.1)(x)
x = DenseBlock(x, 12, growth_rate, drop_rate=0.2)
x = TransitionLayer(x)
x = DenseBlock(x, 12, growth_rate, drop_rate=0.2)
x = TransitionLayer(x)
x = DenseBlock(x, 12, growth_rate, drop_rate=0.2)
x = BatchNormalization(axis=3)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(10, activation='softmax')(x)
 
model = Model(inpt, x)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()

Although the network itself is now complete and has only about 0.5M parameters, an implementation in this style allocates a fresh block of memory for every concat inside the Dense blocks, so the actual memory footprint is very large. The authors published a technical report in 2017 (https://arxiv.org/abs/1707.06990) on how to implement DenseNet memory-efficiently, but those tricks are awkward to apply in plain Keras. The next post will implement the network in PyTorch.

Finally, the complete code:

import numpy as np
import keras
from keras.models import Model, save_model, load_model
from keras.layers import Input, Dense, Dropout, BatchNormalization, LeakyReLU, concatenate
from keras.layers import Conv2D, MaxPooling2D, AveragePooling2D, GlobalAveragePooling2D
 
## data
import pickle
 
# Load the five CIFAR-10 training batches and stack them into one array
train_X_list, train_Y_list = [], []
for ii in range(1, 6):
    batch = pickle.load(open("cifar-10-batches-py/data_batch_%d" % ii, 'rb'), encoding='bytes')
    train_X_list.append(batch[b'data'].reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("float"))
    train_Y_list.append(batch[b'labels'])

train_X = np.concatenate(train_X_list, axis=0)
train_Y = keras.utils.to_categorical(np.concatenate(train_Y_list))
 
# Load the test batch
test_batch = pickle.load(open("cifar-10-batches-py/test_batch", 'rb'), encoding='bytes')
test_X = test_batch[b'data']
test_X = test_X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("float")
test_Y = test_batch[b'labels']
test_Y = keras.utils.to_categorical(test_Y)

# Scale pixel values to [0, 1]
train_X /= 255
test_X /= 255
 
# model
 
def DenseLayer(x, nb_filter, bn_size=4, alpha=0.0, drop_rate=0.2):

    # Bottleneck layer: BN -> LeakyReLU -> 1x1 conv
    x = BatchNormalization(axis=3)(x)
    x = LeakyReLU(alpha=alpha)(x)
    x = Conv2D(bn_size*nb_filter, (1, 1), strides=(1, 1), padding='same')(x)

    # Composite function: BN -> LeakyReLU -> 3x3 conv
    x = BatchNormalization(axis=3)(x)
    x = LeakyReLU(alpha=alpha)(x)
    x = Conv2D(nb_filter, (3, 3), strides=(1, 1), padding='same')(x)

    if drop_rate: x = Dropout(drop_rate)(x)

    return x

def DenseBlock(x, nb_layers, growth_rate, drop_rate=0.2):

    # Each new Dense layer sees the concatenation of all previous outputs
    for ii in range(nb_layers):
        conv = DenseLayer(x, nb_filter=growth_rate, drop_rate=drop_rate)
        x = concatenate([x, conv], axis=3)
    return x

def TransitionLayer(x, compression=0.5, alpha=0.0, is_max=0):

    # Compress the channel count with a 1x1 conv, then halve the spatial size
    nb_filter = int(x.shape.as_list()[-1]*compression)
    x = BatchNormalization(axis=3)(x)
    x = LeakyReLU(alpha=alpha)(x)
    x = Conv2D(nb_filter, (1, 1), strides=(1, 1), padding='same')(x)
    if is_max != 0: x = MaxPooling2D(pool_size=(2, 2), strides=2)(x)
    else: x = AveragePooling2D(pool_size=(2, 2), strides=2)(x)

    return x
 
growth_rate = 12
 
inpt = Input(shape=(32,32,3))
 
x = Conv2D(growth_rate*2, (3, 3), strides=1, padding='same')(inpt)
x = BatchNormalization(axis=3)(x)
x = LeakyReLU(alpha=0.1)(x)
x = DenseBlock(x, 12, growth_rate, drop_rate=0.2)
x = TransitionLayer(x)
x = DenseBlock(x, 12, growth_rate, drop_rate=0.2)
x = TransitionLayer(x)
x = DenseBlock(x, 12, growth_rate, drop_rate=0.2)
x = BatchNormalization(axis=3)(x)
x = GlobalAveragePooling2D()(x)
x = Dense(10, activation='softmax')(x)
 
model = Model(inpt, x)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
 
model.summary()
 
# Train for 10 epochs, evaluating on the test set after each one
for ii in range(10):
    print("Epoch:", ii+1)
    model.fit(train_X, train_Y, batch_size=100, epochs=1, verbose=1)
    score = model.evaluate(test_X, test_Y, verbose=1)
    print('Test loss =', score[0])
    print('Test accuracy =', score[1])
 
# Save the trained model, then reload it to verify serialization
save_model(model, 'DenseNet.h5')
model = load_model('DenseNet.h5')
 
pred_Y = model.predict(test_X)
score = model.evaluate(test_X, test_Y, verbose=0)
print('Test loss =', score[0])
print('Test accuracy =', score[1])

That concludes this walkthrough of implementing the DenseNet structure in Keras; hopefully it provides a useful reference.
