Implementing Model Training with Keras in TensorFlow 2.0


Posted in Python on February 20, 2021

1. The general model construction, training, and evaluation workflow

# Model construction
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h1 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h1)
model = keras.Model(inputs, outputs)
# keras.utils.plot_model(model, 'net001.png', show_shapes=True)

model.compile(optimizer=keras.optimizers.RMSprop(),
              loss=keras.losses.SparseCategoricalCrossentropy(),
              metrics=[keras.metrics.SparseCategoricalAccuracy()])

# Load the data
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(60000, 784).astype('float32') / 255
x_test = x_test.reshape(10000, 784).astype('float32') / 255

x_val = x_train[-10000:]
y_val = y_train[-10000:]

x_train = x_train[:-10000]
y_train = y_train[:-10000]

# Train the model
history = model.fit(x_train, y_train, batch_size=64, epochs=3,
                    validation_data=(x_val, y_val))
print('history:')
print(history.history)

result = model.evaluate(x_test, y_test, batch_size=128)
print('evaluate:')
print(result)
pred = model.predict(x_test[:2])
print('predict:')
print(pred)

2. Custom losses and metrics

To define a custom metric, simply subclass the Metric class and override the following methods:

__init__(self): initialize the state variables.

update_state(self, y_true, y_pred, sample_weight=None): use the targets y_true and the model predictions y_pred to update the state variables.

result(self): use the state variables to compute the final result.

reset_states(self): reinitialize the metric's state.

# A simple example showing how to implement a CategoricalTruePositives metric, which counts the number of samples correctly classified as belonging to a given class

class CategoricalTruePositives(keras.metrics.Metric):
    def __init__(self, name='categorical_true_positives', **kwargs):
        super(CategoricalTruePositives, self).__init__(name=name, **kwargs)
        self.true_positives = self.add_weight(name='tp', initializer='zeros')

    def update_state(self, y_true, y_pred, sample_weight=None):
        # Reduce the predicted probabilities to class indices.
        y_pred = tf.argmax(y_pred, axis=-1)
        values = tf.equal(tf.cast(y_pred, tf.int32), tf.cast(y_true, tf.int32))
        values = tf.cast(values, tf.float32)

        if sample_weight is not None:
            sample_weight = tf.cast(sample_weight, tf.float32)
            values = tf.multiply(sample_weight, values)

        self.true_positives.assign_add(tf.reduce_sum(values))

    def result(self):
        return tf.identity(self.true_positives)

    def reset_states(self):
        self.true_positives.assign(0.)
  

model.compile(optimizer=keras.optimizers.RMSprop(1e-3),
              loss=keras.losses.SparseCategoricalCrossentropy(),
              metrics=[CategoricalTruePositives()])

model.fit(x_train, y_train, batch_size=64, epochs=3)
# Add a network loss by defining a custom layer
class ActivityRegularizationLayer(layers.Layer):
    def call(self, inputs):
        self.add_loss(tf.reduce_sum(inputs) * 0.1)
        return inputs

inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h1 = ActivityRegularizationLayer()(h1)
h1 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h1)
model = keras.Model(inputs, outputs)
# keras.utils.plot_model(model, 'net001.png', show_shapes=True)

model.compile(optimizer=keras.optimizers.RMSprop(),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, batch_size=32, epochs=1)
# Metrics to track can also be added by defining a custom layer
class MetricLoggingLayer(layers.Layer):
    def call(self, inputs):
        self.add_metric(keras.backend.std(inputs),
                        name='std_of_activation',
                        aggregation='mean')
        return inputs

inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h1 = MetricLoggingLayer()(h1)
h1 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h1)
model = keras.Model(inputs, outputs)
# keras.utils.plot_model(model, 'net001.png', show_shapes=True)

model.compile(optimizer=keras.optimizers.RMSprop(),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, batch_size=32, epochs=1)
# Metrics and losses can also be added directly on the model

inputs = keras.Input(shape=(784,), name='mnist_input')
h1 = layers.Dense(64, activation='relu')(inputs)
h2 = layers.Dense(64, activation='relu')(h1)
outputs = layers.Dense(10, activation='softmax')(h2)
model = keras.Model(inputs, outputs)

model.add_metric(keras.backend.std(inputs),
                 name='std_of_activation',
                 aggregation='mean')
model.add_loss(tf.reduce_sum(h1) * 0.1)

# keras.utils.plot_model(model, 'net001.png', show_shapes=True)

model.compile(optimizer=keras.optimizers.RMSprop(),
    loss=keras.losses.SparseCategoricalCrossentropy(),
    metrics=[keras.metrics.SparseCategoricalAccuracy()])
model.fit(x_train, y_train, batch_size=32, epochs=1)

Besides passing validation data with validation_data, you can also have Keras hold out part of the training data with validation_split.

Note: validation_split can only be used when training on NumPy data.

model.fit(x_train, y_train, batch_size=32, epochs=1, validation_split=0.2)

3. Building data pipelines with tf.data

def get_compiled_model():
    inputs = keras.Input(shape=(784,), name='mnist_input')
    h1 = layers.Dense(64, activation='relu')(inputs)
    h2 = layers.Dense(64, activation='relu')(h1)
    outputs = layers.Dense(10, activation='softmax')(h2)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer=keras.optimizers.RMSprop(),
                  loss=keras.losses.SparseCategoricalCrossentropy(),
                  metrics=[keras.metrics.SparseCategoricalAccuracy()])
    return model

model = get_compiled_model()
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)

# model.fit(train_dataset, epochs=3)
# steps_per_epoch: how many batches to train on per epoch
# validation_steps: how many batches to run per validation pass
model.fit(train_dataset, epochs=3, steps_per_epoch=100,
          validation_data=val_dataset, validation_steps=3)
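With steps_per_epoch set, a finite dataset can be exhausted before all the requested epochs have run. A common remedy (an assumption here, not shown in the original) is to repeat the dataset, so that steps_per_epoch alone defines what counts as an epoch:

```python
import numpy as np
import tensorflow as tf

# Small random stand-in data, shaped like the flattened MNIST inputs.
x = np.random.rand(256, 784).astype('float32')
y = np.random.randint(0, 10, size=(256,))

train_dataset = tf.data.Dataset.from_tensor_slices((x, y))
# .repeat() with no argument cycles the data indefinitely; with
# steps_per_epoch=100, each "epoch" is then exactly 100 batches.
train_dataset = train_dataset.shuffle(buffer_size=256).batch(64).repeat()

batch_x, batch_y = next(iter(train_dataset))
print(batch_x.shape)  # (64, 784)
```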

4. Sample weights and class weights

A "sample weights" array is an array of numbers that specifies how much weight each sample in a batch should carry when computing the total loss. It is commonly used in imbalanced classification problems (the idea being to give more weight to rarely-seen classes). When the weights used are ones and zeros, the array can serve as a mask for the loss function (entirely discarding certain samples' contribution to the total loss).

A "class weights" dict is a more specific instance of the same concept: it maps class indices to the sample weight that should be used for samples belonging to that class. For example, if class "0" is half as represented as class "1" in your data, you could use class_weight = {0: 1., 1: 0.5}.

# Give class 5 extra weight
import numpy as np
# Class weights
model = get_compiled_model()
class_weight = {i: 1.0 for i in range(10)}
class_weight[5] = 2.0
print(class_weight)
model.fit(x_train, y_train,
          class_weight=class_weight,
          batch_size=64,
          epochs=4)
# Sample weights
model = get_compiled_model()
sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0
model.fit(x_train, y_train,
          sample_weight=sample_weight,
          batch_size=64,
          epochs=4)
# Sample weights with tf.data
model = get_compiled_model()

sample_weight = np.ones(shape=(len(y_train),))
sample_weight[y_train == 5] = 2.0

train_dataset = tf.data.Dataset.from_tensor_slices(
    (x_train, y_train, sample_weight))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(64)

val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)

model.fit(train_dataset, epochs=3)

5. Multi-input, multi-output models

image_input = keras.Input(shape=(32, 32, 3), name='img_input')
timeseries_input = keras.Input(shape=(None, 10), name='ts_input')

x1 = layers.Conv2D(3, 3)(image_input)
x1 = layers.GlobalMaxPooling2D()(x1)

x2 = layers.Conv1D(3, 3)(timeseries_input)
x2 = layers.GlobalMaxPooling1D()(x2)

x = layers.concatenate([x1, x2])

score_output = layers.Dense(1, name='score_output')(x)
class_output = layers.Dense(5, activation='softmax', name='class_output')(x)

model = keras.Model(inputs=[image_input, timeseries_input],
                    outputs=[score_output, class_output])
keras.utils.plot_model(model, 'multi_input_output_model.png',
                       show_shapes=True)
# Different losses and metrics can be specified for each output
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[keras.losses.MeanSquaredError(),
          keras.losses.CategoricalCrossentropy()])

# Loss weights can also be specified
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={'score_output': keras.losses.MeanSquaredError(),
          'class_output': keras.losses.CategoricalCrossentropy()},
    metrics={'score_output': [keras.metrics.MeanAbsolutePercentageError(),
                              keras.metrics.MeanAbsoluteError()],
             'class_output': [keras.metrics.CategoricalAccuracy()]},
    loss_weights={'score_output': 2., 'class_output': 1.})

# An output whose loss should not be propagated can be given None
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss=[None, keras.losses.CategoricalCrossentropy()])

# Or dict loss version
model.compile(
    optimizer=keras.optimizers.RMSprop(1e-3),
    loss={'class_output': keras.losses.CategoricalCrossentropy()})

6. Using callbacks

Callbacks in Keras are objects that are called at different points during training (at the start of an epoch, at the end of a batch, at the end of an epoch, and so on) and can be used to implement behaviors such as:

  • Running validation at additional points during training (beyond the built-in per-epoch validation)
  • Checkpointing the model at regular intervals or once it exceeds a certain accuracy threshold
  • Changing the learning rate of the model when training seems to plateau
  • Fine-tuning the top layers when training seems to plateau
  • Sending email or instant-message notifications when training ends or exceeds a certain performance threshold, and more.

The built-in callbacks available include:

  • ModelCheckpoint: periodically save the model.
  • EarlyStopping: stop training when it no longer improves the validation metrics.
  • TensorBoard: periodically write model logs that can be visualized in TensorBoard (more details in the "Visualization" section).
  • CSVLogger: stream loss and metric data to a CSV file.
  • and more
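Of the built-in callbacks listed above, CSVLogger is the only one not demonstrated later in this article. A minimal sketch on random stand-in data (the file name training_log.csv is an arbitrary choice):

```python
import os
import numpy as np
from tensorflow import keras

# Random stand-in data shaped like the flattened MNIST inputs.
x = np.random.rand(64, 784).astype('float32')
y = np.random.randint(0, 10, size=(64,))

model = keras.Sequential(
    [keras.layers.Dense(10, activation='softmax', input_shape=(784,))])
model.compile(optimizer='rmsprop',
              loss=keras.losses.SparseCategoricalCrossentropy())

# Stream the per-epoch loss values to a CSV file (one row per epoch).
csv_logger = keras.callbacks.CSVLogger('training_log.csv')
model.fit(x, y, epochs=2, batch_size=32, verbose=0, callbacks=[csv_logger])

print(os.path.exists('training_log.csv'))  # True
```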

6.1 Using the built-in callbacks

model = get_compiled_model()

callbacks = [
    keras.callbacks.EarlyStopping(
        # Stop training when `val_loss` is no longer improving
        monitor='val_loss',
        # "no longer improving" being defined as "no better than 1e-2 less"
        min_delta=1e-2,
        # "no longer improving" being further defined as "for at least 2 epochs"
        patience=2,
        verbose=1)
]
model.fit(x_train, y_train,
          epochs=20,
          batch_size=64,
          callbacks=callbacks,
          validation_split=0.2)
# Model checkpointing callback
model = get_compiled_model()
check_callback = keras.callbacks.ModelCheckpoint(
    filepath='mymodel_{epoch}.h5',
    save_best_only=True,
    monitor='val_loss',
    verbose=1
)

model.fit(x_train, y_train,
          epochs=3,
          batch_size=64,
          callbacks=[check_callback],
          validation_split=0.2)
# Adjust the learning rate dynamically with a schedule
initial_learning_rate = 0.1
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate,
    decay_steps=10000,
    decay_rate=0.96,
    staircase=True
)
optimizer = keras.optimizers.RMSprop(learning_rate=lr_schedule)
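ExponentialDecay follows a fixed schedule. For the "change the learning rate when training plateaus" use case mentioned earlier, the built-in ReduceLROnPlateau callback reacts to a monitored metric instead. A minimal sketch (the factor/patience values here are illustrative, not from the original text):

```python
from tensorflow import keras

# Halve the learning rate once val_loss has stagnated for 2 epochs.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
    monitor='val_loss',  # metric to watch
    factor=0.5,          # multiply the learning rate by this factor
    patience=2,          # epochs with no improvement before reducing
    min_lr=1e-5)         # lower bound on the learning rate

# model.fit(x_train, y_train, validation_split=0.2, callbacks=[reduce_lr])
```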
# Using TensorBoard
tensorboard_cbk = keras.callbacks.TensorBoard(log_dir='./full_path_to_your_logs')
model.fit(x_train, y_train,
          epochs=5,
          batch_size=64,
          callbacks=[tensorboard_cbk],
          validation_split=0.2)

6.2 Writing your own callbacks

class LossHistory(keras.callbacks.Callback):
    def on_train_begin(self, logs=None):
        self.losses = []

    def on_epoch_end(self, epoch, logs=None):
        self.losses.append(logs.get('loss'))
        print('\nloss:', self.losses[-1])
  
model = get_compiled_model()

callbacks = [
    LossHistory()
]
model.fit(x_train, y_train,
          epochs=3,
          batch_size=64,
          callbacks=callbacks,
          validation_split=0.2)

7. Writing your own training and evaluation loops

# Get the model.
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy()

# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# The hand-written training loop
for epoch in range(3):
    print('epoch: ', epoch)
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        # Open a GradientTape to record the forward pass
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)
        # Compute and apply the gradients outside the tape context
        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        if step % 200 == 0:
            print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
            print('Seen so far: %s samples' % ((step + 1) * 64))
# Training with a validation loop
# Get model
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)
model = keras.Model(inputs=inputs, outputs=outputs)

# Instantiate an optimizer to train the model.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Instantiate a loss function.
loss_fn = keras.losses.SparseCategoricalCrossentropy()

# Prepare the metrics.
train_acc_metric = keras.metrics.SparseCategoricalAccuracy() 
val_acc_metric = keras.metrics.SparseCategoricalAccuracy()

# Prepare the training dataset.
batch_size = 64
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_dataset = train_dataset.shuffle(buffer_size=1024).batch(batch_size)

# Prepare the validation dataset.
val_dataset = tf.data.Dataset.from_tensor_slices((x_val, y_val))
val_dataset = val_dataset.batch(64)


# Iterate over epochs.
for epoch in range(3):
    print('Start of epoch %d' % (epoch,))

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        # Update training metric.
        train_acc_metric(y_batch_train, logits)

        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
            print('Seen so far: %s samples' % ((step + 1) * 64))

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print('Training acc over epoch: %s' % (float(train_acc),))
    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        val_logits = model(x_batch_val)
        # Update val metrics
        val_acc_metric(y_batch_val, val_logits)
    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print('Validation acc: %s' % (float(val_acc),))
## Adding losses of your own; model.losses only reflects the most recent forward pass
class ActivityRegularizationLayer(layers.Layer):

    def call(self, inputs):
        self.add_loss(1e-2 * tf.reduce_sum(inputs))
        return inputs
 
inputs = keras.Input(shape=(784,), name='digits')
x = layers.Dense(64, activation='relu', name='dense_1')(inputs)
# Insert activity regularization as a layer
x = ActivityRegularizationLayer()(x)
x = layers.Dense(64, activation='relu', name='dense_2')(x)
outputs = layers.Dense(10, activation='softmax', name='predictions')(x)

model = keras.Model(inputs=inputs, outputs=outputs)
logits = model(x_train[:64])
print(model.losses)
logits = model(x_train[:64])
logits = model(x_train[64: 128])
logits = model(x_train[128: 192])
print(model.losses)
# Include the collected losses in the gradient computation
optimizer = keras.optimizers.SGD(learning_rate=1e-3)

for epoch in range(3):
    print('Start of epoch %d' % (epoch,))

    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train)
            loss_value = loss_fn(y_batch_train, logits)

            # Add extra losses created during this forward pass:
            loss_value += sum(model.losses)

        grads = tape.gradient(loss_value, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))

        # Log every 200 batches.
        if step % 200 == 0:
            print('Training loss (for one batch) at step %s: %s' % (step, float(loss_value)))
            print('Seen so far: %s samples' % ((step + 1) * 64))
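As a closing performance note not covered in the original text: the per-batch work in these hand-written loops runs eagerly, and can usually be sped up by compiling the train step with tf.function. A sketch on random stand-in data:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential(
    [layers.Dense(10, activation='softmax', input_shape=(784,))])
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
loss_fn = keras.losses.SparseCategoricalCrossentropy()

@tf.function  # traces the step once, then reuses the compiled graph
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x)
        loss_value = loss_fn(y, logits)
        # Extra losses registered via add_loss, if any
        loss_value += sum(model.losses)
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value

x = np.random.rand(64, 784).astype('float32')
y = np.random.randint(0, 10, size=(64,))
loss = train_step(x, y)
print(float(loss) > 0)  # True
```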

This concludes this article on implementing model training with Keras in TensorFlow 2.0. I hope it proves helpful.
