Implementing image similarity computation with a siamese network in Keras

Posted in Python on June 11, 2020

Without further ado, here's the code.

import keras
from keras.layers import Input,Dense,Conv2D
from keras.layers import MaxPooling2D,Flatten,Convolution2D
from keras.models import Model
import os
import numpy as np
from PIL import Image
from keras.optimizers import SGD
from scipy import misc # note: misc.imresize was removed in newer SciPy (1.3+); use an older SciPy or PIL's resize
root_path = os.getcwd()
train_names = ['bear','blackswan','bus','camel','car','cows','dance','dog','hike','hoc','kite','lucia','mallerd','pigs','soapbox','stro','surf','swing','train','walking']
test_names = ['boat','dance-jump','drift-turn','elephant','libby']
 
def load_data(seq_names, data_number, seq_len):
  # generate image pairs: label 1 = two frames from the same sequence,
  # label 0 = frames from two different sequences
  print('loading data.....')
  frame_num = 51
  train_data1 = []
  train_data2 = []
  train_lab = []
  count = 0
  while count < data_number:
    count = count + 1
    pos_neg = np.random.randint(0, 2)
    if pos_neg == 0:
      # negative pair: draw two different sequences
      seed1 = np.random.randint(0, seq_len)
      seed2 = np.random.randint(0, seq_len)
      while seed1 == seed2:
        seed1 = np.random.randint(0, seq_len)
        seed2 = np.random.randint(0, seq_len)
      frame1 = np.random.randint(1, frame_num)
      frame2 = np.random.randint(1, frame_num)
      path1 = os.path.join(root_path, 'data', 'simility_data', seq_names[seed1], str(frame1) + '.jpg')
      path2 = os.path.join(root_path, 'data', 'simility_data', seq_names[seed2], str(frame2) + '.jpg')
      image1 = np.array(misc.imresize(Image.open(path1), [224, 224]))
      image2 = np.array(misc.imresize(Image.open(path2), [224, 224]))
      train_data1.append(image1)
      train_data2.append(image2)
      train_lab.append(np.array(0))
    else:
      # positive pair: two different frames from the same sequence
      seed = np.random.randint(0, seq_len)
      frame1 = np.random.randint(1, frame_num)
      frame2 = np.random.randint(1, frame_num)
      path1 = os.path.join(root_path, 'data', 'simility_data', seq_names[seed], str(frame1) + '.jpg')
      path2 = os.path.join(root_path, 'data', 'simility_data', seq_names[seed], str(frame2) + '.jpg')
      image1 = np.array(misc.imresize(Image.open(path1), [224, 224]))
      image2 = np.array(misc.imresize(Image.open(path2), [224, 224]))
      train_data1.append(image1)
      train_data2.append(image2)
      train_lab.append(np.array(1))
  return np.array(train_data1), np.array(train_data2), np.array(train_lab)
 
def vgg_16_base(input_tensor):
  net = Conv2D(64,(3,3),activation='relu',padding='same')(input_tensor)
  net = Convolution2D(64,(3,3),activation='relu',padding='same')(net)
  net = MaxPooling2D((2,2),strides=(2,2))(net)
 
  net = Convolution2D(128,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(128,(3,3),activation='relu',padding='same')(net)
  net= MaxPooling2D((2,2),strides=(2,2))(net)
 
  net = Convolution2D(256,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(256,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(256,(3,3),activation='relu',padding='same')(net)
  net = MaxPooling2D((2,2),strides=(2,2))(net)
 
  net = Convolution2D(512,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(512,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(512,(3,3),activation='relu',padding='same')(net)
  net = MaxPooling2D((2,2),strides=(2,2))(net)
 
  net = Convolution2D(512,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(512,(3,3),activation='relu',padding='same')(net)
  net = Convolution2D(512,(3,3),activation='relu',padding='same')(net)
  net = MaxPooling2D((2,2),strides=(2,2))(net)
  net = Flatten()(net)
  return net
 
def siamese(vgg_path=None,siamese_path=None):
  input_tensor = Input(shape=(224,224,3))
  vgg_model = Model(input_tensor,vgg_16_base(input_tensor))
  if vgg_path:
    vgg_model.load_weights(vgg_path)
  input_im1 = Input(shape=(224,224,3))
  input_im2 = Input(shape=(224,224,3))
  out_im1 = vgg_model(input_im1)
  out_im2 = vgg_model(input_im2)
  diff = keras.layers.subtract([out_im1,out_im2])
  out = Dense(500,activation='relu')(diff)
  out = Dense(1,activation='sigmoid')(out)
  model = Model([input_im1,input_im2],out)
  if siamese_path:
    model.load_weights(siamese_path)
  return model
 
train = True
if train:
  # resumes from an existing checkpoint; pass siamese_path=None to train from scratch
  model = siamese(siamese_path='model/simility/vgg.h5')
  sgd = SGD(lr=1e-6,momentum=0.9,decay=1e-6,nesterov=True)
  model.compile(optimizer=sgd,loss='mse',metrics=['accuracy'])
  tensorboard = keras.callbacks.TensorBoard(histogram_freq=5,log_dir='log/simility',write_grads=True,write_images=True)
  ckpt = keras.callbacks.ModelCheckpoint(os.path.join(root_path,'model','simility','vgg.h5'),
                    verbose=1,period=5)
  train_data1,train_data2,train_lab = load_data(train_names,4000,20)
  model.fit([train_data1,train_data2],train_lab,callbacks=[tensorboard,ckpt],batch_size=64,epochs=50)
else:
  model = siamese(siamese_path='model/simility/vgg.h5')
  test_im1, test_im2, test_lab = load_data(test_names, 1000, 5)
  TP = 0
  for i in range(1000):
    im1 = np.expand_dims(test_im1[i], axis=0)
    im2 = np.expand_dims(test_im2[i], axis=0)
    lab = test_lab[i]
    pre = model.predict([im1, im2])
    # count a prediction as correct when the similarity score clears the
    # 0.9 threshold for similar pairs or stays below it for dissimilar pairs
    if pre > 0.9 and lab == 1:
      TP = TP + 1
    if pre < 0.9 and lab == 0:
      TP = TP + 1
  print(float(TP) / 1000)

The model takes two images as input; label 1 means similar, 0 means dissimilar.

The loss function here is plain mean squared error; it still needs to be swapped for the contrastive loss normally used with siamese networks.
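
As a starting point, here is a minimal sketch of a contrastive loss in Keras, written for this post's label convention (1 = similar, 0 = dissimilar). The margin value is an assumption, and the network would have to output a distance between the two embeddings rather than a sigmoid score for this loss to apply directly:

from keras import backend as K

def contrastive_loss(y_true, y_pred, margin=1.0):
  # y_pred is treated as the distance between the two embeddings:
  # similar pairs (y_true = 1) are pulled together, dissimilar pairs
  # (y_true = 0) are pushed at least `margin` apart
  positive = y_true * K.square(y_pred)
  negative = (1 - y_true) * K.square(K.maximum(margin - y_pred, 0.0))
  return K.mean(positive + negative)

# model.compile(optimizer=sgd, loss=contrastive_loss, metrics=['accuracy'])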

Summary:

1. On a few randomly generated sets of 1000 pairs, test accuracy is around 0.7 -- mediocre.

2. Problems: 1) Data loading doesn't use a generator, so the whole set sits in memory; the docs deserve a closer read (a minimal generator sketch follows below). 2) When a validation set is split off during training, training fails with an input-dimension error; I haven't found the cause yet. 3) The input shape apparently must be given as concrete numbers. I originally tried shape=input_tensor.get_shape(), which trains but can't save the model: Keras serializes layer configs to JSON, and the TensorFlow Dimension objects involved (Dimension(None)) are not JSON serializable, hence the "NOT JSON Serializable, Dimension(None)" error.
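
For problem 1), here is a minimal sketch of a generator-based loader using keras.utils.Sequence, reusing the load_data pairing logic above; the batch size is an assumption, not a tested setting:

from keras.utils import Sequence

class PairSequence(Sequence):
  # serves batches of image pairs on demand instead of holding the
  # whole training set in memory
  def __init__(self, seq_names, pairs_per_epoch, seq_len, batch_size=64):
    self.seq_names = seq_names
    self.pairs_per_epoch = pairs_per_epoch
    self.seq_len = seq_len
    self.batch_size = batch_size

  def __len__(self):
    return self.pairs_per_epoch // self.batch_size

  def __getitem__(self, idx):
    # sample one batch of pairs with the existing pairing logic
    d1, d2, lab = load_data(self.seq_names, self.batch_size, self.seq_len)
    return [d1, d2], lab

# model.fit_generator(PairSequence(train_names, 4000, 20), epochs=50)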

Bonus: a siamese question-answer matching network in Keras (RNN text matching), with data

Purpose:

This section walks through building a simple matching network, using Keras's Lambda layer. Before the network can be built, the data must be preprocessed so that each text becomes a sequence of word ids. Encoding a (question, answer) pair separately yields two vectors, which the matching layer compares to compute a similarity.

Network diagram:

(The original post includes a diagram of the matching network here; the image is not reproduced.)

Data preparation:

The data is based on Taobao customer-service dialogues found online (it will also be on my download page). The raw data is whole conversations; I kept only those labeled 1 and split each conversation into QA pairs, where q is the user turn and a is the agent turn. For each q, its true a gets label 1; a different a is then paired with the same q to form a negative sample with label 0.

Hyperparameters:

These are straightforward; the code speaks for itself.

# maximum number of (q, a) pairs to extract from the dialogues
max_pair = 30000
# keep the top-k most frequent words
MAX_FEATURES = 450
# fixed q/a sequence length
MAX_SENTENCE_LENGTH = 30
embedding_size = 100
batch_size = 600
# learning rate
lr = 0.01
HIDDEN_LAYER_SIZE = n_hidden_units = 256 # neurons in hidden layer

Details:

Import some libraries:

# -*- coding: utf-8 -*-
from keras.layers.core import Activation, Dense, Dropout, SpatialDropout1D
from keras.layers.embeddings import Embedding
from keras.layers.recurrent import LSTM
from keras.preprocessing import sequence
from sklearn.model_selection import train_test_split
import collections
import matplotlib.pyplot as plt
import nltk
import numpy as np
import os
import pandas as pd
from alime_data import convert_dialogue_to_pair
from parameter import MAX_SENTENCE_LENGTH,MAX_FEATURES,embedding_size,max_pair,batch_size,HIDDEN_LAYER_SIZE
DATA_DIR = "../data"
NUM_EPOCHS = 2
# Read training data and generate vocabulary
maxlen = 0
num_recs = 0

Data preparation: first count word frequencies, keep the top N most frequent words, and convert each sentence into a sequence of word ids. The valid ids sit flush right in each sequence, with zero padding filling the left; finally, split into training and test sets. (A quick padding check follows below.)
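
A quick check of that padding behavior (made-up ids): pad_sequences pads with zeros on the left by default (padding='pre'), which produces exactly the ids-flush-right layout described above.

from keras.preprocessing import sequence

print(sequence.pad_sequences([[5, 8, 3]], maxlen=6))
# [[0 0 0 5 8 3]]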

word_freqs = collections.Counter()
training_data = convert_dialogue_to_pair(max_pair)
num_recs = len(training_data)
 
for line in training_data.iterrows():
  label, sentence_q, sentence_a = line[1]['label'], line[1]['sentence_q'], line[1]['sentence_a']
  # tokenize both sides to build the shared vocabulary
  for sentence in (sentence_q, sentence_a):
    words = nltk.word_tokenize(sentence.lower())
    if len(words) > maxlen:
      maxlen = len(words)
    for word in words:
      word_freqs[word] += 1
 
# 0 is PAD, 1 is UNK; vocab_size reserves two extra slots for them
vocab_size = min(MAX_FEATURES, len(word_freqs)) + 2
word2index = {x[0]: i+2 for i, x in enumerate(word_freqs.most_common(MAX_FEATURES))}
word2index["PAD"] = 0
word2index["UNK"] = 1
index2word = {v:k for k, v in word2index.items()}
# convert sentences to sequences
X_q = np.empty((num_recs, ), dtype=list)
X_a = np.empty((num_recs, ), dtype=list)
y = np.zeros((num_recs, ))
i = 0
def chinese_split(x):
  return x.split(' ')
 
for line in training_data.iterrows():
  label, sentence_q, sentence_a = line[1]['label'], line[1]['sentence_q'], line[1]['sentence_a']
  # use the space-based splitter for the Chinese text rather than nltk
  words = chinese_split(sentence_q)
  seqs = []
  for word in words:
    if word in word2index.keys():
      seqs.append(word2index[word])
    else:
      seqs.append(word2index["UNK"])
  X_q[i] = seqs
  words = chinese_split(sentence_a)
  seqs = []
  for word in words:
    if word in word2index.keys():
      seqs.append(word2index[word])
    else:
      seqs.append(word2index["UNK"])
  X_a[i] = seqs
  y[i] = int(label)
  i += 1
# Pad the sequences (left padded with zeros)
X_a = sequence.pad_sequences(X_a, maxlen=MAX_SENTENCE_LENGTH)
X_q = sequence.pad_sequences(X_q, maxlen=MAX_SENTENCE_LENGTH)
X = []
for i in range(len(X_a)):
  concat = [X_q[i],X_a[i]]
  X.append(concat)
 
# Split input into training and test
Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.2,
                        random_state=42)
Xtrain_Q = [e[0] for e in Xtrain]
Xtrain_A = [e[1] for e in Xtrain]
Xtest_Q = [e[0] for e in Xtest]
Xtest_A = [e[1] for e in Xtest]

Finally, build the network. Two helpers are defined first: a sentence encoder, and a function for a Lambda layer that computes the element-wise absolute difference of two vectors (wrapping the op in a Lambda layer keeps it inside the Keras graph). Q and A each pass through the encoder to give two vectors, which feed the Lambda layer. This yields a layer of width 2*hidden_size (the BiLSTM output is bidirectional, hence the factor of 2), which goes through a Dense layer and a sigmoid activation to produce the classification probability. Note that calling encoder() twice builds two separate sets of weights, so the Q and A encoders are not strictly weight-shared; for a true siamese setup you would instantiate the layers once and apply them to both inputs.

from keras.layers.wrappers import Bidirectional
from keras.layers import Input,Lambda
from keras.models import Model
 
def encoder(inputs_seqs,rnn_hidden_size,dropout_rate):
  x_embed = Embedding(vocab_size, embedding_size, input_length=MAX_SENTENCE_LENGTH)(inputs_seqs)
  inputs_drop = SpatialDropout1D(0.2)(x_embed)
  encoded_Q = Bidirectional(
    LSTM(rnn_hidden_size, dropout=dropout_rate, recurrent_dropout=dropout_rate, name='RNN'))(inputs_drop)
  return encoded_Q
 
def absolute_difference(vecs):
  a, b = vecs
  return abs(a - b)
 
inputs_Q = Input(shape=(MAX_SENTENCE_LENGTH,), name="input")
inputs_A = Input(shape=(MAX_SENTENCE_LENGTH,), name="input_a")
encoded_Q = encoder(inputs_Q, HIDDEN_LAYER_SIZE, 0.1)
encoded_A = encoder(inputs_A, HIDDEN_LAYER_SIZE, 0.1)
 
similarity = Lambda(absolute_difference)([encoded_Q, encoded_A])
polar = Dense(1)(similarity)
prop = Activation("sigmoid")(polar)
model = Model(inputs=[inputs_Q,inputs_A], outputs=prop)
model.compile(loss="binary_crossentropy", optimizer="adam",
       metrics=["accuracy"])
training_history = model.fit([Xtrain_Q, Xtrain_A], ytrain, batch_size=batch_size,
               epochs=NUM_EPOCHS,
               validation_data=([Xtest_Q,Xtest_A], ytest))
# plot loss and accuracy
def plot(training_history):
  plt.subplot(211)
  plt.title("Accuracy")
  plt.plot(training_history.history["acc"], color="g", label="Train")
  plt.plot(training_history.history["val_acc"], color="b", label="Validation")
  plt.legend(loc="best")
 
  plt.subplot(212)
  plt.title("Loss")
  plt.plot(training_history.history["loss"], color="g", label="Train")
  plt.plot(training_history.history["val_loss"], color="b", label="Validation")
  plt.legend(loc="best")
  plt.tight_layout()
  plt.show()
 
# evaluate
score, acc = model.evaluate([Xtest_Q,Xtest_A], ytest, batch_size = batch_size)
print("Test score: %.3f, accuracy: %.3f" % (score, acc))
 
for i in range(25):
  idx = np.random.randint(len(Xtest_Q))
  xtest_Q = Xtest_Q[idx].reshape(1, MAX_SENTENCE_LENGTH)
  xtest_A = Xtest_A[idx].reshape(1, MAX_SENTENCE_LENGTH)
  ylabel = ytest[idx]
  ypred = model.predict([xtest_Q,xtest_A])[0][0]
  sent_Q = " ".join([index2word[x] for x in xtest_Q[0].tolist() if x != 0])
  sent_A = " ".join([index2word[x] for x in xtest_A[0].tolist() if x != 0])
  print("%.0f\t%d\t%s\t%s" % (ypred, ylabel, sent_Q,sent_A))

Finally, the data-processing functions, which live in a separate file (imported above as alime_data):

import pandas as pd
def get_pair(number, dialogue):
  pairs = []
  for conversation in dialogue:
    utterances = conversation[2:].strip('\n').split('\t')
    # walk the turns two at a time: even index = user (q), odd = agent (a);
    # the len - 1 bound guards against a trailing unpaired turn
    for i in range(0, len(utterances) - 1, 2):
      pairs.append([utterances[i], utterances[i + 1]])
      if len(pairs) >= number:
        return pairs
  return pairs
 
 
def convert_dialogue_to_pair(k):
  with open('dialogue_alibaba2.txt', encoding='utf-8', mode='r') as f:
    dialogue = f.readlines()
  dialogue = [p for p in dialogue if p.startswith('1')]
  print(len(dialogue))
  pairs = get_pair(k, dialogue)
  data = []
  # positive samples: each true (q, a) pair, label 1
  for p in pairs:
    data.append([p[0], p[1], 1])
  # negative samples: pair each q with the answer 8 pairs ahead, label 0
  for i, p in enumerate(pairs):
    data.append([p[0], pairs[(i + 8) % len(pairs)][1], 0])
  df = pd.DataFrame(data, columns=['sentence_q', 'sentence_a', 'label'])
  print(len(data))
  return df

That's everything in this post on computing image similarity with a siamese network in Keras. I hope it gives you a useful reference, and I hope you'll keep supporting 三水点靠木.
