Fixing input problems when using the Conv1D layer in Keras


Posted in Python on June 29, 2020

This post resolves the following two errors:

1.ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

2.ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …

1.ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

Offending code:

model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape)))

or

model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1:])))

Both fail because the model input has the wrong number of dimensions: with TensorFlow-backed Keras, Conv1D's input_shape is two-dimensional, (steps, channels), and does not include the sample dimension. The fix is to:

1. Reshape x_train (and x_test) to add a channels axis

x_train=x_train.reshape((x_train.shape[0],x_train.shape[1],1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1],1))

2. Change input_shape accordingly

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1],1)))

The original answer this fix is based on:

The input shape is wrong, it should be input_shape = (1, 3253) for Theano or (3253, 1) for TensorFlow. The input shape doesn't include the number of samples.

Then you need to reshape your data to include the channels axis:

x_train = x_train.reshape((500000, 1, 3253))

Or move the channels dimension to the end if you use TensorFlow. After these changes it should work.
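A minimal sketch of what that quoted advice looks like in NumPy (the 3253-feature shape comes from the quote; the array here is random placeholder data, scaled down to a few samples purely for illustration):

import numpy as np

# placeholder data standing in for the (samples, 3253) example in the quote
x_train = np.random.rand(8, 3253).astype("float32")

# Theano-style (channels_first): (samples, channels, steps)
x_theano = x_train.reshape((x_train.shape[0], 1, x_train.shape[1]))

# TensorFlow-style (channels_last): move the channels axis to the end
x_tf = np.moveaxis(x_theano, 1, -1)   # shape becomes (8, 3253, 1)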

2.ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …

This error occurs because the dimensions of the y labels no longer match x_train and x_test: since x_train and x_test were both reshaped, y has to be reshaped as well.

Solution:

Reshape the y labels to match x_train:

t_train = t_train.values.reshape((t_train.shape[0], 1))  # .values because train_test_split returns a pandas Series here
t_test = t_test.values.reshape((t_test.shape[0], 1))
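Below is a minimal end-to-end sketch that ties the two fixes together (the data is random placeholder data with 100 samples and 10 features, purely for illustration): reshape x to (samples, steps, 1), reshape y to (samples, 1), and declare input_shape=(steps, 1).

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

# placeholder data: 100 samples, 10 features each
x = np.random.rand(100, 10).astype("float32")
y = np.random.rand(100).astype("float32")

# fix 1: add a channels axis so Conv1D sees ndim=3 input
x = x.reshape((x.shape[0], x.shape[1], 1))    # (100, 10, 1)
# fix 2: reshape the target to match the Dense(1) output
y = y.reshape((y.shape[0], 1))                # (100, 1)

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x.shape[1], 1)))
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
model.fit(x, y, epochs=1, batch_size=32, verbose=0)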

Appendix:

The complete corrected code:

import warnings
warnings.filterwarnings("ignore")
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import pandas as pd
import numpy as np
import matplotlib
# matplotlib.use('Agg')
import matplotlib.pyplot as plt

from sklearn.model_selection import train_test_split
from sklearn import preprocessing

from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation, Flatten, Conv1D
from keras.callbacks import LearningRateScheduler, EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers
from keras.regularizers import l2
from keras.models import load_model
df_train = pd.read_csv('./input/train_V2.csv')
df_test = pd.read_csv('./input/test_V2.csv')
df_train.drop(df_train.index[[2744604]], inplace=True)  # drop the row containing a NaN value
df_train["distance"] = df_train["rideDistance"]+df_train["walkDistance"]+df_train["swimDistance"]
# df_train["healthpack"] = df_train["boosts"] + df_train["heals"]
df_train["skill"] = df_train["headshotKills"]+df_train["roadKills"]
df_test["distance"] = df_test["rideDistance"]+df_test["walkDistance"]+df_test["swimDistance"]
# df_test["healthpack"] = df_test["boosts"] + df_test["heals"]
df_test["skill"] = df_test["headshotKills"]+df_test["roadKills"]

df_train_size = df_train.groupby(['matchId','groupId']).size().reset_index(name='group_size')
df_test_size = df_test.groupby(['matchId','groupId']).size().reset_index(name='group_size')

df_train_mean = df_train.groupby(['matchId','groupId']).mean().reset_index()
df_test_mean = df_test.groupby(['matchId','groupId']).mean().reset_index()

df_train = pd.merge(df_train, df_train_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
del df_train_mean
del df_test_mean

df_train = pd.merge(df_train, df_train_size, how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_size, how='left', on=['matchId', 'groupId'])
del df_train_size
del df_test_size

target = 'winPlacePerc'
train_columns = list(df_test.columns)
""" remove some columns """
train_columns.remove("Id")
train_columns.remove("matchId")
train_columns.remove("groupId")
train_columns_new = []
# keep only the aggregated columns (those containing '_': the *_mean features and group_size)
for name in train_columns:
    if '_' in name:
        train_columns_new.append(name)
train_columns = train_columns_new
# print(train_columns)

X = df_train[train_columns]
Y = df_test[train_columns]  # Y holds the test-set features used for the final prediction
T = df_train[target]

del df_train
x_train, x_test, t_train, t_test = train_test_split(X, T, test_size = 0.2, random_state = 1234)

# scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(x_train)
scaler = preprocessing.QuantileTransformer().fit(x_train)

x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
Y = scaler.transform(Y)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
Y = Y.reshape((Y.shape[0], Y.shape[1], 1))  # the test-set features need the channels axis too before predict()
t_train = t_train.values.reshape((t_train.shape[0], 1))
t_test = t_test.values.reshape((t_test.shape[0], 1))

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1],1)))
model.add(BatchNormalization())
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(16, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(16, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(64, kernel_size=3, strides=1, padding='same'))
model.add(Activation('tanh'))
model.add(Flatten())
model.add(Dropout(0.5))
# model.add(Dropout(0.25))
model.add(Dense(512, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(128, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

optimizer = optimizers.Adam(lr=0.01, epsilon=1e-8, decay=1e-4)

model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])
model.summary()

early_stopping = EarlyStopping(monitor='val_mean_absolute_error', mode='min', patience=4, verbose=1)
# model_checkpoint = ModelCheckpoint(filepath='best_model.h5', monitor='val_mean_absolute_error', mode = 'min', save_best_only=True, verbose=1)
# reduce_lr = ReduceLROnPlateau(monitor='val_mean_absolute_error', mode = 'min',factor=0.5, patience=3, min_lr=0.0001, verbose=1)
history = model.fit(x_train, t_train,
                    validation_data=(x_test, t_test),
                    epochs=30,
                    batch_size=32768,
                    callbacks=[early_stopping],
                    verbose=1)
pred = model.predict(Y)
pred = pred.ravel()

Supplementary knowledge: Keras Conv1D parameters, inputs and outputs explained

Conv1D(filters, kernel_size, strides=1, padding='valid', dilation_rate=1, activation=None, use_bias=True, ...)

filters: the number of convolution kernels (i.e. the dimensionality of the output)

kernel_size: an integer, or a list/tuple of a single integer, giving the length of the 1D (spatial or temporal) convolution window

strides: an integer, or a list/tuple of a single integer, giving the stride of the convolution. Any strides value other than 1 is incompatible with any dilation_rate other than 1.

padding: the zero-padding strategy, one of "valid", "same" or "causal". "causal" produces causal (dilated) convolutions, i.e. output[t] does not depend on input[t+1:], which is useful when modelling temporal signals whose ordering must not be violated. "valid" performs only valid convolutions, so no padding is added at the borders. "same" keeps the convolution result at the borders, which usually makes the output length the same as the input length.

activation: the activation function, given as the name of a predefined activation or an element-wise function. If not specified, no activation is applied (i.e. the linear activation a(x) = x).

model.add(Conv1D(filters=nn_params["input_filters"],
      kernel_size=nn_params["filter_length"],
      strides=1,
      padding='valid',
      activation=nn_params["activation"],
      kernel_regularizer=l2(nn_params["reg"])))

Example: the input shape is (None, 1000, 4)

First dimension: None (the batch size)

Second dimension:

output_length = int((input_length - nn_params["filter_length"] + 1))

In this case:

output_length = (input_length + 2*padding - filter_length + 1) / strides = (1000 + 2*0 - 32 + 1) / 1 = 969

Third dimension: filters
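A small sketch that checks this calculation with model.summary() (the nn_params values here are assumed purely for illustration: 16 filters and a window length of 32, matching the numbers plugged into the formula above):

from keras.models import Sequential
from keras.layers import Conv1D

# assumed hyper-parameters, chosen to match the example calculation above
nn_params = {"input_filters": 16, "filter_length": 32, "activation": "relu"}

model = Sequential()
model.add(Conv1D(filters=nn_params["input_filters"],
                 kernel_size=nn_params["filter_length"],
                 strides=1,
                 padding='valid',
                 activation=nn_params["activation"],
                 input_shape=(1000, 4)))
model.summary()
# the Conv1D output shape should be (None, 969, 16): 1000 - 32 + 1 = 969 steps, 16 filters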

That is everything in this post on fixing input problems with Keras's Conv1D layer. I hope it gives you a useful reference, and I hope you will continue to support 三水点靠木.
