Fixing input problems when using the Keras Conv1D layer


Posted in Python on June 29, 2020

This post resolves the following errors:

1.ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

2.ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …

1.ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

The offending code:

model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape)))

or

model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1:])))

Both fail because the model input has the wrong number of dimensions: in TensorFlow-backed Keras, Conv1D's input_shape is two-dimensional, (steps, channels), and does not include the sample dimension. Two changes are needed:

1. Reshape x_train (and x_test) to add a trailing channels axis

x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))

2. Change input_shape to (steps, channels)

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1],1)))
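
A quick way to verify the fix is to build the model against dummy data and inspect the output shape. Below is a minimal sketch; the 100 x 20 feature matrix is made up for illustration:

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D

# Hypothetical 2-D feature matrix: (samples, features)
x_train = np.random.rand(100, 20)

# Add the channels axis: each sample becomes (steps, channels) = (20, 1)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same',
                 input_shape=(x_train.shape[1], 1)))
print(model.output_shape)  # (None, 20, 8): batch, steps, filters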

The expert's original answer:

The input shape is wrong, it should be input_shape = (1, 3253) for Theano or (3253, 1) for TensorFlow. The input shape doesn't include the number of samples.

Then you need to reshape your data to include the channels axis:

x_train = x_train.reshape((500000, 1, 3253))

Or move the channels dimension to the end if you use TensorFlow. After these changes it should work.
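
To make the backend convention concrete, here is a minimal sketch (array sizes are illustrative) that places the channels axis according to K.image_data_format():

from keras import backend as K
import numpy as np

x = np.random.rand(500, 3253)  # (samples, steps); sizes are made up

if K.image_data_format() == 'channels_first':    # Theano-style ordering
    x = x.reshape((x.shape[0], 1, x.shape[1]))   # (samples, 1, 3253)
    input_shape = (1, x.shape[2])
else:                                            # TensorFlow default
    x = x.reshape((x.shape[0], x.shape[1], 1))   # (samples, 3253, 1)
    input_shape = (x.shape[1], 1)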

2.ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …

This error appears because the dimensionality of the labels (ylabel) no longer matches x_train/x_test: since x_train and x_test were reshaped, y has to be reshaped as well.

Solution:

Reshape the labels to match x_train:

t_train = t_train.values.reshape((t_train.shape[0], 1))  # .values: pandas Series -> ndarray
t_test = t_test.values.reshape((t_test.shape[0], 1))
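
After both reshapes, a quick sanity check is that the first dimensions line up:

print(x_train.shape)  # (N, steps, 1)
print(t_train.shape)  # (N, 1)
assert x_train.shape[0] == t_train.shape[0]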

Appendix:

The corrected code:

import warnings
warnings.filterwarnings("ignore")
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import pandas as pd
import numpy as np
import matplotlib
# matplotlib.use('Agg')
import matplotlib.pyplot as plt

from sklearn.model_selection import train_test_split
from sklearn import preprocessing

from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation, Flatten, Conv1D
from keras.callbacks import LearningRateScheduler, EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers
from keras.regularizers import l2
from keras.models import load_model
df_train = pd.read_csv('./input/train_V2.csv')
df_test = pd.read_csv('./input/test_V2.csv')
df_train.drop(df_train.index[[2744604]], inplace=True)  # drop the row containing a NaN value
df_train["distance"] = df_train["rideDistance"]+df_train["walkDistance"]+df_train["swimDistance"]
# df_train["healthpack"] = df_train["boosts"] + df_train["heals"]
df_train["skill"] = df_train["headshotKills"]+df_train["roadKills"]
df_test["distance"] = df_test["rideDistance"]+df_test["walkDistance"]+df_test["swimDistance"]
# df_test["healthpack"] = df_test["boosts"] + df_test["heals"]
df_test["skill"] = df_test["headshotKills"]+df_test["roadKills"]

df_train_size = df_train.groupby(['matchId','groupId']).size().reset_index(name='group_size')
df_test_size = df_test.groupby(['matchId','groupId']).size().reset_index(name='group_size')

df_train_mean = df_train.groupby(['matchId','groupId']).mean().reset_index()
df_test_mean = df_test.groupby(['matchId','groupId']).mean().reset_index()

df_train = pd.merge(df_train, df_train_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
del df_train_mean
del df_test_mean

df_train = pd.merge(df_train, df_train_size, how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_size, how='left', on=['matchId', 'groupId'])
del df_train_size
del df_test_size

target = 'winPlacePerc'
train_columns = list(df_test.columns)
""" remove some columns """
train_columns.remove("Id")
train_columns.remove("matchId")
train_columns.remove("groupId")
train_columns_new = []
for name in train_columns:
    if '_' in name:  # keep only the aggregated/engineered columns
        train_columns_new.append(name)
train_columns = train_columns_new
# print(train_columns)

X = df_train[train_columns]  # training features
Y = df_test[train_columns]   # features of the held-out test file (used for prediction below)
T = df_train[target]         # training target

del df_train
x_train, x_test, t_train, t_test = train_test_split(X, T, test_size = 0.2, random_state = 1234)

# scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(x_train)
scaler = preprocessing.QuantileTransformer().fit(x_train)

x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
Y = scaler.transform(Y)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1], 1))
Y = Y.reshape((Y.shape[0], Y.shape[1], 1))  # the prediction set needs the channels axis too
t_train = t_train.values.reshape((t_train.shape[0], 1))  # .values: pandas Series -> ndarray
t_test = t_test.values.reshape((t_test.shape[0], 1))

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1],1)))
model.add(BatchNormalization())
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(16, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(16, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(64, kernel_size=3, strides=1, padding='same'))
model.add(Activation('tanh'))
model.add(Flatten())
model.add(Dropout(0.5))
# model.add(Dropout(0.25))
model.add(Dense(512, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(128, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

optimizer = optimizers.Adam(lr=0.01, epsilon=1e-8, decay=1e-4)

model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])
model.summary()

early_stopping = EarlyStopping(monitor='val_mean_absolute_error', mode='min', patience=4, verbose=1)
# model_checkpoint = ModelCheckpoint(filepath='best_model.h5', monitor='val_mean_absolute_error', mode = 'min', save_best_only=True, verbose=1)
# reduce_lr = ReduceLROnPlateau(monitor='val_mean_absolute_error', mode = 'min',factor=0.5, patience=3, min_lr=0.0001, verbose=1)
history = model.fit(x_train, t_train,
                    validation_data=(x_test, t_test),
                    epochs=30,
                    batch_size=32768,
                    callbacks=[early_stopping],
                    verbose=1)
pred = model.predict(Y)
pred = pred.ravel()

Additional notes: Keras Conv1D parameters and input/output shapes explained

Conv1D(filters, kernel_size, strides=1, padding='valid', dilation_rate=1, activation=None, use_bias=True, kernel_initializer='glorot_uniform', kernel_regularizer=None)

filters: the number of convolution kernels (i.e., the dimensionality of the output space)

kernel_size: an integer, or a list/tuple of a single integer, giving the length of the 1-D convolution window

strides: an integer, or a list/tuple of a single integer, giving the stride of the convolution. Any strides value other than 1 is incompatible with any dilation_rate other than 1.

padding: the zero-padding strategy, one of "valid", "same", or "causal". "causal" produces causal (dilated) convolutions, i.e. output[t] does not depend on input[t+1:], which is useful when modeling temporal signals whose event order must not be violated. "valid" performs only valid convolution and does not process the borders. "same" keeps the convolution results at the borders, which usually makes the output shape equal the input shape. (The three modes are compared in the sketch after the example below.)

activation: the activation function, given as the name of a predefined activation or an element-wise function. If unspecified, no activation is applied (i.e., the linear activation a(x) = x).

model.add(Conv1D(filters=nn_params["input_filters"],
      kernel_size=nn_params["filter_length"],
      strides=1,
      padding='valid',
      activation=nn_params["activation"],
      kernel_regularizer=l2(nn_params["reg"])))
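
To see how the three padding modes differ, this sketch (with made-up sizes: 16 filters, kernel_size=32, input (1000, 4)) prints the resulting output shapes:

from keras.models import Sequential
from keras.layers import Conv1D

for padding in ('valid', 'same', 'causal'):
    m = Sequential()
    m.add(Conv1D(16, kernel_size=32, strides=1, padding=padding,
                 input_shape=(1000, 4)))
    print(padding, m.output_shape)
# valid  (None, 969, 16)  -- no zero padding, the step count shrinks
# same   (None, 1000, 16) -- output length equals input length
# causal (None, 1000, 16) -- left-padded so output[t] ignores input[t+1:]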

Example: the input shape is (None, 1000, 4)

First dimension: None (the batch size; the number of samples is not part of input_shape)

Second dimension (with padding='valid' and strides=1):

output_length = int((input_length - nn_params["filter_length"] + 1))

In this case:

output_length = (1000 + 2*padding - kernel_size + 1) / strides = (1000 + 2*0 - 32 + 1)/1 = 969

Third dimension: filters
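
The arithmetic can also be checked directly in plain Python (kernel_size=32 is assumed, matching the example above):

input_length, kernel_size, padding, strides = 1000, 32, 0, 1
output_length = (input_length + 2 * padding - kernel_size + 1) // strides
print(output_length)  # 969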

That is everything in this post on fixing Keras Conv1D input problems; I hope it serves as a useful reference.
