TensorFlow variable-length sequence storage example


Posted in Python on January 20, 2020

The problem

The problem is this: I want to store an array in a TFRecord file and then read it back.

a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196],
              [0, 38, 79, 157],
              [0, 49, 89, 147, 177],
              [0, 32, 73, 145]])

I had already stored images in TFRecords, so surely this would be trivial. So I went ahead:

import tensorflow as tf
import numpy as np

def _int64_feature(value):
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

# Write an array to TFRecord.
# a is an array which contains lists of variable length.
a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196],
              [0, 38, 79, 157],
              [0, 49, 89, 147, 177],
              [0, 32, 73, 145]])

writer = tf.python_io.TFRecordWriter('file')

for i in range(a.shape[0]):
    feature = {'i': _int64_feature(i),
               'data': _int64_feature(a[i])}

    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))

    # Serialize to string and write on the file
    writer.write(example.SerializeToString())

writer.close()


# Use Dataset API to read the TFRecord file.
filenames = ["file"]
dataset = tf.data.TFRecordDataset(filenames)

def _parse_function(example_proto):
    keys_to_features = {'i': tf.FixedLenFeature([], tf.int64),
                        'data': tf.FixedLenFeature([], tf.int64)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return parsed_features['i'], parsed_features['data']

dataset = dataset.map(_parse_function)
dataset = dataset.shuffle(buffer_size=1)
dataset = dataset.repeat()
dataset = dataset.batch(1)
iterator = dataset.make_one_shot_iterator()
i, data = iterator.get_next()
with tf.Session() as sess:
    print(sess.run([i, data]))
    print(sess.run([i, data]))
    print(sess.run([i, data]))

This raised a strange error: Name: <unknown>, Key: data, Index: 0. Number of int64 values != expected. Values size: 6 but output shape: []. In other words, the record holds 6 values but the parser expects output shape [] (a single scalar). Where did I go wrong? I first commented out the reading code to check whether the TFRecord had been written at all; it had, so the problem was on the reading side. I suspected the varying length of each record, but then again the images I store all have different sizes and I can read those just fine. While puzzling over this I noticed that for images I call img.tobytes(): the whole array is converted into one bytes object and stored as a bytes feature. TensorFlow treats that bytes object as a single element, so even though every image has a different size, after tobytes() each one is still just one element; on reading, it gets decoded back into an image according to its (height, width, channel).
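
To make that pattern concrete, here is a minimal sketch of the image-as-bytes approach described above. It is illustrative only: the 28x28x3 dummy image and the feature names are assumptions of mine, not code from this post.

# Minimal sketch of the image-as-bytes pattern (assumed example).
import numpy as np
import tensorflow as tf

img = np.zeros((28, 28, 3), dtype=np.uint8)   # hypothetical image
raw = img.tobytes()                           # one bytes value, however large the image is
feature = {'img': tf.train.Feature(bytes_list=tf.train.BytesList(value=[raw])),
           'shape': tf.train.Feature(int64_list=tf.train.Int64List(value=list(img.shape)))}
example = tf.train.Example(features=tf.train.Features(feature=feature))

# On the reading side the single bytes value is decoded and reshaped back:
# img = tf.decode_raw(parsed_features['img'], tf.uint8)
# img = tf.reshape(img, (height, width, channel))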

Let me try storing the data as bytes instead of int64, then. Another round of hacking:

Converting the data to bytes

# -*- coding: utf-8 -*-

import tensorflow as tf
import numpy as np

def _byte_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

# Write an array to TFRecord.
# a is an array which contains lists of variable length.
a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196],
              [0, 38, 79, 157],
              [0, 49, 89, 147, 177],
              [0, 32, 73, 145]])

writer = tf.python_io.TFRecordWriter('file')

for i in range(a.shape[0]):  # i = 0 ~ 4
    feature = {'len': _int64_feature(len(a[i])),  # replace the meaningless i with len so the data can be restored later
               'data': _byte_feature(np.array(a[i]).tobytes())}  # a[i] is a list here (we will see why later), so wrap it in np.array before calling tobytes()

    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))

    # Serialize to string and write on the file
    writer.write(example.SerializeToString())

writer.close()

# Use Dataset API to read the TFRecord file.
filenames = ["file"]
dataset = tf.data.TFRecordDataset(filenames)

def _parse_function(example_proto):
    keys_to_features = {'len': tf.FixedLenFeature([], tf.int64),
                        'data': tf.FixedLenFeature([], tf.string)}  # changed to string
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return parsed_features['len'], parsed_features['data']

dataset = dataset.map(_parse_function)
dataset = dataset.shuffle(buffer_size=1)
dataset = dataset.repeat()
dataset = dataset.batch(1)
iterator = dataset.make_one_shot_iterator()
i, data = iterator.get_next()
with tf.Session() as sess:
    print(sess.run([i, data]))
    print(sess.run([i, data]))
    print(sess.run([i, data]))


"""
[array([6], dtype=int64), array([b'\x00\x00\x00\x006\x00\x00\x00[\x00\x00\x00\x99\x00\x00\x00\xb1\x00\x00\x00\x01\x00\x00\x00'],
 dtype=object)]
[array([5], dtype=int64), array([b'\x00\x00\x00\x002\x00\x00\x00Y\x00\x00\x00\x93\x00\x00\x00\xc4\x00\x00\x00'],
 dtype=object)]
[array([4], dtype=int64), array([b'\x00\x00\x00\x00&\x00\x00\x00O\x00\x00\x00\x9d\x00\x00\x00'],
 dtype=object)]
"""

Decoding the bytes data

The output came out as hoped, but how do I decode these bytes?

Method 1: parse them ourselves

a, b = sess.run([i, data])
c = np.frombuffer(b[0], dtype=np.int, count=a[0])
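
Filling that fragment out, a fuller version of method 1 might look like the sketch below; it reuses the i and data tensors from the pipeline above. The one thing to get right is that the dtype passed to np.frombuffer must match the dtype of the array that was serialized with tobytes() (assumed to be int64 here).

# Sketch of method 1 in full (assumes the arrays were serialized as int64).
with tf.Session() as sess:
    for _ in range(3):
        length, raw = sess.run([i, data])   # raw is a batch of bytes objects
        c = np.frombuffer(raw[0], dtype=np.int64, count=length[0])
        print(length[0], c)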

Method 2: use TensorFlow's decoding op

def _parse_function(example_proto):
    keys_to_features = {'len': tf.FixedLenFeature([], tf.int64),
                        'data': tf.FixedLenFeature([], tf.string)}  # changed to string
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    dat = tf.decode_raw(parsed_features['data'], tf.int64)  # decode_raw turns the raw bytes back into numbers; we stored int64, so we decode as int64
    return parsed_features['len'], dat
"""
[array([6]), array([[ 0, 54, 91, 153, 177, 1]])]
[array([5]), array([[ 0, 50, 89, 147, 196]])]
[array([4]), array([[ 0, 38, 79, 157]])]
"""

You can see the result is a 2D array. That is because the data comes out in batches; even with batch_size=1 the output is still wrapped in an extra dimension. Out of curiosity I tweaked it a bit more:

def _parse_function(example_proto):
    keys_to_features = {'len': tf.FixedLenFeature([1], tf.int64),
                        'data': tf.FixedLenFeature([1], tf.string)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    dat = tf.decode_raw(parsed_features['data'], tf.int64)
    return parsed_features['len'], dat

"""
[array([[6]]), array([[[ 0, 54, 91, 153, 177, 1]]])]
[array([[5]]), array([[[ 0, 50, 89, 147, 196]]])]
[array([[4]]), array([[[ 0, 38, 79, 157]]])]
"""

Well, now it is 3D. Let's make it throw an error:

def _parse_function(example_proto):
    keys_to_features = {'len': tf.FixedLenFeature([2], tf.int64),  # 1 changed to 2
                        'data': tf.FixedLenFeature([1], tf.string)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return parsed_features['len'], parsed_features['data']

"""
InvalidArgumentError: Key: len. Can't parse serialized Example.
 [[Node: ParseSingleExample/ParseSingleExample = ParseSingleExample[Tdense=[DT_STRING, DT_INT64], dense_keys=["data", "len"], dense_shapes=[[1], [2]], num_sparse=0, sparse_keys=[], sparse_types=[]](arg0, ParseSingleExample/Const, ParseSingleExample/Const_1)]]
 [[Node: IteratorGetNext_22 = IteratorGetNext[output_shapes=[[?,2], [?,1]], output_types=[DT_INT64, DT_STRING], _device="/job:localhost/replica:0/task:0/device:CPU:0"](OneShotIterator_22)]]
"""

You can see dense_keys=["data", "len"], dense_shapes=[[1], [2]] in the error. tf.FixedLenFeature reads data of a fixed shape: [] means the feature is a single scalar, [1] means a vector of one value, and [2] means exactly two values, so parsing 'len', which only holds one value, fails when declared with shape [2]. That also explains the very first error: with shape [] TensorFlow expected a single int64 value for 'data' but found 6.
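
As a sanity check of that reading, here is an assumed variant, not something this post actually runs: if every record stored exactly six values, declaring the shape explicitly would let FixedLenFeature parse them directly, without converting anything to bytes.

# Assumed variant: only works if every record's 'data' holds exactly 6 int64 values.
def _parse_fixed(example_proto):
    keys_to_features = {'len': tf.FixedLenFeature([], tf.int64),    # a single scalar
                        'data': tf.FixedLenFeature([6], tf.int64)}  # exactly six values -> shape [6]
    parsed = tf.parse_single_example(example_proto, keys_to_features)
    return parsed['len'], parsed['data']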

Storing variable-length arrays in TensorFlow

So the data can be read now. But having to convert a custom variable-length array to bytes and decode it by hand every time is a hassle, so TensorFlow provides a parser for variable-length features, tf.VarLenFeature, and the bytes detour becomes unnecessary. Another round of hacking:

import tensorflow as tf
import numpy as np

def _int64_feature(value):
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

# Write an array to TFRecord.
# a is an array which contains lists of variable length.
a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196],
              [0, 38, 79, 157],
              [0, 49, 89, 147, 177],
              [0, 32, 73, 145]])

writer = tf.python_io.TFRecordWriter('file')

for i in range(a.shape[0]):  # i = 0 ~ 4
    feature = {'i': _int64_feature(i),
               'data': _int64_feature(a[i])}

    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))

    # Serialize to string and write on the file
    writer.write(example.SerializeToString())

writer.close()


# Use Dataset API to read the TFRecord file.
filenames = ["file"]
dataset = tf.data.TFRecordDataset(filenames)

def _parse_function(example_proto):
    keys_to_features = {'i': tf.FixedLenFeature([], tf.int64),
                        'data': tf.VarLenFeature(tf.int64)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return parsed_features['i'], tf.sparse_tensor_to_dense(parsed_features['data'])

dataset = dataset.map(_parse_function)
dataset = dataset.shuffle(buffer_size=1)
dataset = dataset.repeat()
dataset = dataset.batch(1)
iterator = dataset.make_one_shot_iterator()
i, data = iterator.get_next()
with tf.Session() as sess:
    print(sess.run([i, data]))
    print(sess.run([i, data]))
    print(sess.run([i, data]))

"""
[array([0], dtype=int64), array([[ 0, 54, 91, 153, 177, 1]], dtype=int64)]
[array([1], dtype=int64), array([[ 0, 50, 89, 147, 196]], dtype=int64)]
[array([2], dtype=int64), array([[ 0, 38, 79, 157]], dtype=int64)]
"""

Batch output

The output is a plain dense array again. Let's push it a bit further:

dataset = dataset.batch(2)
"""
Cannot batch tensors with different shapes in component 1. First element had shape [6] and element 1 had shape [5].
"""

This is because all tensors in a batch must have the same shape; the first element has length 6 and the second has length 5, hence the error. The fix is to pad them to the same length, but before doing that let's test something else.

a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196],
              [0, 38, 79, 157],
              [0, 49, 89, 147, 177],
              [0, 32, 73, 145]])


for i in range(a.shape[0]):
    print(type(a[i]))

"""
<class 'list'>
<class 'list'>
<class 'list'>
<class 'list'>
<class 'list'>
"""

Notice that when the rows have different lengths, each element of the array is a list (at first I expected some kind of object). Now pad the rows:

a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196, 0],
              [0, 38, 79, 157, 0, 0],
              [0, 49, 89, 147, 177, 0],
              [0, 32, 73, 145, 0, 0]])


for i in range(a.shape[0]):
    print(type(a[i]))

"""
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
<class 'numpy.ndarray'>
"""

Now each element is a numpy.ndarray. Why does this matter?

def _int64_feature(value):
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

TensorFlow accepts a list or a numpy.ndarray directly, but a list that contains an ndarray, i.e. [numpy.ndarray], raises an error. With the variable-length array each row came back as a list, so nothing went wrong. Let's see what happens after padding:

a = np.array([[0, 54, 91, 153, 177, 1],
              [0, 50, 89, 147, 196, 0],
              [0, 38, 79, 157, 0, 0],
              [0, 49, 89, 147, 177, 0],
              [0, 32, 73, 145, 0, 0]])

"""
TypeError: only size-1 arrays can be converted to Python scalars
"""

This is because a[i] is now a numpy.ndarray rather than a list, and _int64_feature sees that an ndarray is not a list and wraps it as [numpy.ndarray], which fails. One fix is to convert the ndarray to a list:

for i in range(a.shape[0]):  # i = 0 ~ 4
    feature = {'i': _int64_feature(i),
               'data': _int64_feature(a[i].tolist())}

With the rows padded we can now increase the batch size:

dataset = dataset.batch(2)

"""
[array([0, 2], dtype=int64), array([[ 0, 54, 91, 153, 177, 1],
 [ 0, 38, 79, 157, 0, 0]], dtype=int64)]
[array([1, 3], dtype=int64), array([[ 0, 50, 89, 147, 196, 0],
 [ 0, 49, 89, 147, 177, 0]], dtype=int64)]
[array([4, 0], dtype=int64), array([[ 0, 32, 73, 145, 0, 0],
 [ 0, 54, 91, 153, 177, 1]], dtype=int64)]
"""

Of course TensorFlow does not make us pad by hand; it already provides a padding batch function, padded_batch:

# -*- coding: utf-8 -*-

import tensorflow as tf

def _int64_feature(value):
    if not isinstance(value, list):
        value = [value]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

a = [[0, 54, 91, 153, 177, 1],
     [0, 50, 89, 147, 196],
     [0, 38, 79, 157],
     [0, 49, 89, 147, 177],
     [0, 32, 73, 145]]

writer = tf.python_io.TFRecordWriter('file')

for v in a:
    feature = {'data': _int64_feature(v)}

    # Create an example protocol buffer
    example = tf.train.Example(features=tf.train.Features(feature=feature))

    # Serialize to string and write on the file
    writer.write(example.SerializeToString())

writer.close()


# Use Dataset API to read the TFRecord file.
filenames = ["file"]
dataset = tf.data.TFRecordDataset(filenames)

def _parse_function(example_proto):
    keys_to_features = {'data': tf.VarLenFeature(tf.int64)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    return tf.sparse_tensor_to_dense(parsed_features['data'])

dataset = dataset.map(_parse_function)
dataset = dataset.shuffle(buffer_size=1)
dataset = dataset.repeat()
dataset = dataset.padded_batch(2, padded_shapes=([None]))
iterator = dataset.make_one_shot_iterator()
data = iterator.get_next()
with tf.Session() as sess:
    print(sess.run([data]))
    print(sess.run([data]))
    print(sess.run([data]))


"""
[array([[ 0, 54, 91, 153, 177, 1],
 [ 0, 50, 89, 147, 196, 0]])]
[array([[ 0, 38, 79, 157, 0],
 [ 0, 49, 89, 147, 177]])]
[array([[ 0, 32, 73, 145, 0, 0],
 [ 0, 54, 91, 153, 177, 1]])]
"""

You can see the batches are indeed padded automatically.

Batching images

Let's test this directly with image data:

# -*- coding: utf-8 -*-

import tensorflow as tf
import matplotlib.pyplot as plt

def _byte_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

files = tf.gfile.Glob('*.jpeg')
writer = tf.python_io.TFRecordWriter('file')
for file in files:
    with tf.gfile.FastGFile(file, 'rb') as f:
        img_buff = f.read()
    feature = {'img': _byte_feature(tf.compat.as_bytes(img_buff))}
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(example.SerializeToString())
writer.close()


filenames = ["file"]
dataset = tf.data.TFRecordDataset(filenames)

def _parse_function(example_proto):
    keys_to_features = {'img': tf.FixedLenFeature([], tf.string)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    image = tf.image.decode_jpeg(parsed_features['img'])
    return image

dataset = dataset.map(_parse_function)
dataset = dataset.shuffle(buffer_size=1)
dataset = dataset.repeat()
dataset = dataset.batch(2)
iterator = dataset.make_one_shot_iterator()
image = iterator.get_next()

with tf.Session() as sess:
    img = sess.run([image])
    print(len(img))
    print(img[0].shape)
    plt.imshow(img[0][0])

"""
Cannot batch tensors with different shapes in component 0. First element had shape [440,440,3] and element 1 had shape [415,438,3].
"""

See? When the images in a batch have different sizes they cannot be batched, so we have to resize every image in a batch to the same size:

def _parse_function(example_proto):
    keys_to_features = {'img': tf.FixedLenFeature([], tf.string)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    image = tf.image.decode_jpeg(parsed_features['img'])
    # Resizing directly would turn uint8 into float, but plt.imshow can only display uint8
    # or floats in [0, 1]; convert_image_dtype maps uint8 to [0, 1] floats, i.e. divides by 255.0.
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize_images(image, (224, 224))
    return image

Sometimes, though, we want to feed images of different sizes and not resize them. With plain batching that forces batch_size=1, since every image in a batch must share one shape. A compromise is to use TensorFlow's dynamic padding and pad all images in a batch to the same shape:

dataset = dataset.padded_batch(2,padded_shapes=([None,None,3]))
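
For completeness, a sketch of how that line slots into the image pipeline above; it assumes the parse function that only decodes the JPEG (no resize) and 3-channel images, and padding defaults to zeros.

# Sketch: batch images of different sizes by zero-padding each batch (assumed wiring).
dataset = tf.data.TFRecordDataset(["file"])
dataset = dataset.map(_parse_function)   # the decode-only parse function, without resize
dataset = dataset.repeat()
dataset = dataset.padded_batch(2, padded_shapes=([None, None, 3]))
iterator = dataset.make_one_shot_iterator()
image = iterator.get_next()              # each batch has shape (2, max_height, max_width, 3)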

What if we also want to save the image's file name as its label?

# -*- coding: utf-8 -*-

import tensorflow as tf
import matplotlib.pyplot as plt
import os

out_charset = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789"

def _byte_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(values):
    if not isinstance(values, list):
        values = [values]
    return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

files = tf.gfile.Glob('*.jpg')
writer = tf.python_io.TFRecordWriter('file')
for file in files:
    with tf.gfile.FastGFile(file, 'rb') as f:
        img_buff = f.read()
    filename = os.path.basename(file).split('.')[0]
    label = list(map(lambda x: out_charset.index(x), filename))
    feature = {'label': _int64_feature(label),
               'filename': _byte_feature(tf.compat.as_bytes(filename)),
               'img': _byte_feature(tf.compat.as_bytes(img_buff))}
    example = tf.train.Example(features=tf.train.Features(feature=feature))
    writer.write(example.SerializeToString())
writer.close()


filenames = ["file"]
dataset = tf.data.TFRecordDataset(filenames)

def _parse_function(example_proto):
    keys_to_features = {
        'label': tf.VarLenFeature(tf.int64),
        'filename': tf.FixedLenFeature([], tf.string),
        'img': tf.FixedLenFeature([], tf.string)}
    parsed_features = tf.parse_single_example(example_proto, keys_to_features)
    label = tf.sparse_tensor_to_dense(parsed_features['label'])
    filename = parsed_features['filename']
    image = tf.image.decode_jpeg(parsed_features['img'])
    return image, label, filename

dataset = dataset.map(_parse_function)
dataset = dataset.shuffle(buffer_size=1)
dataset = dataset.repeat()
dataset = dataset.padded_batch(3, padded_shapes=([None, None, 3], [None], []))
# The parse function returns three values, so padded_shapes needs one entry per value.
# The decoded image and the label are both variable length, so they are padded with None;
# filename is not decoded and is a single bytes value, so it needs no padding.
iterator = dataset.make_one_shot_iterator()
image, label, filename = iterator.get_next()

with tf.Session() as sess:
    print(label.eval())

Random experiments

What happens if each value we try to write is itself a list?

a = np.arange(16).reshape(2,4,2)

"""
TypeError: [0, 1] has type list, but expected one of: int, long
"""

That makes sense: tf.train.Feature(int64_list=tf.train.Int64List(value=value)) stores a flat list of int64 values. But what if we want to store word vectors? Say the sentence s1 = '我爱你' is one sample; with one-hot encoding, 我 = [0,0,1], 爱 = [0,1,0], 你 = [1,0,0], so s1 = [[0,0,1],[0,1,0],[1,0,0]]. How should a sample like this be stored?
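
One possible answer, sketched here as my own suggestion rather than something this post demonstrates: flatten the 2D sample into a single int64 list, store its shape in a separate feature, and reshape after parsing. Serializing the array with tobytes() and decoding with tf.decode_raw, as done earlier, would work just as well.

# Hypothetical sketch: store a 2D one-hot sample by flattening it and saving its shape.
s1 = np.array([[0, 0, 1],
               [0, 1, 0],
               [1, 0, 0]])
feature = {'shape': _int64_feature(list(s1.shape)),
           'data': _int64_feature(s1.flatten().tolist())}
example = tf.train.Example(features=tf.train.Features(feature=feature))

# On the reading side: parse 'data' with tf.VarLenFeature(tf.int64) and 'shape' with
# tf.FixedLenFeature([2], tf.int64), convert 'data' to dense, then tf.reshape it using 'shape'.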

That is everything in this example of storing variable-length sequences in TensorFlow; I hope it gives you a useful reference.
