Python TensorFlow: Inception V3 Convolutional Neural Network Structure


Posted in Python on May 06, 2022

Preface

Having studied the Inception V3 convolutional neural network, this post summarizes my understanding of the Inception V3 network structure and its main code.

GoogLeNet modified the traditional convolutional layers and proposed the structure known as Inception, which increases both the depth and the width of the network and improves the performance of deep neural networks. From Inception V1 to Inception V4 there are four versions, each improving on the previous one. This article covers the network structure and main code of Inception V3.

1 Plain convolutional layers (non-Inception modules)

First, define inception_v3_base, a function for the plain (non-Inception-module) convolutional layers; its argument inputs is the tensor of image data. The first convolutional layer has 32 output channels, a [3x3] kernel, stride 2, and the default VALID padding, so the tensor after this layer has spatial size (299-3)/2+1=149, i.e. [149x149x32].

The subsequent convolutional layers take the same form, and the tensor ends up at [35x35x192]. These plain layers mainly use small 3x3 kernels, which combine features across channels at low cost.

import tensorflow as tf

slim = tf.contrib.slim


def inception_v3_base(inputs, scope=None):
    with tf.variable_scope(scope, 'InceptionV3', [inputs]):
        with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d],
                            stride=1, padding='VALID'):
            # 149 x 149 x 32
            net = slim.conv2d(inputs, 32, [3, 3], stride=2, scope='Conv2d_1a_3x3')
            # 147 x 147 x 32
            net = slim.conv2d(net, 32, [3, 3], scope='Conv2d_2a_3x3')
            # 147 x 147 x 64
            net = slim.conv2d(net, 64, [3, 3], padding='SAME', scope='Conv2d_2b_3x3')
            # 73 x 73 x 64
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_3a_3x3')
            # 73 x 73 x 80
            net = slim.conv2d(net, 80, [1, 1], scope='Conv2d_3b_1x1')
            # 71 x 71 x 192
            net = slim.conv2d(net, 192, [3, 3], scope='Conv2d_4a_3x3')
            # 35 x 35 x 192
            net = slim.max_pool2d(net, [3, 3], stride=2, scope='MaxPool_5a_3x3')
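As a quick sanity check of the VALID-padding size formula used above, (size - kernel) / stride + 1, the hypothetical snippet below (TF 1.x style; the scope name is made up for the example) runs a single stride-2 [3x3] convolution on a 299x299x3 placeholder and inspects the static shape:

import tensorflow as tf

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 299, 299, 3])
# One stride-2 [3x3] VALID convolution, as in the first layer of inception_v3_base.
net = slim.conv2d(images, 32, [3, 3], stride=2, padding='VALID', scope='check_Conv2d_1a_3x3')
print(net.shape)  # (?, 149, 149, 32), since (299 - 3) / 2 + 1 = 149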

2 The three Inception module groups

Next come three consecutive Inception module groups, each made up of several Inception modules.

Below is the first Inception module group, which contains three similar Inception modules: Mixed_5b, Mixed_5c, and Mixed_5d. The first Inception module has four branches:

the first branch is a [1x1] convolution with 64 output channels,

the second branch is a [1x1] convolution with 48 output channels followed by a [5x5] convolution with 64 output channels,

the third branch is a [1x1] convolution with 64 output channels followed by two [3x3] convolutions with 96 output channels,

the fourth branch is a [3x3] average pooling followed by a [1x1] convolution with 32 output channels.

Finally, tf.concat merges the outputs of the four branches; the output channels sum to 64+64+96+32=256, so the output tensor is [35x35x256].

The second Inception module also has four branches and is similar to the first, except that the final [1x1] convolution on the pooling branch has 64 output channels, so its output tensor is [35x35x288] (64+64+96+64).

The third module is the same as the second.
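The snippets below continue inside inception_v3_base and use two names defined earlier in the full TF-slim implementation: a depth() helper that scales channel counts and an end_points dictionary that collects named intermediate feature maps. A minimal sketch of those definitions (the defaults are assumptions taken from the standard TF-slim inception_v3.py):

# Assumed context for the snippets that follow, as in the standard TF-slim code.
end_points = {}                # collects named intermediate feature maps
min_depth = 16                 # lower bound on channel counts
depth_multiplier = 1.0         # global channel-width multiplier
depth = lambda d: max(int(d * depth_multiplier), min_depth)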

with slim.arg_scope([slim.conv2d,slim.max_pool2d,slim.avg_pool2d],stride=1,padding='SAME'):
        # 35 x 35 x 256
        end_point = 'Mixed_5b'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net,depth(64),[1,1],scope='Conv2d_0a_1x1')               
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(64), [5, 5], scope='Conv2d_0b_5x5')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],scope='Conv2d_0b_3x3')
                branch_2 = slim.conv2d(branch_2, depth(96), [3, 3], scope='Conv2d_0c_3x3')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(32), [1, 1], scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3]) # 64+64+96+32=256
        end_points[end_point] = net
        # 35 x 35 x 288
        end_point = 'Mixed_5c'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0b_1x1')
                branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],scope='Conv_1_0c_5x5')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(64), [1, 1],scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],scope='Conv2d_0b_3x3')
                branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],scope='Conv2d_0c_3x3')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3],scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(64), [1, 1],scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net
        # 35 x 35 x 288
        end_point = 'Mixed_5d'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(48), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(64), [5, 5],scope='Conv2d_0b_5x5')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],scope='Conv2d_0b_3x3')
                branch_2 = slim.conv2d(branch_2, depth(96), [3, 3],scope='Conv2d_0c_3x3')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(64), [1, 1],scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net

The second Inception module group contains five Inception modules: Mixed_6a, Mixed_6b, Mixed_6c, Mixed_6d, and Mixed_6e.

Each Inception module has several branches. The first module uses stride 2, so the spatial size is reduced and its output tensor is [17x17x768].

The second Inception module applies the idea of factorization into small convolutions, chaining [1x7] and [7x1] convolutions, and again concatenates the branches along the channel dimension.

The remaining modules (Mixed_6c, Mixed_6d, Mixed_6e) are similar to the second; they add more convolutions and non-linearities to refine features. The tensor size does not change, so after these modules it is still [17x17x768].
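Why factorization into small convolutions helps: replacing one n x n convolution with a 1 x n convolution followed by an n x 1 convolution keeps a similar receptive field while using far fewer parameters. A small self-contained illustration for the [7x7] case (the channel counts are hypothetical example values):

# Rough parameter count of one 7x7 convolution vs. a factorized 1x7 + 7x1 pair
# (biases ignored; the channel counts are made-up example values).
in_ch = mid_ch = out_ch = 192

params_7x7 = 7 * 7 * in_ch * out_ch                                # 1,806,336
params_1x7_7x1 = 1 * 7 * in_ch * mid_ch + 7 * 1 * mid_ch * out_ch  # 516,096

print(params_7x7 / params_1x7_7x1)  # 3.5, i.e. the factorized pair is 3.5x cheaper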

# 17 x 17 x 768.
        end_point = 'Mixed_6a'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(384), [3, 3], stride=2,padding='VALID', scope='Conv2d_1a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(64), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(96), [3, 3],scope='Conv2d_0b_3x3')
                branch_1 = slim.conv2d(branch_1, depth(96), [3, 3], stride=2,padding='VALID', scope='Conv2d_1a_1x1')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',scope='MaxPool_1a_3x3')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2]) # (35-3)/2+1=17
        end_points[end_point] = net
        # 17 x 17 x 768.
        end_point = 'Mixed_6b'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(128), [1, 7],scope='Conv2d_0b_1x7')
                branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],scope='Conv2d_0c_7x1')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(128), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(128), [7, 1],scope='Conv2d_0b_7x1')
                branch_2 = slim.conv2d(branch_2, depth(128), [1, 7],scope='Conv2d_0c_1x7')
                branch_2 = slim.conv2d(branch_2, depth(128), [7, 1], scope='Conv2d_0d_7x1')
                branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],scope='Conv2d_0e_1x7')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net
        # 17 x 17 x 768.
        end_point = 'Mixed_6c'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(160), [1, 7],scope='Conv2d_0b_1x7')
                branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],scope='Conv2d_0c_7x1')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],scope='Conv2d_0b_7x1')
                branch_2 = slim.conv2d(branch_2, depth(160), [1, 7],scope='Conv2d_0c_1x7')
                branch_2 = slim.conv2d(branch_2, depth(160), [7, 1],scope='Conv2d_0d_7x1')
                branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],scope='Conv2d_0e_1x7')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net
        # 17 x 17 x 768.
        end_point = 'Mixed_6d'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(160), [1, 7], scope='Conv2d_0b_1x7')
                branch_1 = slim.conv2d(branch_1, depth(192), [7, 1], scope='Conv2d_0c_7x1')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(160), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(160), [7, 1], scope='Conv2d_0b_7x1')
                branch_2 = slim.conv2d(branch_2, depth(160), [1, 7], scope='Conv2d_0c_1x7')
                branch_2 = slim.conv2d(branch_2, depth(160), [7, 1], scope='Conv2d_0d_7x1')
                branch_2 = slim.conv2d(branch_2, depth(192), [1, 7], scope='Conv2d_0e_1x7')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net
        # 17 x 17 x 768.
        end_point = 'Mixed_6e'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(192), [1, 7],
                                     scope='Conv2d_0b_1x7')
                branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
                                     scope='Conv2d_0c_7x1')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(branch_2, depth(192), [7, 1],
                                     scope='Conv2d_0b_7x1')
                branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
                                     scope='Conv2d_0c_1x7')
                branch_2 = slim.conv2d(branch_2, depth(192), [7, 1],
                                     scope='Conv2d_0d_7x1')
                branch_2 = slim.conv2d(branch_2, depth(192), [1, 7],
                                     scope='Conv2d_0e_1x7')
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(branch_3, depth(192), [1, 1],
                                     scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net

The third Inception module group contains three Inception modules: Mixed_7a, Mixed_7b, and Mixed_7c.

The first of these has three branches, similar in structure to the ones above; it varies the channel counts and kernel sizes ([1x1], [3x3], [1x7], [7x1]) to add more convolutions and non-linearities and improve network performance. Its stride-2 branches reduce the spatial size from 17x17 to (17-3)/2+1=8.

Finally the three branches are concatenated along the channel dimension (320+192+768=1280), so the output tensor is [8 x 8 x 1280]. After the third Inception module the tensor is [8 x 8 x 2048] (320+768+768+192).

# 8 x 8 x 1280.
        end_point = 'Mixed_7a'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
                branch_0 = slim.conv2d(branch_0, depth(320), [3, 3], stride=2,
                                     padding='VALID', scope='Conv2d_1a_3x3')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(192), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = slim.conv2d(branch_1, depth(192), [1, 7],
                                     scope='Conv2d_0b_1x7')
                branch_1 = slim.conv2d(branch_1, depth(192), [7, 1],
                                     scope='Conv2d_0c_7x1')
                branch_1 = slim.conv2d(branch_1, depth(192), [3, 3], stride=2,
                                     padding='VALID', scope='Conv2d_1a_3x3')
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.max_pool2d(net, [3, 3], stride=2, padding='VALID',
                                         scope='MaxPool_1a_3x3')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2])
        end_points[end_point] = net
        # 8 x 8 x 2048.
        end_point = 'Mixed_7b'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(320), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(384), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = tf.concat(axis=3, values=[
                  slim.conv2d(branch_1, depth(384), [1, 3], scope='Conv2d_0b_1x3'),
                  slim.conv2d(branch_1, depth(384), [3, 1], scope='Conv2d_0b_3x1')])
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(448), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(
                  branch_2, depth(384), [3, 3], scope='Conv2d_0b_3x3')
                branch_2 = tf.concat(axis=3, values=[
                  slim.conv2d(branch_2, depth(384), [1, 3], scope='Conv2d_0c_1x3'),
                  slim.conv2d(branch_2, depth(384), [3, 1], scope='Conv2d_0d_3x1')])
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(
                  branch_3, depth(192), [1, 1], scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net
        # 8 x 8 x 2048.
        end_point = 'Mixed_7c'
        with tf.variable_scope(end_point):
            with tf.variable_scope('Branch_0'):
                branch_0 = slim.conv2d(net, depth(320), [1, 1], scope='Conv2d_0a_1x1')
            with tf.variable_scope('Branch_1'):
                branch_1 = slim.conv2d(net, depth(384), [1, 1], scope='Conv2d_0a_1x1')
                branch_1 = tf.concat(axis=3, values=[
                  slim.conv2d(branch_1, depth(384), [1, 3], scope='Conv2d_0b_1x3'),
                  slim.conv2d(branch_1, depth(384), [3, 1], scope='Conv2d_0c_3x1')])
            with tf.variable_scope('Branch_2'):
                branch_2 = slim.conv2d(net, depth(448), [1, 1], scope='Conv2d_0a_1x1')
                branch_2 = slim.conv2d(
                  branch_2, depth(384), [3, 3], scope='Conv2d_0b_3x3')
                branch_2 = tf.concat(axis=3, values=[
                  slim.conv2d(branch_2, depth(384), [1, 3], scope='Conv2d_0c_1x3'),
                  slim.conv2d(branch_2, depth(384), [3, 1], scope='Conv2d_0d_3x1')])
            with tf.variable_scope('Branch_3'):
                branch_3 = slim.avg_pool2d(net, [3, 3], scope='AvgPool_0a_3x3')
                branch_3 = slim.conv2d(
                  branch_3, depth(192), [1, 1], scope='Conv2d_0b_1x1')
            net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])
        end_points[end_point] = net

3 Auxiliary Logits, global average pooling, and Softmax classification

The last part of the Inception V3 network consists of the Auxiliary Logits, global average pooling, and Softmax classification.

First come the Auxiliary Logits, an auxiliary classification head that helps the prediction of the classification result.

The feature tensor after Mixed_6e is obtained via end_points['Mixed_6e']; it is followed by a [5x5] average pooling with stride 3 and VALID padding, so the tensor goes from the second module group's [17x17x768] to [5x5x768], since (17-5)/3+1=5.

Next come a [1x1] convolution with 128 output channels and a [5x5] convolution with 768 output channels, bringing the output to [1x1x768].

Then a [1x1] convolution with num_classes output channels turns the output into [1x1x1000]. Finally, the output of the auxiliary classification head is stored in the end_points dictionary.
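The AuxLogits code below also uses two helpers defined elsewhere in the full TF-slim implementation, trunc_normal and _reduced_kernel_size_for_small_input. A sketch of them, based on the standard TF-slim code (treat the exact details as an assumption):

# trunc_normal builds a truncated-normal weight initializer with the given stddev.
trunc_normal = lambda stddev: tf.truncated_normal_initializer(0.0, stddev)

def _reduced_kernel_size_for_small_input(input_tensor, kernel_size):
    # Shrink the kernel if the input feature map is smaller than the requested size.
    shape = input_tensor.get_shape().as_list()
    if shape[1] is None or shape[2] is None:
        kernel_size_out = kernel_size
    else:
        kernel_size_out = [min(shape[1], kernel_size[0]),
                           min(shape[2], kernel_size[1])]
    return kernel_size_out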

with slim.arg_scope([slim.conv2d, slim.max_pool2d, slim.avg_pool2d], stride=1, padding='SAME'):
            aux_logits = end_points['Mixed_6e']
            with tf.variable_scope('AuxLogits'):
                # 5 x 5 x 768, since (17-5)/3+1=5
                aux_logits = slim.avg_pool2d(aux_logits, [5, 5], stride=3, padding='VALID', scope='AvgPool_1a_5x5')
                aux_logits = slim.conv2d(aux_logits, depth(128), [1, 1], scope='Conv2d_1b_1x1')
                kernel_size = _reduced_kernel_size_for_small_input(aux_logits, [5, 5])
                # 1 x 1 x 768
                aux_logits = slim.conv2d(aux_logits, depth(768), kernel_size, weights_initializer=trunc_normal(0.01),
                                         padding='VALID', scope='Conv2d_2a_{}x{}'.format(*kernel_size))
                # 1 x 1 x num_classes
                aux_logits = slim.conv2d(aux_logits, num_classes, [1, 1], activation_fn=None,
                                         normalizer_fn=None, weights_initializer=trunc_normal(0.001),
                                         scope='Conv2d_2b_1x1')
                aux_logits = tf.squeeze(aux_logits, [1, 2], name='SpatialSqueeze')
                end_points['AuxLogits'] = aux_logits

Finally, the output of the last convolutional layer, Mixed_7c, goes through an [8x8] global average pooling with VALID padding, turning the tensor from [8 x 8 x 2048] into [1 x 1 x 2048]; this is followed by a Dropout layer and a [1x1] convolution with num_classes (here 1000) output channels.

tf.squeeze then removes the dimensions of size 1 from the output tensor, and Softmax produces the final classification result. The function returns the classification logits and the end_points dictionary holding the feature maps of the intermediate stages.

with tf.variable_scope('Logits'):
            kernel_size = _reduced_kernel_size_for_small_input(net, [8, 8])
            net = slim.avg_pool2d(net, kernel_size, padding='VALID',scope='AvgPool_1a_{}x{}'.format(*kernel_size))
            end_points['AvgPool_1a'] = net
            net = slim.dropout(net, keep_prob=dropout_keep_prob, scope='Dropout_1b')
            end_points['PreLogits'] = net 
            logits = slim.conv2d(net, num_classes, [1, 1], activation_fn=None, normalizer_fn=None, scope='Conv2d_1c_1x1')
            logits = tf.squeeze(logits, [1, 2], name='SpatialSqueeze')
            end_points['Logits'] = logits
            end_points['Predictions'] = slim.softmax(logits, scope='Predictions')
return logits, end_points
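Assuming the snippets above are assembled back into the complete TF-slim inception_v3 function (as in the tensorflow/models slim package, together with inception_v3_arg_scope), a typical forward pass looks roughly like the hypothetical sketch below:

import numpy as np
import tensorflow as tf

slim = tf.contrib.slim

# Assumption: the full slim implementation is importable from the tensorflow/models repo.
from nets.inception_v3 import inception_v3, inception_v3_arg_scope

images = tf.placeholder(tf.float32, [None, 299, 299, 3])
with slim.arg_scope(inception_v3_arg_scope()):
    logits, end_points = inception_v3(images, num_classes=1000, is_training=False)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    batch = np.random.rand(2, 299, 299, 3).astype(np.float32)
    preds = sess.run(end_points['Predictions'], feed_dict={images: batch})
    print(preds.shape)  # (2, 1000) -- softmax probabilities over the 1000 classes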

References:

1. 《TensorFlow实战》 (TensorFlow in Practice)

This concludes the detailed walkthrough of the Inception V3 network structure in Python and TensorFlow.

