
[抬抬小手学Python] YOLOv3 Code and Model Structure Diagram, Annotated in Detail (with Figures)

Original
Author: 查理不是猹
Published 2022-01-07 21:32:15

I have added annotations to the original author's block diagram to make it easier to follow: each red circle marks a yolo_block, and the dark-red notes give the output shape of the preceding module. Please read the diagram alongside the code.

Compared with its predecessors YOLOv1 and YOLOv2, YOLOv3 is a substantial redesign. Its main improvements are:

**1. Residual networks.** A residual convolution first applies a 3x3 convolution and keeps that layer, then applies a 1x1 convolution followed by another 3x3 convolution, and adds the kept layer to the result. Residual networks are easy to optimize and can gain accuracy from considerably greater depth; the residual blocks inside use skip connections, which mitigate the vanishing-gradient problem that comes with adding depth to a deep neural network.

**2. Multi-scale feature maps.** Three feature layers are extracted for detection (the pink boxes in the diagram), with shapes (13,13,75), (26,26,75) and (52,52,75). The last dimension is 75 because the diagram is based on the VOC dataset, which has 20 classes; YOLOv3 assigns 3 prior (anchor) boxes to each feature layer, so the last dimension is 3 x 25 = 3 x (20 + 5).

**3. Upsampling.** The design uses UpSampling2D (deconvolution-style upsampling); a deconvolution performs the reverse operation of a convolution in the network's forward and backward passes, allowing more and richer features to be extracted.
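As a quick sanity check on the 3 x (20 + 5) arithmetic above, here is a short sketch (the helper name `head_channels` is mine, not from the original code): each anchor predicts 4 box offsets, 1 objectness score, and one confidence per class.

```python
# Hypothetical helper: channel count of a YOLOv3 detection head.
# Each of the num_anchors boxes per grid cell predicts
# 4 box offsets + 1 objectness score + num_classes class scores.
def head_channels(num_anchors, num_classes):
    return num_anchors * (num_classes + 5)

print(head_channels(3, 20))  # VOC, 20 classes -> 75
print(head_channels(3, 80))  # COCO, 80 classes -> 255
```

This is why the same network emits 75-channel heads on VOC but 255-channel heads on COCO.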

```python
# batch normalization followed by leaky ReLU
def _batch_normalization_layer(self, input_layer, name=None, training=True, norm_decay=0.99, norm_epsilon=1e-3):
    '''
    Introduction
    ------------
        Applies batch normalization to the feature map produced by a conv layer.
    Parameters
    ----------
        input_layer: input 4-D tensor
        name: name of the batchnorm layer
        training: whether this is the training phase
        norm_decay: decay rate for the moving average used at inference time
        norm_epsilon: small constant added to the variance to avoid division by zero
    Returns
    -------
        bn_layer: feature map after batch normalization
    '''
    bn_layer = tf.layers.batch_normalization(inputs=input_layer,
        momentum=norm_decay, epsilon=norm_epsilon, center=True,
        scale=True, training=training, name=name)
    return tf.nn.leaky_relu(bn_layer, alpha=0.1)
```
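The activation applied after every batch norm above is leaky ReLU with slope 0.1. A minimal pure-Python sketch of that function (illustrative only; the real code uses `tf.nn.leaky_relu`):

```python
# Leaky ReLU: identity for positive inputs, a small linear slope
# (alpha = 0.1, matching the model code) for negative inputs.
def leaky_relu(x, alpha=0.1):
    return x if x > 0 else alpha * x

print(leaky_relu(5.0))   # positive inputs pass through unchanged
print(leaky_relu(-2.0))  # negative inputs are scaled by 0.1
```

Unlike plain ReLU, the nonzero negative slope keeps a small gradient flowing for negative activations.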
  
```python
# plain convolution layer
def _conv2d_layer(self, inputs, filters_num, kernel_size, name, use_bias=False, strides=1):
    """
    Introduction
    ------------
        Uses tf.layers.conv2d, which takes care of initializing the weight and
        bias matrices and adding the bias after the convolution.
        The convolution is followed by batch norm and a leaky ReLU activation.
        A stride of 2 downsamples the image: for a 416x416 input with a 3x3
        kernel and stride 2, floor((416 - 3 + 2) / 2) + 1 = 208, the same
        effect as a pooling layer.
        Therefore, when stride > 1, we pad explicitly beforehand (one pixel,
        see _Residual_block) instead of relying on 'SAME' padding.
    Parameters
    ----------
        inputs: input tensor
        filters_num: number of filters
        strides: convolution stride
        name: name of the conv layer
        use_bias: whether to add a bias term
        kernel_size: kernel size
    Returns
    -------
        conv: feature map after the convolution
    """
    conv = tf.layers.conv2d(
        inputs=inputs, filters=filters_num,
        kernel_size=kernel_size, strides=[strides, strides], kernel_initializer=tf.glorot_uniform_initializer(),
        padding=('SAME' if strides == 1 else 'VALID'), kernel_regularizer=tf.contrib.layers.l2_regularizer(scale=5e-4), use_bias=use_bias, name=name)
    return conv
```
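The output-size arithmetic in the docstring can be checked directly. `conv_out_size` below is a hypothetical helper, not part of the model code:

```python
# Standard conv output-size formula: out = (in - kernel + 2*pad) // stride + 1.
def conv_out_size(in_size, kernel, stride, pad):
    return (in_size - kernel + 2 * pad) // stride + 1

print(conv_out_size(416, 3, 2, 1))  # stride-2 downsample: 416 -> 208
print(conv_out_size(416, 3, 1, 1))  # stride-1 'SAME'-style conv keeps 416
```

This is the pooling-like halving the docstring describes: every stride-2 convolution in the network cuts the spatial resolution in half.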
  
```python
# residual convolution block
# a residual convolution first applies a stride-2 3x3 convolution and keeps
# that layer, then applies a 1x1 convolution and another 3x3 convolution,
# and adds the kept layer to the result
def _Residual_block(self, inputs, filters_num, blocks_num, conv_index, training=True, norm_decay=0.99, norm_epsilon=1e-3):
    """
    Introduction
    ------------
        Darknet residual block, similar to a two-layer resnet unit with 1x1
        and 3x3 kernels; the 1x1 convolution reduces the channel dimension.
    Parameters
    ----------
        inputs: input tensor
        filters_num: number of filters
        training: whether this is the training phase
        blocks_num: number of residual blocks
        conv_index: running layer index, for consistent naming when loading pretrained weights
        norm_decay: decay rate for the moving average used at inference time
        norm_epsilon: small constant added to the variance to avoid division by zero
    Returns
    -------
        layer: result after the residual blocks
    """
    # pad the height and width dimensions of the input feature map
    inputs = tf.pad(inputs, paddings=[[0, 0], [1, 0], [1, 0], [0, 0]], mode='CONSTANT')
    layer = self._conv2d_layer(inputs, filters_num, kernel_size=3, strides=2, name="conv2d_" + str(conv_index))
    layer = self._batch_normalization_layer(layer, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    for _ in range(blocks_num):
        shortcut = layer
        layer = self._conv2d_layer(layer, filters_num // 2, kernel_size=1, strides=1, name="conv2d_" + str(conv_index))
        layer = self._batch_normalization_layer(layer, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        conv_index += 1
        layer = self._conv2d_layer(layer, filters_num, kernel_size=3, strides=1, name="conv2d_" + str(conv_index))
        layer = self._batch_normalization_layer(layer, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        conv_index += 1
        layer += shortcut
    return layer, conv_index
```
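The `conv_index` bookkeeping implies a fixed layer count: each residual stage contributes one stride-2 convolution plus two convolutions per block. A counting sketch (stage layout taken from the `_darknet53` calls, with block counts 1, 2, 8, 8, 4):

```python
# One stem conv, then per stage: one stride-2 downsampling conv
# plus two convs (1x1 and 3x3) for every residual block.
blocks_per_stage = [1, 2, 8, 8, 4]
conv_layers = 1 + sum(1 + 2 * b for b in blocks_per_stage)
print(conv_layers)  # 52 conv layers before the detection heads
```

This matches the "result after 52 conv layers" noted in the darknet53 docstring.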
  
```python
#---------------------------------------#
#   build the darknet53 backbone
#---------------------------------------#
def _darknet53(self, inputs, conv_index, training=True, norm_decay=0.99, norm_epsilon=1e-3):
    """
    Introduction
    ------------
        Builds the darknet53 network structure used by yolo3.
    Parameters
    ----------
        inputs: model input tensor
        conv_index: running conv layer index, for name-based loading of pretrained weights
        training: whether this is the training phase
        norm_decay: decay rate for the moving average used at inference time
        norm_epsilon: small constant added to the variance to avoid division by zero
    Returns
    -------
        conv: result after 52 conv layers; for a 416x416x3 input the output shape is 13x13x1024
        route1: result after conv layer 26, shape 52x52x256, kept for later use
        route2: result after conv layer 43, shape 26x26x512, kept for later use
        conv_index: conv layer count, used when loading pretrained weights
    """
    with tf.variable_scope('darknet53'):
        # 416,416,3 -> 416,416,32
        conv = self._conv2d_layer(inputs, filters_num=32, kernel_size=3, strides=1, name="conv2d_" + str(conv_index))
        conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        conv_index += 1
        # 416,416,32 -> 208,208,64
        conv, conv_index = self._Residual_block(conv, conv_index=conv_index, filters_num=64, blocks_num=1, training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        # 208,208,64 -> 104,104,128
        conv, conv_index = self._Residual_block(conv, conv_index=conv_index, filters_num=128, blocks_num=2, training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        # 104,104,128 -> 52,52,256
        conv, conv_index = self._Residual_block(conv, conv_index=conv_index, filters_num=256, blocks_num=8, training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        # route1 = 52,52,256
        route1 = conv
        # 52,52,256 -> 26,26,512
        conv, conv_index = self._Residual_block(conv, conv_index=conv_index, filters_num=512, blocks_num=8, training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        # route2 = 26,26,512
        route2 = conv
        # 26,26,512 -> 13,13,1024
        conv, conv_index = self._Residual_block(conv, conv_index=conv_index, filters_num=1024, blocks_num=4, training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
        # conv = 13,13,1024
    return route1, route2, conv, conv_index
```
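Each `_Residual_block` call above halves the spatial resolution, which is how a 416x416 input ends at 13x13. A quick sketch of the progression:

```python
# The stem conv keeps 416x416; each of the five residual stages
# then halves the feature map via its stride-2 convolution.
size, sizes = 416, [416]
for _ in range(5):
    size //= 2
    sizes.append(size)
print(sizes)  # [416, 208, 104, 52, 26, 13]
```

The last three entries (52, 26, 13) are exactly the grids of the three detection feature maps.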
  
```python
# returns two outputs:
# the first, after 5 convolutions (1x1, 3x3, 1x1, 3x3, 1x1), feeds the next upsampling step
# the second, after 5 + 2 convolutions (1x1, 3x3, 1x1, 3x3, 1x1, 3x3, 1x1), is one of the detection feature maps
def _yolo_block(self, inputs, filters_num, out_filters, conv_index, training=True, norm_decay=0.99, norm_epsilon=1e-3):
    """
    Introduction
    ------------
        On top of the features extracted by Darknet53, yolo3 adds a block for
        each of the 3 feature-map scales, improving detection of small objects.
    Parameters
    ----------
        inputs: input features
        filters_num: number of filters
        out_filters: number of filters in the final output layer
        conv_index: running conv layer index, for name-based loading of pretrained weights
        training: whether this is the training phase
        norm_decay: decay rate for the moving average used at inference time
        norm_epsilon: small constant added to the variance to avoid division by zero
    Returns
    -------
        route: output of the layer before the last convolution
        conv: output of the last convolution
        conv_index: conv layer count
    """
    conv = self._conv2d_layer(inputs, filters_num=filters_num, kernel_size=1, strides=1, name="conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num=filters_num * 2, kernel_size=3, strides=1, name="conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num=filters_num, kernel_size=1, strides=1, name="conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num=filters_num * 2, kernel_size=3, strides=1, name="conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num=filters_num, kernel_size=1, strides=1, name="conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    route = conv
    conv = self._conv2d_layer(conv, filters_num=filters_num * 2, kernel_size=3, strides=1, name="conv2d_" + str(conv_index))
    conv = self._batch_normalization_layer(conv, name="batch_normalization_" + str(conv_index), training=training, norm_decay=norm_decay, norm_epsilon=norm_epsilon)
    conv_index += 1
    conv = self._conv2d_layer(conv, filters_num=out_filters, kernel_size=1, strides=1, name="conv2d_" + str(conv_index), use_bias=True)
    conv_index += 1
    return route, conv, conv_index
```
  
```python
# returns the three detection feature maps
def yolo_inference(self, inputs, num_anchors, num_classes, training=True):
    """
    Introduction
    ------------
        Builds the yolo model structure.
    Parameters
    ----------
        inputs: model input tensor
        num_anchors: number of anchors each grid cell is responsible for
        num_classes: number of classes
        training: whether this is the training phase
    """
    conv_index = 1
    # conv2d_26 = 52,52,256, conv2d_43 = 26,26,512, conv = 13,13,1024
    conv2d_26, conv2d_43, conv, conv_index = self._darknet53(inputs, conv_index, training=training, norm_decay=self.norm_decay, norm_epsilon=self.norm_epsilon)
    with tf.variable_scope('yolo'):
        #--------------------------------------#
        #   first detection feature map: conv2d_59
        #--------------------------------------#
        # conv2d_57 = 13,13,512, conv2d_59 = 13,13,255 (3 x (80 + 5))
        conv2d_57, conv2d_59, conv_index = self._yolo_block(conv, 512, num_anchors * (num_classes + 5), conv_index=conv_index, training=training, norm_decay=self.norm_decay, norm_epsilon=self.norm_epsilon)

        #--------------------------------------#
        #   second detection feature map: conv2d_67
        #--------------------------------------#
        conv2d_60 = self._conv2d_layer(conv2d_57, filters_num=256, kernel_size=1, strides=1, name="conv2d_" + str(conv_index))
        conv2d_60 = self._batch_normalization_layer(conv2d_60, name="batch_normalization_" + str(conv_index), training=training, norm_decay=self.norm_decay, norm_epsilon=self.norm_epsilon)
        conv_index += 1
        # unSample_0 = 26,26,256
        unSample_0 = tf.image.resize_nearest_neighbor(conv2d_60, [2 * tf.shape(conv2d_60)[1], 2 * tf.shape(conv2d_60)[1]], name='upSample_0')
        # route0 = 26,26,768
        route0 = tf.concat([unSample_0, conv2d_43], axis=-1, name='route_0')
        # conv2d_65 = 26,26,256, conv2d_67 = 26,26,255
        conv2d_65, conv2d_67, conv_index = self._yolo_block(route0, 256, num_anchors * (num_classes + 5), conv_index=conv_index, training=training, norm_decay=self.norm_decay, norm_epsilon=self.norm_epsilon)

        #--------------------------------------#
        #   third detection feature map: conv2d_75
        #--------------------------------------#
        conv2d_68 = self._conv2d_layer(conv2d_65, filters_num=128, kernel_size=1, strides=1, name="conv2d_" + str(conv_index))
        conv2d_68 = self._batch_normalization_layer(conv2d_68, name="batch_normalization_" + str(conv_index), training=training, norm_decay=self.norm_decay, norm_epsilon=self.norm_epsilon)
        conv_index += 1
        # unSample_1 = 52,52,128
        unSample_1 = tf.image.resize_nearest_neighbor(conv2d_68, [2 * tf.shape(conv2d_68)[1], 2 * tf.shape(conv2d_68)[1]], name='upSample_1')
        # route1 = 52,52,384
        route1 = tf.concat([unSample_1, conv2d_26], axis=-1, name='route_1')
        # conv2d_75 = 52,52,255
        _, conv2d_75, _ = self._yolo_block(route1, 128, num_anchors * (num_classes + 5), conv_index=conv_index, training=training, norm_decay=self.norm_decay, norm_epsilon=self.norm_epsilon)

    return [conv2d_59, conv2d_67, conv2d_75]
```
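The `tf.image.resize_nearest_neighbor` calls above double the feature map by duplicating rows and columns. A pure-Python sketch of 2x nearest-neighbor upsampling (illustrative only, not the TF implementation):

```python
# 2x nearest-neighbor upsampling: duplicate every column, then
# duplicate every row, so a 13x13 map becomes 26x26.
def upsample_2x(grid):
    rows = []
    for row in grid:
        doubled = [v for v in row for _ in (0, 1)]  # duplicate columns
        rows.append(doubled)
        rows.append(list(doubled))                  # duplicate rows
    return rows

print(upsample_2x([[1, 2], [3, 4]]))
```

The upsampled coarse features are then concatenated channel-wise with the matching darknet53 route, which is how the 26x26x768 and 52x52x384 tensors arise.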
  

Original-work statement: this article was published on the Tencent Cloud Developer Community with the author's authorization and may not be reproduced without permission.

For infringement concerns, please contact cloudcommunity@tencent.com for removal.
