
ResNet 论文深度讲解

本文对 ResNet 论文进行逐段讲解,可作为毕业设计与求职面试的学习资料。

笔者对技术面试、学习经验等有一些体会,在此分享。

论文地址

https://arxiv.org/pdf/1512.03385.pdf

阅读方式

本文采用原文、翻译、记录的排版。

笔者按照“如何阅读深度学习论文”的方法进行阅读,文中标注的 $1(第一步)、$2、$3、$4 分别表示在对应阅读步骤中的记录和思考。

Deep Residual Learning for Image Recognition

图像识别的深度残差学习

$1 本论文介绍深度残差学习在图像识别中的运用,可以猜到深度残差就是本论文的核心。

Abstract

摘要

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

更深的神经网络更难训练。我们提出了一种残差学习框架,用来减轻比以往更深的网络的训练难度。我们明确地将层重新表述为学习关于层输入的残差函数,而不是学习无参照的函数。我们提供了全面的经验证据,表明这些残差网络更容易优化,并且可以通过显著增加深度来提高准确率。在 ImageNet 数据集上,我们评估了深度高达 152 层的残差网络,其深度是 VGG 网络[41]的 8 倍,但复杂度仍然更低。这些残差网络的集成在 ImageNet 测试集上取得了 3.57% 的错误率,这一结果在 ILSVRC 2015 分类任务上获得了第一名。我们还给出了在 CIFAR-10 上对 100 层和 1000 层网络的分析。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

对于许多视觉识别任务而言,表示的深度至关重要。仅凭借极深的表示,我们便在 COCO 目标检测数据集上获得了 28% 的相对提升。深度残差网络是我们向 ILSVRC & COCO 2015 竞赛提交的基础,我们还赢得了 ImageNet 检测、ImageNet 定位、COCO 检测和 COCO 分割任务的第一名。

$1 摘要中指出更深的神经网络更难训练,而作者提出的深度残差网络可以解决这个问题,从而可以通过显著增加深度提高准确性。并且,深度残差网络在几次大赛中都获得了第一名的成绩。

1 Introduction

1 简介

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.

深度卷积神经网络[22, 21]造就了图像分类[21, 50, 40]的一系列突破。深度网络以端到端的多层方式自然地将低/中/高级特征[50]与分类器集成在一起,特征的“级别”可以通过堆叠层的数量(深度)来丰富。最近的证据[41, 44]显示网络深度至关重要,在具有挑战性的 ImageNet 数据集[36]上领先的结果[41, 44, 13, 16]都采用了“非常深”[41]的模型,深度从 16 层[41]到 30 层[16]不等。许多其它重要的视觉识别任务[8, 12, 7, 32, 27]也从非常深的模型中极大受益。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with back-propagation [22].


Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.

在深度重要性的推动下,出现了一个问题:学习更好的网络是否像堆叠更多的层一样容易?回答这个问题的一个障碍是臭名昭著的梯度消失/爆炸问题[1, 9],它从一开始就阻碍了收敛。然而,这个问题已经在很大程度上通过标准初始化[23, 9, 37, 13]和中间标准化层[16]得到解决,这使得数十层的网络能够在带反向传播的随机梯度下降(SGD)[22]下开始收敛。


图 1. 20 层和 56 层“普通”网络在 CIFAR-10 上的训练误差(左)和测试误差(右)。更深的网络具有更高的训练误差,从而具有更高的测试误差。ImageNet 上的类似现象如图 4 所示。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

当更深的网络能够开始收敛时,一个退化问题暴露了出来:随着网络深度的增加,准确率达到饱和(这可能并不奇怪),然后迅速下降。出乎意料的是,这种退化不是由过拟合引起的,并且在适当深度的模型上添加更多的层会导致更高的训练误差,正如[11, 42]中报告的那样,并被我们的实验充分验证。图 1 显示了一个典型的例子。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

(训练准确率的)退化表明并非所有系统都同样容易优化。让我们考虑一个较浅的架构,以及在其之上添加更多层得到的更深架构。对于这个更深的模型,存在一种通过构造得到的解:添加的层是恒等映射,其余层从训练好的较浅模型中拷贝而来。这种构造解的存在表明,较深的模型不应该产生比其较浅版本更高的训练误差。但实验表明,我们目前手头的求解器无法找到与这种构造解相当或更好的解(或者无法在可行的时间内做到)。
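
下面用一个极简的 PyTorch 草图来说明这一“构造解”的论证:把训练好的浅层模型原样拷贝,再在其后追加若干恒等映射层,得到的更深模型输出完全相同,因此其训练误差不应更高。其中 shallow 模型的结构仅为示意性假设,与论文中的具体网络无关。

```python
import torch
import torch.nn as nn

# 仅作示意的“浅层模型”,结构为假设,与论文中的具体网络无关
shallow = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# 通过构造得到的“更深模型”:拷贝浅层模型的层,再追加若干恒等映射层
deeper = nn.Sequential(
    *list(shallow.children()),
    nn.Identity(),   # 追加的层是恒等映射
    nn.Identity(),
)

x = torch.randn(4, 32)
# 两个模型输出完全一致,因此更深模型的训练误差不应高于浅层模型
assert torch.allclose(shallow(x), deeper(x))
```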

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping of $F (x) := H(x) − x$. The original mapping is recast into $F(x)+x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.


Figure 2. Residual learning: a building block.

在本文中,我们通过引入深度残差学习框架来解决退化问题。我们明确地让这些层拟合一个残差映射,而不是希望每几个堆叠的层直接拟合期望的基础映射。形式上,将期望的基础映射表示为 $H(x)$,我们让堆叠的非线性层去拟合另一个映射 $F(x) := H(x) − x$,原始映射则重写为 $F(x)+x$。我们假设优化残差映射比优化原始的、无参照的映射更容易。在极端情况下,如果恒等映射是最优的,那么将残差推向零要比用一堆非线性层去拟合恒等映射更容易。


图 2. 残差学习:构建块。

The formulation of $F (x) + x$ can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

公式 $F(x) + x$ 可以通过带有“快捷连接”的前馈神经网络来实现(图 2)。快捷连接[2, 34, 49]是那些跳过一层或多层的连接。在我们的例子中,快捷连接只执行恒等映射,并将其输出与堆叠层的输出相加(图 2)。恒等快捷连接既不增加额外的参数,也不增加计算复杂度。整个网络仍然可以由带反向传播的 SGD 进行端到端训练,并且可以使用常见的库(例如 Caffe [19])轻松实现,而无需修改求解器。
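
下面给出一个按图 2 思路实现的残差构建块的最小 PyTorch 草图(卷积层数、通道数等均为示意性假设):快捷连接执行恒等映射,其输出与堆叠层的输出逐元素相加,不引入任何额外参数,整个网络仍可用带反向传播的 SGD 端到端训练。

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """恒等快捷连接的残差块草图:y = F(x) + x(见图 2)。"""
    def __init__(self, channels):
        super().__init__()
        # 堆叠的两个 3x3 卷积层构成残差函数 F
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + x            # 恒等快捷连接:逐元素相加,不增加参数
        return F.relu(out)       # 相加之后再做第二次非线性

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))   # 输入输出维度相同
```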

We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我们在 ImageNet[36]上进行了综合实验来显示退化问题并评估我们的方法。我们发现:1)我们极深的残差网络易于优化,但当深度增加时,对应的“简单”网络(简单堆叠层)表现出更高的训练误差;2)我们的深度残差网络可以从大大增加的深度中轻松获得准确性收益,生成的结果实质上比以前的网络更好。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

CIFAR-10 数据集上[20]也显示出类似的现象,这表明了优化的困难以及我们的方法的影响不仅仅是针对一个特定的数据集。我们在这个数据集上展示了成功训练的超过 100 层的模型,并探索了超过 1000 层的模型。

On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

在 ImageNet 分类数据集[36]上,我们通过极深的残差网络获得了出色的结果。我们的 152 层残差网络是迄今在 ImageNet 上展示过的最深的网络,同时复杂度仍低于 VGG 网络[41]。我们的模型集成在 ImageNet 测试集上取得了 3.57% 的 top-5 错误率,并在 ILSVRC 2015 分类比赛中获得了第一名。极深的表示在其它识别任务上也有出色的泛化性能,使我们进一步在 ILSVRC & COCO 2015 竞赛的 ImageNet 检测、ImageNet 定位、COCO 检测和 COCO 分割任务上赢得了第一名。这些有力的证据表明残差学习准则是通用的,我们期望它也适用于其它视觉和非视觉问题。

$2 从简介部分可以了解到,梯度消失/爆炸问题已在很大程度上被标准初始化和中间标准化层解决,而更深的网络仍然面临退化问题,且这种退化不是由过拟合引起的。作者提出通过深度残差学习(恒等映射、快捷连接)来解决这个退化问题,既不增加额外的参数,也不增加计算复杂度,使得网络易于优化,并提高了泛化性能。同时,作者在多个数据集上的实践也表明,残差学习准则是通用的,不局限于特定的数据集,也不一定局限于视觉问题。

2 Related Work

2 相关工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

残差表示。在图像识别中,VLAD[18]是一种相对于字典用残差向量进行编码的表示形式,Fisher 向量[30]可以表述为 VLAD 的概率版本[18]。它们都是图像检索和图像分类[4, 48]中强大的浅层表示。对于矢量量化,编码残差矢量[17]被证明比编码原始矢量更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低级视觉和计算机图形学中,为了求解偏微分方程(PDE),广泛使用的 Multigrid 方法[3]将系统重构为多个尺度上的子问题,其中每个子问题负责较粗尺度和较细尺度之间的残差解。Multigrid 的一种替代方法是层次化基预处理[45, 46],它依赖于表示两个尺度之间残差向量的变量。已有研究表明[3, 45, 46],这些求解器比不了解解的残差性质的标准求解器收敛得快得多。这些方法表明,良好的重构或预处理可以简化优化。

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.

快捷连接。引出快捷连接[2, 34, 49]的实践和理论已经被研究了很长时间。训练多层感知机(MLP)的一个早期实践是添加一个从网络输入连接到输出的线性层[34, 49]。在[44, 24]中,一些中间层被直接连接到辅助分类器,用于解决梯度消失/爆炸问题。[39, 38, 31, 47]等论文提出了通过快捷连接实现层响应、梯度和传播误差居中化(centering)的方法。在[44]中,一个“inception”层由一个快捷分支和若干更深的分支组成。

Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

与我们的工作同时进行的“highway networks”[42, 43]提出了带有门控函数[15]的快捷连接。这些门是数据相关且带参数的,与我们不带参数的恒等快捷连接相反。当门控快捷连接“关闭”(趋近于零)时,highway 网络中的层表示的是非残差函数。相反,我们的公式总是学习残差函数;我们的恒等快捷连接永远不会关闭,所有信息总是通过,同时还有额外的残差函数要学习。此外,highway 网络还没有展示出在极大增加深度(例如超过 100 层)时的准确率收益。

$3 作者指出他并不是残差思想的第一个提出者,不过作者将其很好地运用起来了。

3. Deep Residual Learning

3. 深度残差学习

3.1. Residual Learning

3.1. 残差学习

Let us consider $H(x)$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions2, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., $H(x) − x$ (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function $F(x) := H(x) − x$. The original function thus becomes $F(x)+x$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

我们把 $H(x)$ 看作由几个堆叠层(不必是整个网络)要拟合的基础映射,$x$ 表示这些层中第一层的输入。如果假设多个非线性层可以渐近地近似复杂函数,那么这等价于假设它们可以渐近地近似残差函数,即 $H(x)−x$(假设输入和输出维度相同)。因此,我们明确地让这些层近似残差函数 $F(x):=H(x)−x$,而不是期望堆叠层去近似 $H(x)$,原始函数因而变为 $F(x)+x$。尽管两种形式都应该能够渐近地近似所期望的函数(如假设的那样),但学习的难易程度可能不同。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

关于退化问题的反直觉现象(图 1 左)启发了这种重构。正如我们在引言中讨论的那样,如果添加的层可以被构造为恒等映射,那么更深模型的训练误差就不应大于其较浅的版本。退化问题表明,求解器在用多个非线性层近似恒等映射时可能存在困难。借助残差学习的重构,如果恒等映射是最优的,求解器只需简单地将多个非线性层的权重推向零,即可逼近恒等映射。

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity map-pings provide reasonable preconditioning.


Figure 7. Standard deviations (std) of layer responses on CIFAR- 10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

在实际情况下,恒等映射不太可能是最优的,但我们的重构可能有助于对问题进行预处理。如果最优函数比零映射更接近恒等映射,那么对求解器而言,参照恒等映射去寻找扰动,应该比把该函数当作全新函数来学习更容易。我们通过实验(图 7)表明,学习到的残差函数通常具有较小的响应,说明恒等映射提供了合理的预处理。


图 7. 层响应在 CIFAR-10 上的标准差(std)。这些响应是每个 3×3 层的输出,在 BN 之后、非线性之前。上面:以原始顺序显示各层。下面:响应按降序排列。
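
如果想按图 7 的方式统计层响应,可以用类似下面的草图,在前向传播时记录每个 BN 层输出(即非线性之前)的标准差;其中的 model 与输入数据只是占位假设。

```python
import torch
import torch.nn as nn

def collect_bn_response_std(model, inputs):
    """记录每个 BatchNorm2d 输出(非线性之前)的标准差,对应图 7 的统计量。"""
    stds, handles = [], []

    def hook(module, inp, out):
        stds.append(out.std().item())

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            handles.append(m.register_forward_hook(hook))

    with torch.no_grad():
        model(inputs)

    for h in handles:
        h.remove()
    return stds

# 用法示意:model 可以是任意含 BN 的简单/残差网络,inputs 为一批 CIFAR-10 图像
# stds = collect_bn_response_std(model, torch.randn(128, 3, 32, 32))
```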

3.2. Identity Mapping by Shortcuts

3.2. 快捷恒等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

$y = F(x, \{W_i\}) + x.$ (1)

Here $x$ and $y$ are the input and output vectors of the layers considered. The function $F(x, \{W_i\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $F = W_2 \sigma(W_1 x)$ in which $\sigma$ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation $F + x$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma(y)$, see Fig. 2).

我们对每几个堆叠层采用残差学习。一个构建块如图 2 所示。在本文中,我们将构建块正式定义为:

$y = F(x, \{W_i\}) + x.$ (1)

这里 $x$ 和 $y$ 分别是所考虑的层的输入和输出向量。函数 $F(x, \{W_i\})$ 表示要学习的残差映射。对于图 2 中有两层的例子,$F = W_2 \sigma(W_1 x)$,其中 $\sigma$ 表示 ReLU[29],为了简化记号省略了偏置项。$F + x$ 操作通过快捷连接和逐元素相加来执行。在相加之后,我们再施加第二次非线性(即 $\sigma(y)$,见图 2)。
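
按照式(1)的记号(全连接层、省略偏置),两层的残差单元可以写成如下最小草图;注意第二次非线性 $\sigma$ 施加在相加之后,维度 d 仅为示意:

```python
import torch
import torch.nn as nn

d = 64                                   # 维度仅为示意
W1 = nn.Linear(d, d, bias=False)         # 式(1)中的 W_1,偏置省略
W2 = nn.Linear(d, d, bias=False)         # 式(1)中的 W_2

def residual_unit(x):
    F_x = W2(torch.relu(W1(x)))          # F(x, {W_i}) = W_2 σ(W_1 x)
    y = torch.relu(F_x + x)              # y = σ(F(x) + x):第二次非线性在相加之后
    return y

y = residual_unit(torch.randn(8, d))
```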

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

方程(1)中的快捷连接既没有引入额外的参数,也没有增加计算复杂度。这不仅在实践中有吸引力,而且在简单网络和残差网络的比较中也很重要。我们可以公平地比较同时具有相同参数数量、深度、宽度和计算成本的简单/残差网络(除了可忽略的逐元素加法之外)。

The dimensions of $x$ and $F$ must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:

$y = F(x, \{W_i\}) + W_s x.$ (2)

We can also use a square matrix $Ws$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.

方程(1)中 $x$ 和 $F$ 的维度必须相等。如果不是这种情况(例如,当改变输入/输出通道数时),我们可以通过快捷连接执行一个线性投影 $W_s$ 来匹配维度:

$y = F(x, \{W_i\}) + W_s x.$ (2)

我们也可以在方程(1)中使用方阵 $W_s$。但是我们将通过实验表明,恒等映射足以解决退化问题,并且更为经济,因此 $W_s$ 仅在匹配维度时使用。
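
当输入输出维度不一致时,可按式(2)用一个线性投影 $W_s$(在卷积网络中通常用 1×1 卷积实现)来匹配维度。下面是一个草图,其中通道数与步长仅为示意性假设:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionBlock(nn.Module):
    """维度变化时的残差块草图:y = F(x) + W_s x(式(2)),W_s 用 1x1 卷积实现。"""
    def __init__(self, in_ch, out_ch, stride=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # W_s:仅在需要匹配维度时使用的 1x1 投影
        self.proj = nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.proj(x))

y = ProjectionBlock(64, 128)(torch.randn(1, 64, 56, 56))  # 输出: (1, 128, 28, 28)
```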

The form of the residual function $F$ is flexible. Experiments in this paper involve a function $F$ that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: $y = W_1x + x$, for which we have not observed advantages.


Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet- 34. Right: a “bottleneck” building block for ResNet-50/101/152.


图 5. 用于 ImageNet 的更深的残差函数 $F$。左:ResNet-34 的构建块(作用在 56×56 的特征图上),如图 3。右:ResNet-50/101/152 的“bottleneck”构建块。

残差函数 $F$ 的形式是灵活的。本文中的实验涉及的函数 $F$ 有两层或三层(图 5),更多的层也是可能的。但如果 $F$ 只有一层,方程(1)就类似于一个线性层:$y = W_1 x + x$,对此我们没有观察到优势。

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $F(x,{W_i})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我们还注意到,为了简单起见,尽管上述符号是关于全连接层的,但它们同样适用于卷积层。函数 $F(x,{W_i})$ 可以表示多个卷积层。元素加法在两个特征图上逐通道进行。
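
结合图 5(右)的描述,“bottleneck”构建块先用 1×1 卷积降维、再做 3×3 卷积、最后用 1×1 卷积升维,并同样通过恒等快捷连接逐通道相加。下面是一个按此思路的草图,其中的通道数(256/64)仅为示意性假设:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Bottleneck(nn.Module):
    """图 5(右)样式的 bottleneck 块草图:1x1 降维 -> 3x3 -> 1x1 升维,通道数仅为示意。"""
    def __init__(self, channels=256, mid=64):
        super().__init__()
        self.reduce = nn.Conv2d(channels, mid, 1, bias=False)      # 1x1 降维
        self.bn1 = nn.BatchNorm2d(mid)
        self.conv = nn.Conv2d(mid, mid, 3, padding=1, bias=False)  # 3x3 卷积
        self.bn2 = nn.BatchNorm2d(mid)
        self.expand = nn.Conv2d(mid, channels, 1, bias=False)      # 1x1 升维
        self.bn3 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.reduce(x)))
        out = F.relu(self.bn2(self.conv(out)))
        out = self.bn3(self.expand(out))
        return F.relu(out + x)      # 恒等快捷连接,逐通道、逐元素相加

y = Bottleneck()(torch.randn(1, 256, 14, 14))
```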

3.3. Network Architectures

3.3. 网络架构

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

我们测试了各种简单/残差网络,并观察到了一致的现象。为了提供讨论的实例,我们描述了 ImageNet 的两个模型如下。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).


Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.


Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Down-sampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.

简单网络。我们的简单网络基准(图 3,中间)主要受 VGG 网络[41](图 3,左)设计理念的启发。卷积层大多使用 3×3 的滤波器,并遵循两条简单的设计规则:(i)对于相同的输出特征图尺寸,各层具有相同数量的滤波器;(ii)如果特征图尺寸减半,则滤波器数量加倍,以保持每层的时间复杂度。我们直接通过步长为 2 的卷积层执行下采样。网络以一个全局平均池化层和一个带 softmax 的 1000 路全连接层结束。图 3(中间)中加权层的总数为 34。


图 3. 用于 ImageNet 的网络架构示例。左:作为参考的 VGG-19 模型[41](196 亿 FLOPs)。中:具有 34 个参数层的简单网络(36 亿 FLOPs)。右:具有 34 个参数层的残差网络(36 亿 FLOPs)。虚线的快捷连接增加了维度。表 1 显示了更多细节和其它变种。


表 1. ImageNet 架构。构建块显示在括号中(另见图 5),并给出各构建块的堆叠数量。下采样由步长为 2 的 conv3_1、conv4_1 和 conv5_1 执行。

It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

值得注意的是,我们的模型与 VGG 网络[41](图 3,左)相比,滤波器更少、复杂度更低。我们的 34 层基准有 36 亿 FLOPs(乘加),仅为 VGG-19(196 亿 FLOPs)的 18%。
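
按照上文的两条设计规则,可以用类似下面的草图搭出简单网络中的一个“阶段”:同一输出特征图尺寸内滤波器数量不变,特征图尺寸减半时滤波器数量加倍(用步长为 2 的卷积下采样),网络最后接全局平均池化和 1000 路全连接层。其中的层数与通道数仅为示意性假设。

```python
import torch.nn as nn

def plain_stage(in_ch, out_ch, num_layers, downsample):
    """简单网络的一个阶段:若下采样,则第一层卷积步长为 2,且通道数加倍(由调用方给出)。"""
    layers = []
    for i in range(num_layers):
        stride = 2 if (downsample and i == 0) else 1
        ch_in = in_ch if i == 0 else out_ch
        layers += [nn.Conv2d(ch_in, out_ch, 3, stride=stride, padding=1, bias=False),
                   nn.BatchNorm2d(out_ch),
                   nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)

# 示意:两个相邻阶段,特征图尺寸减半、滤波器数量加倍
stages = nn.Sequential(
    plain_stage(64, 64, num_layers=6, downsample=False),
    plain_stage(64, 128, num_layers=8, downsample=True),
)
# 全局平均池化 + 1000 路全连接(softmax 由损失函数处理)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, 1000))
```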

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

残差网络。基于上述简单网络,我们插入快捷连接(图 3,右),将其转换为对应的残差版本。当输入和输出具有相同维度时(图 3 中的实线快捷连接),可以直接使用恒等快捷连接(方程(1))。当维度增加时(图 3 中的虚线快捷连接),我们考虑两个选项:(A)快捷连接仍然执行恒等映射,对增加的维度用零填充,此选项不引入额外参数;(B)使用方程(2)中的投影快捷连接来匹配维度(由 1×1 卷积完成)。对于这两个选项,当快捷连接跨越两种尺寸的特征图时,均以步长 2 执行。
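
对于维度增加时的选项(A),快捷连接仍是恒等映射:在空间上以步长 2 取样,并对新增的通道补零,因而不引入任何参数;选项(B)即上文式(2)的 1×1 投影。下面是选项(A)的一个草图:

```python
import torch
import torch.nn.functional as F

def option_a_shortcut(x, out_channels, stride=2):
    """选项(A)的快捷连接草图:空间上按 stride 取样,新增通道用零填充,不引入参数。"""
    x = x[:, :, ::stride, ::stride]            # 跨越两种尺寸的特征图时步长为 2
    pad = out_channels - x.size(1)
    return F.pad(x, (0, 0, 0, 0, 0, pad))      # 在通道维末尾补零

x = torch.randn(1, 64, 56, 56)
print(option_a_shortcut(x, 128).shape)         # torch.Size([1, 128, 28, 28])
```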

3.4. Implementation

3.4. 实现

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 10^4 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].

我们在 ImageNet 上的实现遵循[21, 41]中的做法。调整图像大小,使其较短边在[256, 480]之间随机采样,用于尺度增强[41]。从图像或其水平翻转中随机采样 224×224 的裁剪,并减去逐像素均值[21]。使用了[21]中的标准颜色增强。按照[16],我们在每个卷积之后、激活之前采用批量归一化(BN)[16]。我们按照[13]的方法初始化权重,并从零开始训练所有简单/残差网络。我们使用小批量大小为 256 的 SGD。学习率从 0.1 开始,当误差进入平台期时除以 10,模型最多训练 60×10^4 次迭代。我们使用 0.0001 的权重衰减和 0.9 的动量。按照[16]中的做法,我们不使用 dropout[14]。
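
按照本节给出的超参数,训练设置大致可以写成下面的草图;其中 model 只是一个占位假设,数据增强管道从略,ReduceLROnPlateau 用来近似“误差进入平台期时学习率除以 10”的做法:

```python
import torch
import torch.nn as nn

# 占位模型,实际应替换为上文的简单/残差网络
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 1000))

# SGD:小批量 256,初始学习率 0.1,动量 0.9,权重衰减 1e-4,不使用 dropout
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# 误差进入平台期时将学习率除以 10(每个 epoch 后调用 scheduler.step(验证误差))
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.1)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# 数据增强(短边随机缩放到 [256, 480]、随机 224x224 裁剪、水平翻转、减去逐像素均值、
# 标准颜色增强)需在数据加载管道中实现,此处从略。
```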

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully- convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

在测试阶段,为了进行比较研究,我们采用标准的 10-crop 测试[21]。为了取得最佳结果,我们采用如[41, 13]中的全卷积形式,并在多个尺度上对得分取平均(调整图像大小,使短边位于 {224, 256, 384, 480, 640} 中)。
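
标准的 10-crop 测试(4 个角与中心裁剪及其水平翻转,共 10 个裁剪)可以用 torchvision 的 TenCrop 变换得到,再对 10 个裁剪的得分取平均。下面是一个草图,其中的 model 与输入图像为占位假设:

```python
import torch
from torchvision import transforms
import torchvision.transforms.functional as TF

ten_crop = transforms.Compose([
    transforms.Resize(256),
    transforms.TenCrop(224),   # 4 角 + 中心及其水平翻转,共 10 个裁剪
    transforms.Lambda(lambda crops: torch.stack([TF.to_tensor(c) for c in crops])),
])

def predict_10crop(model, pil_image):
    """对单张 PIL 图像做标准 10-crop 测试,返回 10 个裁剪得分的平均值。"""
    crops = ten_crop(pil_image)            # (10, 3, 224, 224)
    with torch.no_grad():
        scores = model(crops)              # (10, 类别数)
    return scores.mean(dim=0)
```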

$3 作者在本节先给出了残差网络更好的理论依据:原始函数和残差函数学习的难易程度是不同的;
然后说明了残差函数的形式是灵活的,文中使用的是两层或三层的形式,并且不采用一层的形式(类似于线性层);
紧接着通过对比 VGG、34-layer plain 和 34-layer residual,讲解了网络的结构;
最后讲解了网络的实现细节。

4. Experiments

4. 实验

4.1. ImageNet Classification

4.1. ImageNet 分类

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

我们在包含 1000 个类别的 ImageNet 2012 分类数据集[36]上评估了我们的方法。这些模型在 128 万张训练图像上进行训练,并在 5 万张验证图像上进行评估。我们还获得了由测试服务器报告的在 10 万张测试图像上的最终结果。我们同时评估 top-1 和 top-5 错误率。

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

简单网络。我们首先评估 18 层和 34 层的简单网络。34 层简单网络在图 3(中间)。18 层简单网络是一种类似的形式。有关详细的体系结构,请参见表 1。

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem — the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.


Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

表 2 中的结果表明,较深的 34 层简单网络比较浅的 18 层简单网络有更高的验证误差。为了揭示原因,在图 4(左图)中,我们比较训练过程中的训练/验证误差。我们观察到退化问题——虽然 18 层简单网络的解空间是 34 层简单网络解空间的子空间,但 34 层简单网络在整个训练过程中具有较高的训练误差

ResNet 论文深度讲解

论文地址

https://arxiv.org/pdf/1512.03385.pdf

阅读方式

本文采用原文、翻译、记录的排版。

笔者使用如何阅读深度学习论文的方法进行阅读,文中标注的 $1(第一步)、$2、$3、$4 分别表示在第该步阅读中的记录和思考

Deep Residual Learning for Image Recognition

图像识别的深度残差学习

$1 本论文介绍深度残差图像识别的运用,可以猜到深度残差就是本文论的核心

Abstract

摘要

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

更深的神经网络更难训练。我们提出了一种残差学习框架来减轻网络训练,这些网络比以前使用的网络更深。我们明确地将层变为学习关于层输入的残差函数,而不是学习未参考的函数。我们提供了全面的经验证据说明这些残差网络很容易优化,并可以显著增加深度提高准确性。在 ImageNet 数据集上我们评估了深度高达 152 层的残差网络——比 VGG[41]深 8 倍但仍具有较低的复杂度。这些残差网络的集合在 ImageNet 测试集上取得了 3.57% 的错误率。这个结果在 ILSVRC 2015 分类任务上赢得了第一名。我们也在 CIFAR-10 上分析了 100 层和 1000 层的残差网络。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

对于许多视觉识别任务而言,表示的深度是至关重要的。仅由于我们非常深度的表示,我们便在 COCO 目标检测数据集上得到了 28% 的相对提高。深度残差网络是我们向 ILSVRCCOCO 2015 竞赛提交的基础,我们也赢得了 ImageNet 检测任务,ImageNet 定位任务,COCO 检测和 COCO 分割任务的第一名。

$1 摘要中指出更深的神经网络更难训练,而作者提出的深度残差网络可以解决这个问题,从而可以通过显著增加深度提高准确性。并且,深度残差网络在几次大赛中都获得了第一名的成绩。

1 Introduction

1 简介

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.

深度卷积神经网络[22, 21]造就了图像分类[21, 49, 39]的一系列突破。深度网络自然地将低/中/高级特征[49]和分类器端到端多层方式进行集成,特征的“级别”可以通过堆叠层的数量(深度)来丰富。最近的证据[40, 43]显示网络深度至关重要,在具有挑战性的 ImageNet 数据集上领先的结果都采用了“非常深”[40]的模型,深度从 16 [40]到 30 [16]之间。许多其它重要的视觉识别任务[7, 11, 6, 32, 27]也从非常深的模型中得到了极大受益。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with back-propagation [22].

ResNet 论文深度讲解

Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.

深度重要性的推动下,出现了一个问题:学些更好的网络是否像堆叠更多的层一样容易?回答这个问题的一个障碍是梯度消失/爆炸[14, 1, 8]这个众所周知的问题,它从一开始就阻碍了收敛。然而,这个问题通过标准初始化[23, 8, 36, 12]和中间标准化层[16]在很大程度上已经解决,这使得数十层的网络能通过具有反向传播的随机梯度下降(SGD)开始收敛。

ResNet 论文深度讲解

图 1. 具有 20 层和 56 层“普通”网络的 CIFAR-10 上的训练误差(左)和测试误差(右)。更深的网络具有更高的训练误差,从而具有更高的测试误差ImageNet 上的类似现象如图 4 所示。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

当更深的网络能够开始收敛时,暴露了一个退化问题:随着网络深度的增加,准确率达到饱和(这可能并不奇怪)然后迅速下降。意外的是,这种下降不是由过拟合引起的,并且在适当的深度模型上添加更多的层会导致更高的训练误差,正如[10, 41]中报告的那样,并且由我们的实验完全证实。图 1 显示了一个典型的例子。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

退化(训练准确率)表明不是所有的系统都很容易优化。让我们考虑一个较浅的架构及其更深层次的对象,为其添加更多的层。存在通过构建得到更深层模型的解决方案:添加的层是恒等映射,其他层是从学习到的较浅模型的拷贝。这种构造解决方案的存在表明,较深的模型不应该产生比其对应的较浅模型更高的训练误差。但是实验表明,我们目前现有的解决方案无法找到与构建的解决方案相比相对不错或更好的解决方案(或在合理的时间内无法实现)。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping of $F (x) := H(x) − x$. The original mapping is recast into $F(x)+x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

ResNet 论文深度讲解

Figure 2. Residual learning: a building block.

在本文中,我们通过引入深度残差学习框架解决了退化问题。我们明确地让这些层拟合残差映射,而不是希望每几个堆叠的层直接拟合期望的基础映射。形式上,将期望的基础映射表示为 $H(x)$,我们将堆叠的非线性层拟合另一个映射 $F(x) := H(x) − x$。原始的映射重写为 $F(x)+x$。我们假设残差映射比原始的、未参考的映射更容易优化。在极端情况下,如果一个恒等映射是最优的,那么将残差置为零比通过一堆非线性层来拟合恒等映射更容易。

ResNet 论文深度讲解

图 2. 残差学习:构建块。

The formulation of $F (x) + x$ can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

公式 $F (x) + x$ 可以通过带有“快捷连接”的前向神经网络(图 2)来实现。快捷连接[2, 33, 48]是那些跳过一层或更多层的连接。在我们的案例中,快捷连接简单地执行恒等映射,并将其输出添加到堆叠层的输出(图 2)。恒等快捷连接既不增加额外的参数不增加计算复杂度。整个网络仍然可以由带有反向传播的 SGD 进行端到端的训练,并且可以使用公共库(例如,Caffe [19])轻松实现,而无需修改求解器。

We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我们在 ImageNet[36]上进行了综合实验来显示退化问题并评估我们的方法。我们发现:1)我们极深的残差网络易于优化,但当深度增加时,对应的“简单”网络(简单堆叠层)表现出更高的训练误差;2)我们的深度残差网络可以从大大增加的深度中轻松获得准确性收益,生成的结果实质上比以前的网络更好。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

CIFAR-10 数据集上[20]也显示出类似的现象,这表明了优化的困难以及我们的方法的影响不仅仅是针对一个特定的数据集。我们在这个数据集上展示了成功训练的超过 100 层的模型,并探索了超过 1000 层的模型。

On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

ImageNet 分类数据集[35]中,我们通过非常深的残差网络获得了很好的结果。我们的 152 层残差网络是 ImageNet 上最深的网络,同时还具有比 VGG 网络[40]更低的复杂性。我们的模型集合在 ImageNet 测试集上有 3.57% top-5 的错误率,并在 ILSVRC 2015 分类比赛中获得了第一名。极深的表示在其它识别任务中也有极好的泛化性能,并带领我们在进一步赢得了第一名:包括 ILSVRC & COCO 2015 竞赛中的 ImageNet 检测,ImageNet 定位,COCO 检测和 COCO 分割。坚实的证据表明残差学习准则是通用的,并且我们期望它适用于其它的视觉和非视觉问题。

$2 从简介部分可以了解到,更深的网络面临着梯度消失/爆炸这个退化问题,并且不是由过拟合引起。作者提出通过深度残差(恒等映射、快捷连接)来解决这个退化问题,并且既不增加额外的参数不增加计算复杂度,使得网络易于优化,提高了泛化性能。同时,作者在多个数据集中的实践也表明残差学习准则是通用的不局限于特定的数据集,也不一定局限于视觉问题

2 Related Work

2 相关工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

残差表示。在图像识别中,VLAD[18]是一种通过关于字典的残差向量进行编码的表示形式,Fisher 矢量[30]可以表示为 VLAD概率版本[18]。它们都是图像检索和图像分类[4,47]中强大的浅层表示。对于矢量量化,编码残差矢量[17]被证明比编码原始矢量更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低级视觉和计算机图形学中,为了求解偏微分方程(PDE),广泛使用的 Multigrid 方法[3]将系统重构为在多个尺度上的子问题,其中每个子问题负责较粗尺度和较细尺度的残差解。Multigrid 的替代方法是层次化基础预处理[44,45],它依赖于表示两个尺度之间残差向量的变量。已经被证明[3,44,45]这些求解器比不知道解的残差性质的标准求解器收敛得更快。这些方法表明,良好的重构或预处理可以简化优化

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.

快捷连接。导致快捷连接[2,33,48]的实践和理论已经被研究了很长时间。训练多层感知机(MLP)的早期实践是添加一个线性层来连接网络的输入和输出[33,48]。在[43,24]中,一些中间层直接连接到辅助分类器,用于解决梯度消失/爆炸。论文[38,37,31,46]提出了通过快捷连接实现层间响应,梯度和传播误差的方法。在[43]中,一个“inception”层由一个快捷分支和一些更深的分支组成。

Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

和我们同时进行的工作,“highway networks” [41, 42]提出了门功能[15]的快捷连接。这些门是数据相关且有参数的,与我们不具有参数恒等快捷连接相反。当门控快捷连接“关闭”(接近零)时,高速网络中的层表示非残差函数。相反,我们的公式总是学习残差函数;我们的恒等快捷连接永远不会关闭,所有的信息总是通过,还有额外的残差函数要学习。此外,高速网络还没有证实极度增加的深度(例如,超过 100 个层)带来的准确性收益。

$3 作者指出他并不是残差思想的第一个提出者,不过作者将其很好地运用起来了。

3. Deep Residual Learning

3. 深度残差学习

3.1. Residual Learning

3.1. 残差学习

Let us consider $H(x)$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions2, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., $H(x) − x$ (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function $F(x) := H(x) − x$. The original function thus becomes $F(x)+x$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

我们考虑 $H(x)$ 作为几个堆叠层(不必是整个网络)要拟合的基础映射,x 表示这些层中第一层的输入。假设多个非线性层可以渐近地近似复杂函数,它等价于假设它们可以渐近地近似残差函数,即 $H(x)−x$ (假设输入输出是相同维度)。因此,我们明确让这些层近似参数函数 $F(x):=H(x)−x$,而不是期望堆叠层近似 $H(x)$。因此原始函数变为 $F(x)+x$。尽管两种形式应该都能渐近地近似要求的函数(如假设),但学习的难易程度可能是不同的。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

关于退化问题的反直觉现象激发了这种重构(图 1 左)。正如我们在引言中讨论的那样,如果添加的层可以被构建为恒等映射,更深模型的训练误差应该不大于它对应的更浅版本。退化问题表明求解器通过多个非线性层来近似恒等映射可能有困难。通过残差学习的重构,如果恒等映射是最优的,求解器可能简单地将多个非线性连接的权重推向零来接近恒等映射

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity map-pings provide reasonable preconditioning.

ResNet 论文深度讲解

Figure 7. Standard deviations (std) of layer responses on CIFAR- 10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

在实际情况下,恒等映射不太可能是最优的,但是我们的重构可能有助于对问题进行预处理。如果最优函数比零映射更接近于恒等映射,则求解器应该更容易找到关于恒等映射的抖动,而不是将该函数作为新函数来学习。我们通过实验(图 7)显示学习的残差函数通常有更小的响应,表明恒等映射提供了合理的预处理

ResNet 论文深度讲解

图 7. 层响应在 CIFAR-10 上的标准差(std)。这些响应是每个 3×3 层的输出,在 1BN1 之后非线性之前。上面:以原始顺序显示层。下面:响应按降序排列。

3.2. Identity Mapping by Shortcuts

3.2. 快捷恒等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

$y = F (x, {Wi }) + x.$ (1)

Here $x$ and $y$ are the input and output vectors of the layers considered. The function $F(x, {W_i})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $F = W_2 sigma(W_1x)$ in which $σ$ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation $F + x$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $sigma(y)$, see Fig. 2).

我们每隔几个堆叠层采用残差学习。构建块如图 2 所示。在本文中我们考虑构建块正式定义为:

$y = F (x, {Wi }) + x.$ (1)

$x$ 和 $y$ 是考虑的层的输入和输出向量。函数 $F(x, {W_i})$ 表示要学习的残差映射。图 2 中的例子有两层,$F = W_2 sigma(W_1x)$ 中 $σ$ 表示 ReLU[29],为了简化写法忽略偏置项。$F + x$ 操作通过快捷连接和各个元素相加来执行。在相加之后我们采纳了第二种非线性(即 $sigma(y)$,看图 2)。

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

方程(1)中的快捷连接既没有引入外部参数又没有增加计算复杂度。这不仅在实践中有吸引力,而且在简单网络和残差网络的比较中也很重要。我们可以公平地比较同时具有相同数量的参数,相同深度,宽度和计算成本的简单/残差网络(除了不可忽略的元素加法之外)。

The dimensions of $x$ and $F$ must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection Ws by the shortcut connections to match the dimensions:

$y = F(x, {W_i }) + W_sx.$ {2}

We can also use a square matrix $Ws$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus Ws is only used when matching dimensions.

方程(1)中 $x$ 和 $F$ 的维度必须是相等的。如果不是这种情况(例如,当更改输入/输出通道时),我们可以通过快捷连接执行线性投影 $Ws$ 来匹配维度:

$y = F(x, {W_i }) + W_sx.$ {2}

我们也可以使用方程(1)中的方阵 $Ws$。但是我们将通过实验表明,恒等映射足以解决退化问题,并且是合算的,因此 $Ws$ 仅在匹配维度时使用

The form of the residual function $F$ is flexible. Experiments in this paper involve a function $F$ that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: $y = W_1x + x$, for which we have not observed advantages.

ResNet 论文深度讲解

Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet- 34. Right: a “bottleneck” building block for ResNet-50/101/152.

ResNet 论文深度讲解

图 5. ImageNet 的深度残差函数 $F$。左:ResNet-34 的构建块(在 56×56 的特征图上),如图 3。右:ResNet-50/101/152 的 “bottleneck”构建块。

残差函数 $F$ 的形式是可变的。本文中的实验包括有两层三层(图 5)的函数 $F$,同时可能有更多的层。但如果 $F$ 只有一层,方程(1)类似于线性层:$y = W_1x + x$,我们没有看到优势

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $F(x,{W_i})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我们还注意到,为了简单起见,尽管上述符号是关于全连接层的,但它们同样适用于卷积层。函数 $F(x,{W_i})$ 可以表示多个卷积层。元素加法在两个特征图上逐通道进行。

3.3. Network Architectures

3.3. 网络架构

We have tested various plain/residual nets, and have ob-served consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

我们测试了各种简单/残差网络,并观察到了一致的现象。为了提供讨论的实例,我们描述了 ImageNet 的两个模型如下。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).

ResNet 论文深度讲解

Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.

ResNet 论文深度讲解

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Down-sampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.

简单网络。 我们简单网络的基准(图 3,中间)主要受到 VGG 网络[40](图 3,左图)的哲学启发。卷积层主要有 3×3 的滤波器,并遵循两个简单的设计规则:(i)对于相同的输出特征图尺寸,层具有相同数量的滤波器;(ii)如果特征图尺寸减半,则滤波器数量加倍,以便保持每层的时间复杂度。我们通过步长2 的卷积层直接执行下采样。网络以全局平均池化层和具有 softmax1000全连接层结束。图 3(中间)的加权层总数为 34

ResNet 论文深度讲解

图 3. ImageNet 的网络架构例子。左:作为参考的 VGG-19 模型[41]。中:具有 34 个参数层的简单网络(36 亿 FLOPs)。右:具有 34 个参数层的残差网络(36 亿 FLOPs)。带点的快捷连接增加了维度。表 1 显示了更多细节和其它变种。

ResNet 论文深度讲解

表 1. ImageNet 架构。构建块显示在括号中(也可看图 5),以及构建块的堆叠数量。下采样通过步长为 2conv3_1, conv4_1conv5_1 执行。

It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

值得注意的是我们的模型与 VGG 网络(图 3 左)相比,有更少的滤波器更低的复杂度。我们的 34 层基准有 36 亿 FLOP(乘加),仅是 VGG-19196 亿 FLOP)的 18%

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

残差网络。基于上述的简单网络,我们插入快捷连接(图 3,右),将网络转换为其对应的残差版本。当输入和输出具有相同的维度时(图 3 中的实线快捷连接)时,可以直接使用恒等快捷连接(方程(1))。当维度增加(图 3 中的虚线快捷连接)时,我们考虑两个选项:(A)快捷连接仍然执行恒等映射,额外填充零输入以增加维度。此选项不会引入额外的参数;(B)方程(2)中的投影快捷连接用于匹配维度(由 1×1 卷积完成)。对于这两个选项,当快捷连接跨越两种尺寸的特征图时,它们执行时步长为 2

3.4. Implementation

3.4. 实现

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to 60 × 104 iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].

ImageNet 中我们的实现遵循[21,41]的实践。调整图像大小,其较短的边在[256,480]之间进行随机采样用于尺度增强[40]。224×224 裁剪是从图像或其水平翻转中随机采样,并逐像素减去均值[21]。使用了[21]中的标准颜色增强。在每个卷积之后和激活之前,我们采用批量归一化(BN)[16]。我们按照[12]的方法初始化权重,从零开始训练所有的简单/残差网络。我们使用批大小256SGD 方法。学习速度从 0.1 开始,当误差稳定时学习率除以 10,并且模型训练高达 60×104 次迭代。我们使用的权重衰减为 0.0001,动量为 0.9。根据[16]的实践,我们不使用 dropout[14]。

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully- convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

在测试阶段,为了比较学习我们采用标准的 10-crop 测试[21]。对于最好的结果,我们采用如[41, 13]中的全卷积形式,并在多尺度上对分数进行平均(图像归一化,短边位于 {224, 256, 384, 480, 640} 中)。

$3 作者在本节先讲了残差网络更好的理论依据原始函数和残差函数学习的难易程度是不同的
然后说明了残差函数的形式是可变的,文中使用的是两层三层的,并且不采用一层的(类似于线性层);
紧接着通过对比 VGG34-layer plain34-layer residual讲解了网络的结构
最后讲解了网络的实现细节

4. Experiments

4. 实验

4.1. ImageNet Classification

4.1. ImageNet 分类

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

我们在 ImageNet 2012 分类数据集[36]对我们的方法进行了评估,该数据集由 1000 个类别组成。这些模型在 128 万张训练图像上进行训练,并在 5 万张验证图像上进行评估。我们也获得了测试服务器报告的在 10 万张测试图像上的最终结果。我们评估了 top-1top-5 错误率。

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for de- tailed architectures.

简单网络。我们首先评估 18 层和 34 层的简单网络。34 层简单网络在图 3(中间)。18 层简单网络是一种类似的形式。有关详细的体系结构,请参见表 1。

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem — the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

ResNet 论文深度讲解

Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

表 2 中的结果表明,较深的 34 层简单网络比较浅的 18 层简单网络有更高的验证误差。为了揭示原因,在图 4(左图)中,我们比较训练过程中的训练/验证误差。我们观察到退化问题——虽然 18 层简单网络的解空间是 34 层简单网络解空间的子空间,但 34 层简单网络在整个训练过程中具有较高的训练误差

ResNet 论文深度讲解

论文地址

https://arxiv.org/pdf/1512.03385.pdf

阅读方式

本文采用原文、翻译、记录的排版。

笔者使用如何阅读深度学习论文的方法进行阅读,文中标注的 $1(第一步)、$2、$3、$4 分别表示在第该步阅读中的记录和思考

Deep Residual Learning for Image Recognition

图像识别的深度残差学习

$1 本论文介绍深度残差图像识别的运用,可以猜到深度残差就是本文论的核心

Abstract

摘要

Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [41] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers.

更深的神经网络更难训练。我们提出了一种残差学习框架来减轻网络训练,这些网络比以前使用的网络更深。我们明确地将层变为学习关于层输入的残差函数,而不是学习未参考的函数。我们提供了全面的经验证据说明这些残差网络很容易优化,并可以显著增加深度提高准确性。在 ImageNet 数据集上我们评估了深度高达 152 层的残差网络——比 VGG[41]深 8 倍但仍具有较低的复杂度。这些残差网络的集合在 ImageNet 测试集上取得了 3.57% 的错误率。这个结果在 ILSVRC 2015 分类任务上赢得了第一名。我们也在 CIFAR-10 上分析了 100 层和 1000 层的残差网络。

The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

对于许多视觉识别任务而言,表示的深度是至关重要的。仅由于我们非常深度的表示,我们便在 COCO 目标检测数据集上得到了 28% 的相对提高。深度残差网络是我们向 ILSVRCCOCO 2015 竞赛提交的基础,我们也赢得了 ImageNet 检测任务,ImageNet 定位任务,COCO 检测和 COCO 分割任务的第一名。

$1 摘要中指出更深的神经网络更难训练,而作者提出的深度残差网络可以解决这个问题,从而可以通过显著增加深度提高准确性。并且,深度残差网络在几次大赛中都获得了第一名的成绩。

1 Introduction

1 简介

Deep convolutional neural networks [22, 21] have led to a series of breakthroughs for image classification [21, 50, 40]. Deep networks naturally integrate low/mid/high-level features [50] and classifiers in an end-to-end multi-layer fashion, and the “levels” of features can be enriched by the number of stacked layers (depth). Recent evidence [41, 44] reveals that network depth is of crucial importance, and the leading results [41, 44, 13, 16] on the challenging ImageNet dataset [36] all exploit “very deep” [41] models, with a depth of sixteen [41] to thirty [16]. Many other non-trivial visual recognition tasks [8, 12, 7, 32, 27] have also greatly benefited from very deep models.

深度卷积神经网络[22, 21]造就了图像分类[21, 49, 39]的一系列突破。深度网络自然地将低/中/高级特征[49]和分类器端到端多层方式进行集成,特征的“级别”可以通过堆叠层的数量(深度)来丰富。最近的证据[40, 43]显示网络深度至关重要,在具有挑战性的 ImageNet 数据集上领先的结果都采用了“非常深”[40]的模型,深度从 16 [40]到 30 [16]之间。许多其它重要的视觉识别任务[7, 11, 6, 32, 27]也从非常深的模型中得到了极大受益。

Driven by the significance of depth, a question arises: Is learning better networks as easy as stacking more layers? An obstacle to answering this question was the notorious problem of vanishing/exploding gradients [1, 9], which hamper convergence from the beginning. This problem, however, has been largely addressed by normalized initialization [23, 9, 37, 13] and intermediate normalization layers [16], which enable networks with tens of layers to start converging for stochastic gradient descent (SGD) with back-propagation [22].

ResNet 论文深度讲解

Figure 1. Training error (left) and test error (right) on CIFAR-10 with 20-layer and 56-layer “plain” networks. The deeper network has higher training error, and thus test error. Similar phenomena on ImageNet is presented in Fig. 4.

深度重要性的推动下,出现了一个问题:学些更好的网络是否像堆叠更多的层一样容易?回答这个问题的一个障碍是梯度消失/爆炸[14, 1, 8]这个众所周知的问题,它从一开始就阻碍了收敛。然而,这个问题通过标准初始化[23, 8, 36, 12]和中间标准化层[16]在很大程度上已经解决,这使得数十层的网络能通过具有反向传播的随机梯度下降(SGD)开始收敛。

ResNet 论文深度讲解

图 1. 具有 20 层和 56 层“普通”网络的 CIFAR-10 上的训练误差(左)和测试误差(右)。更深的网络具有更高的训练误差,从而具有更高的测试误差ImageNet 上的类似现象如图 4 所示。

When deeper networks are able to start converging, a degradation problem has been exposed: with the network depth increasing, accuracy gets saturated (which might be unsurprising) and then degrades rapidly. Unexpectedly, such degradation is not caused by overfitting, and adding more layers to a suitably deep model leads to higher training error, as reported in [11, 42] and thoroughly verified by our experiments. Fig. 1 shows a typical example.

当更深的网络能够开始收敛时,暴露了一个退化问题:随着网络深度的增加,准确率达到饱和(这可能并不奇怪)然后迅速下降。意外的是,这种下降不是由过拟合引起的,并且在适当的深度模型上添加更多的层会导致更高的训练误差,正如[10, 41]中报告的那样,并且由我们的实验完全证实。图 1 显示了一个典型的例子。

The degradation (of training accuracy) indicates that not all systems are similarly easy to optimize. Let us consider a shallower architecture and its deeper counterpart that adds more layers onto it. There exists a solution by construction to the deeper model: the added layers are identity mapping, and the other layers are copied from the learned shallower model. The existence of this constructed solution indicates that a deeper model should produce no higher training error than its shallower counterpart. But experiments show that our current solvers on hand are unable to find solutions that are comparably good or better than the constructed solution (or unable to do so in feasible time).

(训练准确率的)退化表明并非所有系统都同样容易优化。让我们考虑一个较浅的架构,以及在其上添加更多层得到的较深架构。对于这个较深的模型,存在一个“构造出来”的解:添加的层都是恒等映射,其余各层则直接从已学习好的较浅模型中拷贝而来。这个构造解的存在表明,较深的模型产生的训练误差不应高于其较浅的对应模型。但实验表明,我们目前手头的求解器无法找到与这个构造解相当或更好的解(或者无法在可行的时间内做到)。

In this paper, we address the degradation problem by introducing a deep residual learning framework. Instead of hoping each few stacked layers directly fit a desired underlying mapping, we explicitly let these layers fit a residual mapping. Formally, denoting the desired underlying mapping as $H(x)$, we let the stacked nonlinear layers fit another mapping of $F (x) := H(x) − x$. The original mapping is recast into $F(x)+x$. We hypothesize that it is easier to optimize the residual mapping than to optimize the original, unreferenced mapping. To the extreme, if an identity mapping were optimal, it would be easier to push the residual to zero than to fit an identity mapping by a stack of nonlinear layers.

Figure 2. Residual learning: a building block.

在本文中,我们通过引入深度残差学习框架来解决退化问题。我们不是希望每几个堆叠的层直接拟合期望的基础映射,而是明确地让这些层去拟合一个残差映射。形式上,将期望的基础映射表示为 $H(x)$,我们让堆叠的非线性层去拟合另一个映射 $F(x) := H(x) − x$,于是原始映射被改写为 $F(x)+x$。我们假设残差映射比原始的、未参考的映射更容易优化。在极端情况下,如果恒等映射是最优的,那么把残差推向零要比用一堆非线性层去拟合恒等映射更容易。

图 2. 残差学习:构建块。

The formulation of $F (x) + x$ can be realized by feedforward neural networks with “shortcut connections” (Fig. 2). Shortcut connections [2, 34, 49] are those skipping one or more layers. In our case, the shortcut connections simply perform identity mapping, and their outputs are added to the outputs of the stacked layers (Fig. 2). Identity shortcut connections add neither extra parameter nor computational complexity. The entire network can still be trained end-to-end by SGD with backpropagation, and can be easily implemented using common libraries (e.g., Caffe [19]) without modifying the solvers.

公式 $F(x) + x$ 可以通过带有“快捷连接”的前馈神经网络(图 2)来实现。快捷连接[2, 34, 49]是指跳过一层或多层的连接。在我们的情形中,快捷连接只执行恒等映射,其输出被加到堆叠层的输出上(图 2)。恒等快捷连接既不增加额外的参数,也不增加计算复杂度。整个网络仍然可以通过带反向传播的 SGD 进行端到端训练,并且可以用常见的库(例如 Caffe [19])轻松实现,而无需修改求解器。
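
下面给出笔者按这段描述写的一个最小示意(使用 PyTorch,类名 BasicResidualBlock 为笔者虚构,论文原文并未给出代码;BN 的位置参照后文 3.4 节的约定):快捷分支只做恒等映射并与堆叠层的输出逐元素相加,本身不含任何参数。

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicResidualBlock(nn.Module):
    """两层 3x3 卷积构成残差函数 F(x),快捷分支为恒等映射(假设输入输出维度相同)。"""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))  # 残差映射 F(x)
        return F.relu(out + x)  # F(x) + x:逐元素相加,再做第二个非线性

# 用法示意:64 通道、56x56 的特征图
x = torch.randn(1, 64, 56, 56)
y = BasicResidualBlock(64)(x)   # y 的形状与 x 相同
```

可以看到,快捷分支没有任何权重,相加操作的开销也可以忽略,这正是正文强调的“不增加额外参数和计算复杂度”。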

We present comprehensive experiments on ImageNet [36] to show the degradation problem and evaluate our method. We show that: 1) Our extremely deep residual nets are easy to optimize, but the counterpart “plain” nets (that simply stack layers) exhibit higher training error when the depth increases; 2) Our deep residual nets can easily enjoy accuracy gains from greatly increased depth, producing results substantially better than previous networks.

我们在 ImageNet[36]上进行了全面的实验来展示退化问题并评估我们的方法。我们将展示:1)我们极深的残差网络易于优化,而对应的“简单”网络(仅堆叠层)在深度增加时表现出更高的训练误差;2)我们的深度残差网络可以轻松地从大幅增加的深度中获得准确率提升,其结果显著优于以前的网络。

Similar phenomena are also shown on the CIFAR-10 set [20], suggesting that the optimization difficulties and the effects of our method are not just akin to a particular dataset. We present successfully trained models on this dataset with over 100 layers, and explore models with over 1000 layers.

在 CIFAR-10 数据集[20]上也出现了类似的现象,这表明优化的困难以及我们方法的效果并不仅限于某个特定的数据集。我们在该数据集上展示了成功训练的超过 100 层的模型,并探索了超过 1000 层的模型。

On the ImageNet classification dataset [36], we obtain excellent results by extremely deep residual nets. Our 152-layer residual net is the deepest network ever presented on ImageNet, while still having lower complexity than VGG nets [41]. Our ensemble has 3.57% top-5 error on the ImageNet test set, and won the 1st place in the ILSVRC 2015 classification competition. The extremely deep representations also have excellent generalization performance on other recognition tasks, and lead us to further win the 1st places on: ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation in ILSVRC & COCO 2015 competitions. This strong evidence shows that the residual learning principle is generic, and we expect that it is applicable in other vision and non-vision problems.

在 ImageNet 分类数据集[36]上,我们通过极深的残差网络获得了出色的结果。我们的 152 层残差网络是迄今为止在 ImageNet 上展示过的最深的网络,同时复杂度仍低于 VGG 网络[41]。我们的模型集合在 ImageNet 测试集上取得了 3.57% 的 top-5 错误率,并在 ILSVRC 2015 分类比赛中获得了第一名。极深的表示在其它识别任务上也具有出色的泛化性能,使我们在 ILSVRC & COCO 2015 竞赛的 ImageNet 检测、ImageNet 定位、COCO 检测和 COCO 分割任务上进一步赢得了第一名。这些有力的证据表明残差学习的原则是通用的,我们期望它也适用于其它视觉和非视觉问题。

$2 从简介部分可以了解到:梯度消失/爆炸问题已经基本被标准初始化和中间标准化层解决,但更深的网络仍然面临退化问题,而且这种退化不是由过拟合引起的。作者提出通过深度残差学习(恒等映射、快捷连接)来解决这个退化问题,既不增加额外的参数,也不增加计算复杂度,使网络易于优化并提高了泛化性能。同时,作者在多个数据集上的实践也表明残差学习的原则是通用的,不局限于特定的数据集,也不一定局限于视觉问题。

2 Related Work

2 相关工作

Residual Representations. In image recognition, VLAD [18] is a representation that encodes by the residual vectors with respect to a dictionary, and Fisher Vector [30] can be formulated as a probabilistic version [18] of VLAD. Both of them are powerful shallow representations for image retrieval and classification [4, 48]. For vector quantization, encoding residual vectors [17] is shown to be more effective than encoding original vectors.

残差表示。在图像识别中,VLAD[18]是一种相对于字典用残差向量进行编码的表示形式,Fisher 向量[30]可以看作 VLAD 的概率版本[18]。它们都是用于图像检索和分类[4, 48]的强大的浅层表示。对于矢量量化,编码残差向量[17]被证明比编码原始向量更有效。

In low-level vision and computer graphics, for solving Partial Differential Equations (PDEs), the widely used Multigrid method [3] reformulates the system as subproblems at multiple scales, where each subproblem is responsible for the residual solution between a coarser and a finer scale. An alternative to Multigrid is hierarchical basis preconditioning [45, 46], which relies on variables that represent residual vectors between two scales. It has been shown [3, 45, 46] that these solvers converge much faster than standard solvers that are unaware of the residual nature of the solutions. These methods suggest that a good reformulation or preconditioning can simplify the optimization.

在低级视觉和计算机图形学中,为了求解偏微分方程(PDE),广泛使用的 Multigrid 方法[3]将系统重构为多个尺度上的子问题,其中每个子问题负责较粗尺度和较细尺度之间的残差解。Multigrid 的一种替代方法是层次化基预处理[45, 46],它依赖于表示两个尺度之间残差向量的变量。已有研究[3, 45, 46]表明,这些求解器比不了解解的残差性质的标准求解器收敛得更快。这些方法表明,良好的重构或预处理可以简化优化。

Shortcut Connections. Practices and theories that lead to shortcut connections [2, 34, 49] have been studied for a long time. An early practice of training multi-layer perceptrons (MLPs) is to add a linear layer connected from the network input to the output [34, 49]. In [44, 24], a few intermediate layers are directly connected to auxiliary classifiers for addressing vanishing/exploding gradients. The papers of [39, 38, 31, 47] propose methods for centering layer responses, gradients, and propagated errors, implemented by shortcut connections. In [44], an “inception” layer is composed of a shortcut branch and a few deeper branches.

快捷连接。导向快捷连接[2, 34, 49]的实践和理论已经被研究了很长时间。训练多层感知机(MLP)的一个早期实践是添加一个从网络输入连接到输出的线性层[34, 49]。在[44, 24]中,一些中间层被直接连接到辅助分类器,用于解决梯度消失/爆炸问题。论文[39, 38, 31, 47]提出了利用快捷连接对层响应、梯度和传播误差进行居中化处理的方法。在[44]中,一个“inception”层由一个快捷分支和若干更深的分支组成。

Concurrent with our work, “highway networks” [42, 43] present shortcut connections with gating functions [15]. These gates are data-dependent and have parameters, in contrast to our identity shortcuts that are parameter-free. When a gated shortcut is “closed” (approaching zero), the layers in highway networks represent non-residual functions. On the contrary, our formulation always learns residual functions; our identity shortcuts are never closed, and all information is always passed through, with additional residual functions to be learned. In addition, highway networks have not demonstrated accuracy gains with extremely increased depth (e.g., over 100 layers).

与我们的工作同期,“highway networks” [42, 43]提出了带有门控函数[15]的快捷连接。这些门是数据相关且带有参数的,与我们不带参数的恒等快捷连接相反。当门控快捷连接“关闭”(趋近于零)时,highway 网络中的层表示的是非残差函数。相反,我们的公式总是学习残差函数;我们的恒等快捷连接永远不会关闭,所有信息总是被传递,同时还有额外的残差函数需要学习。此外,highway 网络还没有展示出在极端增加的深度(例如超过 100 层)下的准确率提升。

$3 作者指出他并不是残差思想的第一个提出者,不过作者将其很好地运用起来了。

3. Deep Residual Learning

3. 深度残差学习

3.1. Residual Learning

3.1. 残差学习

Let us consider $H(x)$ as an underlying mapping to be fit by a few stacked layers (not necessarily the entire net), with x denoting the inputs to the first of these layers. If one hypothesizes that multiple nonlinear layers can asymptotically approximate complicated functions2, then it is equivalent to hypothesize that they can asymptotically approximate the residual functions, i.e., $H(x) − x$ (assuming that the input and output are of the same dimensions). So rather than expect stacked layers to approximate H(x), we explicitly let these layers approximate a residual function $F(x) := H(x) − x$. The original function thus becomes $F(x)+x$. Although both forms should be able to asymptotically approximate the desired functions (as hypothesized), the ease of learning might be different.

我们把 $H(x)$ 看作由几个堆叠层(不必是整个网络)要拟合的基础映射,$x$ 表示这些层中第一层的输入。如果假设多个非线性层可以渐近地近似复杂函数,那么这等价于假设它们可以渐近地近似残差函数,即 $H(x) − x$(假设输入和输出维度相同)。因此,我们明确地让这些层近似残差函数 $F(x) := H(x) − x$,而不是期望堆叠层去近似 $H(x)$,原始函数因此变为 $F(x)+x$。尽管两种形式都应该能够渐近地近似期望的函数(如假设的那样),但学习的难易程度可能有所不同。

This reformulation is motivated by the counterintuitive phenomena about the degradation problem (Fig. 1, left). As we discussed in the introduction, if the added layers can be constructed as identity mappings, a deeper model should have training error no greater than its shallower counterpart. The degradation problem suggests that the solvers might have difficulties in approximating identity mappings by multiple nonlinear layers. With the residual learning reformulation, if identity mappings are optimal, the solvers may simply drive the weights of the multiple nonlinear layers toward zero to approach identity mappings.

关于退化问题的反直觉现象(图 1 左)激发了这种重构。正如我们在引言中讨论的那样,如果添加的层可以被构造为恒等映射,那么更深模型的训练误差应该不大于其较浅的对应模型。退化问题表明,求解器可能难以用多个非线性层去近似恒等映射。而通过残差学习的重构,如果恒等映射是最优的,求解器只需简单地把多个非线性层的权重推向零,就可以逼近恒等映射。

In real cases, it is unlikely that identity mappings are optimal, but our reformulation may help to precondition the problem. If the optimal function is closer to an identity mapping than to a zero mapping, it should be easier for the solver to find the perturbations with reference to an identity mapping, than to learn the function as a new one. We show by experiments (Fig. 7) that the learned residual functions in general have small responses, suggesting that identity mappings provide reasonable preconditioning.

Figure 7. Standard deviations (std) of layer responses on CIFAR-10. The responses are the outputs of each 3×3 layer, after BN and before nonlinearity. Top: the layers are shown in their original order. Bottom: the responses are ranked in descending order.

在实际情况下,恒等映射不太可能是最优的,但我们的重构可能有助于对问题进行预处理。如果最优函数比零映射更接近恒等映射,那么求解器参照恒等映射去寻找扰动,应该比把该函数当作一个全新的函数来学习更容易。我们通过实验(图 7)表明,学习到的残差函数通常具有较小的响应,说明恒等映射提供了合理的预处理。

图 7. 层响应在 CIFAR-10 上的标准差(std)。这些响应是每个 3×3 层的输出,在 BN 之后、非线性之前。上面:以原始顺序显示层。下面:响应按降序排列。

3.2. Identity Mapping by Shortcuts

3.2. 快捷恒等映射

We adopt residual learning to every few stacked layers. A building block is shown in Fig. 2. Formally, in this paper we consider a building block defined as:

$y = F(x, \{W_i\}) + x.$ (1)

Here $x$ and $y$ are the input and output vectors of the layers considered. The function $F(x, \{W_i\})$ represents the residual mapping to be learned. For the example in Fig. 2 that has two layers, $F = W_2\sigma(W_1 x)$ in which $\sigma$ denotes ReLU [29] and the biases are omitted for simplifying notations. The operation $F + x$ is performed by a shortcut connection and element-wise addition. We adopt the second nonlinearity after the addition (i.e., $\sigma(y)$, see Fig. 2).

我们每隔几个堆叠层就采用一次残差学习。构建块如图 2 所示。在本文中,我们把构建块正式定义为:

$y = F(x, \{W_i\}) + x.$ (1)

这里 $x$ 和 $y$ 分别是所考虑的这些层的输入和输出向量。函数 $F(x, \{W_i\})$ 表示要学习的残差映射。对于图 2 中有两层的例子,$F = W_2\sigma(W_1 x)$,其中 $\sigma$ 表示 ReLU[29],为了简化记号省略了偏置项。$F + x$ 操作通过快捷连接和逐元素相加来执行。相加之后我们再采用第二个非线性(即 $\sigma(y)$,见图 2)。
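
为了更直观地理解方程(1),下面用 numpy 按正文的两层例子做一个数值草图(向量维度、随机权重均为笔者假设,仅用来说明计算顺序,与论文中的卷积实现无关):

```python
import numpy as np

def relu(v):
    return np.maximum(v, 0.0)

rng = np.random.default_rng(0)
x  = rng.normal(size=4)        # 输入向量(维度为笔者假设)
W1 = rng.normal(size=(4, 4))   # 第一层权重(省略偏置,与正文一致)
W2 = rng.normal(size=(4, 4))   # 第二层权重

F_x = W2 @ relu(W1 @ x)        # 残差映射 F(x, {W_i}) = W2 * sigma(W1 * x)
y   = relu(F_x + x)            # 方程(1):先做 F(x) + x,再做第二个非线性 sigma
print(y.shape)                 # (4,) —— 输出维度与输入相同,恒等快捷连接才能直接相加
```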

The shortcut connections in Eqn.(1) introduce neither extra parameter nor computation complexity. This is not only attractive in practice but also important in our comparisons between plain and residual networks. We can fairly compare plain/residual networks that simultaneously have the same number of parameters, depth, width, and computational cost (except for the negligible element-wise addition).

方程(1)中的快捷连接既没有引入额外的参数,也没有增加计算复杂度。这不仅在实践中有吸引力,而且在简单网络和残差网络的比较中也很重要:我们可以公平地比较同时具有相同参数数量、相同深度、宽度和计算成本的简单/残差网络(除了可以忽略不计的逐元素加法之外)。

The dimensions of $x$ and $F$ must be equal in Eqn.(1). If this is not the case (e.g., when changing the input/output channels), we can perform a linear projection $W_s$ by the shortcut connections to match the dimensions:

$y = F(x, \{W_i\}) + W_s x.$ (2)

We can also use a square matrix $W_s$ in Eqn.(1). But we will show by experiments that the identity mapping is sufficient for addressing the degradation problem and is economical, and thus $W_s$ is only used when matching dimensions.

方程(1)中 $x$ 和 $F$ 的维度必须相等。如果不是这种情况(例如,当改变输入/输出通道数时),我们可以通过快捷连接执行一个线性投影 $W_s$ 来匹配维度:

$y = F(x, \{W_i\}) + W_s x.$ (2)

我们也可以在方程(1)中使用方阵 $W_s$。但我们将通过实验表明,恒等映射足以解决退化问题并且更经济,因此 $W_s$ 仅在需要匹配维度时使用。
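
维度不一致时的投影快捷连接可以写成下面这样的草图(笔者用 PyTorch 以 1×1 卷积实现 $W_s$,类名以及投影后接 BN 的写法均为常见实现习惯上的假设,并非论文给出的代码):

```python
import torch.nn as nn
import torch.nn.functional as F

class ProjectionResidualBlock(nn.Module):
    """输入/输出维度不同时的残差块示意:快捷分支用 1x1 卷积充当方程(2)中的 W_s。"""
    def __init__(self, in_channels, out_channels, stride=2):
        super().__init__()
        self.conv1 = nn.Conv2d(in_channels, out_channels, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_channels)
        self.conv2 = nn.Conv2d(out_channels, out_channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_channels)
        # W_s:1x1 卷积投影,使快捷分支的通道数和空间尺寸与 F(x) 对齐(这里附带 BN 属于实现习惯)
        self.shortcut = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, 1, stride=stride, bias=False),
            nn.BatchNorm2d(out_channels),
        )

    def forward(self, x):
        out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
        return F.relu(out + self.shortcut(x))   # y = F(x, {W_i}) + W_s x
```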

The form of the residual function $F$ is flexible. Experiments in this paper involve a function $F$ that has two or three layers (Fig. 5), while more layers are possible. But if F has only a single layer, Eqn.(1) is similar to a linear layer: $y = W_1x + x$, for which we have not observed advantages.

Figure 5. A deeper residual function F for ImageNet. Left: a building block (on 56×56 feature maps) as in Fig. 3 for ResNet-34. Right: a “bottleneck” building block for ResNet-50/101/152.

图 5. ImageNet 的深度残差函数 $F$。左:ResNet-34 的构建块(在 56×56 的特征图上),如图 3。右:ResNet-50/101/152 的 “bottleneck”构建块。

残差函数 $F$ 的形式是灵活的。本文的实验涉及具有两层或三层的函数 $F$(图 5),当然也可以有更多层。但如果 $F$ 只有一层,方程(1)就类似于一个线性层:$y = W_1 x + x$,对此我们没有观察到优势。
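
图 5 右侧的三层 “bottleneck” 构建块(1×1 降维、3×3 卷积、1×1 升维)可以按下面的草图理解(通道数 256/64 取自论文中的例子,BN 的位置等细节由笔者按 3.4 节的约定补全,仅供参考):

```python
import torch.nn as nn
import torch.nn.functional as F

class BottleneckBlock(nn.Module):
    """三层残差函数 F 的示意:1x1 先降维、3x3 做卷积、1x1 再恢复维度,最后与恒等快捷连接相加。"""
    def __init__(self, channels=256, mid_channels=64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, mid_channels, 1, bias=False)                 # 1x1,降维
        self.bn1 = nn.BatchNorm2d(mid_channels)
        self.conv2 = nn.Conv2d(mid_channels, mid_channels, 3, padding=1, bias=False)  # 3x3
        self.bn2 = nn.BatchNorm2d(mid_channels)
        self.conv3 = nn.Conv2d(mid_channels, channels, 1, bias=False)                 # 1x1,升维
        self.bn3 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = F.relu(self.bn2(self.conv2(out)))
        out = self.bn3(self.conv3(out))
        return F.relu(out + x)
```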

We also note that although the above notations are about fully-connected layers for simplicity, they are applicable to convolutional layers. The function $F(x,{W_i})$ can represent multiple convolutional layers. The element-wise addition is performed on two feature maps, channel by channel.

我们还注意到,为了简单起见,尽管上述符号是关于全连接层的,但它们同样适用于卷积层。函数 $F(x,{W_i})$ 可以表示多个卷积层。元素加法在两个特征图上逐通道进行。

3.3. Network Architectures

3.3. 网络架构

We have tested various plain/residual nets, and have observed consistent phenomena. To provide instances for discussion, we describe two models for ImageNet as follows.

我们测试了各种简单/残差网络,并观察到了一致的现象。为了提供讨论的实例,我们描述了 ImageNet 的两个模型如下。

Plain Network. Our plain baselines (Fig. 3, middle) are mainly inspired by the philosophy of VGG nets [41] (Fig. 3, left). The convolutional layers mostly have 3×3 filters and follow two simple design rules: (i) for the same output feature map size, the layers have the same number of filters; and (ii) if the feature map size is halved, the number of filters is doubled so as to preserve the time complexity per layer. We perform downsampling directly by convolutional layers that have a stride of 2. The network ends with a global average pooling layer and a 1000-way fully-connected layer with softmax. The total number of weighted layers is 34 in Fig. 3 (middle).

Figure 3. Example network architectures for ImageNet. Left: the VGG-19 model [41] (19.6 billion FLOPs) as a reference. Middle: a plain network with 34 parameter layers (3.6 billion FLOPs). Right: a residual network with 34 parameter layers (3.6 billion FLOPs). The dotted shortcuts increase dimensions. Table 1 shows more details and other variants.

Table 1. Architectures for ImageNet. Building blocks are shown in brackets (see also Fig. 5), with the numbers of blocks stacked. Down-sampling is performed by conv3_1, conv4_1, and conv5_1 with a stride of 2.

简单网络。我们的简单网络基准(图 3,中间)主要受 VGG 网络[41](图 3,左)的设计哲学启发。卷积层大多使用 3×3 的滤波器,并遵循两条简单的设计规则:(i)对于相同的输出特征图尺寸,各层具有相同数量的滤波器;(ii)如果特征图尺寸减半,则滤波器数量加倍,以保持每层的时间复杂度。我们直接用步长为 2 的卷积层执行下采样。网络以一个全局平均池化层和一个带 softmax 的 1000 路全连接层结束。图 3(中间)中加权层的总数为 34。

图 3. ImageNet 的网络架构例子。左:作为参考的 VGG-19 模型[41](196 亿 FLOPs)。中:具有 34 个参数层的简单网络(36 亿 FLOPs)。右:具有 34 个参数层的残差网络(36 亿 FLOPs)。带虚线的快捷连接表示维度增加。表 1 显示了更多细节和其它变种。

表 1. ImageNet 架构。构建块显示在括号中(另见图 5),并标注了堆叠的构建块数量。下采样由步长为 2 的 conv3_1、conv4_1 和 conv5_1 执行。

It is worth noticing that our model has fewer filters and lower complexity than VGG nets [41] (Fig. 3, left). Our 34-layer baseline has 3.6 billion FLOPs (multiply-adds), which is only 18% of VGG-19 (19.6 billion FLOPs).

值得注意的是,我们的模型比 VGG 网络[41](图 3,左)拥有更少的滤波器和更低的复杂度。我们的 34 层基准网络有 36 亿 FLOPs(乘加),仅为 VGG-19(196 亿 FLOPs)的 18%。

Residual Network. Based on the above plain network, we insert shortcut connections (Fig. 3, right) which turn the network into its counterpart residual version. The identity shortcuts (Eqn.(1)) can be directly used when the input and output are of the same dimensions (solid line shortcuts in Fig. 3). When the dimensions increase (dotted line shortcuts in Fig. 3), we consider two options: (A) The shortcut still performs identity mapping, with extra zero entries padded for increasing dimensions. This option introduces no extra parameter; (B) The projection shortcut in Eqn.(2) is used to match dimensions (done by 1×1 convolutions). For both options, when the shortcuts go across feature maps of two sizes, they are performed with a stride of 2.

残差网络。基于上述简单网络,我们插入快捷连接(图 3,右),将网络转换为其对应的残差版本。当输入和输出维度相同时(图 3 中的实线快捷连接),可以直接使用恒等快捷连接(方程(1))。当维度增加时(图 3 中的虚线快捷连接),我们考虑两个选项:(A)快捷连接仍然执行恒等映射,为增加的维度填充额外的零元素,此选项不引入额外参数;(B)使用方程(2)中的投影快捷连接来匹配维度(由 1×1 卷积完成)。对于这两个选项,当快捷连接跨越两种尺寸的特征图时,均以步长 2 执行。
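
按照上面的设计规则(特征图减半时滤波器加倍、用步长为 2 的卷积下采样),沿用前文草图中的 BasicResidualBlock 和 ProjectionResidualBlock(需与上面的代码放在同一文件中运行),可以拼出一个 ResNet-34 风格的主干示意;每个阶段的块数 [3, 4, 6, 3] 是笔者按论文表 1 补全的:

```python
import torch.nn as nn

def make_stage(in_ch, out_ch, num_blocks, stride):
    # 阶段内的第一个块负责下采样/升维(对应虚线快捷连接,这里采用选项 B 的投影方式),
    # 其余块输入输出维度相同,直接使用恒等快捷连接。
    if stride == 1 and in_ch == out_ch:
        first = BasicResidualBlock(out_ch)
    else:
        first = ProjectionResidualBlock(in_ch, out_ch, stride=stride)
    rest = [BasicResidualBlock(out_ch) for _ in range(num_blocks - 1)]
    return nn.Sequential(first, *rest)

# 7x7 卷积 stem + 4 个阶段 + 全局平均池化 + 1000 路全连接,共 34 个加权层
resnet34_sketch = nn.Sequential(
    nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False), nn.BatchNorm2d(64), nn.ReLU(),
    nn.MaxPool2d(3, stride=2, padding=1),
    make_stage(64, 64, 3, stride=1),
    make_stage(64, 128, 4, stride=2),    # 特征图尺寸减半,滤波器数量加倍
    make_stage(128, 256, 6, stride=2),
    make_stage(256, 512, 3, stride=2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 1000),
)
```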

3.4. Implementation

3.4. 实现

Our implementation for ImageNet follows the practice in [21, 41]. The image is resized with its shorter side randomly sampled in [256, 480] for scale augmentation [41]. A 224×224 crop is randomly sampled from an image or its horizontal flip, with the per-pixel mean subtracted [21]. The standard color augmentation in [21] is used. We adopt batch normalization (BN) [16] right after each convolution and before activation, following [16]. We initialize the weights as in [13] and train all plain/residual nets from scratch. We use SGD with a mini-batch size of 256. The learning rate starts from 0.1 and is divided by 10 when the error plateaus, and the models are trained for up to $60 \times 10^4$ iterations. We use a weight decay of 0.0001 and a momentum of 0.9. We do not use dropout [14], following the practice in [16].

我们在 ImageNet 上的实现遵循[21, 41]中的实践。调整图像大小,使其较短边在 [256, 480] 中随机采样,以进行尺度增强[41]。从图像或其水平翻转中随机裁剪出 224×224 的区域,并减去逐像素均值[21]。我们使用了[21]中的标准颜色增强。按照[16],我们在每个卷积之后、激活之前采用批量归一化(BN)[16]。我们按照[13]中的方法初始化权重,并从零开始训练所有的简单/残差网络。我们使用小批量大小为 256 的 SGD。学习率从 0.1 开始,当误差达到平台期时除以 10,模型最多训练 $60 \times 10^4$ 次迭代。我们使用 0.0001 的权重衰减和 0.9 的动量。按照[16]中的实践,我们不使用 dropout[14]。
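
这一段的训练超参数可以直接落到代码里,下面是笔者用 PyTorch 写的一个配置草图(优化器和学习率调度器的具体类是笔者的近似选择,论文只说“误差进入平台期时把学习率除以 10”):

```python
import torch
import torch.nn as nn

model = nn.Linear(512, 1000)   # 占位模型,仅为演示配置;实际应替换为上文的残差网络

# SGD:小批量 256、初始学习率 0.1、动量 0.9、权重衰减 1e-4(均来自 3.4 节)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=1e-4)

# 误差进入平台期时把学习率除以 10;ReduceLROnPlateau 是笔者用来近似这一策略的选择
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=5)

# 训练循环中,每个 epoch 结束后用验证误差驱动调度(val_error 为假设的变量名):
#     scheduler.step(val_error)
# 训练总量约为 60 x 10^4 次迭代;不使用 dropout。
```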

In testing, for comparison studies we adopt the standard 10-crop testing [21]. For best results, we adopt the fully-convolutional form as in [41, 13], and average the scores at multiple scales (images are resized such that the shorter side is in {224, 256, 384, 480, 640}).

在测试阶段,为了进行比较研究,我们采用标准的 10-crop 测试[21]。为了获得最佳结果,我们采用如[41, 13]中的全卷积形式,并在多个尺度上对分数取平均(调整图像大小,使短边位于 {224, 256, 384, 480, 640} 中)。
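
多尺度打分的思路可以用下面的草图说明(函数名 multi_scale_score 为笔者虚构;这里假设网络末端使用全局平均池化,因此可以直接接受不同尺寸的输入,以近似论文所说的全卷积形式):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def multi_scale_score(model, image, scales=(224, 256, 384, 480, 640)):
    """把短边缩放到每个尺度,分别前向,再对分类分数取平均。image 形状为 (N, 3, H, W)。"""
    scores = []
    for s in scales:
        h, w = image.shape[-2:]
        if h < w:
            new_h, new_w = s, int(round(w * s / h))
        else:
            new_h, new_w = int(round(h * s / w)), s
        resized = F.interpolate(image, size=(new_h, new_w), mode='bilinear', align_corners=False)
        scores.append(model(resized).softmax(dim=1))
    return torch.stack(scores).mean(dim=0)   # 多尺度平均后的分类分数
```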

$3 作者在本节先给出了残差网络更容易训练的理论依据:原始函数和残差函数学习的难易程度是不同的;
然后说明残差函数的形式是灵活的,文中使用的是两层或三层的形式,而不采用一层的形式(因为类似于线性层);
紧接着通过对比 VGG、34-layer plain 和 34-layer residual,讲解了网络的结构;
最后讲解了网络的实现细节。

4. Experiments

4. 实验

4.1. ImageNet Classification

4.1. ImageNet 分类

We evaluate our method on the ImageNet 2012 classification dataset [36] that consists of 1000 classes. The models are trained on the 1.28 million training images, and evaluated on the 50k validation images. We also obtain a final result on the 100k test images, reported by the test server. We evaluate both top-1 and top-5 error rates.

我们在由 1000 个类别组成的 ImageNet 2012 分类数据集[36]上评估了我们的方法。这些模型在 128 万张训练图像上训练,并在 5 万张验证图像上评估。我们还获得了由测试服务器报告的在 10 万张测试图像上的最终结果。我们评估了 top-1 和 top-5 错误率。

Plain Networks. We first evaluate 18-layer and 34-layer plain nets. The 34-layer plain net is in Fig. 3 (middle). The 18-layer plain net is of a similar form. See Table 1 for detailed architectures.

简单网络。我们首先评估 18 层和 34 层的简单网络。34 层简单网络在图 3(中间)。18 层简单网络是一种类似的形式。有关详细的体系结构,请参见表 1。

The results in Table 2 show that the deeper 34-layer plain net has higher validation error than the shallower 18-layer plain net. To reveal the reasons, in Fig. 4 (left) we compare their training/validation errors during the training procedure. We have observed the degradation problem — the 34-layer plain net has higher training error throughout the whole training procedure, even though the solution space of the 18-layer plain network is a subspace of that of the 34-layer one.

Table 2. Top-1 error (%, 10-crop testing) on ImageNet validation. Here the ResNets have no extra parameter compared to their plain counterparts. Fig. 4 shows the training procedures.

表 2 中的结果表明,较深的 34 层简单网络比较浅的 18 层简单网络有更高的验证误差。为了揭示原因,我们在图 4(左)中比较了它们在训练过程中的训练/验证误差。我们观察到了退化问题——虽然 18 层简单网络的解空间是 34 层简单网络解空间的子空间,但 34 层简单网络在整个训练过程中始终具有更高的训练误差。
