Machine Learning for Tomographic Imaging - Professor Ge Wang - Page 49
Transposed convolution
The transposed convolution performs the transformation in the opposite direction of a normal convolution, i.e. it maps the output of a convolution to something with the shape of its input. The transposed convolution has the same connection pattern as a normal convolution, except that the connections run in the reverse direction. With a transposed convolution, one can expand the size of an input, up-sampling feature maps from low to high resolution.
To explain the transposed convolution, consider the example shown in figure 3.13. It is already known that a convolution can be expressed as a matrix multiplication. If the input X and the output Y are unrolled into column vectors, and the convolution kernel is represented as a sparse matrix C (sparse because the kernel acts only locally), then the convolution operation can be written as
Y = CX, (3.14)
where the matrix C can be written as
C = \begin{pmatrix}
w_1 & w_2 & 0 & w_3 & w_4 & 0 & 0 & 0 & 0 \\
0 & w_1 & w_2 & 0 & w_3 & w_4 & 0 & 0 & 0 \\
0 & 0 & 0 & w_1 & w_2 & 0 & w_3 & w_4 & 0 \\
0 & 0 & 0 & 0 & w_1 & w_2 & 0 & w_3 & w_4
\end{pmatrix}. (3.15)
Using this representation, the transposed matrix C⊤ is easily obtained for transposed convolution. Then, we have the output X′ of the transposed convolution expressed as
X′ = C⊤Y. (3.16)
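Equations (3.14)–(3.16) can be checked numerically. The following is a minimal sketch with NumPy, assuming a 2 × 2 kernel [[w1, w2], [w3, w4]] sliding over a 3 × 3 input with stride 1 and no padding (the setting implied by the 4 × 9 matrix in equation (3.15)); the specific weight and input values are illustrative, not from the book.

```python
import numpy as np

# Hypothetical 2x2 kernel values [[w1, w2], [w3, w4]].
w1, w2, w3, w4 = 1.0, 2.0, 3.0, 4.0

# Sparse matrix C of equation (3.15): each row places the kernel at one
# valid 2x2 position of the 3x3 input (unrolled row-major into 9-vectors).
C = np.array([
    [w1, w2, 0,  w3, w4, 0,  0,  0,  0],
    [0,  w1, w2, 0,  w3, w4, 0,  0,  0],
    [0,  0,  0,  w1, w2, 0,  w3, w4, 0],
    [0,  0,  0,  0,  w1, w2, 0,  w3, w4],
])

X = np.arange(1.0, 10.0)       # unrolled 3x3 input, shape (9,)
Y = C @ X                      # convolution, equation (3.14): shape (4,)
X_prime = C.T @ Y              # transposed convolution, equation (3.16): shape (9,)

print(Y.shape, X_prime.shape)  # (4,) (9,)
```

Note how multiplication by C shrinks the 9-vector (a 3 × 3 map) to a 4-vector (a 2 × 2 map), while multiplication by C⊤ expands it back to a 9-vector, which is exactly the up-sampling role of the transposed convolution.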
Figure 3.13. Convolution kernel (left), normal convolution (middle), and transposed convolution (right). The input is in blue and the output is in green.
It is worth mentioning that the output X′ of the transposed convolution need not equal the input X; the two only share the same connectivity. In addition, the weight values in a transposed convolution do not have to copy those of the corresponding convolution: when training a convolutional neural network, the weight parameters of the transposed convolution are updated iteratively like those of any other layer.
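The point that the transposed convolution keeps the connectivity of C⊤ while its weights remain free can be illustrated directly. In this sketch, a helper (hypothetical, not from the book) builds the 4 × 9 matrix of equation (3.15) for any 2 × 2 kernel; two different kernels then yield matrices with identical sparsity patterns but different values, mimicking independently trained weights.

```python
import numpy as np

def conv_matrix(k):
    """Build the 4x9 matrix of equation (3.15) for a 2x2 kernel k = [k1, k2, k3, k4]."""
    k1, k2, k3, k4 = k
    return np.array([
        [k1, k2, 0,  k3, k4, 0,  0,  0,  0],
        [0,  k1, k2, 0,  k3, k4, 0,  0,  0],
        [0,  0,  0,  k1, k2, 0,  k3, k4, 0],
        [0,  0,  0,  0,  k1, k2, 0,  k3, k4],
    ])

C = conv_matrix([1.0, 2.0, 3.0, 4.0])    # forward-convolution weights
D = conv_matrix([0.5, -1.0, 2.5, 0.1])   # independent (as-if-learned) weights

# Same sparsity pattern in the transposed matrices => same connectivity,
# even though the weight values differ.
same_pattern = np.array_equal(C.T != 0, D.T != 0)
print(same_pattern)  # True
```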