5.2 Objective Function

The autoencoder was first introduced by Rumelhart et al. [16] as a model whose main goal is to learn a compressed representation of the input in an unsupervised way. We essentially create a network that attempts to reconstruct its input by learning the identity function. To do so, an autoencoder can be divided into two parts, $E$ (encoder) and $D$ (decoder), that minimize the following loss function w.r.t. the input $x$:

$$\min_{E,\,D} \; \lVert x - D(E(x)) \rVert^2$$
The encoder ($E$) and decoder ($D$) can be any mappings with the required input and output dimensions, but for image analysis they are usually CNNs. The norm of the distance can be different, and regularization can be incorporated. Therefore, a more general form of the loss function is

$$\min_{E,\,D} \; L\big(x,\, D(E(x))\big) + \lambda R(E, D) \tag{3}$$


Figure 5 Architecture of an autoencoder.

Source: Krizhevsky [14]

where $\hat{x} = D(E(x))$ is the output of the autoencoder, $L$ represents the loss function that captures the distance between an input $x$ and its corresponding output $\hat{x}$, and $R$ is an optional regularization term with weight $\lambda$.
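To make the objective concrete, below is a minimal sketch in PyTorch, assuming flattened 28x28 inputs (e.g., MNIST), a 32-dimensional embedding, a squared L2 reconstruction loss, and an L2 weight penalty as the regularizer $R$; the layer sizes, the value of $\lambda$, and the fully connected architecture are illustrative choices, not prescribed by the text.

```python
import torch
import torch.nn as nn

# A minimal autoencoder sketch: E compresses x to an embedding,
# D reconstructs x_hat = D(E(x)) from it. Layer sizes are illustrative.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, embed_dim=32):
        super().__init__()
        # Encoder E: input -> embedding
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )
        # Decoder D: embedding -> reconstruction
        self.decoder = nn.Sequential(
            nn.Linear(embed_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.randn(16, 784)   # toy batch standing in for real data
x_hat = model(x)           # x_hat = D(E(x))

# Reconstruction term of Equation (3): squared L2 distance
recon = ((x - x_hat) ** 2).sum(dim=1).mean()

# Regularization term R: an L2 penalty on the weights (one common choice);
# lambda is an illustrative value
lam = 1e-4
reg = sum(p.pow(2).sum() for p in model.parameters())

loss = recon + lam * reg
loss.backward()            # gradients for minimizing over E and D
```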

The output of the encoder, $E(x)$, is known as the embedding, which is the compressed representation of the input learned by the autoencoder. Autoencoders are useful for dimension reduction, since the dimension of the embedding vector can be set much smaller than that of the input. The embedding space is called the latent space, the space in which the autoencoder manipulates distances between data points. An advantage of the autoencoder is that it performs unsupervised learning and requires no labels for the input. For this reason, autoencoders are sometimes used in a pretraining stage to obtain a good initialization for downstream tasks.
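As a usage illustration of this dimension reduction, the trained encoder alone maps each input to its low-dimensional embedding; this snippet reuses the hypothetical `model` and batch `x` from the sketch following Equation (3).

```python
# Dimension reduction with the encoder alone, reusing `model` and `x`
# from the sketch above (hypothetical setup).
with torch.no_grad():          # no gradients needed at inference time
    z = model.encoder(x)       # embedding E(x) in the latent space
print(x.shape, "->", z.shape)  # torch.Size([16, 784]) -> torch.Size([16, 32])
```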
