
Leaky ReLU


The leaky ReLU function is an improved variant of ReLU, as shown in figure 3.8 (Maas and Hannun 2013). To rescue 'dead neurons', leaky ReLU responds to an input less than 0 by greatly attenuating it while still retaining the information it carries, instead of discarding it entirely. In contrast to ReLU, leaky ReLU is written as follows:

$$\mathrm{Leaky\ ReLU}(x)=\begin{cases}\alpha x, & x<0\\ x, & x\geqslant 0,\end{cases}\qquad(3.6)$$


where α is a small positive constant, usually set to 0.1. By giving negative inputs this small positive slope, leaky ReLU prevents the information loss incurred by ReLU and keeps the gradient nonzero for negative inputs, mitigating the zero-gradient problem that causes dead neurons.

Figure 3.8. The leaky ReLU function, which greatly attenuates a negative input but still records the negative information (the slope for the negative input is set to α = 0.1).
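As a minimal sketch of equation (3.6), not taken from the book, leaky ReLU can be implemented in a few lines of NumPy; the function name leaky_relu and the default α = 0.1 are illustrative choices:

```python
import numpy as np

def leaky_relu(x, alpha=0.1):
    # Equation (3.6): alpha * x for x < 0, x for x >= 0.
    return np.where(x < 0, alpha * x, x)

# Negative inputs are attenuated by alpha rather than zeroed out,
# so their sign and magnitude information is preserved.
x = np.array([-2.0, -0.5, 0.0, 1.5])
print(leaky_relu(x))  # [-0.2  -0.05  0.    1.5 ]
```

Unlike ReLU, the derivative of this function is α rather than 0 for negative inputs, which is what keeps gradients flowing through otherwise 'dead' regions during backpropagation.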
