ELU

The exponential linear unit (ELU) is another variant of ReLU, as shown in figure 3.9 (Clevert et al 2015). When the input is less than 0, the ELU takes an exponential form, and the output saturation is controlled by the parameter α to ensure a smooth transition from the deactivated to the activated state. Compared to ReLU, ELU produces negative values that push the mean output closer to zero. This mean shift toward zero helps speed up learning because it reduces the bias shift. The ELU function is defined as follows:

$$\mathrm{ELU}(x)=\begin{cases}x, & x>0,\\ \alpha\,(e^{x}-1), & x\leqslant 0.\end{cases}\qquad(3.7)$$
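For illustration, equation (3.7) can be implemented in a few lines of NumPy. This is a minimal sketch of our own; the function name elu and the default α = 1 are assumptions, not taken from the text:

    import numpy as np

    def elu(x, alpha=1.0):
        """Exponential linear unit, equation (3.7). Default alpha is an assumption."""
        x = np.asarray(x, dtype=float)
        # expm1(x) computes exp(x) - 1; clamping the argument at 0 keeps the
        # (discarded) exponential branch from overflowing for large positive x.
        return np.where(x > 0, x, alpha * np.expm1(np.minimum(x, 0.0)))

    # Example: negative inputs saturate smoothly toward -alpha, positive pass through.
    print(elu(np.array([-3.0, -0.5, 0.0, 2.0])))  # ~[-0.95, -0.39, 0.0, 2.0]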


Figure 3.9. The ELU function, which produces a negative-valued region controlled by the parameter α (usually α = 0.5) to balance the positive outputs over positive input values.

ELU inherits the major advantages of leaky ReLU. In addition, its output is small near the origin, which gives smoother and more noise-robust behavior than leaky ReLU. However, the computational overhead of ELU is greater than that of leaky ReLU because of the exponential term.
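To make the smoothness comparison concrete, the sketch below (our own illustration, not from the book) contrasts the slopes of ELU and leaky ReLU around the origin: with α = 1 the ELU slope transitions continuously at 0, whereas leaky ReLU's slope jumps from its leak factor to 1.

    import numpy as np

    def elu_grad(x, alpha=1.0):
        """Slope of ELU: 1 for x > 0, alpha * exp(x) for x <= 0."""
        x = np.asarray(x, dtype=float)
        return np.where(x > 0, 1.0, alpha * np.exp(np.minimum(x, 0.0)))

    def leaky_relu_grad(x, alpha=0.01):
        """Slope of leaky ReLU: 1 for x > 0, alpha for x <= 0."""
        x = np.asarray(x, dtype=float)
        return np.where(x > 0, 1.0, alpha)

    xs = np.array([-1e-3, 0.0, 1e-3])
    print(elu_grad(xs))         # ~[0.999, 1.0, 1.0]: no jump at 0 when alpha = 1
    print(leaky_relu_grad(xs))  # [0.01, 0.01, 1.0]: slope jumps at the origin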
