4.1.2 Global Versus Local Interpretability

Global interpretability facilitates understanding of the whole logic of a model and follows the entire reasoning leading to all the different possible outcomes. This class of methods is helpful when ML models are used to inform population‐level decisions, such as those concerning drug consumption trends or climate change [33]. In such cases, a global effect estimate is more helpful than many separate explanations covering all possible idiosyncrasies. Works that propose globally interpretable models include the aforementioned additive models for predicting pneumonia risk [1] and rule sets generated from sparse Bayesian generative models [18]. However, these models are usually specifically structured, and thus limited in predictive power, in order to preserve interpretability. Yang et al. [33] proposed Global model Interpretation via Recursive Partitioning (GIRP), which builds a global interpretation tree for a wide range of ML models based on their local explanations. In their experiments, the authors showed that their method can discover whether a particular ML model is behaving in a reasonable way or is overfit to some unreasonable pattern. Valenzuela‐Escárcega et al. [34] proposed a supervised approach for information extraction that provides a global, deterministic interpretation. This work supports the idea that representation learning can be successfully combined with traditional, pattern‐based bootstrapping to yield models that are interpretable. Nguyen et al. [35] proposed an approach based on activation maximization – synthesizing the inputs that most strongly activate given neurons in a neural network – via a learned prior in the form of a deep generator network, to produce a globally interpretable model for image recognition. The activation maximization technique was previously used by Erhan et al. [36]. Although a multitude of techniques is used in the literature to enable global interpretability, it remains difficult to achieve in practice, especially for models that exceed a handful of parameters. In analogy with humans, who comprehend a complex system by focusing on one part of it at a time, local interpretability can be applied more readily.
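As a concrete illustration, the listing below is a minimal sketch of plain activation maximization in the spirit of Erhan et al. [36]: gradient ascent on the input itself so as to maximize a chosen output unit. It assumes PyTorch with a torchvision classifier; the class index, step count, and regularization weight are illustrative choices, and it omits the learned deep generator prior that [35] adds to keep the synthesized inputs natural‐looking.

import torch
from torchvision import models

# Minimal activation maximization sketch (after Erhan et al. [36]).
# Assumes torchvision; class index and hyperparameters are illustrative.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

target_class = 130                                    # hypothetical class index
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # start from random noise
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(x)
    # Maximize the target logit; the minus sign turns ascent into descent.
    # The small L2 penalty keeps the input bounded; [35] instead constrains
    # the search with a learned deep generator network.
    loss = -logits[0, target_class] + 1e-4 * x.norm()
    loss.backward()
    optimizer.step()

preferred_input = x.detach()  # approximates the unit's "preferred input"

The resulting tensor visualizes what the chosen output unit has learned to respond to, which is what makes the technique a tool for global, model‐level inspection rather than an explanation of a single prediction.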

Explaining the reasons for a specific decision or single prediction means that interpretability is occurring locally. Ribeiro et al. [37] proposed LIME (Local Interpretable Model‐Agnostic Explanations), which approximates a black‐box model locally in the neighborhood of any prediction of interest; a sketch of the basic recipe is given after this paragraph. The work in [38] extends LIME using decision rules. Leave‐one‐covariate‐out (LOCO) [39] is another popular technique for generating local explanation models that offer local variable importance measures. In [40], the authors present a method capable of explaining the local decision taken by arbitrary nonlinear classification algorithms, using the local gradients that characterize how a data point would have to be moved to change its predicted label. A set of works using similar methods for image classification models was presented in [41–44]. A common approach to understanding the decisions of image classification systems is to find the regions of an image that are particularly influential for the final classification. Also called sensitivity maps, saliency maps, or pixel attribution maps [45], these approaches use occlusion techniques or gradient calculations to assign an “importance” value to individual pixels that is meant to reflect their influence on the final classification. On the basis of decomposing a model’s prediction into the individual contributions of each feature, Robnik‐Šikonja and Kononenko [46] proposed explaining the model’s prediction for one instance by measuring the difference between the original prediction and the prediction made with a set of features omitted. A number of recent algorithms can also be found in [47–58].
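The following is a minimal sketch of the LIME‐style local surrogate idea [37] for tabular data, using NumPy and scikit‐learn: sample perturbations around the instance of interest, query the black box, weight the samples by proximity, and fit an interpretable weighted linear model. The function and parameter names are hypothetical, and black_box stands for any fitted classifier exposing predict_proba.

import numpy as np
from sklearn.linear_model import Ridge

def explain_locally(black_box, x, n_samples=5000, kernel_width=0.75, seed=None):
    rng = np.random.default_rng(seed)
    # 1. Sample perturbations in the neighborhood of the instance x.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black box for its predictions on the perturbed points.
    y = black_box.predict_proba(Z)[:, 1]
    # 3. Weight samples by proximity to x (exponential kernel on distance).
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)
    # 4. Fit a weighted interpretable surrogate; its coefficients are the
    #    local explanation (per-feature contributions near x).
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_

The returned coefficients rank the features by their local influence around x, which is precisely the per‐prediction explanation described above; the full LIME method adds interpretable feature representations and a sparsity‐inducing fit on top of this basic recipe.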
