
4.1.1 Complexity and Interpretability


The complexity of an ML model is directly related to its interpretability. In general, the more complex the model, the more difficult it is to interpret and explain. Thus, the most straightforward way to arrive at interpretable AI/ML is to design an algorithm that is inherently and intrinsically interpretable. Many works have been reported in that direction. Letham et al. [18] presented a model called Bayesian Rule Lists (BRL), based on decision trees; the authors argued that such interpretable models provide concise and convincing explanations that help gain domain experts' trust. Caruana et al. [1] described an application of a learning method based on generalized additive models to the pneumonia problem and demonstrated the intelligibility of their model through case studies on real medical data.
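
As a minimal illustration of the "interpretable by construction" idea, the sketch below fits a shallow decision tree and prints its learned rules. This is not BRL or the GAM of [1], only a stand-in for rule-based interpretable models; the dataset and depth limit are arbitrary choices for the example.

```python
# A minimal sketch of an intrinsically interpretable model: a shallow decision
# tree whose learned decision logic can be printed as a short list of
# human-readable if/then rules. Not BRL or a GAM, only an illustration of the
# "interpretable by construction" idea; dataset and depth are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# Keeping the tree shallow keeps the rule list short enough to read.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The entire model is a handful of rules that a domain expert can inspect.
print(export_text(tree, feature_names=list(data.feature_names)))
```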

Xu et al. [19] introduced an attention-based model that automatically learns to describe the content of images, and showed through attention visualizations how the model's outputs can be interpreted. Ustun and Rudin [20] presented a sparse linear model for creating data-driven scoring systems, called SLIM. The results of this work highlight the interpretability of the proposed system: its high sparsity and small integer coefficients give users a qualitative understanding of how a score is produced. A common challenge, which hinders the usability of this class of methods, is the trade-off between interpretability and accuracy [21]. As noted by Breiman [22], "accuracy generally requires more complex prediction methods … [and] simple and interpretable functions do not make the most accurate predictors." In this sense, intrinsically interpretable models come at the cost of accuracy.
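
The scoring-system flavour of SLIM can be roughly approximated as below: SLIM itself solves an integer program, whereas this sketch merely fits an L1-penalized logistic model and rounds the surviving coefficients to small integer "points". The dataset, regularization strength, and 5-point scale are assumptions made for illustration only.

```python
# A rough, hypothetical approximation of a SLIM-style scoring system. SLIM
# itself solves an integer program; here the flavour is mimicked by fitting a
# sparse (L1-penalized) logistic model and rounding the surviving coefficients
# to small integer "points". C and the 5-point scale are arbitrary choices.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X = StandardScaler().fit_transform(data.data)
y = data.target

# A strong L1 penalty (small C) drives most coefficients to exactly zero.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05).fit(X, y)

w = clf.coef_[0]
scale = 5.0 / np.abs(w).max()            # map the largest weight to 5 points
points = np.round(w * scale).astype(int)

# The resulting scorecard: a few features, each worth a small integer score.
for name, p in zip(data.feature_names, points):
    if p != 0:
        print(f"{name:>25s}: {p:+d} points")
```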

An alternative approach to interpretability in ML is to construct a highly complex, uninterpretable black-box model with high accuracy and subsequently use a separate set of techniques to perform what can be described as reverse engineering: providing the needed explanations without altering, or even knowing, the inner workings of the original model. This class of methods therefore offers post-hoc explanations [23]. Although such explanations can be complex and costly to produce, most recent work in the XAI field belongs to the post-hoc class and includes natural language explanations [24], visualizations of learned models [25], and explanations by example [26].
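
One concrete member of the post-hoc family is permutation feature importance, which probes a trained black box purely from the outside. The sketch below uses it only as a stand-in for the explanation techniques cited above (SHAP, LIME, saliency maps, and example-based methods play the same role); the dataset and the choice of a random forest as the black box are assumptions for the example.

```python
# A sketch of a post-hoc, model-agnostic explanation: train an opaque model
# and then probe it from the outside with permutation feature importance,
# without touching its internals. One example of the post-hoc family; the
# dataset and black-box model are arbitrary.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time and measure how much held-out accuracy drops.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)

ranked = sorted(zip(data.feature_names, result.importances_mean),
                key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name:>25s}: {imp:.3f}")
```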

So, we can see that interpretability depends on the nature of the prediction task. As long as the model is accurate for the task and uses a reasonably restricted number of internal components, intrinsically interpretable models are sufficient. If, however, the prediction target requires complex and highly accurate models, then post-hoc interpretation methods become necessary. It should also be noted that the literature contains a group of intrinsic methods for complex, uninterpretable models. These methods modify the internal structure of a complex black-box model that is not interpretable by design (which typically applies to the DNNs we are interested in) to mitigate its opacity and thus improve its interpretability [27]. Such methods may introduce components that add capabilities to the model architecture [28, 29], for example as part of the loss function [30], or as part of the architecture itself, in terms of operations between layers [31, 32].
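
A hypothetical sketch of this last idea is given below: a small network with a feature-attention layer, trained with an extra entropy term in the loss so that the attention (which can be visualized) concentrates on a small, inspectable subset of inputs. The architecture, dummy data, and weighting factor are illustrative assumptions, not the specific methods of [28–32].

```python
# A hypothetical sketch of improving the interpretability of a complex model
# by modifying its structure and loss: a feature-attention layer plus an
# entropy penalty that pushes the network to rely on few input features.
# Architecture, dummy data, and the 0.1 weight are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveMLP(nn.Module):
    def __init__(self, in_dim, hidden=32, n_classes=2):
        super().__init__()
        self.att = nn.Linear(in_dim, in_dim)        # per-feature attention logits
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_classes))

    def forward(self, x):
        alpha = torch.softmax(self.att(x), dim=-1)  # attention over input features
        return self.body(alpha * x), alpha

model = AttentiveMLP(in_dim=30)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 30), torch.randint(0, 2, (64,))

logits, alpha = model(x)
# Task loss plus an interpretability-oriented term: low attention entropy means
# the attention weights concentrate on a few features that can be inspected.
entropy = -(alpha * (alpha + 1e-8).log()).sum(dim=-1).mean()
loss = F.cross_entropy(logits, y) + 0.1 * entropy

opt.zero_grad()
loss.backward()
opt.step()
```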

