Fundamentals and Methods of Machine and Deep Learning - Pradeep Singh

2.5 Bayesian Classifier Combination (BCC)


Bayesian classifier combination (BCC) considers k different types of classifiers and produces a combined output. The motivation behind this classifier is that it captures the exhaustive possibilities over all forms of data and eases the computation of marginal likelihood relationships. BCC does not assume that any of the constituent classifiers is true; rather, each is treated as probabilistic, mimicking the behavior of human experts. The BCC classifier employs different confusion matrices over the different data points for classification. If a data point is hard, then BCC uses that point's own confusion matrix; otherwise, the posterior confusion matrix is used. The classifier identifies the relationship between the outputs of the models and the unknown data labels. The constituent models are not required to be probabilistic, nor do they need to share information about their training data [21, 22].
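The core combination step described above can be sketched in a few lines: given each classifier's confusion matrix and a prior over the classes, the posterior over the true label is proportional to the prior times each classifier's likelihood term. The matrices and prior below are made-up illustrative numbers, not values from the chapter, and the conditional-independence assumption is the "independent" BCC setting:

```python
import numpy as np

# Hypothetical example: combine K = 3 classifiers over J = 2 classes
# using their confusion matrices, assuming the classifier outputs are
# conditionally independent given the true label.

# pi[k][j, l] = p(classifier k outputs label l | true label is j)
pi = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # classifier 0
    [[0.8, 0.2], [0.3, 0.7]],   # classifier 1
    [[0.7, 0.3], [0.4, 0.6]],   # classifier 2
])
prior = np.array([0.5, 0.5])    # class proportions P

def combine(outputs):
    """Posterior over the true label given each classifier's hard output."""
    post = prior.copy()
    for k, c_k in enumerate(outputs):
        post = post * pi[k][:, c_k]   # multiply in classifier k's likelihood
    return post / post.sum()          # normalise to a distribution

print(combine([0, 0, 1]))  # two votes for class 0, one for class 1
```

With these numbers the two confident votes for class 0 outweigh the single, weaker vote for class 1, so the combined posterior favors class 0.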

The BCC model is parameterised by the class proportions P, with hyperparameters V, and the confusion matrices π, with hyperparameters α. Based on the prior distributions of these random variables and the observed label outputs c of the K classifiers over N data points, the independent posterior density is computed as follows:

p(t, P, \pi \mid c, V, \alpha) \propto p(P \mid V) \prod_{k=1}^{K} \prod_{j=1}^{J} p(\pi_j^{(k)} \mid \alpha) \prod_{i=1}^{N} P_{t_i} \prod_{k=1}^{K} \pi^{(k)}_{t_i,\, c_i^{(k)}}
The inferences are drawn over the unknown random variables, i.e., P, π, t, V, and α, which are sampled using Gibbs and rejection sampling methodology. A high-level representation of BCC is shown in Figure 2.4. The parameters of the BCC model, its hyperparameters, and the posterior probabilities are combined to generate the final prediction as output.
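The Gibbs sampling step mentioned above can be sketched as follows: the chain alternates between sampling the class proportions P, each confusion matrix π^(k), and the latent true labels t from their full conditionals. The data, priors, and chain lengths below are toy illustrative choices, not the book's experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: N items, K classifiers, J classes;
# c[i, k] is classifier k's hard output for item i.
N, K, J = 8, 3, 2
c = rng.integers(0, J, size=(N, K))

alpha = np.ones((J, J))   # Dirichlet prior on each confusion-matrix row
nu = np.ones(J)           # Dirichlet prior on the class proportions P

t = rng.integers(0, J, size=N)  # initialise the latent true labels

def gibbs_sweep(t):
    # Sample P | t ~ Dirichlet(nu + class counts)
    counts = np.bincount(t, minlength=J)
    P = rng.dirichlet(nu + counts)
    # Sample each confusion matrix pi^(k) | t, c row by row
    pi = np.empty((K, J, J))
    for k in range(K):
        for j in range(J):
            row_counts = np.bincount(c[t == j, k], minlength=J)
            pi[k, j] = rng.dirichlet(alpha[j] + row_counts)
    # Sample each t_i | P, pi, c_i from its full conditional
    for i in range(N):
        logp = np.log(P).copy()
        for k in range(K):
            logp += np.log(pi[k][:, c[i, k]])
        p = np.exp(logp - logp.max())
        t[i] = rng.choice(J, p=p / p.sum())
    return t, P, pi

# Run a short chain; average post-burn-in label samples as a crude posterior.
samples = []
for sweep in range(200):
    t, P, pi = gibbs_sweep(t)
    if sweep >= 100:               # discard burn-in
        samples.append(t.copy())
posterior_t = np.mean(samples, axis=0)  # fraction of sweeps with t_i = 1
```

Each sweep uses Dirichlet-multinomial conjugacy for P and π, so those conditionals are sampled exactly; a production implementation would also monitor convergence rather than fixing the chain length.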

Some of the advantages offered by BCC in diagnosing zoonotic diseases are as follows: it performs probabilistic prediction; isolates the outliers that cause noise; handles missing values efficiently; handles irrelevant attributes robustly; prevents side effects caused by dependency relationships; is easy to implement; eases the modeling of dependency relationships among random variables; learns collectively from labeled and unlabeled input data samples; eases feature selection; supports lazy learning with low training time; eliminates unstable estimation; attains high knowledge of system variable dependencies; achieves high accuracy in interpreting results; processes data via confusion matrices; has low computational complexity and operates with few computational resources; requires a small amount of training data; handles uncertainty in the data parameters; precisely selects the attributes yielding maximum information gain; eliminates redundant values; has few tuning parameters and a low memory requirement; offers highly flexible classification of data; and so on [23].


Figure 2.4 A high-level representation of Bayesian classifier combination (BCC).

