
2.4 Bayesian Model Averaging (BMA)

Bayesian model averaging (BMA) is one of the most widely used ensemble machine learning models. It applies Bayesian inference to address model selection, combined estimation, and the lower prediction accuracy that results from relying on any single model. BMA maintains several coherent candidate models, which makes it capable of handling the uncertainty present in large datasets. Implementing a BMA model involves managing the summation over the candidate models, computing the integrals needed for each model's evidence, and using linear regression for prediction and transformation purposes [18, 19].
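To make these steps concrete, the two central quantities can be written out. This is a standard-textbook sketch, not the chapter's own derivation; it uses the notation Ml (candidate model) and θl (model-specific parameters) introduced in the next paragraph, plus a symbol Δ, assumed here for illustration, for the quantity being predicted:

% BMA prediction ("managing the summation"): a posterior-weighted
% mixture over the candidate models
p(\Delta \mid Y) = \sum_{l=1}^{k} p(\Delta \mid M_l, Y)\, P(M_l \mid Y)

% Model evidence ("computing the integral"): integrate the likelihood
% over the model's own parameters
P(Y \mid M_l) = \int L(Y \mid \theta_l, M_l)\, \pi(\theta_l \mid M_l)\, d\theta_l

In words, the summation step averages the individual models' predictions weighted by their posterior probabilities, and the integral step computes each model's evidence, which in turn determines those posterior weights.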

Basically, BMA is an extended form of Bayesian inference: it models uncertainty mathematically through a prior distribution and obtains the posterior probability via Bayes' theorem. To implement BMA, the prior probability of each model in the ensemble must first be specified, and then the evidence must be computed for each model. Suppose the candidate models are denoted Ml, where l ranges from 1 to k, each representing a probability distribution over the data. Each model defines a likelihood function L(Y|θl, Ml), where θl is the vector of model-specific parameters. According to Bayes' theorem, the posterior probability of model Ml is computed as follows [20]:

P(Ml|Y) = P(Y|Ml) P(Ml) / Σ (m = 1 to k) P(Y|Mm) P(Mm),

where the evidence P(Y|Ml) = ∫ L(Y|θl, Ml) π(θl|Ml) dθl and π(θl|Ml) is the prior over the parameters of model Ml.

A high-level representation of BMA is shown in Figure 2.3. The process begins with a dataset that is divided into multiple subsets. Each subset is fed as input to a learner; the learners' outputs are then averaged, compared against the average threshold, and tested using a permutation threshold to generate the final Bayesian model as output.
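The weighting step can be illustrated with a minimal, self-contained sketch in Python. This is not the chapter's reference implementation: the candidate models (polynomial regressions of increasing degree), the toy data, and the use of the BIC as a large-sample approximation to the evidence integral P(Y|Ml) are all illustrative assumptions.

# A minimal BMA sketch: approximate each model's evidence with the BIC,
# turn the evidences into posterior weights (uniform model priors),
# and average the models' predictions with those weights.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline

def approx_log_evidence(model, X, y, n_params):
    """Approximate log P(Y | Ml) by -BIC/2 under a Gaussian noise model."""
    n = len(y)
    resid = y - model.predict(X)
    sigma2 = np.mean(resid ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return log_lik - 0.5 * n_params * np.log(n)

# Toy data: quadratic signal plus Gaussian noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = 0.5 * X[:, 0] ** 2 + rng.normal(scale=0.5, size=200)

# Candidate models M1..Mk: polynomial regressions of increasing degree
degrees = [1, 2, 3]
models = [make_pipeline(PolynomialFeatures(d), LinearRegression()).fit(X, y)
          for d in degrees]

# Posterior model weights P(Ml | Y) via Bayes' theorem (uniform priors):
# softmax of the approximate log-evidences
log_ev = np.array([approx_log_evidence(m, X, y, d + 1)
                   for m, d in zip(models, degrees)])
weights = np.exp(log_ev - log_ev.max())
weights /= weights.sum()

# BMA prediction: posterior-weighted average of the individual predictions
X_new = np.linspace(-3, 3, 5).reshape(-1, 1)
preds = np.stack([m.predict(X_new) for m in models])
y_bma = weights @ preds

print("posterior weights:", np.round(weights, 3))
print("BMA predictions:", np.round(y_bma, 3))

On this toy data the quadratic model receives nearly all of the posterior weight, so the averaged prediction closely tracks it; when the evidence is less decisive, the weights spread across models and the average reflects the remaining model uncertainty, which is exactly the summation described above.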


Figure 2.3 A high-level representation of Bayesian model averaging (BMA).


BMA offers several advantages in diagnosing zoonotic diseases. It performs multi-variable selection while keeping the number of selected features small, avoids the overconfident inferences that come from committing to a single model, and scales easily to any number of classes. Its posterior probabilities are efficient to compute, the model is easy to deploy, and it estimates uncertainty correctly, which makes it suitable for complex applications. By properly accounting for model uncertainty, it combines estimation and prediction, remains flexible with respect to both the prior distribution and parameter inference, uses the posterior mean over the candidate models, and performs multi-linear operations over heterogeneous resources. It provides a transparent interpretation of large amounts of data, reduces error rapidly, and keeps the variance incurred in prediction low. Finally, it is fast to compute, generates high-value output, combines the efficiency achieved by several learners through averaging, is robust against the effects of misspecified input attributes, and its model specification is highly dynamic.
