
1.3.5 Naive Bayes Algorithm


Naive Bayes is a simple but powerful machine learning algorithm used for prediction, based on Bayes' theorem. Bayes' theorem states that, given a hypothesis H and evidence E, the relationship between the prior probability P(H), before the evidence is seen, and the posterior probability P(H|E), after the evidence is seen, is

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$
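
As a brief worked example with purely hypothetical numbers (not taken from this text), suppose a hypothesis H has prior probability P(H) = 0.01, the evidence E occurs with probability P(E | H) = 0.9 when H holds, and P(E) = 0.05 overall. Then

$$P(H \mid E) = \frac{0.9 \times 0.01}{0.05} = 0.18$$
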
The assumption behind Naive Bayes classifiers is that the value of a particular feature is independent of the value of any other feature, given the class variable. For instance, a fruit may be regarded as an apple if it is red in color, round in shape, and around 10 cm in diameter.

A Naive Bayes classifier treats each of these features as contributing independently to the probability that the fruit is an apple, even though there may be correlations between the color, roundness, and diameter features. Naive Bayes classifiers are probabilistic; they compute the likelihood of every class using Bayes' theorem, and the class with the highest likelihood is the output.
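
As a minimal sketch of how these independent contributions combine, the Python snippet below scores the apple example; the priors and per-feature likelihoods are invented purely for illustration and are not taken from the text.

# Hypothetical priors and per-feature likelihoods (illustrative numbers only).
prior = {"apple": 0.5, "orange": 0.5}
likelihood = {
    "apple":  {"red": 0.7, "round": 0.9, "about_10cm": 0.6},
    "orange": {"red": 0.1, "round": 0.9, "about_10cm": 0.5},
}
observed = ["red", "round", "about_10cm"]

# Naive Bayes treats each feature as an independent contribution:
# multiply the prior by every per-feature likelihood.
scores = {}
for fruit in prior:
    score = prior[fruit]
    for feature in observed:
        score *= likelihood[fruit][feature]
    scores[fruit] = score

print(max(scores, key=scores.get))  # prints "apple"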

Let D be the training dataset, y the class variable, and X the vector of attributes. According to Bayes' theorem,

$$P(y \mid X) = \frac{P(X \mid y)\,P(y)}{P(X)}$$
where

$$X = (x_1, x_2, \ldots, x_n)$$
So, substituting for X and applying the chain rule together with the independence assumption, we get

$$P(y \mid x_1, \ldots, x_n) = \frac{P(x_1 \mid y)\,P(x_2 \mid y)\cdots P(x_n \mid y)\,P(y)}{P(x_1)\,P(x_2)\cdots P(x_n)}$$
Since the denominator is the same for every class, it can be removed from the expression:

$$P(y \mid x_1, \ldots, x_n) \propto P(y) \prod_{i=1}^{n} P(x_i \mid y)$$
Therefore, to find the category y with the highest probability, we use the following function:

$$\hat{y} = \underset{y}{\arg\max}\; P(y) \prod_{i=1}^{n} P(x_i \mid y)$$
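
A minimal Python sketch of this decision rule for categorical features follows. It is an illustrative implementation under simple assumptions (Laplace smoothing of the likelihoods and log-probabilities to avoid numerical underflow), not the code used in this book.

from collections import Counter
from math import log

class CategoricalNaiveBayes:
    """Illustrative Naive Bayes classifier for categorical features."""

    def fit(self, X, y):
        # Estimate P(y) and the per-feature value counts for each class from data.
        self.classes = sorted(set(y))
        self.n_features = len(X[0])
        self.class_totals = Counter(y)
        self.priors = {c: self.class_totals[c] / len(y) for c in self.classes}
        self.counts = {c: [Counter() for _ in range(self.n_features)] for c in self.classes}
        self.values = [set() for _ in range(self.n_features)]
        for xi, yi in zip(X, y):
            for i, v in enumerate(xi):
                self.counts[yi][i][v] += 1
                self.values[i].add(v)
        return self

    def predict(self, x):
        # y_hat = argmax_y  log P(y) + sum_i log P(x_i | y), with Laplace smoothing.
        best_class, best_score = None, float("-inf")
        for c in self.classes:
            score = log(self.priors[c])
            for i, v in enumerate(x):
                num = self.counts[c][i][v] + 1
                den = self.class_totals[c] + len(self.values[i])
                score += log(num / den)
            if score > best_score:
                best_class, best_score = c, score
        return best_class

# Hypothetical toy data: each sample is (color, shape); labels are fruit names.
X = [("red", "round"), ("red", "round"), ("orange", "round"), ("yellow", "long")]
y = ["apple", "apple", "orange", "banana"]
model = CategoricalNaiveBayes().fit(X, y)
print(model.predict(("red", "round")))  # prints "apple"
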
Some of the advantages of Naive Bayes algorithm are as follows:

 • Easy to implement.

 • Requires only a limited amount of training data to estimate its parameters.

 • High computational efficiency.

However, there are some disadvantages too, as follows:

 • It assumes that all features are independent and equally important, which is rarely true in real applications.

 • It has a tendency toward bias as the number of training sets increases.

