1.5.4 Metrics Analysis
Accuracy is used as a performance measure for the number of terms correctly identified under a topic (or feature). However, the model is further evaluated with the metrics precision, recall, and f-measure. Precision indicates how precise the model is, and recall indicates how complete it is. The results are summarized in a confusion matrix, which contains information about the actual and predicted classifications, as shown in Table 1.1.
Table 1.1 Confusion matrix.
                | Predicted positive | Predicted negative
Actual positive | TP                 | FN
Actual negative | FP                 | TN
where
TP: the number of correct classifications of the positive examples (true positive)
FN: the number of incorrect classifications of positive examples (false negative)
FP: the number of incorrect classifications of negative examples (false positive)
TN: the number of correct classifications of negative examples (true negative)
Precision is defined as the percentage of correctly identified documents among the documents returned, whereas recall is defined as the percentage of relevant documents that are correctly identified. In practice, high recall is achieved at the expense of precision and vice versa [61]. The f-measure is therefore suitable when a single metric is needed to compare different models; it is defined as the harmonic mean of precision and recall. Based on the confusion matrix, the precision, the recall, and the f-measure of the positive class are defined as follows.
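Together with accuracy, the standard confusion-matrix definitions of these metrics, written in terms of the counts in Table 1.1 (the numbering is assumed here to match Equations 1.3 to 1.6 referenced in the next paragraph), are:

\begin{align*}
\text{Precision} &= \frac{TP}{TP + FP} && (1.3) \\
\text{Recall} &= \frac{TP}{TP + FN} && (1.4) \\
\text{F-measure} &= \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} && (1.5) \\
\text{Accuracy} &= \frac{TP + TN}{TP + TN + FP + FN} && (1.6)
\end{align*}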
The Ontology-based Semantic Indexing (OnSI) model is evaluated using metrics such as precision, recall, and accuracy, as shown in Equations 1.3, 1.4, 1.5, and 1.6.
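As a purely illustrative sketch, these metrics can be computed directly from raw confusion-matrix counts; the function name and the example counts below are hypothetical and are not taken from the OnSI evaluation.

# Illustrative sketch: computing the Table 1.1 metrics from
# hypothetical confusion-matrix counts (not the book's actual results).

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return precision, recall, f-measure, and accuracy for the positive class."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0               # Eq. 1.3
    recall = tp / (tp + fn) if (tp + fn) else 0.0                  # Eq. 1.4
    f_measure = (2 * precision * recall / (precision + recall)
                 if (precision + recall) else 0.0)                 # Eq. 1.5
    accuracy = (tp + tn) / (tp + fp + fn + tn)                     # Eq. 1.6
    return {"precision": precision, "recall": recall,
            "f_measure": f_measure, "accuracy": accuracy}

if __name__ == "__main__":
    # Hypothetical counts, for illustration only.
    print(classification_metrics(tp=80, fp=10, fn=20, tn=90))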