
2.4.1 N-Way Classification


One expectation of this model is the ability to generalize from previous experience and make decisions on completely new, unseen alphabets. Thus, the n-way classification task was designed to evaluate the model on classifying previously unseen characters. Here, we used 30 alphabets containing 659 characters from the evaluation set of the Omniglot dataset, which was not used during training; hence, the model is completely unfamiliar with these characters.

In this experiment, we designed the one-shot learning task as deciding the category of a given test image X out of n given categories. For an n-way classification task, we select n character categories and choose one of them as the test category. The one-shot task is then prepared with one test image X drawn from the test category and a reference image set {X_n}, containing one image for each of the n character categories. The Siamese network is fed with each (X, X_n) pair and predicts their similarity. The predicted category n* is the category with the maximum similarity, as in Equation (2.3), where argmax denotes the index n that maximizes the similarity function F.

$n^{*} = \underset{n}{\operatorname{argmax}}\; F(X, X_{n})$ (2.3)
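The decision rule in Equation (2.3) can be illustrated with a short sketch. This is our own illustration rather than code from the chapter: it assumes a hypothetical function siamese_similarity(a, b) that wraps the trained Siamese network and returns the scalar score F(a, b).

```python
import numpy as np

def one_shot_n_way(test_image, reference_images, siamese_similarity):
    """Predict the category of test_image as in Equation (2.3).

    test_image         -- the query image X
    reference_images   -- sequence of n images, one per candidate category (X_1 ... X_n)
    siamese_similarity -- callable (X, X_n) -> scalar similarity F(X, X_n)
    """
    # Score the test image against each category's single reference image.
    scores = np.array([siamese_similarity(test_image, ref)
                       for ref in reference_images])
    # n* is the index of the category with the maximum similarity.
    return int(np.argmax(scores))
```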

The model is evaluated by n-way classification with n varying in the range [1, 40]; the results are depicted in Figure 2.2.

Figure 2.2 Omniglot one-shot learning performance of Siamese networks.

According to Figure 2.2, the proposed capsule layer-based Siamese network performs on par with Koch et al.'s convolutional Siamese network. However, our model has 2.4 million parameters, which is 40% fewer than the 4 million parameters of Koch et al.'s model. Moreover, although the overall performance of the two models is on par, there are certain cases where our model is superior; for instance, it has a better capability of identifying minor changes in characters.

For the n-way classification task, the random-guessing baseline is defined as follows: if there are n options and only one is correct, the probability of a correct prediction is 1/n, so over repeated experiments the expected accuracy is that probability expressed as a percentage (for example, 5% for a 20-way task). The classification accuracy drops as the reference set grows, because the solution space of the classification task becomes larger. The nearest-neighbor baseline degrades exponentially, while the Siamese networks show a much smaller reduction and maintain a similar level of performance.
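As a concrete reference point, the chance-level accuracies can be tabulated directly; this is a small illustrative snippet of ours, not from the chapter.

```python
# Chance-level (random-guessing) accuracy for n-way classification:
# a random guess is correct with probability 1/n, i.e. 100/n percent.
for n in (2, 5, 10, 20, 40):
    print(f"{n:>2}-way chance level: {100.0 / n:.1f}%")
```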

Figure 2.3 shows sample results for the 20-way classification task (top), together with the outputs of the capsule Siamese network (middle) and the convolutional Siamese network (bottom). The figure shows samples of the test images and the corresponding classification results. The capsule-based architecture was able to identify small changes in image structure, as shown in the middle row.

Figure 2.3 illustrates a few 20-way classification problems in which the proposed capsule layer-based Siamese network outperforms the convolutional Siamese network. In most of these cases, the convolutional network fails to identify minor changes in the image, such as small line segments and curves. With the detailed features extracted through capsules, the proposed capsule network model makes such decisions correctly.


Figure 2.3 Sample 1 classification results.

Figure 2.4 depicts a few samples where the proposed capsule network model fails to classify characters correctly while the convolutional units identify them successfully. For certain characters, there is a vast difference in writing style between two people, and in such cases the proposed capsule layer-based Siamese network underperforms compared to the CNN.

As a solution to the decrease of n-way classification accuracy, we propose n-shot learning instead of one-shot learning. In one-shot learning, only one image from each class is used in the reference set, whereas in n-shot learning we use n images for each category and select the category with the highest total similarity, as in Equation (2.4), where argmax selects the argument maximizing the summation, X denotes the test image and F(X, X_{i,n}) is the similarity score against the i-th reference image of category n.


Figure 2.4 Sample 2 classification results.

$n^{*} = \underset{n}{\operatorname{argmax}} \sum_{i} F(X, X_{i,n})$ (2.4)
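Equation (2.4) differs from the one-shot rule only in that the scores for each category's reference images are summed before taking the argmax. Below is a minimal sketch under the same assumption of a hypothetical siamese_similarity(a, b) scoring function; the helper names are ours.

```python
import numpy as np

def n_shot_n_way(test_image, reference_sets, siamese_similarity):
    """Predict the category of test_image as in Equation (2.4).

    test_image         -- the query image X
    reference_sets     -- list of n lists; reference_sets[k] holds the reference
                          images X_{1,k} ... X_{m,k} of category k
    siamese_similarity -- callable (X, X_{i,n}) -> scalar similarity score
    """
    # Sum the similarity scores over every reference image of each category.
    totals = np.array([sum(siamese_similarity(test_image, ref) for ref in refs)
                       for refs in reference_sets])
    # n* is the category whose summed similarity is maximal.
    return int(np.argmax(totals))
```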

Accuracies obtained with n-shot learning for 2-, 6-, 20- and 28-way classification are illustrated in Figure 2.5. There is no significant improvement for test cases with a small classification set; however, when the classification set is large, n-shot learning can significantly improve performance. For instance, 28-way classification accuracy improves from 78 to 90% when 20 images per class are used in the reference set. In general, the classification accuracy improves as the number of reference samples per class increases: for n-way classification with small n, 100% accuracy is achieved with only a few samples, while more complex tasks need a greater number of samples.
