Machine Vision Inspection Systems, Machine Learning-Based Approaches

2.1 Introduction


The ability to learn visual concepts from a small number of examples is a distinctive feature of human cognition. For instance, even a child can correctly distinguish between a bicycle and a car after being shown a single example of each. Taking this one step further, if we show them a plane and a ship, which they have never seen before, they can correctly recognize these as two different vehicle types. One could argue that this ability is an application of previous experience and domain knowledge to new situations. How could we reproduce the same ability in machines? In this chapter, we propose a method to transfer previously learned knowledge about characters to differentiate between new character images.

Image classification using few training samples has versatile applications [1–3]. The ability to classify images with minimal prior training is of particular importance in areas such as character recognition, signature verification, and robot vision. This paradigm, where only one sample is used to learn and make predictions, is known as one-shot learning [4]. Especially for low-resource languages, currently available deep learning techniques fail due to the lack of large labeled datasets. A model that could perform one-shot learning for an alphabet, using a single image per character as the training sample for classification, could make a massive impact on optical character recognition [5].

This chapter uses the Omniglot dataset [6] to train such a one-shot learning model. Omniglot stands for the online encyclopedia of writing systems and languages; the dataset consists of handwritten characters and is widely used in tasks that involve a small number of samples belonging to many classes. In this research, we extend the dataset by introducing a set of characters from the Sinhala language, which has around 17 million native speakers and is used mainly in Sri Lanka. Due to the lack of available resources for the language, applying novel deep learning-based Optical Character Recognition (OCR) methods is challenging. With the trained model introduced in this chapter, significant character recognition accuracy was achieved for Sinhala using a small dataset.

Character detection using one-shot learning has been addressed previously by researchers such as Lake et al. [6], using a generative character model, and Koch et al. [7], using convolutional neural networks (CNNs). In this study, we focus on integrating capsule networks into a Siamese network [8] to learn a generalized, abstract function that outputs the similarity of two images. Capsule networks are a recent advancement in the computer vision domain, and they possess several advantages over traditional convolutional layers [9].
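To illustrate the general idea (this is a minimal sketch, not the architecture proposed in this chapter), a Siamese model applies one shared encoder to both images and turns a weighted component-wise L1 distance between the two embeddings into a similarity score, following the scheme popularized by Koch et al. [7]. The linear `embed` function below is a hypothetical stand-in for the real CNN or capsule encoder:

```python
import math

def embed(image, weights):
    # Shared "twin" encoder: the SAME weights map each image to a
    # feature vector. A single linear layer stands in here for the
    # convolutional or capsule encoder used in practice.
    return [sum(w * p for w, p in zip(row, image)) for row in weights]

def similarity(img_a, img_b, weights, alphas):
    # Weighted component-wise L1 distance between the two embeddings,
    # squashed through a sigmoid to give a score in (0, 1).
    ea, eb = embed(img_a, weights), embed(img_b, weights)
    d = sum(a * abs(x - y) for a, x, y in zip(alphas, ea, eb))
    return 1.0 / (1.0 + math.exp(-d))
```

Note that with this formulation identical inputs give a distance of zero and hence a score of exactly 0.5; during training, the distance weights `alphas` learn to push genuine pairs and impostor pairs apart.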

Translation invariance, that is, the inability to capture the position of an object relative to others, is one main shortcoming of convolutional layers compared to capsules [10]. Furthermore, the use of global pooling in CNNs causes loss of valuable information. Hinton et al. [11] proposed capsule networks as a solution to these problems. In this study, using a capsule-based network architecture, we achieve performance on par with the deep convolutional Siamese networks proposed in previous literature, while using a smaller number of parameters.
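For intuition, a capsule outputs a vector rather than a scalar: its orientation encodes pose-like properties, and its length encodes the probability that the entity it represents is present. The "squash" nonlinearity from Sabour et al. keeps that length in (0, 1). Below is a minimal sketch of it in plain Python (the `eps` guard against division by zero is an implementation detail we add, not taken from the chapter):

```python
import math

def squash(s, eps=1e-9):
    # Capsule "squash" nonlinearity: shrinks short vectors toward zero
    # and long vectors toward (but never reaching) unit length, while
    # preserving direction: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in s]
```

For example, an input of length 5 is squashed to length 25/26 ≈ 0.96, so its length can be read directly as a high existence probability.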

The main contributions of the study are to:

 Propose a novel capsule-based Siamese network architecture to perform one-shot learning;

 Improve the energy function of the Siamese network to capture the complex information output by capsules;

 Evaluate and analyze the performance of the model in identifying previously unseen characters;

 Extend the Omniglot dataset by adding new characters from the Sinhala language.

The chapter is structured as follows. Section 2.2 explores related learning techniques. Section 2.3 describes the design and implementation of the proposed capsule layer-based Siamese network. Section 2.4 evaluates the methodology through several experiments and analyzes the results. Section 2.5 discusses the contribution of the proposed solution relative to existing studies and concludes the chapter.

