2.3.3 Egg Location Predictor

After the egg pixels have been segmented from the background, the next step is to determine the location of each egg. Many CNN models, such as Fast-RCNN [13] and YOLO [14], have been introduced to predict the location of an object using the Intersection over Union (IoU) method. There are two main reasons for not using these well-known techniques to locate the eggs. Firstly, in these methods the object/ROI dimensions are typically larger than 100 × 100 pixels and irregular in size, and the networks are trained over bounding boxes of varying dimensions. These techniques are unsuitable for locating the eggs, which have an average size of only 28 × 28 to 36 × 36 pixels. Moreover, the shape of an egg remains nearly the same, with only minor deformation that can be neglected; hence, training over bounding boxes of different dimensions would not yield any benefit. Secondly, these methods limit how many objects of the same class can be recognized within a single grid cell, which is two objects for YOLO [14]. Since the eggs are small, many eggs belonging to the same class may be present within a 100 × 100 region of the image and hence may not all be detected.
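For reference, the IoU criterion that these detectors rely on can be computed as in the short sketch below. This is a generic illustration, not part of the authors' pipeline; the corner-point box format (x1, y1, x2, y2) and the function name are assumptions.

```python
def iou(box_a, box_b):
    """Intersection over Union for two boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

For eggs of roughly 28 × 28 to 36 × 36 pixels, even small localization offsets change this ratio sharply, which is part of why the authors opt for direct center-point regression instead.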

The specification of the egg location predictor CNN model is given in Table 2.2, where the input of the core CNN model has been changed to a 32 × 32 pixel, three-channel RGB image. Here, the output is a regression that provides the location of the egg center (x, y) rather than a bounding box (four corner points). The training dataset consists of images of both classes, i.e., hatched eggs and unhatched eggs. The positive samples consist of images in which an individual egg is completely visible, while the negative samples consist of eggs that are only partially visible or images containing multiple eggs. Figure 2.5 shows the classifier model that is trained to separate positive and negative samples; the positive samples are then used to train the model to predict the egg center location in pixel coordinates. During practical application, the predicted center location of the egg is used to crop a single-egg image, which is fed into a classifier that assigns the selected egg to the HC or UHC class. Figure 2.6 shows the overall result of locating egg centers with the egg location predictor CNN model for one of the test data sheets, where all egg centers are marked with a blue dot. A sliding window of 32 × 32 pixels with a stride of (4, 4) was used to obtain these results.
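A minimal sketch of the sliding-window scan described above is given below. The CNN itself (Table 2.2) is not reproduced here; `center_model` is assumed to be a trained model that, for a 32 × 32 RGB crop, returns a positive/negative score and the (x, y) offset of the egg center within the crop. Function and variable names are illustrative only.

```python
import numpy as np

WINDOW = 32   # window size in pixels, as used in the chapter
STRIDE = 4    # stride of (4, 4)

def locate_egg_centers(sheet_rgb, center_model, score_threshold=0.5):
    """Slide a 32x32 window over the data sheet and collect predicted egg centers.

    sheet_rgb    : H x W x 3 uint8 array of the egg data sheet.
    center_model : assumed callable; for a (32, 32, 3) crop it returns
                   (score, (cx, cy)) with cx, cy relative to the crop.
    """
    h, w, _ = sheet_rgb.shape
    centers = []
    for y in range(0, h - WINDOW + 1, STRIDE):
        for x in range(0, w - WINDOW + 1, STRIDE):
            crop = sheet_rgb[y:y + WINDOW, x:x + WINDOW]
            score, (cx, cy) = center_model(crop)
            if score >= score_threshold:          # positive sample: one full egg visible
                centers.append((x + cx, y + cy))  # convert to sheet coordinates
    return centers
```

Because the stride is much smaller than the window, several overlapping windows may report the same egg; merging nearby predictions (for example, averaging centers that fall within one egg radius of each other) would be needed in practice, but the chapter does not detail that step, so it is omitted from the sketch.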
