Handbook of Intelligent Computing and Optimization for Sustainable Development

3.2 Literature


Bu et al. [11] proposed a Multi-Depth Dilated Network (MDDNet) for the identification of landmarks on fashion items. Since garments and fashion items are often occluded in detection environments, the authors identify fashion landmarks by introducing a Multi-Depth Dilated (MDD) block. Each MDD block is composed of a different number of dilated convolutions applied in parallel, and these blocks are utilized throughout the MDDNet. A Batch-level Online Hard Keypoint Mining (B-OHKM) method is also proposed to select hard-to-identify fashion landmarks during the training stage, enabling the network to be trained in a manner that improves performance on such landmarks. Although this approach achieves state-of-the-art performance on fashion dataset benchmarks, it is only effective in identifying generic clothing items such as shirts, pants, and skirts, and cannot guarantee good results on complex garments with different textures and color overlays.
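The core idea of running dilated convolutions in parallel can be sketched as follows. This is a rough single-channel illustration; the kernel sizes, dilation rates, and function names are assumptions for clarity, not the configuration used in [11].

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """Single-channel 2D convolution with a dilation factor
    (zero padding, 'same' output size). x: (H, W), kernel: (k, k)."""
    k = kernel.shape[0]
    eff = dilation * (k - 1) + 1  # effective receptive field size
    pad = eff // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(k):
        for j in range(k):
            # each kernel tap samples the input at a dilated offset
            out += kernel[i, j] * xp[i * dilation: i * dilation + x.shape[0],
                                     j * dilation: j * dilation + x.shape[1]]
    return out

def mdd_block(x, kernels, dilations):
    """Hypothetical sketch of an MDD block: the same input passes through
    several dilated convolutions in parallel, stacked as output channels."""
    return np.stack([dilated_conv2d(x, k, d)
                     for k, d in zip(kernels, dilations)])
```

Larger dilation rates widen the receptive field without extra parameters, which is what lets such a block reason about partially occluded regions.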

Yu et al. [12] proposed a model that identifies fashion landmarks by enforcing structural layout relationships among landmarks using multiple stacked Layout Graph Reasoning (LGR) layers. The authors define a layout graph, a hierarchical structure with a root node, body-part nodes (e.g., upper body, lower body), coarse clothes-part nodes (e.g., sleeves), and leaf nodes. Each LGR layer maps features onto these structural graph nodes, performs reasoning over them using an LGR module, and then maps the graph nodes back to the features to enhance their representation. The reasoning module uses a graph clustering operation to obtain the representations of the intermediate nodes and performs a graph deconvolution operation over the entire graph. After stacking multiple such LGR layers in a convolutional network, a 1×1 convolution with a sigmoid activation function produces the final fashion landmark heatmaps. Although the approach performs well in detecting garment landmarks, the same performance cannot be expected in video surveillance scenarios, which often include occluded garments that must be detected consistently on a per-frame basis.
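The final prediction head described above, a 1×1 convolution followed by a sigmoid, reduces to a per-pixel linear map over the channel dimension. A minimal sketch (shapes and names are illustrative, not taken from [12]):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def landmark_heatmaps(features, weights, bias):
    """Sketch of a 1x1-conv + sigmoid heatmap head.
    features: (C, H, W) feature map; weights: (L, C) one row per landmark;
    bias: (L,). Returns (L, H, W) heatmaps with values in (0, 1)."""
    c, h, w = features.shape
    flat = features.reshape(c, -1)             # (C, H*W)
    logits = weights @ flat + bias[:, None]    # 1x1 conv == per-pixel linear map
    return sigmoid(logits).reshape(-1, h, w)   # one heatmap per landmark
```

Each landmark's predicted location is then typically read off as the argmax of its heatmap.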

Ge et al. [13] proposed DeepFashion2, a benchmark for detection, pose estimation, segmentation, and re-identification of clothing images. In addition to creating an expansive dataset comprising 491,000 clothing images, the authors proposed a model called Match R-CNN, which is based on the Mask R-CNN object detection model proposed by He et al. [9]. Match R-CNN is an end-to-end framework that jointly performs clothes detection, landmark estimation, instance segmentation, and customer-to-shop retrieval. Different streams are used, and a siamese model is stacked on top of these streams to aggregate the learned features. Match R-CNN comprises three components, namely, a Feature Network (FN), a Perception Network (PN), and a Matching Network (MN). The FN builds a pyramid of feature maps, and RoIAlign is used to extract features from different levels of the pyramid. The PN contains three streams: landmark estimation, clothes detection, and mask prediction; the RoI features are fed into these streams. The MN contains a feature extractor and a similarity learning network for clothes retrieval, which is used for recognition. Although Match R-CNN achieves state-of-the-art results in identifying garments, it is trained only on the fashion images available in the DeepFashion2 dataset, which, although it covers a wide array of clothing items, does not cater to garments such as Indian sarees.
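The customer-to-shop retrieval step in the MN amounts to ranking gallery (shop) embeddings by their similarity to a query (customer) embedding. The sketch below uses cosine similarity as an illustrative stand-in; it is not the similarity network actually learned by Match R-CNN.

```python
import numpy as np

def cosine_retrieval(query, gallery):
    """Rank gallery embeddings by cosine similarity to a query embedding.
    query: (D,); gallery: (N, D). Returns gallery indices, best match first."""
    q = query / np.linalg.norm(query)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    scores = g @ q                 # cosine similarity per gallery item
    return np.argsort(-scores)     # descending order of similarity
```

In the actual model the similarity function is learned jointly with the feature extractor rather than fixed in advance.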

Hara et al. [14] proposed a CNN-based algorithm for fashion item detection that incorporates the contextual information of human pose skeletons. The authors account for the dynamic rigidity of garments by using human pose estimation models to obtain the coordinates of garments that lie close to the detected human pose coordinates. However, the use of R-CNN as the baseline object detection model significantly increases training cost in both space and time and results in slow detection compared with other state-of-the-art object detection frameworks, for instance, Mask R-CNN.
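The pose-guided idea, keeping only garment candidates near detected body keypoints, can be sketched as a simple proximity filter. The box representation, distance metric, and threshold here are assumptions for illustration, not the formulation in [14].

```python
import numpy as np

def filter_by_pose(boxes, keypoints, max_dist):
    """Keep candidate garment boxes whose centers lie within max_dist of
    some detected pose keypoint.
    boxes: (N, 4) as (x1, y1, x2, y2); keypoints: (K, 2) as (x, y)."""
    centers = np.stack([(boxes[:, 0] + boxes[:, 2]) / 2,
                        (boxes[:, 1] + boxes[:, 3]) / 2], axis=1)  # (N, 2)
    # pairwise distances between every box center and every keypoint
    d = np.linalg.norm(centers[:, None, :] - keypoints[None, :, :], axis=2)
    return boxes[d.min(axis=1) <= max_dist]
```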

Kita et al. [15] proposed a deformable-model-driven method to identify hanging garments. The authors recognize the state of a garment by considering its 3D location and posture; this 3D data is obtained from the deformable model by comparing the observed state of the garment with predicted candidate shapes. Sutoyo et al. [16] proposed a methodology for hand detection based on an image dataset comprising positive (with hands) and negative (without hands) images, on which a Haar cascade classifier was trained to build a hand detection model. The key disadvantage of this model is that it requires an up-close image of a hand for accurate classification, a scenario that is unattainable with surveillance footage.
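The state-recognition step in [15], comparing the observed garment against predicted candidate shapes, can be sketched as a nearest-candidate search. The point-set representation and the mean point-wise distance used here are assumptions, not the authors' actual matching criterion.

```python
import numpy as np

def match_candidate(observed, candidates):
    """Pick the predicted candidate shape closest to the observed 3D data.
    observed: (P, 3) observed points; candidates: (C, P, 3) predicted shapes
    with corresponding points. Returns the index of the best candidate."""
    # mean point-wise Euclidean error of each candidate against the observation
    errors = np.linalg.norm(candidates - observed[None], axis=2).mean(axis=1)
    return int(np.argmin(errors))
```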

Modanwal et al. [17] developed a model for detecting the human wrist point using geometric features. After obtaining a binary image of the hand mask, circular and elliptical shapes are used to approximate the palm region. The authors observed that the palm can be approximated by the largest circle inscribed in the hand mask, and that the wrist point lies approximately on the boundary of this circle. Moreover, the wrist landmark is a fixed point at the center of the forearm-palm joint, irrespective of hand rotation. The authors locate this point by applying a distance transform to the binary hand mask, thereby obtaining the largest circle inscribed within the mask. By locating the pixel with the largest value in the distance transform and determining the maximum angles of tilt that a human hand can endure, mathematical operations yield the wrist landmark. Although this intricate process achieves high precision and recall, the method is not suitable for video datasets, which comprise noisy images and frequently occluded hand regions; robustness on video data is vital for obtaining the hand mask needed to detect the wrist point.
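The palm-approximation step above can be sketched with a brute-force distance transform: the peak of the distance map gives the center and radius of the largest circle inscribed in the hand mask. This is a naive O(fg×bg) illustration of the operation, not the authors' implementation (production code would use an efficient distance transform such as OpenCV's).

```python
import numpy as np

def largest_inscribed_circle(mask):
    """Find the largest circle inscribed in a binary mask via a brute-force
    distance transform. mask: (H, W) array of 0/1.
    Returns ((row, col) center, radius)."""
    fg = np.argwhere(mask == 1)
    bg = np.argwhere(mask == 0)
    # distance from every foreground pixel to its nearest background pixel
    d = np.linalg.norm(fg[:, None, :] - bg[None, :, :], axis=2).min(axis=1)
    best = d.argmax()              # peak of the distance transform
    (cy, cx), radius = fg[best], d[best]
    return (int(cy), int(cx)), float(radius)
```

The wrist point is then sought on the boundary of this circle, on the forearm side of the palm.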

As discussed above, the previous works exhibit the following limitations: some could not detect complex garments accurately, some struggled with occluded garments in video surveillance, some performed poorly on uncommon garments such as Indian sarees, some required close-up images of hands for proper identification, and some relied on a dated object detection framework such as R-CNN. Our proposed approach attempts to address a majority of these problems. Color masks are applied to detect regions of garments, and these regions are linked to obtain the entire garment; missing regions of partially occluded garments are also identified before linking. The OpenPose framework is used for pose estimation, as it does not require close-up images of wrists, and Mask R-CNN is used as it outperforms R-CNN.
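The color-masking and linking steps of the proposed approach can be sketched as follows. The color bounds, the RGB color space, and the bounding-box linking rule are placeholders for illustration, not the exact procedure used in this work.

```python
import numpy as np

def color_mask(image, lower, upper):
    """Mark pixels whose channels all fall inside [lower, upper].
    image: (H, W, 3); lower/upper: (3,) per-channel bounds."""
    return np.all((image >= lower) & (image <= upper), axis=2)

def link_regions(mask):
    """Link detected garment regions by taking the bounding box covering
    every masked pixel, a simple stand-in for the linking step.
    Returns (row_min, col_min, row_max, col_max)."""
    ys, xs = np.nonzero(mask)
    return int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max())
```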

