2.3.2.4 Learning Process and Emerging New Type of Data Problems
In this section, we address possible challenges and solutions in the world of data analysis. The first and foremost problem that researchers tackle is the lack of data for rare classes in the dataset used to build a model. The fewer samples a class has in the dataset, the higher the chance that it is ignored while we train and build the model. To handle this problem, meta‐learning has come to play an essential role, making it possible to build a model from only a few samples. It holds three important promises: ZSL, one‐shot learning, and few‐shot learning [7]. ZSL is a type of meta‐learning in which the training dataset contains no samples for certain classes, yet the model can still predict them at test time. For instance, one research work [44] processed vehicle tracklets without using any annotation. This study established a route understanding system for intelligent transportation based on zero‐shot theory, namely Zero‐virus, which achieves high effectiveness with zero annotated samples of vehicle tracklets. Further, another research work [40] established a new technique, namely MetaSense, which follows the idea of learning to sense rather than sensing to learn. This approach copes with the scarcity of class samples by learning how to learn rather than learning directly from samples.
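The core idea of ZSL can be illustrated with a minimal sketch: instead of learning from labeled samples of a class, the model matches a sample's embedding to a semantic description (e.g. an attribute vector) of classes it has never seen. The class names, attribute vectors, and the sample embedding below are made-up toy values, not from the works cited above.

```python
# Minimal zero-shot classification sketch: unseen classes are predicted by
# comparing a sample's embedding to per-class semantic (attribute) vectors,
# so no training samples of those classes are required.
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical attribute vectors for classes with zero training samples.
class_attrs = {
    "bus":   np.array([1.0, 0.9, 0.1]),
    "bike":  np.array([0.1, 0.2, 1.0]),
    "truck": np.array([0.9, 1.0, 0.0]),
}

def zero_shot_predict(sample_embedding):
    # Pick the unseen class whose semantic vector is most similar.
    return max(class_attrs, key=lambda c: cosine(sample_embedding, class_attrs[c]))

sample = np.array([0.95, 0.85, 0.05])  # embedding of a test sample (toy value)
print(zero_shot_predict(sample))       # -> bus
```

In a real system the embeddings would come from a trained network and the attribute vectors from side information (text descriptions, attributes), but the matching step is the same.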
Working without annotations helps with data that lacks labeled classes, because typical machine learning algorithms fail to detect unannotated classes. Furthermore, learning to sense helps new algorithms predict information that is completely absent from the dataset itself. These advancements lead us closer to (or are part of) ZSL, which is critical for the advancement of Smart Cities.
The second promise is one‐shot learning, in which each epoch of the training phase provides only one sample per class to a DL algorithm or a combination of neural networks [7].
The third, but not least, promise is few (k)‐shot learning, in which each epoch of the training phase provides only a few (k) samples per class to a DL algorithm or a combination of neural networks [7].
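The one‐shot and few‐shot settings above can be sketched as an episodic loop: sample k support examples per class, then classify a query by its nearest class mean (a prototype‐style rule). The toy 2‐D "embeddings" and class names below are illustrative assumptions, not data from the cited studies; setting k_shot=1 gives the one‐shot case.

```python
# A sketch of one N-way, k-shot episode: draw k support samples per class,
# compute class prototypes (means), and classify a query by nearest prototype.
import random
import numpy as np

def make_episode(data, n_way, k_shot):
    # data: dict mapping class_name -> list of embedding vectors
    classes = random.sample(list(data), n_way)
    return {c: random.sample(data[c], k_shot) for c in classes}

def predict(support, query):
    # Nearest-class-mean classification over the episode's classes.
    prototypes = {c: np.mean(vecs, axis=0) for c, vecs in support.items()}
    return min(prototypes, key=lambda c: np.linalg.norm(query - prototypes[c]))

random.seed(0)
toy_data = {
    "car":        [np.array([1.0, 0.1]), np.array([0.9, 0.0]), np.array([1.1, 0.2])],
    "pedestrian": [np.array([0.0, 1.0]), np.array([0.1, 0.9]), np.array([0.2, 1.1])],
}
episode = make_episode(toy_data, n_way=2, k_shot=2)  # k_shot=1 would be one-shot
print(predict(episode, np.array([0.95, 0.05])))      # -> car
```

In an actual few‐shot pipeline the embedding function itself is trained across many such episodes, which is what lets the model generalize to new classes from only k samples each.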