9 Conclusion and the Way Forward

In this chapter, we have considered cutting‐edge issues concerning data streams and streaming data. Interest in stream processing is increasing, and data must be handled quickly to support real‐time decisions. The key premise of stream computing is that the value of data lies in its recency. Data are therefore analyzed the moment they arrive in the stream, in contrast to batch processing, where data are first stored and only later explored. Challenges for data stream analysis include concept drift, scalability, integration, fault tolerance, timeliness, consistency, heterogeneity and incompleteness, load balancing, privacy issues, and accuracy [27, 28, 30–32, 34, 35], all of which emerge from the nature of data streams.

Streaming is an active research area. However, some aspects of streaming have received little attention. One of them is transactional guarantees. Current stream processors can provide basic guarantees, such as processing each data point in the stream exactly once or at least once, but cannot provide guarantees that span multiple operations or stream elements. Another area that deserves intensified research effort is data stream pre‐processing. Data quality is a vital determinant in the knowledge discovery pipeline, as low‐quality data yield low‐quality models and decisions [69]. There is a need to reinforce the data stream pre‐processing stage [67] in the face of the multi‐label [70], imbalance [71], and multi‐instance [72] problems associated with data streams [66]. Also, the representation of social media posts must preserve the semantics of the social media content [74, 75]. Moreover, data stream pre‐processing techniques with low computational requirements [73] still need to be developed, and this remains open for research.
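To make the per‐element guarantee concrete, the following minimal sketch (an illustration of ours, not a method from the chapter) shows at‐least‐once delivery turned into an effectively exactly‐once outcome by deduplicating records on a unique key; the names consume, process, and seen_ids are hypothetical. Note that nothing in this pattern can make two records, or two operations on one record, succeed or fail atomically, which is precisely the missing transactional guarantee discussed above.

from typing import Callable, Hashable, Iterable, Tuple

def consume(stream: Iterable[Tuple[Hashable, object]],
            process: Callable[[object], None]) -> None:
    # Each record is a (unique_id, payload) pair; the source may redeliver
    # records after a failure, so delivery is at-least-once.
    seen_ids = set()  # in a real system this state must itself be fault-tolerant
    for record_id, payload in stream:
        if record_id in seen_ids:
            continue  # drop a redelivered duplicate: effectively exactly-once
        process(payload)
        seen_ids.add(record_id)  # the guarantee covers this one record only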

Data stream processing requires two resources, storage capacity and computational power, in the face of the unbounded generation of high‐velocity data with a brief life span. To cope with these requirements, approximate computing, which targets low latency at the expense of an acceptable loss in result quality, has been a practical solution [110]. Even though approximate computing has been used extensively for data stream processing, combining it with distributed processing models opens new research directions, including approximation with heterogeneous resources, pricing models with approximation, intelligent data processing, and energy‐aware approximation.
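As one concrete instance of this trade‐off, the sketch below (our own illustration under assumed names such as reservoir_sample and sensor_readings; the chapter does not prescribe a specific technique) uses classical reservoir sampling to keep a fixed‐size uniform sample of an unbounded stream, so that downstream statistics can be approximated with bounded memory and low latency.

import random
from typing import Iterable, List, TypeVar

T = TypeVar("T")

def reservoir_sample(stream: Iterable[T], k: int, seed: int = 0) -> List[T]:
    # Algorithm R (Vitter): maintain a uniform random sample of k items
    # from a stream of unknown length using O(k) memory.
    rng = random.Random(seed)
    reservoir: List[T] = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = rng.randint(0, i)  # item i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = item
    return reservoir

# Usage: approximate the mean of a numeric stream from the sample alone.
# sample = reservoir_sample(sensor_readings, k=1000)
# approx_mean = sum(sample) / len(sample)

The quality loss is controlled by k: a larger reservoir tightens the approximation at the cost of memory and per‐item update time, which is the latency‐versus‐accuracy balance that approximate computing exploits.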
