1 Introduction
At the dawn of 2020, the volume of data generated worldwide was estimated at 44 zettabytes (roughly 40 times more bytes than there are stars in the observable universe), and daily data generation is projected to reach 463 exabytes globally by 2025 [1]. Data are growing not only in volume but also in structure and complexity, and at a geometric rate [2]. These high‐volume data, generated at high velocity, lead to what is called streaming data. Data streams can originate from IoT devices and sensors, spreadsheets, text files, images, audio and video recordings, chat and instant messaging, email, blogs and social networking sites, web traffic, financial transactions, telephone usage records, customer service records, satellite data, smart devices, GPS data, and network traffic and messages.
There are different schools of thought when it comes to defining streaming data and data stream, and it is difficult to draw a sharp line between the two concepts. One school of thought defines streaming data as the act of sending data bit by bit rather than as a whole package, while a data stream is the actual source of the data; that is, streaming data is the act, the action, while the data stream is the product. In the field of engineering, streaming data is the process or art of collecting the streamed data: it is the main activity or operation, while the data stream is the pipeline through which streaming is performed, that is, the engineering architecture, the line‐up of tools that performs the streaming. In the context of data science, streaming data and data stream are used interchangeably. To better understand the concepts, let us first define what a stream is. A stream S is a possibly infinite bag of elements (x, t), where x is a tuple belonging to the schema of S and t ∈ T is the timestamp of the element [3]. A data stream is an unbounded and ordered sequence of instances of data arriving over time [4]; formally, it can be defined as an infinite sequence of tuples S = (x1, t1), (x2, t2), …, (xn, tn), …, where xi is a tuple and ti is a timestamp [5]. Streaming data can be defined as a frequently changing, and potentially infinite, data flow generated from disparate sources [6]. Formally, streaming data is a set of count values of a variable x of an event that happened at timestamp t (0 < t ≤ T), where T is the lifetime of the streaming data [7]. Judging by these definitions, the two concepts are very similar in the data science context, and the different schools of thought broadly agree on these closely related notions, except for the engineering school of thought, which treats the data stream as an architecture. Although the distinction remains open for further exploration, we use the two terms interchangeably in this chapter.
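To make the formal definition above concrete, the following is a minimal sketch in Python of a data stream modelled as a potentially unbounded generator of timestamped elements (x, t). The element schema and the random sensor source are illustrative assumptions, not part of the cited definitions.

```python
import random
import time
from typing import Iterator, Tuple

# A stream element (x, t): x is a tuple conforming to the stream's schema,
# t is the timestamp at which the element was produced.
Element = Tuple[tuple, float]

def sensor_stream() -> Iterator[Element]:
    """Potentially unbounded stream S = (x1, t1), (x2, t2), ... of readings.

    The schema of x here is (sensor_id, temperature); both the schema and
    the random source are hypothetical, for illustration only.
    """
    while True:  # the stream has no predefined end
        x = (random.choice(["s1", "s2", "s3"]),
             round(random.uniform(15.0, 30.0), 2))
        t = time.time()
        yield (x, t)

# A consumer sees elements one by one, in arrival order:
if __name__ == "__main__":
    for i, (x, t) in enumerate(sensor_stream()):
        print(f"element {i}: x={x}, t={t:.3f}")
        if i >= 4:  # stop early; a real stream would keep going
            break
```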
Table 1 Streaming data versus static data [9, 10]
Dimension | Streaming data | Static data |
---|---|---|
Hardware | Typically a single, constrained amount of memory | Multiple CPUs |
Input | Data streams or updates | Data chunks |
Time | A few moments or even milliseconds | Much longer |
Data size | Infinite or unknown in advance | Known and finite |
Processing | A single pass (or a few passes) over the data | Processed in multiple rounds |
Storage | Not stored, or a significant portion stored in memory | Stored |
Applications | Web mining, traffic monitoring, sensor networks | Widely adopted in many domains |
Source: Tozi, C. (2017). Dummy's guide to batch vs streaming. Trillium Software. Retrieved from http://blog.syncsort.com/2017/07/bigdata/; Kolajo, T., Daramola, O., & Adebiyi, A. (2019). Big data stream analysis: A systematic literature review. Journal of Big Data, 6(47).
The ocean of streaming data continuously generated through various media such as sensors, ATM transactions, and the web is growing tremendously, and recognizing patterns in these streams is equally challenging [8]. Most methods used for data stream mining are adapted from techniques designed for finite, static datasets, yet data stream mining imposes a large number of constraints on such canonical algorithms. To quickly appreciate these constraints, the differences between the static and streaming scenarios are summarized in Table 1.
In the big data era, data stream mining is one of the vital fields. Since streaming data are continuous, unlimited, and nonuniformly distributed, efficient data structures and algorithms are needed to mine patterns from this high‐volume, high‐traffic, often imbalanced data stream that is also plagued by concept drift [11].
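As a concrete illustration of these constraints, the sketch below implements reservoir sampling, a classical single‐pass, bounded‐memory technique for maintaining a uniform random sample of a stream whose total size is unknown in advance. It is offered only as an example of the kind of algorithm the constraints in Table 1 call for, not as a method prescribed by the cited works; the stream source and the reservoir size k are illustrative assumptions.

```python
import random
from typing import Iterable, List, Tuple

def reservoir_sample(stream: Iterable[Tuple], k: int, seed: int = 42) -> List[Tuple]:
    """Keep a uniform random sample of size k using one pass and O(k) memory.

    Each element of an arbitrarily long stream is seen exactly once, and the
    total stream length never needs to be known in advance.
    """
    rng = random.Random(seed)
    reservoir: List[Tuple] = []
    for i, element in enumerate(stream):
        if i < k:
            reservoir.append(element)   # fill the reservoir first
        else:
            j = rng.randint(0, i)       # element i survives with probability k/(i+1)
            if j < k:
                reservoir[j] = element
    return reservoir

# Usage with a hypothetical stream of (value, timestamp) pairs:
if __name__ == "__main__":
    fake_stream = ((v, float(t)) for t, v in enumerate(range(1_000_000)))
    print(reservoir_sample(fake_stream, k=5))
```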
This chapter intends to broaden the existing knowledge in the domain of data science, streaming data, and data streams. To do so, relevant themes, including data stream mining issues, streaming data tools and technologies, streaming data pre‐processing, streaming data algorithms, strategies for processing data streams, best practices for managing data streams, and suggestions for the way forward, are discussed in this chapter. The structure of the rest of this chapter is as follows. Section 2 presents a brief background on data stream computing; Section 3 discusses issues in data stream mining; tools and technologies for data streaming are presented in Section 4, while streaming data pre‐processing is discussed in Section 5. Sections 6 and 7 present streaming data algorithms and data stream processing strategies, respectively. This is followed by a discussion of best practices for managing data streams in Section 8, while the conclusion and some ideas on the way forward are presented in Section 9.