
Preface


Since the concept of “pairing technology” or “digital twin” was first introduced by NASA for the space exploration and aviation industry [1, 2], we have been on our way towards the digital world. Over the last decade, this digitization progress has been greatly accelerated, thanks to a broad spectrum of research and development (R&D) activities on Internet of Things (IoT) technologies and applications. According to the latest Cisco Annual Internet Report, published in February 2020, the number of networked devices in the world will reach 29.3 billion by 2023, including 14.7 billion machine-to-machine (M2M) or IoT devices [3]. A joint report by Ovum and Heavy Reading, announced in May 2019, gave an even more optimistic figure of 21.5 billion IoT devices by 2023 [4]. This explosive growth of IoT devices and connections clearly shows the technical trends and business opportunities of Machine-Type Communications (MTC), such as the Ultra-Reliable Low-Latency Communications (URLLC) scenario specified in the 3GPP Release 16 standard [5], and the massive MTC (mMTC) scenario based on Narrowband IoT (NB-IoT) technology and endorsed by the latest International Mobile Telecommunications-2020 (IMT-2020) 5G standard. Both IoT and MTC-focused scenarios have been widely recognized and welcomed by different industry verticals. For example, sensor devices for medical and health monitoring applications are usually light, reliable, energy efficient, wearable, or even implantable. Camera devices for security and public safety applications are usually hidden, durable, massively deployed, synchronized, and always connected. Robot devices for intelligent manufacturing applications are accurately positioned, constantly tracked in three dimensions, tightly scheduled, fully utilized, and always ready for instant action. Besides the basic functions of sensing and transmitting data, some “smart” devices possess additional computing and storage resources, and are thus capable of data screening and preprocessing with localized tiny artificial intelligence (AI) algorithms [6].
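As a toy illustration of such on-device data screening (a minimal sketch under our own assumptions, not an algorithm from this book; the class name, window size and threshold are all hypothetical), the following Python snippet forwards a sensor reading upstream only when it deviates sharply from the recent local average:

# Hypothetical on-device screening filter: routine readings stay local,
# only statistically unusual ones are transmitted upstream.
from collections import deque
import math

class TinyScreen:
    def __init__(self, window=32, z_threshold=3.0):
        self.buf = deque(maxlen=window)   # rolling window of recent readings
        self.z_threshold = z_threshold

    def admit(self, x):
        """Return True if the reading should be sent to the edge/cloud."""
        if len(self.buf) < self.buf.maxlen:
            self.buf.append(x)
            return False                  # still warming up; keep data local
        mean = sum(self.buf) / len(self.buf)
        var = sum((v - mean) ** 2 for v in self.buf) / len(self.buf)
        std = math.sqrt(var) or 1e-9      # guard against a zero-variance window
        self.buf.append(x)
        return abs(x - mean) / std > self.z_threshold

screen = TinyScreen()
readings = [20.1, 20.0, 20.2] * 12 + [35.7]      # a temperature spike at the end
print([x for x in readings if screen.admit(x)])  # -> only the anomalous reading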

The masses of connected devices or things deployed in our neighborhoods are continuously generating more and more data, which is collected at different time scales, from different sources and owners, and arrives in different formats, with different characteristics, for different purposes and applications. As the UK mathematician Clive Humby pointed out in 2006, “Data is the new oil. It is valuable, but if unrefined it cannot really be used.” According to IDC's “Data Age 2025” white paper [7], sponsored by Seagate, the total data in our world is increasing at a compound annual growth rate (CAGR) of 61% and will reach an astonishing 175 zettabytes (ZB) by 2025, of which 90 ZB will be contributed by a variety of IoT devices. In order to realize its full potential, we must tackle the big challenge of integrating, processing, analyzing, and understanding such a huge volume of heterogeneous data from a wide range of applications and industrial domains. From the perspective of technology developers and service providers, more high-quality data, in terms of variety, accuracy and timeliness, will definitely contribute to deeper knowledge and better models for characterizing and representing our complex physical world, thus enabling us to develop very precise digital twins for everything and everyone in the digital world. This digitization process will effectively reshape our human societies, accelerate overall economic growth, and generate huge commercial value, since disruptive data-driven business models and cross-domain collaboration opportunities will be created and fully utilized for exploring all sorts of potential markets. According to an IDC study, the worldwide revenue for big data and business analytics was about USD 189.1 billion in 2019, and will quickly increase to USD 274.3 billion by 2022 [8]. The importance of data for future economic growth and social development has been widely recognized. In April 2020, the Chinese government officially defined “data” as a new factor of production, in addition to the traditional ones such as land, labor, capital, and entrepreneurship (or enterprise). It is clear that the digital world will rely on all kinds of trusted data to make timely, correct and fair decisions, and eventually, to provide comprehensive, objective and efficient solutions for every industrial sector and government department. Besides the volume, variety and value of data, IDC has predicted that, by 2025, about 30% of the world's data, i.e. 52.5 ZB, will be consumed in real time [9]. For example, intelligent manufacturing and interactive entertainment applications both have high requirements on data velocity, hence local data should be quickly processed and analyzed by the devices/things themselves, or at nearby edge/fog computing nodes. Last but not least, data privacy and veracity are protected not only by advanced technologies such as trusted execution environments (TEEs), secure multi-party computation (MPC), differential privacy, blockchain, confidential computing, and federated learning, but also by specific laws such as the General Data Protection Regulation (GDPR), in force in the EU since May 2018, and the California Consumer Privacy Act (CCPA), in force in the US since January 2020. Both laws aim at protecting every consumer's personal data, anywhere at any time, and set strict rules on information disclosure and transparency, thereby ensuring that those who collect people's data act in the best interest of those people.

The traditional three-layer IoT architecture consists of (i) the perception layer with devices or things, i.e. sensors and actuators, for collecting data and taking actions; (ii) the network layer with gateways, access points, switches and routers for transmitting data and control signals; and (iii) the application layer with clouds and data centers for analyzing and exploiting cross-domain data, and for developing and managing intelligent IoT services. As IoT devices and their data grow explosively, this architecture cannot satisfy a series of crucial service requirements for massive simultaneous connections, high bandwidth, low latency, ultra-high reliability under bursty traffic, and end-to-end security and privacy protection. In order to tackle these challenges and support various IoT application scenarios, a user-centric flexible service architecture should be implemented, so that feasible micro-services in the neighborhood can be identified and assembled to meet very sophisticated user requirements [10]. This desirable data-driven approach requires a multi-tier computing network architecture that not only connects centralized computing resources and AI algorithms in the cloud but, more importantly, utilizes distributed computing resources and algorithms in the network, at the edge, and on IoT devices [11]. Therefore, most data and IoT services can be efficiently processed and executed by intelligent algorithms using local or regional computing resources at nearby sites, such as the empowered edge and distributed cloud [12]. In doing so, a large amount of IoT data need not be transmitted over long distances to the clouds, which means lower communication bandwidth consumption, shorter service delays, more reliable network connectivity, less vulnerability to attacks, and better user satisfaction in all kinds of industrial sectors and application scenarios.
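To make the cloud-to-things continuum concrete, here is a deliberately simplified Python sketch (our own illustration, not the architecture's actual placement logic; the tier names, latencies and capacities are invented) that places each task on the nearest tier with enough spare compute, falling back toward the cloud:

# Hypothetical multi-tier placement: prefer the closest tier that can
# still host the task, so most data never travels to the distant cloud.
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    rtt_ms: float      # round-trip network latency to this tier
    capacity: float    # spare compute, in arbitrary "units"

def place(task_units, tiers):
    """Tiers are assumed pre-sorted from device outward to cloud."""
    for tier in tiers:
        if tier.capacity >= task_units:
            tier.capacity -= task_units
            return tier
    raise RuntimeError("no tier has spare capacity")

continuum = [
    Tier("device", rtt_ms=0.0,  capacity=1.0),
    Tier("edge",   rtt_ms=2.0,  capacity=8.0),
    Tier("fog",    rtt_ms=10.0, capacity=32.0),
    Tier("cloud",  rtt_ms=60.0, capacity=float("inf")),
]

for units in (0.5, 4.0, 20.0, 100.0):
    print(units, "->", place(units, continuum).name)
# 0.5 stays on the device; heavier tasks drift toward edge, fog, then cloud.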

Based on the above analyses, we believe a pyramid model best describes the fundamental relationships among these three elements: data (as raw material), computing (as hardware resource) and algorithms (as software resource) jointly constitute the triangular base, supporting a variety of user-centric intelligent IoT services at the spire, which are delivered through different kinds of smart terminals or devices. This book aims to give a state-of-the-art review of intelligent IoT technologies and applications, as well as the key challenges and opportunities facing the digital world. In particular, from the perspectives of network operators, service providers and typical users, this book tries to answer the following five critical questions.

1 What is the most feasible network architecture to effectively provide sufficient resources anywhere at any time for intelligent IoT application scenarios?

2 How can we efficiently discover, allocate and manage computing, communication and caching resources in heterogeneous networks across multiple domains and operators?

3 How can we achieve agile, adaptive service orchestration and reliable service provisioning to meet dynamic user requirements in real time?

4 How do we effectively protect data privacy in IoT applications, where IoT devices and edge/fog computing nodes only have limited resources and capabilities?

5 How do we continuously guarantee and maintain the synchronization and reliability of wide‐area IoT systems and applications?

Specifically, Chapter 1 reviews the traditional IoT system architecture and some well-known IoT technologies and standards, which are leveraged to improve the perception of the physical world, as well as the efficiency of data collection, transmission and analysis. Further, a pyramid model centered on user data, distributed algorithms and computing resources is proposed and discussed. This model is based on the multi-tier computing network architecture and applies a data-driven approach to coordinate and allocate the most feasible resources and algorithms inside the network for effective processing of user-centric data in real time, thus supporting various intelligent IoT applications and services, such as information extraction, pattern recognition, decision making, behavior analysis and prediction. As 5G communication networks and edge/fog/cloud computing technologies become increasingly popular in different industrial sectors and business domains, a series of new requirements and key challenges should be carefully addressed in order to provide more sophisticated, data-driven and intelligent IoT services with the available resources and AI algorithms in different application scenarios. For instance, in a smart factory, 4G/5G mobile communication networks and wireless terminals are ubiquitous and always connected. A large variety of industrial IoT devices continuously monitor the working environment and machines, generating massive amounts of data on temperature, humidity, pressure, state, position, movement, etc. This huge amount of data needs to be quickly analyzed and accurately comprehended with domain-specific knowledge and experience. To satisfy the stringent requirements on end-to-end service delay, data security, user privacy, as well as accuracy and timeliness in decision making and operation control, the proposed new model and architecture are able to fully utilize dispersed computing resources and intelligent algorithms in the neighborhood for effectively processing massive cross-domain data, which is collected and shared through intensive but reliable local communications between devices, machines and distributed edge/fog nodes.

Chapter 2 presents the multi-tier computing network architecture for intelligent IoT applications, which comprises not only computing, communication and caching (3C) resources but also a variety of embedded AI algorithms along the cloud-to-things continuum. This architecture advocates active collaboration between cloud, fog and edge computing technologies for intelligent and efficient data processing at different levels and locations. It is strongly underpinned by two important frameworks, i.e. Cost Aware Task Scheduling (CATS) and Fog as a Service Technology (FA2ST). Specifically, CATS is an effective resource sharing framework that utilizes a practical incentive mechanism to motivate efficient collaboration and task scheduling across heterogeneous resources at multiple devices, edge/fog nodes and the cloud, which may be owned by different individuals and operators. FA2ST, in turn, is a flexible service provisioning framework that is able to discover, orchestrate, and manage micro-services and cross-layer 3C resources at any time, anywhere close to end users, thus guaranteeing high-quality services under dynamic network conditions. Further, two intelligent application scenarios and the corresponding technical solutions are described in detail. Firstly, based on edge computing, an on-site cooperative Deep Neural Network (DNN) inference framework is proposed to execute DNN inference tasks with low latency and high accuracy for industrial IoT applications, thus meeting the strict requirements on service delay and reliability. Secondly, based on fog computing, a three-tier collaborative computing and service framework is proposed to support dynamic task offloading and service composition in Simultaneous Localization and Mapping (SLAM) for a robot swarm system, which requires timely data sharing and joint processing among multiple moving robots. Both cases are implemented and evaluated in real experiments, and a set of performance metrics demonstrates the effectiveness of the proposed multi-tier computing network and service architecture in supporting intelligent IoT applications in stationary and mobile scenarios.
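The details of CATS and FA2ST are given in Chapter 2; purely to convey the flavor of incentive-driven, cost-aware scheduling, the sketch below (our own toy model with invented delay and price terms, not the CATS framework itself) picks the helper that minimizes a combined delay-plus-payment cost:

# Toy cost-aware helper selection: each candidate helper advertises its
# speed, link quality and incentive price; the scheduler minimizes a
# weighted sum of service delay and payment. All numbers are made up.
from dataclasses import dataclass

@dataclass
class Helper:
    name: str
    cycles_per_ms: float    # processing speed
    link_ms_per_mb: float   # transfer delay per MB of task input
    price_per_cycle: float  # payment demanded by the helper's owner

def total_cost(task_mb, task_cycles, h, delay_weight=1.0, money_weight=1.0):
    delay = task_mb * h.link_ms_per_mb + task_cycles / h.cycles_per_ms
    payment = task_cycles * h.price_per_cycle
    return delay_weight * delay + money_weight * payment

helpers = [
    Helper("neighbor-phone", 2.0, 5.0, 0.002),
    Helper("edge-node", 20.0, 1.5, 0.010),
    Helper("cloud", 200.0, 12.0, 0.004),
]

task_mb, task_cycles = 4.0, 500.0
best = min(helpers, key=lambda h: total_cost(task_mb, task_cycles, h))
print("schedule on:", best.name)   # the edge node wins for this task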

Under this architecture, Chapter 3 investigates cross-domain resource management and adaptive allocation methods for dynamic task scheduling to meet different application requirements and performance metrics. Specifically, considering a general system model with Multiple Tasks and Multiple Helpers (MTMH), game-theoretic analytical frameworks for non-splittable and splittable tasks are derived to study the overall delay performance under dynamic computing and communication (2C) resources. The existence of a Nash equilibrium is proven for both cases. Two distributed task scheduling algorithms are developed for maximizing the utilization of nearby 2C resources, thus minimizing the overall service delay and maximizing the number of benefiting nodes through device/node collaboration. Further, by taking storage or caching into consideration, a fog-enabled 3C resource sharing framework is proposed for energy-critical IoT data processing applications. An energy cost minimization problem under 3C constraints is formulated, and an efficient 3C resource management algorithm is then developed by using an iterative task team formation mechanism. This algorithm can greatly reduce energy consumption and converges to a stable system point via utility-improving iterations. In addition, based on the fundamental trade-off between service delay and energy consumption in IoT devices/nodes, an offload forwarding mechanism is developed to promote collaboration among distributed fog/edge nodes with different computing and energy resources. The optimal trade-off is achieved through a distributed optimization framework, without disclosing any node's private information or requiring lengthy back-and-forth negotiations among collaborating nodes. The proposed mechanism and framework are evaluated via an extensive simulation of a fog-enabled self-driving bus system in Dublin, Ireland, and demonstrate very good performance in balancing energy consumption among multiple nodes and reducing service delay in urban mobile scenarios.
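As a worked miniature of the game-theoretic reasoning (a toy congestion game of our own construction; the book's MTMH frameworks are far more general and their equilibria are proven formally), the Python sketch below runs best-response dynamics until no device can reduce its delay by unilaterally switching between local execution and a shared edge node, i.e. until a Nash equilibrium is reached:

# Toy offloading game: the shared edge node's delay grows with the number
# of offloaders, so devices self-sort between local and edge execution.
local_delay = [4.0, 6.0, 3.0, 9.0, 7.0]    # per-device local execution delay
EDGE_BASE, EDGE_PER_USER = 1.0, 2.0        # edge delay congests linearly

def edge_delay(n_offloaders):
    return EDGE_BASE + EDGE_PER_USER * n_offloaders

offload = [False] * len(local_delay)
changed = True
while changed:                              # repeat until no one wants to move
    changed = False
    for i, d_local in enumerate(local_delay):
        others = sum(offload) - offload[i]
        wants_edge = edge_delay(others + 1) < d_local  # delay if i offloads
        if wants_edge != offload[i]:
            offload[i] = wants_edge
            changed = True

print("offloaders at equilibrium:", [i for i, o in enumerate(offload) if o])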

After 3C and energy resources are properly managed, Chapter 4 concentrates on dynamic service provisioning in multi-tier computing networks. Firstly, at the network edge, an online orchestration framework is proposed for cross-edge service function chaining to maximize holistic cost-efficiency through the joint optimization of resource utilization and traffic routing. By carefully combining an online optimization technique with an approximate optimization method, this framework runs on top of geographically dispersed edge/fog nodes to tackle the long-term cost minimization problem under uncertain future information. In this way, the benefits of service function chaining are fully unleashed for configuring and providing various intelligent services in an agile, flexible, and cost-efficient manner. Secondly, inside a computing network using renewable energy, a network slicing framework for dynamic resource allocation and service provisioning is proposed, where a regional orchestrator coordinates workload distribution among multiple edge/fog nodes in a timely manner, and provides the necessary slices of energy and computing resources to support specific IoT applications with Quality of Service (QoS) guarantees. Based on game theory and the Markov decision process, an effective algorithm is developed to optimally satisfy dynamic service requirements with the available energy and network resources under randomly fluctuating energy harvesting and workload arrival processes. Thirdly, across multiple networks, a multi-operator network sharing framework is proposed to enable efficient collaboration between resource-limited network operators in supporting a variety of IoT applications and high-speed cellular services simultaneously. This framework is based on the Third Generation Partnership Project (3GPP) Radio Access Network (RAN) sharing architecture, and can significantly improve the utilization of network resources, thus effectively reducing the overall operational costs of multiple networks. Both the network slicing and multi-operator network sharing frameworks are evaluated using location data for more than 200 base stations (BSs) from two mobile operators in the city of Dublin, Ireland. Numerical results show they can greatly improve workload processing capability and almost double the total number of connected IoT devices and applications.
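As rough intuition for what such a regional orchestrator does (a hypothetical greedy dispatch rule with invented numbers; the actual framework in Chapter 4 is built on game theory and a Markov decision process, not on this heuristic), the following sketch repeatedly sends work to the node with the most usable headroom under both its compute and energy budgets:

# Hypothetical orchestrator step: "usable headroom" of a node is capped by
# whichever runs out first, spare compute or harvested energy.
nodes = {
    "edge-A": {"cpu": 10.0, "energy": 5.0},
    "edge-B": {"cpu": 6.0,  "energy": 9.0},
    "fog-C":  {"cpu": 20.0, "energy": 4.0},
}
ENERGY_PER_UNIT = 0.5        # invented cost: energy drawn per unit of work

def headroom(info):
    return min(info["cpu"], info["energy"] / ENERGY_PER_UNIT)

def dispatch(workload):
    plan = {name: 0.0 for name in nodes}
    while workload > 1e-9:
        name, info = max(nodes.items(), key=lambda kv: headroom(kv[1]))
        room = min(headroom(info), workload)
        if room <= 0:
            raise RuntimeError("insufficient resources for this slice")
        info["cpu"] -= room
        info["energy"] -= room * ENERGY_PER_UNIT
        plan[name] += room
        workload -= room
    return plan

print(dispatch(18.0))        # work splits across edge-A and fog-C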

Chapter 5 addresses the privacy concerns in public IoT applications and services, where IoT devices are usually embedded in a user's private time and space, and the corresponding data is in general privacy sensitive. Unlike resourceful clouds, which can apply powerful security and privacy mechanisms while processing massive user data for training and executing deep neural network models to solve complex problems, IoT devices and their nearby edge/fog nodes are resource-limited and therefore have to adopt lightweight algorithms for local privacy protection and data analysis. Three approaches with low computing overhead and different privacy-preserving features are proposed to tackle this challenge. Specifically, random and independent multiplicative projections are applied to the IoT data, and the projected data is used in a stochastic gradient descent method for training a deep neural network, thereby protecting the confidentiality of the original IoT data. In addition, random additive perturbations are applied to the IoT data, which can realize differential privacy for all the IoT devices while training the deep neural network. A secret shallow neural network is also applied to the IoT data, which can protect the confidentiality of the original IoT data while executing the deep neural network for inference. Extensive performance evaluations based on various standard datasets and real testbed experiments show that these proposed approaches can effectively achieve high learning and inference accuracy while preserving the privacy of IoT data.
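For intuition, the sketch below illustrates the first two perturbation ideas in miniature (our own minimal NumPy rendering with invented dimensions, clipping bound and noise scale; the book's constructions and privacy accounting are considerably more involved):

# (1) Multiplicative projection: a device multiplies its data by a secret
#     random matrix before release. (2) Additive perturbation: Gaussian
#     noise calibrated to a clipping bound, the standard recipe behind
#     differentially private training. All parameters are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 64))        # stand-in for raw IoT feature vectors

# (1) Random multiplicative projection (the matrix P stays secret).
d_in, d_out = 64, 32
P = rng.normal(scale=1.0 / np.sqrt(d_out), size=(d_in, d_out))
x_projected = x @ P                   # this is what leaves the device

# (2) Random additive perturbation after norm clipping.
clip = 1.0                            # per-sample L2 clipping bound
sigma = 2.0                           # noise multiplier (hypothetical)
norms = np.linalg.norm(x, axis=1, keepdims=True)
x_clipped = x / np.maximum(1.0, norms / clip)
x_dp = x_clipped + rng.normal(scale=sigma * clip, size=x_clipped.shape)

print(x_projected.shape, x_dp.shape)  # (100, 32) (100, 64)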

Chapter 6 considers clock synchronization and service reliability problems in wide-area IoT networks, such as the long-distance powerlines of a state power grid. Typically, the IoT systems for such outdoor application scenarios obtain the standard global time from the Global Positioning System (GPS) or from the periodic timekeeping signals of Frequency Modulation (FM) and Amplitude Modulation (AM) radios. For indoor IoT systems and applications, however, current clock synchronization protocols require reliable network connectivity for the timely transmission of synchronization packets, which cannot be guaranteed, since IoT devices are often resource-limited and their unpredictable failures cause intermittent network connections and the loss or delay of synchronization packets. To solve this problem, a natural timestamping approach is proposed that retrieves global time information by analyzing the minute frequency fluctuations of powerline electromagnetic radiation. This approach achieves sub-second synchronization accuracy in real experiments. Further, by exploiting a pervasive periodic signal that can be sensed in most indoor environments with electromagnetic radiation from service powerlines, the trade-off between synchronization accuracy and IoT hardware heterogeneity is identified, and a new clock synchronization approach is developed accordingly for indoor IoT applications. It is then applied to body-area IoT devices by taking into account the coupling effect between a human body and the surrounding electric field generated by the powerlines. Extensive experiments show that this proposed approach can achieve millisecond-level clock synchronization accuracy.
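To convey the idea of natural timestamping at a conceptual level (a self-contained simulation of our own; the signal model, sample rate and matching rule are invented and much cruder than the book's method), the sketch below estimates the grid frequency in each one-second window of a synthetic powerline recording, then locates the recording on a reference frequency log by best alignment:

# Conceptual natural timestamping: the grid frequency wanders slightly
# around its nominal value, so a device's observed frequency trace can be
# matched against a utility's reference log to recover global time.
import numpy as np

FS, NOMINAL = 1000, 50.0               # sample rate (Hz), nominal grid frequency

def freq_trace(signal, win=FS):
    """Dominant frequency of each 1-second window via a zero-padded FFT peak."""
    pad = win * 16                     # finer frequency bins (0.0625 Hz here)
    freqs = np.fft.rfftfreq(pad, d=1.0 / FS)
    return np.array([
        freqs[np.argmax(np.abs(np.fft.rfft(signal[s:s + win], n=pad)))]
        for s in range(0, len(signal) - win + 1, win)
    ])

rng = np.random.default_rng(1)
wander = NOMINAL + 0.1 * np.cumsum(rng.normal(size=600))   # reference log
t = np.arange(FS) / FS
recording = np.concatenate(            # device overhears seconds 200..259
    [np.sin(2 * np.pi * f * t) for f in wander[200:260]])

observed = freq_trace(recording)
costs = [np.sum((wander[k:k + len(observed)] - observed) ** 2)
         for k in range(len(wander) - len(observed) + 1)]
print("recovered start second:", int(np.argmin(costs)))   # ~200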

Finally, Chapter 7 concludes this book and identifies some additional challenging problems for further investigation.

We believe all the challenges and technical solutions discussed in this book will not only encourage and enable many novel intelligent IoT applications in our daily lives but, more importantly, will deliver a series of long‐term benefits to businesses, consumers, governments, and human societies in the digital world.

Yang Yang

ShanghaiTech University and

Peng Cheng Laboratory, China

August 19, 2020
