
1.1 Introduction

1.1.1 Online Learning and Fragmented Learning Modeling


Knowledge engineering has been defined as applied artificial intelligence (AI) [1], with three major scientific questions: knowledge representation, knowledge use, and knowledge acquisition. In the big data age, these three fundamental problems must evolve to match the basic characteristics of big data: autonomous information sources and complex, evolving connections between data objects. Big data relies not only on domain knowledge but also on knowledge distilled from numerous information sources, so knowledge engineering tools for big data demand substantial experience. Three primary research issues are addressed by the 54-month, RMB 45-million Big Data Knowledge Engineering (BigKE) project sponsored by China’s Ministry of Science and Technology and several other domestic agencies: 1) online learning and fragmented learning modeling; 2) nonlinear fusion of fragmented knowledge; and 3) multimedia fusion of knowledge. Discussing these topics is the main contribution of this chapter. For 1), we examine fragmented knowledge mining and representation, interactive online learning over fragmented knowledge, and the modeling of spatial and temporal characteristics of evolving knowledge. For 2), we discuss associations among fragments of knowledge, novel pattern discovery, and the dynamic fusion of fragmented knowledge. The key issues shown in Figure 1.1 include collaborative, context-based computing, knowledge browsing, path discovery, and the refinement of interactive knowledge adaptation.


Figure 1.1 Knowledge engineering.

Because streaming data arrive continuously from multiple channels, traditional offline data mining methods cannot handle them: whenever the data change, the model must be rebuilt from scratch. Online learning methods address this issue and adapt readily to concept drift in streaming data, but typical online learning methods are designed for single-source data. Handling all of these characteristics concurrently therefore presents great difficulties, and great opportunities, for large-scale data processing. Big data knowledge engineering starts from local information, handles distributed data sources and feature streams, and integrates heterogeneous knowledge from multiple data channels, together with domain knowledge, into personalized, demand-driven knowledge services. In the age of big data, data sources are usually heterogeneous and autonomous, with complex, evolving connections among data objects; accounting for these qualities requires substantial experience. Meanwhile, major knowledge providers deliver personalized, demand-driven services by exploiting big data technologies [2].
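As a concrete illustration of the contrast above, the following minimal Python sketch, which assumes scikit-learn and an invented two-feature synthetic stream rather than any BigKE component, updates a single-source online classifier batch by batch with partial_fit and evaluates it test-then-train, making the accuracy dip visible when the concept drifts midway.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier()  # incremental linear classifier
classes = np.array([0, 1])

def stream_batches(n_batches=50, batch_size=32, drift_at=25):
    """Yield synthetic mini-batches whose decision boundary flips
    halfway through, imitating concept drift in a data stream."""
    for t in range(n_batches):
        X = rng.normal(size=(batch_size, 2))
        w = np.array([1.0, -1.0]) if t < drift_at else np.array([-1.0, 1.0])
        y = (X @ w > 0).astype(int)
        yield X, y

for t, (X, y) in enumerate(stream_batches()):
    if t > 0:  # test-then-train: score the incoming batch first
        print(f"batch {t:2d} accuracy {model.score(X, y):.2f}")
    model.partial_fit(X, y, classes=classes)  # update, no full retraining

Because each update touches only the current mini-batch, the model recovers from the drift within a few batches, whereas an offline method would have to retrain on the entire reformed data set.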

Given the characteristics of multiple data sources, the key to multisource knowledge acquisition is fragmented data processing [3]: local knowledge fragments from individual data sources are merged to form global knowledge. Present online learning algorithms often use linear fitting to retrieve dispersed knowledge from local data sources [4]. For fragmented knowledge fusion, however, linear fitting is not effective and may even cause overfitting. Several studies are under way to improve coherence in the processing and interpretation of fragmented knowledge [6], and one advantage of machine learning over large data sets is that abundant samples make learning effective and reduce the risk of overfitting [7]. In addition to authoritative sources of knowledge, such as technical knowledge bases, big data innovation acquires knowledge mostly from user-generated content, whereas traditional knowledge engineering focuses on domain expertise. User-generated content provides a new type of data source that can serve as a primary supplier of human knowledge and help relieve the knowledge-acquisition bottleneck of traditional knowledge engineering. However, user-generated content is broad and heterogeneous, which complicates storage and indexing [5], so the knowledge base should be able to build and update itself to establish realistic models of data relations. For instance, clinical findings in survey samples can be incomplete and unreliable for a range of reasons, and preprocessing is needed to improve the quality of the data for analysis [8].
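To make the fusion argument concrete, the following hedged Python sketch, assuming scikit-learn and invented synthetic sources rather than the methods of [4] or [6], trains a local linear model per source and then combines the fragments two ways: simple linear averaging versus a nonlinear meta-learner over the local outputs. On a nonlinear target, the nonlinear fusion typically recovers more of the global structure.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_source(n=200):
    """One autonomous source observing the same nonlinear task."""
    X = rng.normal(size=(n, 4))
    y = ((X[:, 0] * X[:, 1] + X[:, 2]) > 0).astype(int)
    return X, y

sources = [make_source() for _ in range(3)]

# Step 1: a local (linear) knowledge fragment per data source.
local_models = [LogisticRegression().fit(X, y) for X, y in sources]

def fragment_outputs(X):
    """Stack each local model's probability as one fusion feature."""
    return np.column_stack([m.predict_proba(X)[:, 1] for m in local_models])

# Step 2: nonlinear fusion -- a meta-learner over the fragments.
X_meta, y_meta = make_source(400)
fusion = RandomForestClassifier(n_estimators=100, random_state=0)
fusion.fit(fragment_outputs(X_meta), y_meta)

# Compare linear averaging with nonlinear fusion on fresh data.
X_test, y_test = make_source(400)
preds = fragment_outputs(X_test)
avg_acc = ((preds.mean(axis=1) > 0.5).astype(int) == y_test).mean()
fusion_acc = (fusion.predict(preds) == y_test).mean()
print(f"linear averaging {avg_acc:.2f} vs nonlinear fusion {fusion_acc:.2f}")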

Both capabilities are essential for creating personalized knowledge services, as a knowledge base should be tailored to the needs of individual users: big data reinforces distributed knowledge for developing such capabilities, and a big data architecture also requires a user interface to address user-specific problems. With the advance of science and innovation in today’s fast-changing knowledge world, the nature of global economic growth has changed, bringing more communication models, shorter product life cycles, and faster rates of new product development. Knowledge engineering is the AI field that builds knowledge-based systems. Such systems provide computer applications with a broad variety of knowledge, rules, and reasoning mechanisms to answer real-world problems. Difficulties dominated the early years of this technology: knowledge engineers found that obtaining knowledge of adequate quality to construct a reliable and usable system was a long and expensive undertaking. The construction of expert systems was thus said to face a knowledge acquisition bottleneck, and knowledge acquisition has since been a major research area in the field.

The purpose of knowledge acquisition is to create strategies and tools that make gathering and verifying an expert’s knowledge as simple and effective as possible. Experts tend to be critical and busy individuals, so these techniques should minimize the time each expert spends in knowledge collection sessions. The key form of the knowledge-based approach is the expert system, which is intended to mimic an expert’s reasoning processes. Typical examples of expert systems include diagnosing bacterial infections, giving mining advice, and assessing electronic circuit designs. Knowledge engineering now refers to the planning, administration, and construction of knowledge-based systems, and it operates across a broad variety of areas of computer technology, including databases, data mining, expert systems, decision support systems, and geographic information systems; it is also a major part of soft computing. Knowledge engineering further connects with mathematical logic and has strong ties to cognitive science and socio-cognitive engineering, where intelligence is produced by socio-cognitive aggregates (mainly human beings) and structured according to the way human thought and reasoning operate. Knowledge engineering has since become an essential technology for knowledge integration, and the exponentially growing World Wide Web creates an ever-larger demand for better use of knowledge and for technological advancement.
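To show the flavor of reasoning such systems mimic, here is a toy forward-chaining inference engine in Python; the rules, facts, and recommendation are invented for illustration and are not drawn from any real clinical system.

RULES = [
    ({"fever", "productive_cough"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection", "positive_culture"}, "bacterial_infection"),
    ({"bacterial_infection", "no_penicillin_allergy"}, "recommend_penicillin"),
]

def forward_chain(facts, rules):
    """Fire rules whose premises hold until no new facts can be
    derived, mimicking an expert's chain of reasoning."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # derive a new fact
                changed = True
    return facts

derived = forward_chain(
    {"fever", "productive_cough", "positive_culture", "no_penicillin_allergy"},
    RULES,
)
print(sorted(derived))

Real expert systems add certainty factors, explanation facilities, and far larger rule bases, but the derive-until-fixpoint loop above is the core of the approach.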

