Fundamentals and Methods of Machine and Deep Learning
Contents
Pradeep Singh. Fundamentals and Methods of Machine and Deep Learning
Table of Contents
Guide
List of Illustrations
List of Tables
Pages
Fundamentals and Methods of Machine and Deep Learning. Algorithms, Tools and Applications
Preface
1. Supervised Machine Learning: Algorithms and Applications
1.1 History
1.2 Introduction
1.3 Supervised Learning
1.4 Linear Regression (LR)
1.4.1 Learning Model
1.4.2 Predictions With Linear Regression
1.5 Logistic Regression
1.6 Support Vector Machine (SVM)
1.7 Decision Tree
1.8 Machine Learning Applications in Daily Life
1.8.1 Traffic Alerts (Maps)
1.8.2 Social Media (Facebook)
1.8.3 Transportation and Commuting (Uber)
1.8.4 Products Recommendations
1.8.5 Virtual Personal Assistants
1.8.6 Self-Driving Cars
1.8.7 Google Translate
1.8.8 Online Video Streaming (Netflix)
1.8.9 Fraud Detection
1.9 Conclusion
References
2. Zoonotic Diseases Detection Using Ensemble Machine Learning Algorithms
2.1 Introduction
2.2 Bayes Optimal Classifier
2.3 Bootstrap Aggregating (Bagging)
2.4 Bayesian Model Averaging (BMA)
2.5 Bayesian Classifier Combination (BCC)
2.6 Bucket of Models
2.7 Stacking
2.8 Efficiency Analysis
2.9 Conclusion
References
3. Model Evaluation
3.1 Introduction
3.2 Model Evaluation
3.2.1 Assumptions
3.2.2 Residual
3.2.3 Error Sum of Squares (SSE)
3.2.4 Regression Sum of Squares (SSR)
3.2.5 Total Sum of Squares (SSTO)
3.3 Metrics Used in Regression Models
3.3.1 Mean Absolute Error (MAE)
3.3.2 Mean Square Error (MSE)
3.3.3 Root Mean Square Error (RMSE)
3.3.4 Root Mean Square Logarithmic Error (RMSLE)
3.3.5 R-Square (R2)
3.3.5.1 Problem With R-Square (R2)
3.3.6 Adjusted R-Square (R2)
3.3.7 Variance
3.3.8 AIC
3.3.9 BIC
3.3.10 ACP, PRESS, and R2-Predicted
3.3.11 Solved Examples
3.4 Confusion Matrix
3.4.1 How to Interpret the Confusion Matrix?
3.4.2 Accuracy
3.4.2.1 Why Do We Need Other Metrics Along With Accuracy?
3.4.3 True Positive Rate (TPR)
3.4.4 False Negative Rate (FNR)
3.4.5 True Negative Rate (TNR)
3.4.6 False Positive Rate (FPR)
3.4.7 Precision
3.4.8 Recall
3.4.9 Recall-Precision Trade-Off
3.4.10 F1-Score
3.4.11 F-Beta Score
3.4.12 Thresholding
3.4.13 AUC-ROC
3.4.14 AUC-PRC
3.4.15 Derived Metric From Recall, Precision, and F1-Score
3.4.16 Solved Examples
3.5 Correlation
3.5.1 Pearson Correlation
3.5.2 Spearman Correlation
3.5.3 Kendall’s Rank Correlation
3.5.4 Distance Correlation
3.5.5 Biweight Mid-Correlation
3.5.6 Gamma Correlation
3.5.7 Point Biserial Correlation
3.5.8 Biserial Correlation
3.5.9 Partial Correlation
3.6 Natural Language Processing (NLP)
3.6.1 N-Gram
3.6.2 BLEU Score
3.6.2.1 BLEU Score With N-Gram
3.6.3 Cosine Similarity
3.6.4 Jaccard Index
3.6.5 ROUGE
3.6.6 NIST
3.6.7 SQuAD
3.6.8 MACRO
3.7 Additional Metrics
3.7.1 Mean Reciprocal Rank (MRR)
3.7.2 Cohen Kappa
3.7.3 Gini Coefficient
3.7.4 Scale-Dependent Errors
3.7.5 Percentage Errors
3.7.6 Scale-Free Errors
3.8 Summary of Metrics Derived From the Confusion Matrix
3.9 Metric Usage
3.10 Pros and Cons of Metrics
3.11 Conclusion
References
4. Analysis of M-SEIR and LSTM Models for the Prediction of COVID-19 Using RMSLE
4.1 Introduction
4.2 Survey of Models
4.2.1 SEIR Model
4.2.2 Modified SEIR Model
4.2.3 Long Short-Term Memory (LSTM)
4.3 Methodology
4.3.1 Modified SEIR
4.3.2 LSTM Model
4.3.2.1 Data Pre-Processing
4.3.2.2 Data Shaping
4.3.2.3 Model Design
4.4 Experimental Results
4.4.1 Modified SEIR Model
4.4.2 LSTM Model
4.5 Conclusion
4.6 Future Work
References
5. The Significance of Feature Selection Techniques in Machine Learning
5.1 Introduction
5.2 Significance of Pre-Processing
5.3 Machine Learning System
5.3.1 Missing Values
5.3.2 Outliers
5.3.3 Model Selection
5.4 Feature Extraction Methods
5.4.1 Dimension Reduction
5.4.1.1 Attribute Subset Selection
5.4.1.1.1 Forward Selection Method
5.4.1.1.2 Backward Elimination Method
5.4.1.1.3 Decision Tree Induction Method
5.4.2 Wavelet Transforms
5.4.3 Principal Components Analysis
5.4.4 Clustering
5.5 Feature Selection
5.5.1 Filter Methods
5.5.2 Wrapper Methods
5.5.3 Embedded Methods
5.6 Merits and Demerits of Feature Selection
5.7 Conclusion
References
6. Use of Machine Learning and Deep Learning in Healthcare—A Review on Disease Prediction System
6.1 Introduction to Healthcare System
6.2 Causes for the Failure of the Healthcare System
6.3 Artificial Intelligence and Healthcare System for Predicting Diseases
6.3.1 Monitoring and Collection of Data
6.3.2 Storing, Retrieval, and Processing of Data
6.4 Facts Responsible for Delay in Predicting the Defects
6.5 Pre-Treatment Analysis and Monitoring
6.6 Post-Treatment Analysis and Monitoring
6.7 Application of ML and DL
6.7.1 ML and DL for Active Aid
6.7.1.1 Bladder Volume Prediction
6.7.1.2 Epileptic Seizure Prediction
6.8 Challenges and Future of Healthcare Systems Based on ML and DL
6.9 Conclusion
References
7. Detection of Diabetic Retinopathy Using Ensemble Learning Techniques
7.1 Introduction
7.2 Related Work
7.3 Methodology
7.3.1 Data Pre-Processing
7.3.2 Feature Extraction
7.3.2.1 Exudates
7.3.2.2 Blood Vessels
7.3.2.3 Microaneurysms
7.3.2.4 Hemorrhages
7.3.3 Learning
7.3.3.1 Support Vector Machines
7.3.3.2 K-Nearest Neighbors
7.3.3.3 Random Forest
7.3.3.4 AdaBoost
7.3.3.5 Voting Technique
7.4 Proposed Models
7.4.1 AdaNaive
7.4.2 AdaSVM
7.4.3 AdaForest
7.5 Experimental Results and Analysis
7.5.1 Dataset
7.5.2 Software and Hardware
7.5.3 Results
7.6 Conclusion
References
8. Machine Learning and Deep Learning for Medical Analysis—A Case Study on Heart Disease Data
8.1 Introduction
8.2 Related Works
8.3 Data Pre-Processing
8.3.1 Data Imbalance
8.4 Feature Selection
8.4.1 Extra Tree Classifier
8.4.2 Pearson Correlation
8.4.3 Forward Stepwise Selection
8.4.4 Chi-Square Test
8.5 ML Classifiers Techniques
8.5.1 Supervised Machine Learning Models
8.5.1.1 Logistic Regression
8.5.1.2 SVM
8.5.1.3 Naive Bayes
8.5.1.4 Decision Tree
8.5.1.5 K-Nearest Neighbors (KNN)
8.5.2 Ensemble Machine Learning Model
8.5.2.1 Random Forest
8.5.2.2 AdaBoost
8.5.2.3 Bagging
8.5.3 Neural Network Models
8.5.3.1 Artificial Neural Network (ANN)
8.5.3.2 Convolutional Neural Network (CNN)
8.6 Hyperparameter Tuning
8.6.1 Cross-Validation
8.7 Dataset Description
8.7.1 Data Pre-Processing
8.7.2 Feature Selection
8.7.3 Model Selection
8.7.4 Model Evaluation
8.8 Experiments and Results
8.8.1 Study 1: Survival Prediction Using All Clinical Features
8.8.2 Study 2: Survival Prediction Using Age, Ejection Fraction and Serum Creatinine
8.8.3 Study 3: Survival Prediction Using Time, Ejection Fraction, and Serum Creatinine
8.8.4 Comparison Between Study 1, Study 2, and Study 3
8.8.5 Comparative Study on Different Sizes of Data
8.9 Analysis
8.10 Conclusion
References
9. A Novel Convolutional Neural Network Model to Predict Software Defects
9.1 Introduction
9.2 Related Works
9.2.1 Software Defect Prediction Based on Deep Learning
9.2.2 Software Defect Prediction Based on Deep Features
9.2.3 Deep Learning in Software Engineering
9.3 Theoretical Background
9.3.1 Software Defect Prediction
9.3.2 Convolutional Neural Network
9.4 Experimental Setup
9.4.1 Data Set Description
9.4.2 Building Novel Convolutional Neural Network (NCNN) Model
9.4.3 Evaluation Parameters
9.4.4 Results and Analysis
9.5 Conclusion and Future Scope
References
10. Predictive Analysis of Online Television Videos Using Machine Learning Algorithms
10.1 Introduction
10.1.1 Overview of Video Analytics
10.1.2 Machine Learning Algorithms
10.1.2.1 Decision Tree C4.5
10.1.2.2 J48 Graft
10.1.2.3 Logistic Model Tree
10.1.2.4 Best First Tree
10.1.2.5 Reduced Error Pruning Tree
10.1.2.6 Random Forest
10.2 Proposed Framework
10.2.1 Data Collection
10.2.2 Feature Extraction
10.2.2.1 Block Intensity Comparison Code
10.2.2.2 Key Frame Rate
10.3 Feature Selection
10.4 Classification
10.5 Online Incremental Learning
10.6 Results and Discussion
10.7 Conclusion
References
11. A Combinational Deep Learning Approach to Visually Evoked EEG-Based Image Classification
11.1 Introduction
11.2 Literature Review
11.3 Methodology
11.3.1 Dataset Acquisition
11.3.2 Pre-Processing and Spectrogram Generation
11.3.3 Classification of EEG Spectrogram Images With Proposed CNN Model
11.3.4 Classification of EEG Spectrogram Images With Proposed Combinational CNN+LSTM Model
11.4 Result and Discussion
11.5 Conclusion
References
12. Application of Machine Learning Algorithms With Balancing Techniques for Credit Card Fraud Detection: A Comparative Analysis
12.1 Introduction
12.2 Methods and Techniques
12.2.1 Research Approach
12.2.2 Dataset Description
12.2.3 Data Preparation
12.2.4 Correlation Between Features
12.2.5 Splitting the Dataset
12.2.6 Balancing Data
12.2.6.1 Oversampling of Minority Class
12.2.6.2 Under-Sampling of Majority Class
12.2.6.3 Synthetic Minority Over-Sampling Technique
12.2.6.4 Class Weight
12.2.7 Machine Learning Algorithms (Models)
12.2.7.1 Logistic Regression
12.2.7.2 Support Vector Machine
12.2.7.3 Decision Tree
12.2.7.4 Random Forest
12.2.8 Tuning of Hyperparameters
12.2.9 Performance Evaluation of the Models
12.3 Results and Discussion
12.3.1 Results Using Balancing Techniques
12.3.2 Result Summary
12.4 Conclusions
12.4.1 Future Recommendations
References
13. Crack Detection in Civil Structures Using Deep Learning
13.1 Introduction
13.2 Related Work
13.3 Infrared Thermal Imaging Detection Method
13.4 Crack Detection Using CNN
13.4.1 Model Creation
13.4.2 Activation Functions (AF)
13.4.3 Optimizers
13.4.4 Transfer Learning
13.5 Results and Discussion
13.6 Conclusion
References
14. Measuring Urban Sprawl Using Machine Learning
14.1 Introduction
14.2 Literature Survey
14.3 Remotely Sensed Images
14.4 Feature Selection
14.4.1 Distance-Based Metric
14.5 Classification Using Machine Learning Algorithms
14.5.1 Parametric vs. Non-Parametric Algorithms
14.5.2 Maximum Likelihood Classifier
14.5.3 k-Nearest Neighbor Classifiers
14.5.4 Evaluation of the Classifiers
14.5.4.1 Precision
14.5.4.2 Recall
14.5.4.3 Accuracy
14.5.4.4 F1-Score
14.6 Results
14.7 Discussion and Conclusion
Acknowledgements
References
15. Application of Deep Learning Algorithms in Medical Image Processing: A Survey
15.1 Introduction
15.2 Overview of Deep Learning Algorithms
15.2.1 Supervised Deep Neural Networks
15.2.1.1 Convolutional Neural Network
15.2.1.2 Transfer Learning
15.2.1.3 Recurrent Neural Network
15.2.2 Unsupervised Learning
15.2.2.1 Autoencoders
15.2.2.2 GANs
15.3 Overview of Medical Images
15.3.1 MRI Scans
15.3.2 CT Scans
15.3.3 X-Ray Scans
15.3.4 PET Scans
15.4 Scheme of Medical Image Processing
15.4.1 Formation of Image
15.4.2 Image Enhancement
15.4.3 Image Analysis
15.4.4 Image Visualization
15.5 Anatomy-Wise Medical Image Processing With Deep Learning
15.5.1 Brain Tumor
15.5.2 Lung Nodule Cancer Detection
15.5.3 Breast Cancer Segmentation and Detection
15.5.4 Heart Disease Prediction
15.5.5 COVID-19 Prediction
15.6 Conclusion
References
16. Simulation of Self-Driving Cars Using Deep Learning
16.1 Introduction
16.2 Methodology
16.2.1 Behavioral Cloning
16.2.2 End-to-End Learning
16.3 Hardware Platform
16.4 Related Work
16.5 Pre-Processing
16.5.1 Lane Feature Extraction
16.5.1.1 Canny Edge Detector
16.5.1.2 Hough Transform
16.5.1.3 Raw Image Without Pre-Processing
16.6 Model
16.6.1 CNN Architecture
16.6.2 Multilayer Perceptron Model
16.6.3 Regression vs. Classification
16.6.3.1 Regression
16.6.3.2 Classification
16.7 Experiments
16.8 Results
16.9 Conclusion
References
17. Assistive Technologies for Visual, Hearing, and Speech Impairments: Machine Learning and Deep Learning Solutions
17.1 Introduction
17.2 Visual Impairment
17.2.1 Conventional Assistive Technology for the VIP
17.2.1.1 Way Finding
17.2.1.2 Reading Assistance
17.2.2 The Significance of Computer Vision and Deep Learning in AT of VIP
17.2.2.1 Navigational Aids
17.2.2.2 Scene Understanding
17.2.2.3 Reading Assistance
17.2.2.4 Wearables
17.3 Verbal and Hearing Impairment
17.3.1 Assistive Listening Devices
17.3.2 Alerting Devices
17.3.3 Augmentative and Alternative Communication Devices
17.3.3.1 Sign Language Recognition
17.3.4 Significance of Machine Learning and Deep Learning in Assistive Communication Technology
17.4 Conclusion and Future Scope
References
18. Case Studies: Deep Learning in Remote Sensing
18.1 Introduction
18.2 Need for Deep Learning in Remote Sensing
18.3 Deep Neural Networks for Interpreting Earth Observation Data
18.3.1 Convolutional Neural Network
18.3.2 Autoencoder
18.3.3 Restricted Boltzmann Machine and Deep Belief Network
18.3.4 Generative Adversarial Network
18.3.5 Recurrent Neural Network
18.4 Hybrid Architectures for Multi-Sensor Data Processing
18.5 Conclusion
References
Index
WILEY END USER LICENSE AGREEMENT
Excerpt from the book
Scrivener Publishing, 100 Cummings Center, Suite 541J, Beverly, MA 01915-6106
.....
Figure 1.4 shows the hyperplane that separates the two classes.
Figure 1.4 SVM [11].
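As a minimal illustration of the idea behind Figure 1.4 (not taken from the book), the sketch below fits a linear SVM on a toy two-class dataset with scikit-learn and reports the learned hyperplane w·x + b = 0; the data points and the library choice are assumptions made purely for demonstration.

# Minimal sketch: a linear SVM learns a hyperplane w·x + b = 0 that
# separates two classes. Toy data below is illustrative only.
import numpy as np
from sklearn.svm import SVC

# Two small clusters standing in for the two classes in the figure.
X = np.array([[1.0, 2.0], [1.5, 1.8], [2.0, 2.2],   # class 0
              [6.0, 6.5], [6.5, 7.0], [7.0, 6.8]])  # class 1
y = np.array([0, 0, 0, 1, 1, 1])

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# w and b define the separating hyperplane; support vectors lie closest to it.
print("w =", clf.coef_[0], "b =", clf.intercept_[0])
print("support vectors:\n", clf.support_vectors_)
print("prediction for [2, 2]:", clf.predict([[2.0, 2.0]]))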
.....