Stephen Winters-Hilt
Informatics and Machine Learning
Table of Contents
List of Tables
List of Illustrations
Guide
1 Introduction
1.1 Data Science: Statistics, Probability, Calculus … Python (or Perl) and Linux
1.2 Informatics and Data Analytics
1.3 FSA‐Based Signal Acquisition and Bioinformatics
1.4 Feature Extraction and Language Analytics
1.5 Feature Extraction and Gene Structure Identification
1.5.1 HMMs for Analysis of Information Encoding Molecules
1.5.2 HMMs for Cheminformatics and Generic Signal Analysis
1.6 Theoretical Foundations for Learning
1.7 Classification and Clustering
1.8 Search
1.9 Stochastic Sequential Analysis (SSA) Protocol (Deep Learning Without NNs)
1.9.1 Stochastic Carrier Wave (SCW) Analysis – Nanoscope Signal Analysis
1.9.2 Nanoscope Cheminformatics – A Case Study for Device “Smartening”
1.10 Deep Learning using Neural Nets
1.11 Mathematical Specifics and Computational Implementations
2 Probabilistic Reasoning and Bioinformatics
2.1 Python Shell Scripting
2.1.1 Sample Size Complications
2.2 Counting, the Enumeration Problem, and Statistics
2.3 From Counts to Frequencies to Probabilities
2.4 Identifying Emergent/Convergent Statistics and Anomalous Statistics
2.5 Statistics, Conditional Probability, and Bayes' Rule
2.5.1 The Calculus of Conditional Probabilities: The Cox Derivation
2.5.2 Bayes' Rule
2.5.3 Estimation Based on Maximal Conditional Probabilities
2.6 Emergent Distributions and Series
2.6.1 The Law of Large Numbers (LLN)
2.6.2 Distributions
2.6.2.1 The Geometric Distribution (Emergent Via Maxent)
2.6.2.2 The Gaussian (aka Normal) Distribution (Emergent Via LLN Relation and Maxent)
2.6.2.3 Significant Distributions That Are Not Gaussian or Geometric
2.6.3 Series
2.7 Exercises
3 Information Entropy and Statistical Measures
3.1 Shannon Entropy, Relative Entropy, Maxent, Mutual Information
3.1.1 The Khinchin Derivation
3.1.2 Maximum Entropy Principle
3.1.3 Relative Entropy and Its Uniqueness
3.1.4 Mutual Information
3.1.5 Information Measures Recap
3.2 Codon Discovery from Mutual Information Anomaly
3.3 ORF Discovery from Long‐Tail Distribution Anomaly
3.3.1 Ab initio Learning with smORF's, Holistic Modeling, and Bootstrap Learning
3.4 Sequential Processes and Markov Models
3.4.1 Markov Chains
3.5 Exercises