Deep Learning for Physical Scientists – Edward O. Pyzer-Knapp


Table of Contents

Cover

Title Page

Copyright Page

About the Authors

Acknowledgements

1 Preface – Learning to “Think Deep”
1.1 So What Do I Mean by Changing the Way You Think?

2 Setting Up a Python Environment for Deep Learning Projects
2.1 Python Overview
2.2 Why Use Python for Data Science?
2.3 Anaconda Python
2.4 Jupyter Notebooks

3 Modelling Basics
3.1 Introduction
3.2 Start Where You Mean to Go On – Input Definition and Creation
3.3 Loss Functions
3.4 Overfitting and Underfitting
3.5 Regularisation
3.6 Evaluating a Model
3.7 The Curse of Dimensionality
3.8 Summary

4 Feedforward Networks and Multilayered Perceptrons
4.1 Introduction
4.2 The Single Perceptron
4.3 Moving to a Deep Network
4.4 Vanishing Gradients and Other “Deep” Problems
4.5 Improving the Optimisation
4.6 Parallelisation of Learning
4.7 High‐ and Low‐level TensorFlow APIs
4.8 Architecture Implementations
4.9 Summary
4.10 Papers to Read

5 Recurrent Neural Networks
5.1 Introduction
5.2 Basic Recurrent Neural Networks
5.3 Long Short‐Term Memory (LSTM) Networks
5.4 Gated Recurrent Units
5.5 Using Keras for RNNs
5.6 Real World Implementations
5.7 Summary
5.8 Papers to Read

6 Convolutional Neural Networks
6.1 Introduction
6.2 Fundamental Principles of Convolutional Neural Networks
6.3 Graph Convolutional Networks
6.4 Real World Implementations
6.5 Summary
6.6 Papers to Read

7 Auto‐Encoders
7.1 Introduction
7.2 Getting a Good Start – Stacked Auto‐Encoders, Restricted Boltzmann Machines, and Pretraining
7.3 Denoising Auto‐Encoders
7.4 Variational Auto‐Encoders
7.5 Sequence to Sequence Learning
7.6 The Attention Mechanism
7.7 Application in Chemistry: Building a Molecular Generator
7.8 Summary
7.9 Real World Implementations
7.10 Papers to Read

8 Optimising Models Using Bayesian Optimisation
8.1 Introduction
8.2 Defining Our Function
8.3 Grid and Random Search
8.4 Moving Towards an Intelligent Search
8.5 Exploration and Exploitation
8.6 Greedy Search
8.7 Diversity Search
8.8 Bayesian Optimisation
8.9 Summary
8.10 Papers to Read

Case Study 1: Solubility Prediction
CS 1.1 Step 1 – Import Packages
CS 1.2 Step 2 – Importing the Data
CS 1.3 Step 3 – Creating the Inputs
CS 1.4 Step 4 – Splitting into Training and Testing
CS 1.5 Step 5 – Defining Our Model
CS 1.6 Step 6 – Running Our Model
CS 1.7 Step 7 – Automatically Finding an Optimised Architecture Using Bayesian Optimisation

Case Study 2: Time Series Forecasting with LSTMs
CS 2.1 Simple LSTM
CS 2.2 Sequence‐to‐Sequence LSTM

Case Study 3: Deep Embeddings for Auto‐Encoder‐Based Featurisation

Index

End User License Agreement

List of Tables

Chapter 3
Table 3.1 A rule of thumb guide for understanding AUC‐ROC scores.

List of Illustrations

Chapter 3
Figure 3.1 Examples of ROC curves.
Figure 3.2 Optimal strategy without knowing the distribution.
Figure 3.3 Optimal strategy when you know 50% of galaxies are elliptical and...
Figure 3.4 A graphical look at the bias–variance trade‐off.
Figure 3.5 A flow chart for dealing with high bias or high‐variance situatio...
Figure 3.6 Graphical representation of the holdout‐validation algorithm.
Figure 3.7 The effects of different scales on a simple loss function topolog...

Chapter 4
Figure 4.1 An overview of a single perceptron learning.
Figure 4.2 The logistic function.
Figure 4.3 Derivatives of the logistic function.
Figure 4.4 How learning rate can affect the training, and therefore performa...
Figure 4.5 A schematic of a multilayer perceptron.
Figure 4.6 Plot of ReLU activation function.
Figure 4.7 Plot of leaky ReLU activation function.
Figure 4.8 Plot of ELU activation function.
Figure 4.9 Bias allows you to shift the activation function along the X‐axis...
Figure 4.10 Training vs. validation error.
Figure 4.11 Validation error from training model on the Glass dataset.

Chapter 5
Figure 5.1 A schematic of an RNN cell. X and Y are inputs and outputs, respec...
Figure 5.2 Connections in a feedforward layer in an MLP (a) destroy the sequ...
Figure 5.3 An example of how sequential information is stored in a recurrent...
Figure 5.4 A schematic of information flow through an LSTM cell. As througho...
Figure 5.5 An LSTM cell with the flow through the forget gate highlighted.
Figure 5.6 An LSTM cell with the flow through the input gate highlighted.
Figure 5.7 An LSTM cell with the flow through the output gate highlighted.
Figure 5.8 An LSTM cell with peephole connections highlighted.
Figure 5.9 A schematic of information flow through a GRU cell. Here, X refer...

Chapter 6
Figure 6.1 Illustration of convolutional neural network architecture.
Figure 6.2 Illustration of average and max pooling algorithms.
Figure 6.3 Illustration of average and max pooling on face image.
Figure 6.4 Illustration of average and max pooling on handwritten character ...
Figure 6.5 Illustration of the effect of stride on change in data volume.
Figure 6.6 Illustration of stride.
Figure 6.7 Illustration of the impact of sparse connectivity on CNN unit's r...
Figure 6.8 Illustration of graph convolutional network.
Figure 6.9 Example graph.
Figure 6.10 Example adjacency matrix.

Chapter 7
Figure 7.1 A schematic of a shallow auto‐encoder.
Figure 7.2 Representing a neural network as a stack of RBMs for pretraining....
Figure 7.3 Training an auto‐encoder from stacked RBMs. (1) Train a stack of ...
Figure 7.4 Comparison of standard auto‐encoder and variational auto‐encoder....
Figure 7.5 Illustration of sequence to sequence model.

Chapter 8
Figure 8.1 Schematic for greedy search.
Figure 8.2 Bayes' rule.



