Deep Learning for Physical Scientists

Genre: Chemistry. Publisher: John Wiley &amp; Sons Limited. ISBN: 9781119408352.


Book Description

Discover the power of machine learning in the physical sciences with this one-stop resource from a leading voice in the field.

Deep Learning for Physical Scientists: Accelerating Research with Machine Learning delivers an insightful analysis of the transformative techniques being used in deep learning within the physical sciences. The book offers readers the ability to understand, select, and apply the best deep learning techniques for their individual research problems and to interpret the outcomes. Designed to teach researchers to think in useful new ways about how to achieve results, it provides scientists with new avenues to attack problems and helps them avoid common pitfalls. Practical case studies and problems give readers an opportunity to put what they have learned into practice, with exemplar coding approaches provided to assist the reader. From modelling basics to feed-forward networks, the book offers a broad cross-section of machine learning techniques to improve physical science research. Readers will also enjoy:

• A thorough introduction to basic classification and regression with perceptrons
• An exploration of training algorithms, including back propagation and stochastic gradient descent, and the parallelization of training
• An examination of multi-layer perceptrons for learning from descriptors and de-noising data
• Discussions of recurrent neural networks for learning from sequences and convolutional neural networks for learning from images
• A treatment of Bayesian optimization for tuning deep learning architectures

Perfect for academic and industrial research professionals in the physical sciences, Deep Learning for Physical Scientists: Accelerating Research with Machine Learning will also earn a place in the libraries of industrial researchers who have access to large amounts of data but have yet to learn the techniques to fully exploit that access.

Contents

Edward O. Pyzer-Knapp. Deep Learning for Physical Scientists

Table of Contents

List of Tables

List of Illustrations

Guide

Pages

Deep Learning for Physical Scientists. Accelerating Research with Machine Learning

About the Authors

Acknowledgements

1 Prefix – Learning to “Think Deep”

1.1 So What Do I Mean by Changing the Way You Think?

Key Features of Thinking Deep

2 Setting Up a Python Environment for Deep Learning Projects. 2.1 Python Overview

2.2 Why Use Python for Data Science?

2.3 Anaconda Python. 2.3.1 Why Use Anaconda?

2.3.2 Downloading and Installing Anaconda Python

Conda vs. Mini‐conda

2.3.2.1 Installing TensorFlow. 2.3.2.1.1 Without GPU

2.3.2.1.2 With GPU

2.4 Jupyter Notebooks

2.4.1 Why Use a Notebook?

2.4.2 Starting a Jupyter Notebook Server

2.4.3 Adding Markdown to Notebooks

2.4.4 A Simple Plotting Example

2.4.5 Summary

3 Modelling Basics. 3.1 Introduction

3.2 Start Where You Mean to Go On – Input Definition and Creation

3.3 Loss Functions

3.3.1 Classification and Regression

3.3.2 Regression Loss Functions. 3.3.2.1 Mean Absolute Error

3.3.2.2 Root Mean Squared Error

3.3.3 Classification Loss Functions

3.3.3.1 Precision

3.3.3.2 Recall

3.3.3.3 F1 Score

3.3.3.4 Confusion Matrix

3.3.3.5 (Area Under) Receiver Operator Curve (AU‐ROC)

3.3.3.6 Cross Entropy

3.4 Overfitting and Underfitting

3.4.1 Bias–Variance Trade‐Off

Definitions

A Quick Aside

3.5 Regularisation

3.5.1 Ridge Regression

3.5.2 LASSO Regularisation

3.5.3 Elastic Net

3.5.4 Bagging and Model Averaging

3.6 Evaluating a Model. 3.6.1 Holdout Testing

3.6.2 Cross Validation

3.7 The Curse of Dimensionality

3.7.1 Normalising Inputs and Targets

3.8 Summary

Notes

4 Feedforward Networks and Multilayered Perceptrons. 4.1 Introduction

4.2 The Single Perceptron. 4.2.1 Training a Perceptron

4.2.2 Activation Functions

4.2.3 Back Propagation

4.2.3.1 Weight Initialisation

4.2.3.2 Learning Rate

4.2.4 Key Assumptions

4.2.5 Putting It All Together in TensorFlow

4.3 Moving to a Deep Network

4.4 Vanishing Gradients and Other “Deep” Problems

4.4.1 Gradient Clipping

4.4.2 Non‐saturating Activation Functions. 4.4.2.1 ReLU

4.4.2.2 Leaky ReLU

4.4.2.3 ELU

4.4.3 More Complex Initialisation Schemes

4.4.3.1 Xavier

4.4.3.2 He

4.4.4 Mini Batching

4.5 Improving the Optimisation. 4.5.1 Bias

4.5.2 Momentum

4.5.3 Nesterov Momentum

4.5.4 (Adaptive) Learning Rates

4.5.5 AdaGrad

4.5.6 RMSProp

4.5.7 Adam

4.5.8 Regularisation

4.5.9 Early Stopping

4.5.10 Dropout

4.6 Parallelisation of Learning. 4.6.1 Hogwild!

4.7 High and Low‐level TensorFlow APIs

4.8 Architecture Implementations

4.9 Summary

4.10 Papers to Read

5 Recurrent Neural Networks. 5.1 Introduction

5.2 Basic Recurrent Neural Networks

5.2.1 Training a Basic RNN

5.2.2 Putting It All Together in TensorFlow

5.2.3 The Problem with Vanilla RNNs

5.3 Long Short‐Term Memory (LSTM) Networks

5.3.1 Forget Gate

5.3.2 Input Gate

5.3.3 Output Gate

5.3.4 Peephole Connections

5.3.5 Putting It All Together in TensorFlow

5.4 Gated Recurrent Units

5.4.1 Putting It All Together in TensorFlow

5.5 Using Keras for RNNs

5.6 Real World Implementations

5.7 Summary

5.8 Papers to Read

6 Convolutional Neural Networks. 6.1 Introduction

6.2 Fundamental Principles of Convolutional Neural Networks. 6.2.1 Convolution

6.2.2 Pooling

6.2.2.1 Why Use Pooling?

6.2.2.2 Types of Pooling. 6.2.2.2.1 Average Pooling

6.2.2.2.2 Max Pooling

6.2.2.2.3 Global Pooling

6.2.3 Stride and Padding

6.2.4 Sparse Connectivity

6.2.5 Parameter Sharing

6.2.6 Convolutional Neural Networks with TensorFlow

6.3 Graph Convolutional Networks

6.3.1 Graph Convolutional Networks in Practice

6.4 Real World Implementations

6.5 Summary

6.6 Papers to Read

7 Auto‐Encoders. 7.1 Introduction

7.1.1 Auto‐Encoders for Dimensionality Reduction

7.2 Getting a Good Start – Stacked Auto‐Encoders, Restricted Boltzmann Machines, and Pretraining

7.2.1 Restricted Boltzmann Machines

7.2.2 Stacking Restricted Boltzmann Machines

7.3 Denoising Auto‐Encoders

7.4 Variational Auto‐Encoders

7.5 Sequence to Sequence Learning

7.6 The Attention Mechanism

7.7 Application in Chemistry: Building a Molecular Generator

7.8 Summary

7.9 Real World Implementations

7.10 Papers to Read

8 Optimising Models Using Bayesian Optimisation. 8.1 Introduction

8.2 Defining Our Function

8.3 Grid and Random Search

8.4 Moving Towards an Intelligent Search

8.5 Exploration and Exploitation

8.6 Greedy Search

8.6.1 Key Fact One – Exploitation Heavy Search is Susceptible to Initial Data Bias

8.7 Diversity Search

8.8 Bayesian Optimisation

8.8.1 Domain Knowledge (or Prior)

8.8.2 Gaussian Processes

8.8.3 Kernels

8.8.3.1 Stationary Kernels. 8.8.3.1.1 RBF Kernel

8.8.3.2 Noise Kernel

8.8.4 Combining Gaussian Process Prediction and Optimisation

8.8.4.1 Probability of Improvement

8.8.4.2 Expected Improvement

8.8.5 Balancing Exploration and Exploitation

8.8.6 Upper and Lower Confidence Bound Algorithm

8.8.7 Maximum Entropy Sampling

8.8.8 Optimising the Acquisition Function

8.8.9 Cost Sensitive Bayesian Optimisation

8.8.10 Constrained Bayesian Optimisation

8.8.11 Parallel Bayesian Optimisation. 8.8.11.1 qEI

8.8.11.2 Constant Liar and Kriging Believer

8.8.11.3 Local Penalisation

8.8.11.4 Parallel Thompson Sampling

8.8.11.5 K‐Means Batch Bayesian Optimisation

8.9 Summary

8.10 Papers to Read

Case Study 1 Solubility Prediction Case Study

CS 1.1 Step 1 – Import Packages

CS 1.2 Step 2 – Importing the Data

CS 1.3 Step 3 – Creating the Inputs

CS 1.4 Step 4 – Splitting into Training and Testing

CS 1.5 Step 5 – Defining Our Model

CS 1.6 Step 6 – Running Our Model

CS 1.7 Step 7 – Automatically Finding an Optimised Architecture Using Bayesian Optimisation

Case Study 2 Time Series Forecasting with LSTMs

CS 2.1 Simple LSTM

CS 2.2 Sequence‐to‐Sequence LSTM

Case Study 3 Deep Embeddings for Auto‐Encoder‐Based Featurisation

Index

a

b

c

d

e

f

g

h

i

j

k

l

m

n

o

p

q

r

s

t

u

v

w

x

y

WILEY END USER LICENSE AGREEMENT

Excerpt from the Book

Edward O. Pyzer‐Knapp

.....

$> pip install tensorflow

We recommend sticking to conda install commands to ensure package compatibility with your conda environment. However, a few of the earlier examples make use of TensorFlow 1's low‐level application programming interface (API) to illustrate lower‐level concepts. For compatibility, this earlier low‐level API can be used by including the following at the top of your script:

.....
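The snippet itself is elided in this excerpt. As a minimal sketch of the standard approach (an assumption, not necessarily the book's exact code), the TensorFlow 2 compatibility module can be imported as follows, after which TensorFlow 1 graph-and-session code runs as written:

import tensorflow.compat.v1 as tf

# Disable eager execution and other TensorFlow 2 behaviours so that
# TensorFlow 1 style graph/session code works unchanged.
tf.disable_v2_behavior()

# Example of TensorFlow 1 low-level usage under the compatibility layer:
# build a small graph with a placeholder, then evaluate it in a session.
a = tf.placeholder(tf.float32, shape=(None,))
b = a * 2.0
with tf.Session() as sess:
    print(sess.run(b, feed_dict={a: [1.0, 2.0, 3.0]}))

For reference, the conda counterpart of the pip command shown above would typically be conda install tensorflow, run inside the active conda environment.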
