Deep Learning Approaches to Text Production - Shashi Narayan

Contents

List of Figures

List of Tables

Preface

1 Introduction

1.1 What is Text Production?

1.1.1 Generating Text from Meaning Representations

1.1.2 Generating Text from Data

1.1.3 Generating Text from Text

1.2 Roadmap

1.3 What’s Not Covered?

1.4 Our Notations

PART I Basics

2 Pre-Neural Approaches

2.1 Data-to-Text Generation

2.2 Meaning Representations-to-Text Generation

2.2.1 Grammar-Centric Approaches

2.2.2 Statistical MR-to-Text Generation

2.3 Text-to-Text Generation

2.3.1 Sentence Simplification and Sentence Compression

2.3.2 Document Summarisation

2.4 Summary

3 Deep Learning Frameworks

3.1 Basics

3.1.1 Convolutional Neural Networks

3.1.2 Recurrent Neural Networks

3.1.3 LSTMs and GRUs

3.1.4 Word Embeddings

3.2 The Encoder-Decoder Framework

3.2.1 Learning Input Representations with Bidirectional RNNs

3.2.2 Generating Text Using Recurrent Neural Networks

3.2.3 Training and Decoding with Sequential Generators

3.3 Differences with Pre-Neural Text-Production Approaches

3.4 Summary

PART II Neural Improvements

4 Generating Better Text

4.1 Attention

4.2 Copy

4.3 Coverage

4.4 Summary

5 Building Better Input Representations

5.1 Pitfalls of Modelling Input as a Sequence of Tokens

5.1.1 Modelling Long Text as a Sequence of Tokens

5.1.2 Modelling Graphs or Trees as a Sequence of Tokens

5.1.3 Limitations of Sequential Representation Learning

5.2 Modelling Text Structures

5.2.1 Modelling Documents with Hierarchical LSTMs

5.2.2 Modelling Documents with Ensemble Encoders

5.2.3 Modelling Documents with Convolutional Sentence Encoders

5.3 Modelling Graph Structure

5.3.1 Graph-to-Sequence Model for AMR Generation

5.3.2 Graph-Based Triple Encoder for RDF Generation

5.3.3 Graph Convolutional Networks as Graph Encoders

5.4 Summary

6 Modelling Task-Specific Communication Goals

6.1 Task-Specific Knowledge for Content Selection

6.1.1 Selective Encoding to Capture Salient Information

6.1.2 Bottom-Up Copy Attention for Content Selection

6.1.3 Graph-Based Attention for Salient Sentence Detection

6.1.4 Multi-Instance and Multi-Task Learning for Content Selection

6.2 Optimising Task-Specific Evaluation Metric with Reinforcement Learning

6.2.1 The Pitfalls of Cross-Entropy Loss

6.2.2 Text Production as a Reinforcement Learning Problem

6.2.3 Reinforcement Learning Applications

6.3 User Modelling in Neural Conversational Model

6.4 Summary

PART III Data Sets and Conclusion

7 Data Sets and Challenges

7.1 Data Sets for Data-to-Text Generation

7.1.1 Generating Biographies from Structured Data

7.1.2 Generating Entity Descriptions from Sets of RDF Triples

7.1.3 Generating Summaries of Sports Games from Box-Score Data

7.2 Data Sets for Meaning Representations to Text Generation

7.2.1 Generating from Abstract Meaning Representations

7.2.2 Generating Sentences from Dependency Trees

7.2.3 Generating from Dialogue Moves

7.3 Data Sets for Text-to-Text Generation

7.3.1 Summarisation

7.3.2 Simplification

7.3.3 Compression

7.3.4 Paraphrasing

8 Conclusion

8.1 Summarising

8.2 Overview of Covered Neural Generators

8.3 Two Key Issues with Neural NLG

8.4 Challenges

8.5 Recent Trends in Neural NLG

Bibliography

Authors’ Biographies
