List of Figures

1.1 Input contents and communicative goals for text production

1.2 Shallow dependency tree from generation challenge surface realisation task

1.3 Example input from the SemEval AMR-to-Text Generation Task

1.4 E2E dialogue move and text

1.5 Data-to-Text example input and output pair

2.1 A RoboCup input and output pair example

2.2 Data-to-text: A pipeline architecture

2.3 Simplifying a sentence

2.4 A sentence/compression pair

2.5 Abstractive vs. extractive summarisation

2.6 A document/summary pair from the CNN/DailyMail data set

2.7 Key modules in pre-neural approaches to text production

3.1 Deep learning for text generation

3.2 Feed-forward neural network or multi-layer perceptron

3.3 Convolutional neural network for sentence encoding

3.4 RNNs applied to a sentence

3.5 Long-range dependencies

3.6 Sketches of LSTM and GRU cells

3.7 Two-dimensional representation of word embeddings

3.8 RNN-based encoder-decoder architecture

3.9 The German word “Zeit” with its two translations

3.10 Bidirectional RNNs applied to a sentence

3.11 RNN decoding steps (Continues.)

3.12 (Continued.) RNN decoding steps

3.13 RNN decoding: conditional generation at each step

4.1 Example input/output with missing, added, or repeated information

4.2 Focusing on the relevant source word

4.3 Sketch of the attention mechanism

4.4 Example delexicalisations from the E2E and WebNLG data sets

4.5 Interactions between slot values and sentence planning

4.6 Example of generated text containing repetitions

4.7 The impact of coverage on repetition

4.8 Evolution of the DA vector as generation progresses

5.1 Bidirectional RNN modelling a document as a sequence of tokens

5.2 Linearising AMR for text production

5.3 Linearising RDF to prepare input-output pairs for text production

5.4 Linearising dialogue moves for response generation

5.5 Linearising Wikipedia descriptions for text generation

5.6 Hierarchical document representation for abstractive document summarisation

5.7 Multi-agent document representation for abstractive document summarisation

5.8 Communication among multiple agents, each encoding a paragraph

5.9 Extractive summarisation with a hierarchical encoder-decoder model

5.10 Graph-state LSTMs for text production from AMR graphs

5.11 Graph-triple encoder for text production from RDF triple sets

5.12 Graph convolutional networks for encoding a sentence

6.1 An example of abstractive sentence summarisation

6.2 Selective encoding for abstractive sentence summarisation

6.3 Heat map learned with the selective gate mechanism

6.4 Two-step process for content selection and summary generation

6.5 Graph-based attention to select salient sentences for abstractive summarisation

6.6 Generating a biography from a Wikipedia infobox

6.7 Example of word-property alignments for the Wikipedia abstract and facts

6.8 The exposure bias in cross-entropy trained models

6.9 Text production as a reinforcement learning problem

6.10 Curriculum learning in application

6.11 Deep reinforcement learning for sentence simplification

6.12 Extractive summarisation with reinforcement learning

6.13 Inconsistent responses generated by a sequence-to-sequence model

6.14 Single-speaker model for response generation

6.15 Examples of speaker consistency and inconsistency in the speaker model

6.16 Responses to “Do you love me?” from the speaker-addressee model

7.1 Infobox/text example from the WikiBio data set

7.2 Example data-document pair from the extended WikiBio data set

7.3 Wikidata/text example

7.4 Example data-document pair from the RotoWire data set

7.5 Example input and output from the SemEval AMR-to-Text Generation Task

7.6 Example shallow input from the SR’18 data set

7.7 Example instance from the E2E data set

7.8 Example summary from the NewsRoom data set

7.9 An abridged example from the XSum data set

7.10 PWKP complex and simplified example pairs

7.11 Newsela example simplifications

7.12 GigaWord sentence compression or summarisation example

7.13 Sentence compression example

7.14 Example of abstractive compression from Toutanova et al. [2016]

7.15 Example of abstractive compression from Cohn and Lapata [2008]

7.16 Example paraphrase pairs from ParaNMT-50M

7.17 Examples from the Twitter News URL Corpus

7.18 Paraphrase examples from PIT-2015

7.19 Paraphrase examples from the MSR corpus
