CHAPTER 2

Linguistic Processing

2.1 INTRODUCTION

There are a number of low-level linguistic tasks which form the basis of more complex language processing algorithms. In this chapter, we first explain the main approaches used for NLP tasks, and the concept of an NLP processing pipeline, giving examples of some of the major open source toolkits. We then describe in more detail the various linguistic processing components that are typically used in such a pipeline, and explain the role and significance of this pre-processing for Semantic Web applications. For each component in the pipeline, we describe its function and show how it connects with and builds on the previous components. At each stage, we provide examples of tools and describe their typical performance, along with some of the challenges and pitfalls associated with each component. Specific adaptations of these tools to non-standard text such as social media, and in particular Twitter, will be discussed in Chapter 8.

2.2 APPROACHES TO LINGUISTIC PROCESSING

There are two main kinds of approach to linguistic processing tasks: a knowledge-based approach and a learning approach, though the two may also be combined. There are advantages and disadvantages to each approach, summarized in Table 2.1.

Knowledge-based or rule-based approaches are largely the more traditional methods, and in many cases have been superseded by machine learning approaches now that processing vast quantities of data quickly and efficiently is less of a problem than in the past. Knowledge-based approaches are based on hand-written rules typically written by NLP specialists, and require knowledge of the grammar of the language and linguistic skills, as well as some human intuition. These approaches are most useful when the task can easily be defined by rules (for example: “a proper noun always starts with a capital letter”). Typically, exceptions to such rules can be easily encoded too. When the task cannot so easily be defined in this way (for example, on Twitter, people often do not use capital letters for proper nouns), then this method becomes more problematic. One big advantage of knowledge-based approaches is that it is quite easy to understand the results. When the system incorrectly identifies something, the developer can check the rules and find out why the error has occurred, and potentially then correct the rules or write additional rules to resolve the problem. Writing rules can, however, be quite time-consuming, and if specifications for the task change, the developer may have to rewrite many rules.

Machine learning approaches have become more popular recently with the advent of powerful machines, and because no domain expertise or linguistic knowledge is required. One can set up a supervised system very quickly if sufficient training data is available, and get reasonable results with very little effort. However, acquiring or creating sufficient training data is often extremely problematic and time-consuming, especially if it has to be done manually. This dependency on training data also means that adaptation to new types of text, domain, or language is likely to be expensive, as it requires a substantial amount of new training data. Human-readable rules therefore tend to be easier to adapt to new languages and text types than statistically learned models. The problem of insufficient training data can be handled by incorporating unsupervised or semi-supervised methods for machine learning: these will be discussed further in Chapters 3 and 4. However, these typically produce less accurate results than supervised learning.

Table 2.1: Summary of knowledge-based vs. machine learning approaches to NLP

Knowledge-Based Systems                   | Machine Learning Systems
Based on hand-coded rules                 | Use statistics or other machine learning
Developed by NLP specialists              | Developers do not need NLP expertise
Make use of human intuition               | Require large amounts of training data
Easy to understand results                | Cause of errors is hard to understand
Development can be very time-consuming    | Development is quick and easy
Changes may require rewriting rules       | Changes may require re-annotation

2.3 NLP PIPELINES

An NLP pre-processing pipeline, as shown in Figure 2.1, typically consists of the following components:

• Tokenization.

• Sentence splitting.

• Part-of-speech tagging.

• Morphological analysis.

• Parsing and chunking.

Figure 2.1: A typical linguistic pre-processing pipeline.

The first task is typically tokenization, followed by sentence splitting, to chop the text into tokens (typically words, numbers, punctuation, and spaces) and sentences respectively. Part-of-speech (POS) tagging assigns a syntactic category to each token. When dealing with multilingual text such as tweets, an additional step of language identification may first be added before these take place, as discussed in Chapter 8. Morphological analysis is not compulsory, but is frequently used in a pipeline, and essentially consists of finding the root form of each word (a slightly more sophisticated form of stemming or lemmatization). Finally, parsing and/or chunking tools may be used to analyze the text syntactically, identifying things like noun and verb phrases in the case of chunking, or performing a more detailed analysis of grammatical structure in the case of parsing.
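To make this flow concrete, the following is a minimal sketch of such a pipeline in Python using NLTK, one of the toolkits introduced below. The example sentence, the choice of components, and the simple NP chunk pattern are ours, purely for illustration, and the required model packages must be downloaded once (package names may vary slightly between NLTK versions).

```python
import nltk

# One-time model downloads, e.g.:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger'); nltk.download('wordnet')
lemmatizer = nltk.stem.WordNetLemmatizer()
chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")   # very shallow "parsing"

text = "The prisoners were convicted of minor offences. They appealed."

# NLTK's convenience functions split sentences on the raw text first and then
# tokenize each sentence, but the overall set of steps mirrors Figure 2.1.
for sentence in nltk.sent_tokenize(text):                 # sentence splitting
    tokens = nltk.word_tokenize(sentence)                 # tokenization
    tagged = nltk.pos_tag(tokens)                         # part-of-speech tagging
    roots = [lemmatizer.lemmatize(tok.lower()) for tok in tokens]  # crude morphology
    print(tagged)
    print(roots)
    print(chunker.parse(tagged))                          # chunking
```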

Concerning toolkits, GATE [4] provides a number of open-source linguistic pre-processing components under the LGPL license. It contains a ready-made pipeline for Information Extraction, called ANNIE, and also a large number of additional linguistic processing tools such as a selection of different parsers. While GATE does provide functionality for machine learning-based components, ANNIE is mostly knowledge-based, making for easy adaptation. Additional resources can be added via the plugin mechanism, including components from other pipelines such as the Stanford CoreNLP Tools. GATE components are all Java-based, which makes for easy integration and platform independence.

Stanford CoreNLP [5] is another open-source annotation pipeline framework, available under the GPL license, which can perform all the core linguistic processing described in this section via a simple Java API. One of its main advantages is that it can be used on the command line without having to understand more complex frameworks such as GATE or UIMA, and this simplicity, along with the generally high quality of results, makes it widely used where simple linguistic information such as POS tags is required. Like ANNIE, most of the components other than the POS tagger are rule-based.

OpenNLP1 is an open-source machine learning-based toolkit for language processing, which uses maximum entropy and Perceptron-based classifiers. It is freely available under the Apache license. Like Stanford CoreNLP, it can be run on the command line or via a simple Java API. As in most other pipelines, the components further down the pipeline mainly rely on tokens and sentences; slightly unusually, however, the sentence splitter can be run either before or after the tokenizer.

NLTK [6] is an open-source Python-based toolkit, available under the Apache license, which is also very popular due to its simplicity and command-line interface, particularly where Java-based tools are not a requirement. It provides a number of different variants for some components, both rule-based and learning-based.

In the rest of this chapter, we will describe the individual pipeline components in more detail, using the relevant tools from these pipelines as examples.

2.4 TOKENIZATION

Tokenization is the task of splitting the input text into very simple units, called tokens, which generally correspond to words, numbers, and symbols, and are typically separated by white space in English. Tokenization is a required step in almost any linguistic processing application, since more complex algorithms such as part-of-speech taggers mostly require tokens as their input, rather than using the raw text. Consequently, it is important to use a high-quality tokenizer, as errors are likely to affect the results of all subsequent NLP components in the pipeline. Commonly distinguished types of tokens are numbers, symbols (e.g., $, %), punctuation, and words of different kinds, e.g., uppercase, lowercase, mixed case. A representation of a tokenized sentence is shown in Figure 2.2, where each pink rectangle corresponds to a token.

Figure 2.2: Representation of a tokenized sentence.

Tokenizers may add a number of features describing the token. These include details of orthography (e.g., whether they are capitalized or not), and more information about the kind of token (whether it is a word, number, punctuation, etc.). Other components may also add features to the existing token annotations, such as their syntactic category, details of their morphology, and any cleaning or normalization (such as correcting a misspelled word). These will be described in subsequent sections and chapters. Figure 2.3 shows a token for the word offences in the previous example with some features added: the kind of token is a word, it is 8 characters long, and the orthography is lowercase.

Tokenizing well-written text is generally reliable and reusable, since it tends to be domain-independent. However, such general-purpose tokenizers typically need to be adapted to work correctly with things like chemical formulae, Twitter messages, and other more specific text types. Another non-standard case is hyphenated words in English, which some tools treat as a single token and others as three (the two words, plus the hyphen itself). Some systems also perform a more complex tokenization that takes into account number combinations such as dates and times (for example, treating 07:56 as a single token); other tools leave this to later processing stages, such as a Named Entity Recognition component. Another issue is the apostrophe: for example, in cases where an apostrophe denotes a missing letter and effectively joins two words without a space between them, such as it's, or in French l'homme. German compound nouns suffer the opposite problem, since many words can be written together without a space. For German tokenizers, an extra module which splits compounds into their constituent parts can therefore be very useful, in particular for retrieval purposes. Such a segmentation module is also critical for defining word boundaries in many East Asian languages such as Chinese, which have no notion of white space between words.

Figure 2.3: Representation of a token with features.
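A quick illustration of the edge cases discussed above, using NLTK's default Penn Treebank-style word tokenizer (the example sentence is ours, and exact behavior may vary between versions):

```python
import nltk

# Requires the 'punkt' model, e.g., nltk.download('punkt').
print(nltk.word_tokenize("It's a state-of-the-art tokenizer."))
# Treebank-style tokenizers typically split "It's" into "It" + "'s" while
# keeping hyphenated words such as "state-of-the-art" as a single token;
# other tokenizers make different choices.
```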

Because tokenization generally follows a rigid set of constraints about what constitutes a token, pattern-based rule matching approaches are frequently used for these tools, although some tools do use other approaches. The OpenNLP TokenizerME,2 for example, is a trainable maximum entropy tokenizer. It uses a statistical model, based on a training corpus, and can be re-trained on a new corpus.

GATE’s ANNIE Tokenizer3 relies on a set of regular expression rules which are then compiled into a finite-state machine. It differs somewhat from most other tokenizers in that it maximizes efficiency by doing only very light processing, and enabling greater flexibility by placing the burden of deeper processing on other components later in the pipeline, which are more adaptable. The generic version of the ANNIE tokenizer is based on Unicode4 and can be used for any language which has similar notions of token and white space to English (i.e., most Western languages). The tokenizer can be adapted for different languages either by modifying the existing rules, or by adding some extra post-processing rules. For English, a specialized set of rules is available, dealing mainly with use of apostrophes in words such as don’t.

The PTBTokenizer5 is an efficient, fast, and deterministic tokenizer, which forms part of the suite of Stanford CoreNLP tools. It was initially designed to largely mimic Penn Treebank 3 (PTB) tokenization, hence its name. Like the ANNIE Tokenizer, it works well for English and other Western languages, but works best on formal text. While deterministic, it uses some quite good heuristics, so as with ANNIE, it can usually decide when single quotes are parts of words, when full stops imply sentence boundaries, and so on. It is also quite customizable, in that there are a number of options that can be tweaked.

NLTK6 also has several tokenizers similar to ANNIE's, including one based on regular expressions, written in Python.
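As a sketch of how such a regular-expression tokenizer can be configured, the following uses NLTK's RegexpTokenizer with an illustrative pattern of our own (not one shipped with any of the toolkits) that keeps times such as 07:56 and simple hyphenated or apostrophized words as single tokens:

```python
from nltk.tokenize import RegexpTokenizer

# Pattern (illustrative only): times, then words optionally joined by
# hyphens or apostrophes, then any other non-space symbol.
pattern = r"\d{1,2}:\d{2}|\w+(?:[-']\w+)*|[^\w\s]"
tokenizer = RegexpTokenizer(pattern)
print(tokenizer.tokenize("The 07:56 train to Lyon wasn't on time."))
```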

2.5 SENTENCE SPLITTING

Sentence detection (or sentence splitting) is the task of separating text into its constituent sentences. This typically involves determining whether punctuation, such as full stops, commas, exclamation marks, and question marks, denotes the end of a sentence or something else (quoted speech, abbreviations, etc.). Most sentence splitters use lists of abbreviations to help determine this: a full stop typically denotes the end of a sentence unless it follows an abbreviation such as Mr., or lies within quotation marks. Other issues involve determining sentence structure when line breaks are used, such as in addresses or in bulleted lists. Sentence splitters vary in how such things are handled.

More complex cases arise when the text contains tables, titles, formulae, or other formatting markup: these are usually the biggest source of error. Some splitters ignore these completely, requiring a punctuation mark as a sentence boundary. Others use two consecutive new lines or carriage returns as an indication of a sentence end, while there are also cases where even a single newline or carriage return character indicates the end of a sentence (e.g., comments in software code or bulleted/numbered lists which have one entry per line). GATE's ANNIE sentence splitter actually provides several variants in order to let the user decide which is the most appropriate solution for their particular text. HTML formatting tags, Twitter hashtags, wiki syntax, and other such special text types are also somewhat problematic for general-purpose sentence splitters which have been trained on well-written corpora, typically newspaper texts. Note that sometimes tokenization and sentence splitting are performed as a single task rather than sequentially.

Sentence splitters generally make use of already tokenized text. GATE’s ANNIE sentence splitter uses a rule-based approach based on GATE’s JAPE pattern-action rule-writing language [7]. The rules are based entirely on information produced by the tokenizer and some lists of common abbreviations, and can easily be modified as necessary. Several variants are provided, as mentioned above.

Unlike ANNIE, the OpenNLP sentence splitter is typically run before the tokenization module. It uses a machine learning approach, with the supplied models trained on untokenized text, although it is also possible to perform tokenization first and let the sentence splitter process the already tokenized text. One flaw in the OpenNLP splitter is that, because it cannot identify sentence boundaries based on the content of the sentence, it may make errors on articles which have titles, since these are mistakenly identified as part of the first sentence.

NLTK uses the Punkt sentence segmenter [8]. This uses a language-independent, unsupervised approach to sentence boundary detection, based on identifying abbreviations, initials, and ordinal numbers. Its abbreviation detection, unlike most sentence splitters, does not rely on precompiled lists, but is instead based on methods for collocation detection such as log-likelihood.
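Punkt can be tried directly through NLTK's sent_tokenize convenience function, as in the short sketch below (the example text is ours, and the handling of specific abbreviations may vary with the trained model):

```python
import nltk

# Requires the 'punkt' model, e.g., nltk.download('punkt').
# The abbreviation "Mr." and the time expression "10 a.m." should not
# trigger spurious sentence boundaries.
text = ("Mr. Smith arrived at 10 a.m. on Tuesday. "
        "He was carrying a copy of the report.")
for sentence in nltk.sent_tokenize(text):
    print(sentence)
```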

Stanford CoreNLP makes use of tokenized text and a set of binary decision trees to decide where sentence boundaries should go. As with the ANNIE sentence splitter, the main problem it tries to resolve is deciding whether a full stop denotes the end of a sentence or not.

In some studies, the Stanford splitter scored the highest accuracy out of common sentence splitters, although performance will of course vary depending on the nature of the text. State-of-the-art sentence splitters such as the ones described score about 95–98% accuracy on well-formed text. As with most linguistic processing tools, each one has strengths and weaknesses which are often linked to specific features of the text; for example, some splitters may perform better on abbreviations but worse on quoted speech than others.

2.6 POS TAGGING

Part-of-Speech (POS) tagging is concerned with tagging words with their part of speech, e.g., noun, verb, adjective. These basic linguistic categories are typically divided into quite fine-grained tags, distinguishing for instance between singular and plural nouns, and different tenses of verbs. For languages other than English, gender may also be included in the tag. The set of possible tags used is critical and varies between different tools, making interoperability between different systems tricky. One very commonly used tagset for English is the Penn Treebank (PTB) [9]; other popular sets include those derived from the Brown corpus [10] and the LOB (Lancaster-Oslo/Bergen) Corpus [11], respectively. Figure 2.4 shows an example of some POS-tagged text, using the PTB tagset.

Figure 2.4: Representation of a POS-tagged sentence.

The POS tag is determined by taking into account not just the word itself, but also the context in which it appears. This is because many words are ambiguous, and reference to a lexicon is insufficient to resolve this. For example, the word love could be a noun or verb depending on the context (I love fish vs. Love is all you need).

Approaches to POS tagging typically use machine learning, because it is quite difficult to describe all the rules needed for determining the correct tag given a context (although rule-based methods have been used). Some of the most common and successful approaches use Hidden Markov models (HMMs) or maximum entropy. The Brill transformational rule-based tagger [12], which uses the PTB tagset, is one of the most well-known taggers, used in several major NLP toolkits. It uses a default lexicon and ruleset acquired from a large corpus of training data via machine learning. Similarly, the OpenNLP POS tagger also uses a model learned from a training corpus to predict the correct POS tag from the PTB tagset. It can be trained with either a Maximum Entropy or a Perceptron-based model. The Stanford POS tagger is also based on a Maximum Entropy approach [13] and makes use of the PTB tagset. The TNT (Trigrams’n’Tags) tagger [14] is a fast and efficient statistical tagger using an implementation of the Viterbi algorithm for second-order Markov models.

In terms of major NLP toolkits, some (such as Stanford CoreNLP) have their own POS taggers, as described above, while others use existing implementations or variants on them. For example, NLTK has Python implementations of the Brill tagger, the Stanford tagger, and the TNT tagger. GATE’s ANNIE English POS tagger [15] is a modified version of the Brill tagger trained on a large corpus taken from the Wall Street Journal. It produces a POS tag as an annotation on each word or symbol. One of the big advantages of this tagger is that the lexicon can easily be modified manually by adding new words or changing the value or order of the possible tags associated with a word. It can also be retrained on a new corpus, although this requires a large pre-tagged corpus of text in the relevant domain/genre, which is not easy to find.
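The ambiguity example from the beginning of this section can be reproduced with NLTK's default perceptron-based tagger; the exact tags produced may vary between tagger versions, so the expected output below is indicative only:

```python
import nltk

# Requires 'punkt' and 'averaged_perceptron_tagger' (via nltk.download).
for sentence in ["I love fish", "Love is all you need"]:
    print(nltk.pos_tag(nltk.word_tokenize(sentence)))
# "love" should come out as a verb (e.g., VBP) in the first sentence and
# as a noun (e.g., NN or NNP) in the second.
```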

The accuracy of these general-purpose, reusable taggers is typically excellent (97–98%) on texts similar to those on which the taggers have been trained (mostly news articles). However, the accuracy can fall sharply when presented with new domains, genres, or noisier data such as social media. This can have a serious knock-on effect on other processes further down the pipeline such as Named Entity recognition, ontology learning via lexico-syntactic patterns, relation and event extraction, and even opinion mining, which all need reliable POS tags in order to produce high-quality results.

2.7 MORPHOLOGICAL ANALYSIS AND STEMMING

Morphological analysis essentially concerns the identification and classification of the linguistic units of a word, typically breaking the word down into its root form and an affix. For example, the verb walked comprises a root form walk and an affix -ed. In English, morphological analysis is typically applied to verbs and nouns, because these may appear in the text as variants created by inflectional morphology. Inflectional morphology refers to the different forms of words reflected by mood, tense, number, and so on, such as the past tense of a verb or the plural of a noun. Inflection in English is typically expressed by adding a suffix to the root form (e.g., walk, walked; box, boxes) or by an internal modification such as a vowel change (e.g., run, ran; goose, geese). In other languages, prefixes (adding to the beginning of a word), infixes (adding in the middle of a word), and other changes may be used. Some morphological analysis tools represent these internal modifications as an alternative representation of the default affix. That is, if the plural of a noun is commonly represented by adding -s as a suffix, the output of the tool will show the value of the affix as -s even for plural forms such as geese: an irregular vowel-change form is treated simply as a surface representational variant of the standard affix. The GATE morphological analyzer, for example, depicts the word geese as having the root goose and affix -s.

Typically, NLP tools which perform morphological analysis deal only with inflectional morphology, as described above, and do not handle derivational morphology. Derivation is the process of adding derivational morphemes, which create a new word from existing words, usually involving a change in grammatical category (for example, creating the noun worker from the verb work, or the noun loudness from the adjective loud).

Morphological analyzers for English are often rule-based, since the majority of inflectional variants follow grammatical rules and set patterns (for example, plural nouns are typically created by adding -s or -es to the end of the singular noun). Exceptions can also be handled quite easily by rules, and unknown words are assumed to follow default rules. The English morphological analyzer in GATE is rule-based, with the rule language (flex) supporting rules and variables that can be used in regular expressions within the rules. POS tags can be taken into account if desired, depending on a configuration parameter. The analyzer takes as input a tokenized document and, considering one token and its POS tag at a time, identifies its lemma and affix. These values are then added as features of the token.

The Stanford Morphology tool also uses a rule-based approach, is based on a finite-state transducer, and is written in flex. Unlike the GATE tool, however, it requires the use of POS tags as well as tokens, and generates lemmas but not affixes.

NLTK provides an implementation of morphological analysis based on WordNet’s built-in morphy function. WordNet [16] is a large lexical database of English resembling a thesaurus, where nouns, verbs, adjectives, and adverbs are grouped into sets of cognitive synonyms (synsets), each expressing a concept. Synsets are interlinked by means of conceptual-semantic and lexical relations. The morphy function is designed to allow users to query an inflectional form against a base form listed in WordNet. It uses a rule-based method involving lists of inflectional endings, based on syntactic category, and an exception list for each syntactic category, in which a search for an inflected form is done. Like the Stanford tool, it returns only the lemma but not the affix. Furthermore, it can only handle words present in WordNet.
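A brief sketch of this WordNet-based analysis in NLTK follows; the example words are ours, and the expected outputs in the comments are typical rather than guaranteed:

```python
from nltk.corpus import wordnet
from nltk.stem import WordNetLemmatizer

# Requires the 'wordnet' data package (via nltk.download).
# morphy maps an inflected form to a base form listed in WordNet,
# given its syntactic category.
print(wordnet.morphy("geese", wordnet.NOUN))    # expected: 'goose'
print(wordnet.morphy("walked", wordnet.VERB))   # expected: 'walk'

# WordNetLemmatizer is essentially a thin wrapper around morphy; note that
# only the lemma is returned, not the affix.
lemmatizer = WordNetLemmatizer()
print(lemmatizer.lemmatize("boxes", pos="n"))   # expected: 'box'
```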

OpenNLP does not currently provide any tools for morphological analysis.

2.7.1 STEMMING

Stemmers produce the stem form of each word, e.g., driving and drivers have the stem drive, whereas morphological analysis tends to produce the root/lemma forms of the words and their affixes, e.g., drive and driver for the above examples, with affixes -ing and -s respectively. There is much confusion about the difference between stemming and morphological analysis, due to the fact that stemmers can vary considerably in how they operate and in their output. In general, stemmers do not attempt to perform an analysis of the root or stem and its affix, but simply strip the word down to its stem. The main way in which stemmers themselves vary is in the presence or absence of the constraint that the stem must also be a real word in the given language. Basic stemming algorithms simply strip off the affix, e.g., driving would be stripped to the stem driv- by removing the suffix -ing. The distinction between verbs and nouns is often not maintained, so both driver and driving would be stripped down to the stem driv-. Information retrieval (IR) systems often make use of this kind of suffix stripping, since it can be performed by a simple algorithm and does not require other linguistic pre-processing such as POS tagging. Stemming is useful for IR systems because it brings together lexico-syntactic variants of a word which have a common meaning (so one can use either the singular or plural form of a word in the search query, and it will match against either form in a web page). Note that unlike most morphological analysis tools, stemming tools may also consider variants arising from derivational morphology, since they ignore the syntactic category of the word. A further difference is that stemmers typically do not refer to the context surrounding the word, but only to the word in isolation, while morphological analyzers may also use the context.

Figure 2.5 shows an example of how stemming and morphological analysis may differ. The stemmer in GATE strips off the derivational affix -ness, reducing the noun loudness to the base adjective loud, as shown by the stem feature. The morphological analyzer, on the other hand, is not concerned with derivational morphology, and leaves the word in its entirety, as shown by the root feature loudness and producing a zero affix.

Figure 2.5: Comparison of stemming and morphological analysis in GATE.

Suffix-stripping algorithms may differ in their results for a variety of reasons. One such reason is whether the algorithm requires the output to be a real word in the given language: some approaches do not require the stem to actually exist in the language's lexicon (the set of all words in the language).

The most well-known stemming algorithm is the Porter Stemmer [17], which has been re-implemented in many forms. Due to the problems caused by the many different variants being created, Porter later invented the Snowball language, a small string-processing language designed specifically for creating stemming algorithms for use in Information Retrieval. A variety of useful open-source stemmers for many languages have since been created in Snowball. GATE provides a wrapper for a number of these, covering 11 European languages, while NLTK provides a Python implementation of them. Because the stemmers are rule-based and easy to modify, following Porter's original approach, they are very straightforward to combine with the other low-level linguistic components described previously in this chapter. OpenNLP and Stanford CoreNLP do not provide any stemmers.
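The following sketch compares NLTK's implementations of the original Porter stemmer and the Snowball English stemmer on a few of the words discussed above; the outputs shown in the comment are typical but may vary slightly between versions:

```python
from nltk.stem import PorterStemmer, SnowballStemmer

porter = PorterStemmer()
snowball = SnowballStemmer("english")

for word in ["driving", "drivers", "loudness", "boxes"]:
    print(word, porter.stem(word), snowball.stem(word))
# e.g., "driving" -> "drive", "loudness" -> "loud", "boxes" -> "box";
# note that no affix is produced, and no POS information is used.
```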

2.8 SYNTACTIC PARSING

Syntactic parsing is concerned with analyzing sentences to derive their syntactic structure according to a grammar. Essentially, parsing explains how different elements in a sentence are related to each other, such as how the subject and object of a verb are connected. There are many different syntactic theories in computational linguistics, which posit different kinds of syntactic structures. Parsing tools may therefore vary widely not only in performance but also in the kind of representation they generate, depending on the syntactic theory they make use of.

Freely available wide-coverage parsers include the Minipar7 dependency parser, the RASP [18] statistical parser, the Stanford [19] statistical parser, and the general-purpose SUPPLE parser [20]. These are all available within GATE, so that the user can try them all and decide which is the most appropriate for their needs.

Minipar is a dependency parser, i.e., it determines the dependency relationships between the words in a sentence. It processes the text one sentence at a time, and thus only needs a sentence splitter as a prerequisite. It works on the basis of identifying linguistic constructions and parts-of-speech like apposition, relative clauses, subjects and objects of verbs, and determiners, and how they relate to each other. Apposition is the construction where two noun phrases next to each other refer to the same thing, e.g., “my brother John,” or “Paris, the capital of France.” Relative clauses typically start with a relative pronoun (such as “who,” “which,” etc.) and modify a preceding noun, e.g., “who was wearing the hat” in the phrase “the man who was wearing the hat.”

In contrast to dependency parsers, constituency parsers are based on the idea of constituency relations, and may draw on a number of different Constituency Grammar theories such as Phrase-Structure Grammars, Categorial Grammars, and Lexical Functional Grammars, amongst others. The constituency relation is hierarchical and derives from the subject-predicate division of Latin and Greek grammars, where the basic clause structure is divided into the subject (noun phrase) and predicate (verb phrase). Further subdivisions of each are then made at a more fine-grained level.

A good example of a constituency parser is the Shift-Reduce Constituency Parser which is part of the Stanford CoreNLP Tools.8 Shift-and-reduce operations have long been used for dependency parsing with high speed and accuracy, but only more recently have they been used for constituency parsing. The Shift-Reduce parser aims to improve on older constituency parsers which used chart-based algorithms (dynamic programming) to find the highest scoring parse, which were accurate but very slow. The latest Shift-Reduce Constituency parser is faster than the previous Stanford parsers, while being more accurate than almost all of them.

Figure 2.6 shows a parse tree generated using a dependency grammar, while Figure 2.7 shows one generated using a constituency grammar for the same sentence.

Figure 2.6: Parse tree showing dependency relation.

Figure 2.7: Parse tree showing constituency relation.
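To show the kind of hierarchical structure a constituency parser produces, the following toy example uses a tiny hand-written context-free grammar with NLTK's chart parser; the grammar and sentence are ours and purely illustrative, whereas real constituency parsers learn far richer grammars from treebanks:

```python
import nltk

grammar = nltk.CFG.fromstring("""
  S   -> NP VP
  NP  -> DT NN | NP PP
  VP  -> VBD NP | VP PP
  PP  -> IN NP
  DT  -> 'the'
  NN  -> 'man' | 'hat'
  VBD -> 'wore'
  IN  -> 'with'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("the man wore the hat".split()):
    tree.pretty_print()   # prints the constituency tree
```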

The RASP statistical parser [18] is a domain-independent, robust parser for English. It comes with its own tokenizer, POS tagger, and morphological analyzer included, and as with Minipar, requires the text to be already segmented into sentences. RASP is available under the LGPL license and can therefore be used also in commercial applications.

The Stanford statistical parser [19] is a probabilistic parsing system. It provides either a dependency output or a phrase structure output. The latter can be viewed in its own GUI or through the user interface of GATE Developer. The Stanford parser comes with data files for parsing Arabic, Chinese, English, and German and is licensed under GNU GPL.

The SUPPLE parser is a bottom-up parser that can produce a semantic representation of sentences, called simplified quasilogical form (SQLF). It has the advantage of being very robust, since it can still produce partial syntactic and semantic results for fragments even when the full sentence parses cannot be determined. This makes it particularly applicable for deriving semantic features for the machine learning–based extraction of semantic relations from large volumes of real text.

2.9 CHUNKING

Parsing algorithms can be computationally expensive and, like many linguistic processing tools, tend to work best on text similar to that on which they have been trained. Because it is a much more difficult task than some of the lower-level processing tasks, such as tokenization and sentence splitting, performance is also typically much lower, and this can have knock-on effects on any subsequent processing modules such as Named Entity recognition and relation finding. Sometimes it is better therefore to sacrifice the increased knowledge provided by a parser for something more lightweight but reliable, such as a chunker which performs a more shallow kind of analysis. Chunkers, also sometimes called shallow parsers, recognize sequences of syntactically correlated words such as Noun Phrases, but unlike parsers, do not provide details of their internal structure or their role in the sentence.

Tools for chunking can be subdivided into Noun Phrase (NP) Chunkers and Verb Phrase (VP) Chunkers. They vary less than parsing algorithms because the analysis is at a more coarse-grained level: they identify the relevant "chunks" of text but do not try to analyze their internal structure. However, they may differ in what they consider to be relevant for the chunk in question. For example, a simple Noun Phrase might consist of a consecutive string containing an optional determiner, one or more optional adjectives, and one or more nouns, as shown in Figure 2.8. A more complex Noun Phrase might also include a Prepositional Phrase or Relative Clause modifying it. Some chunkers include such things as part of the Noun Phrase, as shown in Figure 2.9, while others do not (Figure 2.10). This kind of decision is highly dependent on what the chunks will be used for later. For example, if they are used as input for a term recognition tool, one should consider whether a term containing a Prepositional Phrase is relevant or not. For ontology generation, such a term is probably not required, but as a target for sentiment analysis, it might be useful.


Figure 2.8: Simple NP chunking.

Figure 2.9: Complex NP chunking excluding PPs.


Figure 2.10: Complex NP chunking including PPs.
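A simple NP chunker of the kind just described can be sketched with NLTK's RegexpParser; the example sentence is ours, the POS tags come from nltk.pos_tag, and the pattern mirrors the "optional determiner, optional adjectives, one or more nouns" description above:

```python
import nltk

# Requires 'punkt' and 'averaged_perceptron_tagger' (via nltk.download).
tagged = nltk.pos_tag(nltk.word_tokenize("The little dog chased the big red ball"))
np_chunker = nltk.RegexpParser("NP: {<DT>?<JJ>*<NN.*>+}")
print(np_chunker.parse(tagged))
# Expected chunks (roughly): (NP The little dog) and (NP the big red ball).
```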

Verb Phrase chunkers delimit verbs, which may consist of a single word such as bought or a more complex group comprising modals, infinitives and so on (for example might have bought or to buy). They may even include negative elements such as might not have bought or didn’t buy. An example of chunker output combining both noun and verb phrase chunking is shown in Figure 2.11.

Figure 2.11: Complex VP chunking.
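A corresponding VP chunk pattern can be sketched in the same way; the pattern below is illustrative only, allowing an optional modal, adverbs (which covers negation such as not), and one or more verb forms:

```python
import nltk

vp_chunker = nltk.RegexpParser("VP: {<MD>?<RB>*<VB.*>+}")
tagged = nltk.pos_tag(nltk.word_tokenize("She might not have bought the car"))
print(vp_chunker.parse(tagged))
# Expected: a single (VP might not have bought) chunk.
```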

Some tools also provide additional chunks; for example, the TreeTagger [21] (trained on the Penn Treebank) can also generate chunks for prepositional phrases, adjectival phrases, adverbial phrases, and so on. These can be useful for building up a representation of the whole sentence without the requirement for full parsing.

As we have already seen, linguistic processing tools are not infallible, even assuming that the components they rely on have generated perfect output. It may seem simple to create an NP chunker based on grammatical rules involving POS tags, but it can easily go wrong. Consider the two sentences I gave the man food and I bought the baby food. In the first case, the man and food are independent NPs which are respectively the indirect and direct objects of the verb gave. We can rephrase this sentence as I gave food to the man without any change in meaning, where it is clear these NPs are independent. In the second example, however, the baby food could be either a single NP which contains the compound noun baby food, or follow the same structure as the previous example (I bought food for the baby). An NP chunker which used the seemingly sensible pattern “Determiner + Noun + Noun” would not be able to distinguish between these two cases. In this case, a learning-based model might do better than a rule-based approach.
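This limitation is easy to demonstrate with a pattern-based chunker; in the sketch below (sentences taken from the example above, pattern ours), both sentences produce an identical Determiner + Noun + Noun chunk even though the first really contains two separate NPs:

```python
import nltk

chunker = nltk.RegexpParser("NP: {<DT><NN><NN>}")
for sentence in ["I gave the man food", "I bought the baby food"]:
    tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
    print(chunker.parse(tagged))
# Both parses contain a single (NP the ... ...) chunk: the pattern alone
# cannot tell "the man" + "food" apart from the compound "the baby food".
```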

GATE provides both NP and VP chunker implementations. The NP Chunker is a Java implementation of the Ramshaw and Marcus BaseNP chunker [22], which is based on POS tags and uses transformation-based learning. The output from this version is identical to the output of the original C++/Perl version.

The GATE VP chunker is written in JAPE, GATE's rule-writing language, and is based on grammar rules for English [23, 24]. It contains rules for the identification of non-recursive verb groups, covering finite (is investigating), non-finite (to investigate), participles (investigated), and special verb constructs (is going to investigate). All the forms may include adverbials and negatives. One advantage of this tool is that it explicitly marks negation in verbs (e.g., don't), which is extremely useful for other tasks such as sentiment analysis. The rules make use of POS tags as well as some specific strings (e.g., the word might is used to identify modals).

OpenNLP’s chunker uses a pre-packaged English maximum entropy model. Unlike GATE, whose two chunkers are independent, it analyzes the text one sentence at a time and produces both NP and VP chunks in one go, based on POS tags. The OpenNLP chunker is easily retrainable, making it easy to adapt to new domains and text types if a suitable pre-annotated corpus is available.

NLTK and Stanford CoreNLP do not provide any chunkers, although they could be created using rules and/or machine learning from the other components (such as POS tags) in the relevant toolkit.

2.10 SUMMARY

In this chapter we have introduced the idea of an NLP pipeline and described the main components, with reference to some of the widely used open-source toolkits. It is important to note that while performance in these low-level linguistic processing tasks is generally high, the tools do vary in performance, not just in accuracy, but also in the way in which they perform the tasks and their output, due to adhering to different linguistic theories. It is therefore critical when selecting pre-processing tools to understand what is required by other tools downstream in the application. While mixing and matching of some tools is possible (particularly in frameworks such as GATE, which are designed precisely with interoperability in mind), compatibility between different components may be an issue. This is one of the reasons why there are several different toolkits available offering similar but slightly different sets of tools. On the performance side, it is also important to be aware of the effect of changing domain and text type, and whether the tools are easily modifiable or not if this is necessary. In particular, moving from tools trained on standard newswire to processing social media text can be problematic; this is discussed in detail in Chapter 8. Similarly, some tools can be adapted easily to new languages (in particular, the first components in the chain such as tokenizers), while more complex tools such as parsers may be more difficult to adapt. In the following chapter, we introduce the task of Named Entity Recognition and show how the linguistic processing tools described in this chapter can be built on to accomplish this.

1 http://opennlp.apache.org/index.html

2 http://incubator.apache.org/opennlp/documentation/manual/opennlp.html

3 http://gate.ac.uk

4 A good explanation of Unicode can be found at http://www.unicode.org/standard/WhatIsUnicode.html.

5 http://nlp.stanford.edu/software/tokenizer.shtml

6 http://www.nltk.org/

7 http://www.cs.ualberta.ca/~lindek/minipar.htm

8 http://nlp.stanford.edu/software/srparser.shtml
