Informatics and Machine Learning - Stephen Winters-Hilt

1.4 Feature Extraction and Language Analytics


The FSA sequential-data signal processing, and extraction of statistical moments on windowed data, will be shown in Chapter 2 to be O(L), with L the size of the data (double the data and you double the processing time). If HMMs can be used, with their introduction of N states (the sequential data is described as a sequence of "hidden" states), then the computational cost goes as O(LN²). If N = 10, this could be 100 times the computational time of an FSA-based O(L) computation, so HMMs can generally be far more expensive in terms of computational time. Even so, if an HMM is applicable it is generally worth using, even if hardware specialization (CPU farm utilization, etc.) is required. The problem arises when there is no strong basis for an HMM application, e.g. when there is no strong basis for delineating the states of the system of communication under study. This is the problem encountered in the study of natural languages (where there is significant context dependency). In Chapter 5 we look into FSA analysis for language by doing some basic text analytics.
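The O(L) claim for windowed statistical moments can be made concrete: by carrying running sums, each new data point costs constant work, rather than rescanning the window. The following is a minimal sketch (the function name and window convention are illustrative, not from the text):

```python
from collections import deque

def windowed_moments(data, width):
    """Sliding-window mean and variance in a single O(L) pass.

    Running sums are updated as each point enters and leaves the window,
    so each step does constant work (rescanning each window would instead
    cost O(L * width))."""
    window = deque()
    s = s2 = 0.0          # running sum and sum of squares
    means, variances = [], []
    for x in data:
        window.append(x)
        s += x
        s2 += x * x
        if len(window) > width:
            old = window.popleft()
            s -= old
            s2 -= old * old
        if len(window) == width:
            m = s / width
            means.append(m)
            variances.append(s2 / width - m * m)
    return means, variances
```

Doubling the length of `data` doubles the number of loop iterations, and hence the run time, which is exactly the O(L) scaling described above.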

Chapter 5 shows some (very) basic extensions of FSA analysis in applications to text. This begins with a simple frequency analysis on words, which for some classics (in their original languages) reveals important word-frequency results with meanings implied by the author (polysemous word usage by Machiavelli, for example). The frequency of word groupings in a given text can be studied as well, with some useful results from texts of sufficient size with clear stylistic conventions by the author. Authors who structure their lines of text according to iambic pentameter (Shakespeare, for example) can also be identified by the profile (histogram) of syllables per line (i.e. 10 syllables per line will dominate for iambic pentameter).
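The word-frequency analysis described here is itself an O(L) scan: tokenize the text and count occurrences in one pass. A minimal sketch (the tokenization rule is an assumption, not the book's code):

```python
import re
from collections import Counter

def word_frequencies(text, top_n=10):
    """Rank words by frequency in a text sample.

    Case-folds the text, extracts alphabetic tokens (apostrophes kept for
    contractions), and returns the top_n (word, count) pairs."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)
```

For word groupings (bigrams, trigrams), the same Counter approach applies to tuples of adjacent tokens instead of single words.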

Text analytics can also extend what is still O(L) processing to mapping the mood or sentiment of text samples by use of word-scored sentiment tables. The generation and use of such sentiment tables is a craft of its own, usually proprietary, so only minimal examples are given. Thus Chapter 5 shows an elaboration of FSA-based analysis that can be done when there is no clear definition of state, as in language. NLP processing in general encompasses a much more complete grammatical knowledge of the language, but in the end both NLP and the FSA-based "add-on" still suffer from being unable to manage word context easily (the states cannot simply be words, since words can have different meanings according to context). The inability to use HMMs was long a blockade to a "universal translator", one that has since been overcome with Deep Learning using NNs (Chapter 13), where immense amounts of translation data, such as the massive corpus of dual-language Canadian Government proceedings, suffice to train a translator (English–French). Most of the remaining chapters focus on situations where a clear delineation of signal state can be given, and thus benefit from the use of HMMs.
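A word-scored sentiment table reduces sentiment mapping to another O(L) lookup pass. The tiny table below is a made-up illustration (real tables, as noted above, are a craft of their own and usually proprietary):

```python
def sentiment_score(text, table):
    """Average per-word sentiment of a text sample.

    Looks each word up in a word -> score table; words absent from the
    table are treated as neutral (score 0). Returns the mean score."""
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(table.get(w, 0.0) for w in words) / len(words)

# Hypothetical toy table for illustration only.
toy_table = {"good": 1.0, "great": 2.0, "bad": -1.0, "awful": -2.0}
```

Scoring a sliding window of text with this function yields a sentiment-versus-position profile of a document, still in O(L) overall.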

