Natural Language Processing
Natural language processing (NLP) is a dimension of artificial intelligence that enlists linguistics and computer science to improve how computers capture, analyze, and process large volumes of human language data. Efforts in this area have centered on speech recognition, language translation, natural language understanding, and natural language generation. This is perhaps the area of artificial intelligence that has been in existence the longest, spanning decades of work in the field.
In our discussion of robotic process automation (RPA) in a previous section of this chapter, we described in some detail how Chat Bots can multiply the efforts of existing customer service staff by engaging users, extracting their common demands, and then locating appropriate information to provide in response. Chat Bots leverage NLP to “understand” those demands. Other popular digital assistants in use today similarly rely on NLP to interpret and carry out spoken commands.
For many of us, informal conversational English can be quite different from explicit computer language commands. “Hey Siri, can you place a call to order pizza from that place on Main Street we ordered from last week?” is a natural language request that needs to be classified, or broken down, to answer questions such as these:

Q1: “What does the user want?” – A1: The user would like me to place a call, or at least to locate a phone number.

Q2: “Place a call to (or get a phone number for) whom?” – A2: A pizza restaurant that is in the user's call history, with an address on Main Street.

Q3: “If my classification fails, or any of my actions do not appear suitable, relevant, or of value, is there any other related information I can provide to better assist the user?” – A3: Provide pizza options from nearby restaurants.
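To make this breakdown concrete, here is a minimal sketch in Python of how a request might be classified into an intent with extracted details, plus a fallback. It is illustrative only: the intent labels, patterns, and the `parse_request` function are hypothetical names invented for this example. A real assistant would use trained statistical or neural models rather than hand-written rules.

```python
import re

# Hypothetical intent patterns -- a production assistant would use a
# trained intent-classification model, not regular expressions.
INTENT_PATTERNS = {
    "place_call": re.compile(r"\bplace a call|call\b", re.IGNORECASE),
    "find_number": re.compile(r"\bphone number|number for\b", re.IGNORECASE),
}

def parse_request(utterance: str) -> dict:
    """Break a natural language request into an intent plus details."""
    # Q1: What does the user want? (intent classification)
    intent = next(
        (name for name, pat in INTENT_PATTERNS.items() if pat.search(utterance)),
        "unknown",
    )

    # Q2: Call (or get a number for) whom? (detail extraction)
    street = re.search(r"on (\w+ Street)", utterance)

    # Q3: Fallback if classification or extraction fails.
    if intent == "unknown" or street is None:
        return {"intent": "suggest_alternatives", "details": {"cuisine": "pizza"}}

    return {"intent": intent, "details": {"street": street.group(1)}}

print(parse_request(
    "Hey Siri, can you place a call to order pizza from that place "
    "on Main Street we ordered from last week?"
))
# -> {'intent': 'place_call', 'details': {'street': 'Main Street'}}
```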
NLP is all about being able to structure, classify, and extract meaning from volumes and volumes of unstructured data. If we think about how much information is available on the internet, could not the internet be a valuable and rich dataset, containing a nearly infinite number of observations for virtually any study? If you were the CEO of one of the largest social media sites, do you think you could benefit from digesting, at any point in time, the millions of posts published in your ecosystem? Remember, these posts are not only the lifeblood of your livelihood, but they may also represent the biggest liabilities to your reputation and, ultimately, threats to your profitability. Perhaps even worse, could a reckless failure to draft guidelines on appropriate online behavior, to actively monitor the body of content published on your site, and to implement policies and procedures to respond appropriately to any behaviors that run counter to your guidelines introduce regulatory risk? Could it prompt regulatory action, potentially impact your business model, and compromise your long-cultivated self-determination and independence? GULP!
In the above example, a finely tuned NLP model would be a very valuable tool to have at your disposal: one able to process the informal vernacular in posts, glean meaning from content, classify whether each post is appropriate, and finally flag as exceptions any posts deemed inconsistent with usage guidelines. Equipping an NLP model to extract, structure, and classify messages, all while continually improving its outcomes, is integral to allowing humans to interface with computers on our own terms.
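As a rough illustration of what such a moderation pipeline might look like, the sketch below trains a simple text classifier to flag posts as exceptions, using the widely available scikit-learn library. It is a minimal sketch under stated assumptions: the tiny labeled dataset is fabricated purely for demonstration, and a real deployment would train far more sophisticated models on a large corpus of human-labeled posts.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples, invented for illustration only.
# Label 1 = inconsistent with usage guidelines; label 0 = acceptable.
posts = [
    "Loved the new cafe downtown, great service!",
    "Check out my vacation photos from the lake.",
    "Buy followers now!!! Click this link to get rich quick!!!",
    "You people are worthless and should be banned from existing.",
]
labels = [0, 0, 1, 1]

# Structure the unstructured text (TF-IDF features), then classify it.
moderation_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
moderation_model.fit(posts, labels)

# Flag incoming posts as exceptions for human review.
new_posts = [
    "Great recipe, thanks for sharing!",
    "Click this link now to get rich quick!!!",
]
for post, flag in zip(new_posts, moderation_model.predict(new_posts)):
    status = "FLAGGED for review" if flag == 1 else "OK"
    print(f"{status}: {post}")
```

Note that flagged posts are routed to human reviewers rather than acted upon automatically; feeding those human decisions back in as new training labels is one way to realize the continual improvement described above.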