The Innovation Ultimatum - Steve Brown
A 1950s Concept and 1980s Algorithms Meet Modern Computing Horsepower
The term “artificial intelligence” was first coined in the 1950s. The core algorithms behind today's AIs were first proposed in the 1970s and popularized in the mid-1980s. But it was 2012 before the recent crop of AI breakthroughs began to appear. Why the quarter-century delay? Older computers lacked the performance to run AI applications. High-end graphics processors, GPUs from companies like Nvidia, eventually provided the computing horsepower needed. Their parallel number-crunching architectures, designed to create realistic video games, turn out to be pretty good for training an AI.

As well as fast computers, AIs need training data to learn from. As digital storage costs fell and broadband speeds increased, data flooded in from many sources: billions of industrial sensors, millions of smart cameras, billions of people sharing trillions of photos and billions of videos, and trillions of clicks on social media. Users upload 500 hours of video to YouTube every minute and more than 1.2 billion photos to Google Photos every day (Source: Wikipedia).
With cheap, powerful computing, an avalanche of training data, and a small army of AI-savvy researchers and developers, artificial intelligence is now poised to solve myriad problems and create many exciting new capabilities.