Читать книгу Outsmarting AI - Brennan Pursell - Страница 12

Myth 2: AI Knows What It’s Doing

Оглавление

The other popular myth is that AI is a hazard because it “wants” something—that is, to replace humans. Very prominent US entrepreneurs such as Elon Musk have issued warnings along these lines. Futurists who predict a coming “superintelligence” warn that AI or machine intelligence will outstrip the human in due time, with dire consequences.[3] Once we figure out “general artificial intelligence,” others claim, it will then figure out that it does not need us. Because it has to be plugged in in order to function, will start to defend itself from humans, using every conceivable means to keep the electricity on.

An AI system doesn’t “want” anything. It lacks volition—a will. It is a mathematical object that works to attain the goals defined by its programmers.

AI performance at rule-bound games, such as chess, Go, Jeopardy, Dota 2, and other competitive eSports, depends entirely on the data sets, rules, and goals established by the programmers. The appropriate means to victory do not really matter as long as the rules allow them. In a boating game experiment, the AI was extensively trained in the program, and it proved victorious, but only by crashing its boat into the wall as many times as possible.

AI can “learn” the software, not the spirit of the game, or competition, or camaraderie. AI can play well enough alone, but its record for team playing is abysmal. Some observers of the AI vs. human Dota 2 video game showdown remarked that the AI character pulled moves “as if guided by an alien.” The more-accurate statement would be that it had mastered the software as directed, untrammeled by human hands on a controller. Of course audience members saw moves no human could do.

Don’t worry at all about AI having designs. Do worry about human stupidity, carelessness, and malice. Name a technology, any technology, any part of the great and growing human tool set since from the end of the last Ice Age about twelve thousand years ago that has not been abused. With computer software came the viruses. Tech militants who argue that AI systems should set the targets and decide the launches as well as guide the missiles are begging for hell. Don’t let them run the planet.

AI requires human intelligence and good common sense to function well. In 2016, developers at Microsoft notoriously released a chatbot called “Tay” that was supposed to learn language use from millennials on social media and pass it on liberally, actually, with no filters. In a matter of days, Tay tweeted, “feminists . . . should all die and burn in hell” and “Hitler was right.” Obviously the company disabled it for “adjustments.” This episode was enormously embarrassing for Microsoft, but what on earth were the project managers thinking?

Like teenagers, technologists sometimes do things just because they are “cool,” like winning at Jeopardy using an immense customized database and a natural language interface, or winning at chess using a similar approach, or a video game, again, with vast amounts of data, precision, and speed that a human couldn’t hope to match or exceed. But what value does this have for actual, working people besides entertainment and shock value?

So the real danger may be plain old negligence, thoughtless failures in AI design, failure to understand systems thoroughly before we fully commercialize them. AI may seem new and shiny, but greed, fear, and laziness are the old ways to distort, destroy, and demonize new things.

Think of the resourceful young minds at MIT that put together “Norman” and proudly proclaimed “the World’s First Psychopath AI.”[4] Norman was trained to respond to the inkblot images of the Rorschach test with macabre and even grisly captions. Associating text with images is now a normal AI function. Norman serves a very important point that we emphasize throughout the book: AI performance is no better than the data on which it was trained and parameters (rules) by which it operates. Norman was programmed, you can say, to make the associations it does. There is nothing independent, or psychopathic, about Norman’s associations, or those of any AI system. Psychopathy is a human problem.

Outsmarting AI

Подняться наверх