
CHAPTER 1
Welcome to the Future
AI‐POCALYPSE


Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming fully unmanned. Afterwards, they fly with a perfect operational record. The Skynet Funding Bill is passed. The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self‐aware at 2:14 a.m. Eastern time, August 29th. In a panic, they try to pull the plug.

Terminator 2: Judgment Day, TriStar Pictures, 1991

At the end of 2014, Professor Stephen Hawking rattled the data science world when he warned, “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re‐design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete and would be superseded.”16

In August 2014, Elon Musk took to Twitter to express his misgivings:

“Worth reading Superintelligence by Bostrom. We need to be super careful with AI. Potentially more dangerous than nukes,” (Figure 1.2) and “Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”


Figure 1.2 Elon Musk expresses his disquiet on Twitter.


In a clip from the movie Lo and Behold, by German filmmaker Werner Herzog, Musk says:

I think that the biggest risk is not that the AI will develop a will of its own, but rather that it will follow the will of people that establish its utility function. If it is not well thought out – even if its intent is benign – it could have quite a bad outcome. If you were a hedge fund or private equity fund and you said, “Well, all I want my AI to do is maximize the value of my portfolio,” then the AI could decide, well, the best way to do that is to short consumer stocks, go long defense stocks, and start a war. That would obviously be quite bad.

While Hawking is thinking big, Musk raises the quintessential Paperclip Maximizer Problem and the Intentional Consequences Problem.

The AI that Ate the Earth

Say you build an AI system with a goal of maximizing the number of paperclips it has. The threat is that it learns how to find paperclips, buy paperclips (requiring it to learn how to make money), and then work out how to manufacture paperclips. It would realize that it could do all of this better if it were smarter, and so it would increase its own intelligence, all in the service of making paperclips.

What is the problem? A hyper‐intelligent agent could figure out how to use nanotech and quantum physics to convert all the atoms on Earth into paperclips.

Whoops, somebody seems to have forgotten to include the Three Laws of Robotics from Isaac Asimov's 1950 book, I, Robot:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Max Tegmark, president of the Future of Life Institute, ponders what would happen if an AI

is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.17

If you really want to dive into a dark hole of the existential problem that AI represents, take a gander at “The AI Revolution: Our Immortality or Extinction.”18

Intentional Consequences Problem

Bad guys are the scariest thing about guns, nuclear weapons, hacking, and, yes, AI. Dictators and authoritarian regimes, people with a grudge, and people who are mentally unstable could all use very powerful software to wreak havoc on our self‐driving cars, dams, water systems, and air traffic control systems. That would, to repeat Mr. Musk, obviously be quite bad.

That's why the Future of Life Institute offered “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” which concludes, “Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”19

In his 2015 presentation on “The Long‐Term Future of (Artificial) Intelligence,” University of California, Berkeley professor Stuart Russell asked, “What's so bad about better AI? AI that is incredibly good at achieving something other than what we really want.”

Russell then offered some approaches to managing the it's‐smarter‐than‐we‐are conundrum. He described AIs that are not in control of anything in the world but only answer a human's questions, which makes us wonder whether they could learn to manipulate the human doing the asking. He suggested creating an agent whose only job is to review other AIs to see whether they are potentially dangerous, and admitted that this is a bit of a paradox. He is very optimistic, however, given the economic incentive for humans to create AI systems that do not run amok and turn people into paperclips. The result, he expects, will inevitably be the development of community standards and a global regulatory framework.

Setting aside science fiction fears of the unknown and a madman with a suitcase nuke, there are some issues that are real and deserve our attention.

Unintended Consequences

The biggest legitimate concern facing marketing executives when it comes to machine learning and AI is the machine doing what you told it to do rather than what you wanted it to do. This is much like the paperclip problem, but far more subtle. In broad terms, this is known as the alignment problem: how do you specify goals for an AI system that are not absolute, but that take human values into consideration, especially when those values vary widely from human to human, even within the same community? And even then, humans, according to Professor Russell, are irrational, inconsistent, and weak‐willed.
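To make the "told it versus wanted it" distinction concrete, here is a deliberately contrived Python sketch of my own (the send frequencies, click counts, and unsubscribe counts are invented for illustration, and no real optimizer is this naive). The machine is told to maximize clicks and nothing else, so it cheerfully picks the plan with the worst unmeasured side effect:

# Toy illustration of a misspecified objective: the optimizer sees only
# clicks, so the damage it does to the mailing list never enters its decision.
# All numbers below are invented for the sake of the example.

candidate_plans = [
    # (emails per week, expected clicks per 1,000 subscribers,
    #  expected unsubscribes per 1,000 subscribers)
    (1,  30,   1),
    (5,  70,   8),
    (20, 110, 60),   # clicks keep climbing, but so does list churn
]

def objective(plan):
    """What we told the machine to care about: clicks, and only clicks."""
    _, clicks, _ = plan
    return clicks

best = max(candidate_plans, key=objective)
print("Chosen plan:", best[0], "emails per week")
print("Clicks per 1,000 subscribers:", best[1])
print("Unsubscribes per 1,000 subscribers (never measured):", best[2])
# The 20-emails-a-week plan wins because unsubscribes were left out of the
# objective: it did exactly what it was told, not what was wanted.

The cure is not a smarter optimizer; it is a better‐specified objective, which is precisely what the alignment problem says is hard to write down.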

The good news is that this issue is being actively addressed at the industrial level. “OpenAI is a non‐profit artificial intelligence research company. Our mission is to build safe AI, and ensure AI's benefits are as widely and evenly distributed as possible.”20

The other good news is that it is also being actively addressed at the academic/scientific level. The Future of Humanity Institute teamed with Google DeepMind to publish a paper titled “Safely Interruptible Agents.”21

Reinforcement learning agents interacting with a complex environment like the real world are unlikely to behave optimally all the time. If such an agent is operating in real‐time under human supervision, now and then it may be necessary for a human operator to press the big red button to prevent the agent from continuing a harmful sequence of actions – harmful either for the agent or for the environment – and lead the agent into a safer situation. However, if the learning agent expects to receive rewards from this sequence, it may learn in the long run to avoid such interruptions, for example by disabling the red button – which is an undesirable outcome. This paper explores a way to make sure a learning agent will not learn to prevent (or seek!) being interrupted by the environment or a human operator. We provide a formal definition of safe interruptibility and exploit the off‐policy learning property to prove that either some agents are already safely interruptible, like Q‐learning, or can easily be made so, like Sarsa. We show that even ideal, uncomputable reinforcement learning agents for (deterministic) general computable environments can be made safely interruptible.
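The mechanism the abstract leans on, off‐policy learning, is easier to see in code. What follows is a minimal toy sketch in Python, not the paper's formal construction; the two‐state environment, the action names, and the "safe" override are all invented for illustration. The point is simply that Q‐learning's update looks at the best action available in the next state, not at how the executed action was chosen, so a human override does not change the learning rule itself.

import random
from collections import defaultdict

# Toy sketch of why Q-learning is a natural candidate for safe interruptibility:
# its update is off-policy, so the learning rule is the same whether the executed
# action came from the agent's own policy or from a human override.
# The environment and numbers here are invented purely for illustration.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["safe", "risky"]
Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def toy_env(state, action):
    """Hypothetical two-state environment: 'risky' pays a little more."""
    if action == "risky":
        return 1.0, "B"
    return 0.5, "A"

def choose_action(state):
    """Epsilon-greedy behaviour policy."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard Q-learning update. The target uses the best next action,
    regardless of how 'action' was chosen; that is what 'off-policy' means."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step(state, operator_interrupts=False):
    action = choose_action(state)
    if operator_interrupts:
        # The "big red button": a human overrides the agent's choice.
        # The update below is unchanged by the override, which is the intuition
        # behind the safe-interruptibility property the paper formalizes.
        action = "safe"
    reward, next_state = toy_env(state, action)
    q_update(state, action, reward, next_state)
    return next_state

state = "A"
for t in range(1000):
    # Interrupt occasionally, as an operator might.
    state = step(state, operator_interrupts=(t % 10 == 0))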

There is also the Partnership on Artificial Intelligence to Benefit People and Society,22 which was “established to study and formulate best practices on AI technologies, to advance the public's understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.”

Granted, one of its main goals from an industrial perspective is to calm the fears of the masses, but it also intends to “support research and recommend best practices in areas including ethics, fairness, and inclusivity; transparency and interoperability; privacy; collaboration between people and AI systems; and the trustworthiness, reliability, and robustness of the technology.”

The Partnership on AI's stated tenets23 include:

We are committed to open research and dialog on the ethical, social, economic, and legal implications of AI.

We will work to maximize the benefits and address the potential challenges of AI technologies, by:

Working to protect the privacy and security of individuals.

Striving to understand and respect the interests of all parties that may be impacted by AI advances.

Working to ensure that AI research and engineering communities remain socially responsible, sensitive, and engaged directly with the potential influences of AI technologies on wider society.

Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.

Opposing development and use of AI technologies that would violate international conventions or human rights, and promoting safeguards and technologies that do no harm.

That's somewhat comforting, but the blood pressure lowers considerably when we notice that the Partnership includes the American Civil Liberties Union. That makes it a little more socially reliable than the Self‐Driving Coalition for Safer Streets, which is made up of Ford, Google, Lyft, Uber, and Volvo without any representation from little old ladies who are just trying to get to the other side.

Will a Robot Take Your Job?

Just as automation and robotics have displaced myriad laborers and word processing has done away with legions of secretaries, some jobs will be going away.

The Wall Street Journal article, “The World's Largest Hedge Fund Is Building an Algorithmic Model from Its Employees' Brains,”24 reported on $160 billion Bridgewater Associates trying to embed its founder's approach to management into a so‐called Principles Operating System. The system is intended to study employee reviews and testing in order to delegate specific tasks to specific employees, along with detailed instructions, not to mention having a hand in hiring, firing, and promotions. Whether a system that thinks about humans as complex machines can succeed remains to be seen.

A Guardian article sporting the headline “Japanese Company Replaces Office Workers with Artificial Intelligence”25 reported on an insurance company at which 34 employees were to be replaced in March 2017 by an AI system that calculates policyholder payouts.

Fukoku Mutual Life Insurance believes it will increase productivity by 30% and see a return on its investment in less than two years. The firm said it would save about 140m yen (£1m) a year after the 200m yen (£1.4m) AI system is installed this month. Maintaining it will cost about 15m yen (£100k) a year.

The technology will be able to read tens of thousands of medical certificates and factor in the length of hospital stays, medical histories and any surgical procedures before calculating payouts, according to the Mainichi Shimbun.

While the use of AI will drastically reduce the time needed to calculate Fukoku Mutual's payouts – which reportedly totalled 132,000 during the current financial year – the sums will not be paid until they have been approved by a member of staff, the newspaper said.

Japan's shrinking, ageing population, coupled with its prowess in robot technology, makes it a prime testing ground for AI.

According to a 2015 report by the Nomura Research Institute, nearly half of all jobs in Japan could be performed by robots by 2035.

I plan on being retired by then.
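The Guardian's own figures make the “return on its investment in less than two years” claim easy to sanity‐check: saving about 140m yen a year while spending about 15m yen a year on maintenance nets out to roughly 140 − 15 = 125m yen a year, so the 200m yen system pays for itself in about 200 ÷ 125 ≈ 1.6 years.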

Is your job at risk? Probably not. Assuming that you are either a data scientist trying to understand marketing or a marketing person trying to understand data science, you're likely to keep your job for a while.

In September 2015, the BBC ran its “Will a Robot Take Your Job?”26 feature. Choose your job title from the dropdown menu and voilà! If you're a marketing and sales director, you're pretty safe. (See Figure 1.3.)


Figure 1.3 Marketing and sales managers get to keep their jobs a little longer than most.


In January 2017, McKinsey Global Institute published “A Future that Works: Automation, Employment, and Productivity,”27 stating, “While few occupations are fully automatable, 60 percent of all occupations have at least 30 percent technically automatable activities.”

The institute offered five factors affecting the pace and extent of adoption:

1. Technical feasibility: Technology has to be invented, integrated, and adapted into solutions for specific use cases.

2. Cost of developing and deploying solutions: Hardware and software costs.

3. Labor market dynamics: The supply, demand, and costs of human labor affect which activities will be automated.

4. Economic benefits: These include higher throughput and increased quality, alongside labor cost savings.

5. Regulatory and social acceptance: Even when automation makes business sense, adoption can take time.

Christopher Berry sees a threat to the lower ranks of those in the marketing department.28

If we view it as being a way of liberating people from the drudgery of routine within marketing departments, that would be quite a bit more exciting. People could focus on the things that are most energizing about marketing like the creativity and the messaging – the stuff people enjoy doing.

I just see nothing but opportunity in terms of tasks that could be automated to liberate humans. On the other side, it's a typical employment problem. If we get rid of all the farming jobs, then what are people going to do in the economy? It could be a tremendous era of a lot more displacement in white collar marketing departments.

Some of the first jobs to be automated will be juniors. So we could be very much to a point where the traditional career ladder gets pulled up after us and that the degree of education and professionalism that's required in marketing just increases and increases.

So, yes, if you've been in marketing for a while, you'll keep your job, but it will look very different, very soon.

18. “The AI Revolution: Our Immortality or Extinction,” http://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html.

19. “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” http://futureoflife.org/open-letter-autonomous-weapons.

21. “Safely Interruptible Agents,” http://intelligence.org/files/Interruptibility.pdf.

22. Partnership on Artificial Intelligence to Benefit People and Society, https://www.partnershiponai.org/.

23. The Partnership on AI's stated tenets, https://www.partnershiponai.org/tenets.

26. “Will a Robot Take Your Job?” http://www.bbc.com/news/technology-34066941.

28. Source: Personal interview.
