Leadership by Algorithm – David De Cremer

Prologue

I’m seated at a round table where I am being introduced to several conference attendees. Our table is not the only one in the room. Many other round tables fill the ballroom, seating people in nice suits and dresses. After being introduced to my neighbors, I sit down and look around for a moment to familiarize myself with the surroundings.

It is 7pm on a Thursday evening. I am a young scholar, having received my PhD only a few years ago, and I find myself in the midst of a fancy business event. When I was invited by a colleague, I was unsure about whether to go, not knowing how it could be relevant for my research. Did I have anything in common with these executives? It took some persuasion, but eventually my colleague convinced me, and here I was. So, to make the best of it, I started talking to my neighbor.

He was a young, ambitious person who seemed to have it all figured out. He had recently been promoted to an executive position and had a clear idea of what success was and how to achieve it. Clearly someone who knew what he was doing. I became intrigued by his keen drive to talk about his successes and his conviction that you have to push limits until you get what you want.

After listening for a while, I managed to ask him a question. My question, which must have sounded quite naïve to those sitting at my table, was how he could be so convinced that a business world in which everyone continuously pushed the limits could survive. Wouldn’t it be the case that such behavior, shown by all, would create problems and maybe damage or even destroy the system that had been built?

As I expected, he was surprised, and for a second it almost looked like he didn’t know what to say. However, he quickly overcame his surprise and simply responded that such a situation would never happen. If there was any risk that our behavior would lead to threats to our organizations or society, he was convinced that science and technology would solve it. In his view, technology allowed us to push beyond our human limits and helped to overcome any challenges that we may encounter.

Somewhat taken aback by his answer, I followed up with another question, asking him whether such a belief in the almost superhuman power of technology would not make him too dependent on that same technology. Wouldn’t that make him surplus to requirements in the long term? He looked at me in disbelief and said, with a grin on his face, that I should not worry about that, because it would never be an issue. He then turned his attention to the neighbor on his other side, which made it clear to me that our conversation was finished.

As a young scholar, but also as a person, this conversation made a deep impression on me. The story stayed with me for many years, but eventually I forgot about it. Until a few years ago! When I started working on questions addressing the existential drive of humans in developing technology, the story came back to me. And, this time, two thoughts kept flashing through my head.

First, why was it that my companion at the dinner didn’t seem to be aware that his own behavior was leading to problems that could only be solved if the science of technology made sufficient progress? Second, where did he find that sense of confidence that technology would solve it all, allowing him to remain in charge and to keep doing what he was doing?

Both questions are important to ask, but I was particularly intrigued by the thought that someone could be so confident in technological innovation. It made me curious about the kind of future that awaits us when technology has the potential to impact our lives in such a significant way. What kind of technology would that be, and how would it affect us?

Well, as you are probably all aware, today we are living in an era where exactly this kind of technology innovation is knocking loudly on all of our doors. It is a strong and confident knock from a technology ready to take its place in human society. What am I talking about? Clearly, I am talking about artificial intelligence (AI).

Today, AI is beyond cool! Every advancement made in the field is hailed by many as a great triumph. And with each triumph, AI’s impact becomes more visible and is recognized as significant. Indeed, AI brings the message that our world will change fundamentally.

In a sense, the rapid development of AI and its many applications gives us a peek into a future where our society will function in a completely different way. With the arrival of AI, we can already see a future in place that forces all of us to act now. AI is the kind of technology innovation that is so disruptive that if you do not start changing your ways of working today, there may not even be a future for you tomorrow.

While this may come across as somewhat threatening, it is a future that we have to take seriously. If Moore’s law – the observation that the number of transistors on a chip, and with it the overall processing power of computers, doubles roughly every two years – continues to hold, then in the next decade we should be ready to witness dramatic changes in how we live and work together. All of this buzz has made me – just as when I met the very ambitious executive – curious about a technology-driven future. For me, AI acts as a time machine, helping us to see what could be, but at a moment in time when we still have to build that future. And this is an interesting thought.

Why?

Well, if we consider AI as a kind of time machine, giving us a peek into the future, we should use it to our benefit. Use it in a way that can help us to be conscious and careful about how we design, develop and apply AI. Because once the future sets in, the past may be remembered, but it will be gone.

Today, we still live in a time where we can have an impact on technology. Why am I saying this? Let me respond to this question by referring to a series on Netflix that I very much enjoyed watching. The series is called Timeless and describes the adventures of a team that wants to stop a mysterious organization, called Rittenhouse, from changing history by making use of a time machine.

In the first episode, the relevance to our discussion in this book is obvious right away. There, one of the main characters, Lucy Preston, a history professor, is introduced to Connor Mason, who is the inventor of a time machine. Mason explains that certain individuals have taken control of a time machine, called the Lifeboat, and gone back in time. With a certain weight in his voice, he makes clear that “history will change”. Everyone in the room is aware of the magnitude of his words and realizes the consequences that this will have on the world, society and maybe even their own lives.

Lucy Preston responds emotionally by asking why he would be so stupid as to invent something so dangerous. Why invent technology that could hurt the human race in such significant ways (i.e. changing its own history)? The answer from Mason is as clear as it is simple: he didn’t count on this happening. And, isn’t this how it usually goes with significant technological innovations? Blinded by the endless opportunities, we don’t want to waste any time and only look at what technology may be capable of. The consequences of an unchecked technology revolution for humanity are usually not addressed.

Can we expect the same thing with AI? Are we fully aware of the implications for humanity if society becomes smart and automated? Are we focusing too much on developing a human-like intelligence that can surpass real human intelligence in both specific and general ways? And, are we doing so without fully considering the development and application dangers of AI?

As with every significant change, there are pros and cons. Not too long ago, I attended a debate where the prospects of a smart society were discussed. Initially the focus was entirely on the cost reductions and efficiencies that AI applications would bring. Everyone was happy so far.

At one point in the debate, however, someone in the audience asked whether we shouldn’t evaluate AI more critically in terms of its functionality for us as human beings, rather than on maximizing the abilities of the technology itself. One speaker responded loudly with the comment that AI should definitely tackle humanity’s problems (e.g. climate change, population size, food scarcity and so forth), but that its development should not be slowed down by anticipatory thoughts on how it would impact humanity itself. As you can imagine, the debate suddenly became much more heated. Two camps formed relatively quickly. One camp advocated an all-out race to maximize AI abilities as fast as possible (thus discounting the long-term consequences for humanity), whereas the other camp stressed the necessity of social responsibility over maximizing the technology’s deployment.

Who is right? In my view, both perspectives make sense. On the one hand, we do want to have the best technology and maximize its effectiveness. On the other hand, we also want to ensure that the technology being developed will serve humanity in its existence, rather than potentially undermining it.

So, how to solve this dilemma?

In this book, I want to delve deeper into this question and see how it may impact the way we run our teams, institutions and organizations, and what choices we will have to make. It is my belief that in order to address the question of how to proceed in the development and application of algorithms in our daily activities, we need to agree on the purpose of the technology development itself. What purpose does AI serve for humanity, and how will this purpose shape it? This kind of exercise is necessary to weigh two possible outcomes that I have been thinking about for years.

First, we do not want to run the risk that the rapid development of AI technologies creates a future where our human identity is slowly removed and a humane society becomes something of the past. Just as Connor Mason’s time machine ended up altering human history, the mindless development of AI technology, with little awareness of its consequences for humanity, runs the same risk.

Second, we want to push the limits of technological advancement with the aim for AI to augment our abilities and thus to serve the development of a more (and not less) humane society. From that point of view, the development of AI should not be seen as a way to solve the mess we create today, but rather as a means of creating opportunities that will improve the human condition. Whereas the executive I met as a young scholar proclaimed that technology is developed to deal with the problems that we create, AI technology developed with the sole aim of maximizing efficiency and minimizing errors will reduce the human presence rather than augment our abilities.

Putting these two possible outcomes together made me realize that the purpose served by investing so much in AI technology advancement should not be to make our society less humane and more efficient in eliminating mistakes and failures. This would result in humankind having to remove itself from its place in the world to be replaced by another type of intelligence not burdened by human flaws. If this were to happen, our organizations and society would ultimately be run by technology. What will our place in society be then?

In this book, I will address these questions by unravelling the complex relationship that exists between our human desire to constantly evolve, on the one hand, and our drive for fairness and co-operation, on the other. Humans have an innate motivation to go where no man has gone before. The risk associated with this motivation is that at some point we may lose control of the technology we are building, with the consequence that we will submit to it.

Will this ever become a reality? Humans as subordinates of the almighty machine? Some signs indicate that it may well happen. Take the example of the South Korean Lee Sedol, a world champion at the ancient Chinese board game Go. This board game is highly complex and was long considered beyond the reach of machines. All that changed in 2016, when the computer program AlphaGo beat Lee Sedol four games to one. The loss against AI made him doubt his own (human) qualities so much that he decided to retire in 2019. So, if even a world champion admits defeat, why would we not expect machines one day to develop to the point where they run our organizations?

To tackle this question, I will start from the premise that the leadership we need in a humane society is likely not to emerge through more sophisticated technology. Rather, enlightened leadership will emerge by becoming more sophisticated about human nature and our own unique abilities to design better technology that is used in wise (and not smart) ways.

Let me take you on a journey, where we will look at what exactly is happening today with AI in our organizations; what we can expect as we move into a new era where algorithms are developed for every task; what kind of influence this will have on how we will run our organizations in the future; and how we should best approach such a radical transformation.

The time machine is waiting, but this time with the aim to inform us and make us smarter about the ways in which we can design technology to improve humanity.
