Chapter 1: Entering a New Era

In 1985, Mark Knopfler and his band Dire Straits released a song about a boy who got the action, got the motion and did the walk of life. This boy became the hero in many a young kid’s fantasy. In the 21st century, we have another kind of hero, something that is not human. Now, we admire the use of algorithms in all walks of life.

However, it is also important to note that AI is not some new phenomenon that arrived only in the last few years. In fact, the notion of AI was used for the first time in 1956, when the eight-week Dartmouth Summer Research Project on Artificial Intelligence was organized at Dartmouth College in New Hampshire. The project included names like Marvin Minsky, John McCarthy and Nathaniel Rochester, who would later become known as the founding fathers of AI.

So, early on in the second half of the last century, belief in the superpowers of AI was already very much present. Consider, for example, the quote from Herbert A. Simon, Nobel laureate in economics, who wrote in 1965: “machines will be capable, within 20 years, of doing any work a man can do.” However, researchers failed to deliver on these lofty promises. From the 1970s onwards, AI projects were heavily criticized for being too expensive and for relying on overly formalized, top-down approaches that failed to replicate human intelligence. As a result, AI research was partly frozen, with no real progress being made. Until now!

AI witnessed a comeback in the last decade, primarily because the world woke up to the realization that deep learning by machines is possible to a level where they can actually perform many tasks better than humans. Where did this wake-up call come from? From a board game called Go.

In 2016, AlphaGo, a program developed by Google DeepMind, beat the human world champion at the Chinese board game Go. This came as a surprise to many, as Go – because of its complexity – was considered the territory of human, not AI, victors. In a decade in which our human desire to connect globally, execute tasks faster and accumulate massive amounts of data was omnipresent, such deep learning capabilities were, of course, quickly embraced.

As a result, we are now witnessing an almost obsessive focus on AI and the benefits it can bring to our society, organizations and people. This obsessive focus, combined with an exponential increase in AI applications, has resulted in a certain fear that human intelligence may well be on the verge of being challenged in all facets of our lives. Or, to be more precise, a fear has emerged in society that we, as humans, may have entered an era where we will be replaced by machines (for real, this time!).

However, before we address the challenge (some may even call it a threat) to our authentic sense of the human self and intelligence, we need to make clear what we are talking about when we talk about AI. Although the purpose of this book is not to present a technical manual to work with AI, or to teach you how to become a coder, I do feel that we first need to familiarize ourselves with a brief definition of AI.

In its simplest form, AI can be seen as a system that employs techniques to make external data – available everywhere in our organizations and society – more transparent as a whole. Making data more transparent allows it to be interpreted more accurately. This, in turn, allows us to learn from these interpretations and subsequently act upon them to achieve our goals in more optimal ways.

The technique that is best known, and that drives our learning from data, is machine learning. It is machine learning that creates the algorithms applied to data with the aim of improving our understanding of what the data is actually saying. Algorithms are learned scripts of mathematical calculations that are applied to data to arrive at new insights and conclusions that we may not see directly. Specifically, they allow us to arrive at insights that help us develop more comprehensive and more accurate predictions and models. Algorithms act in autonomous ways to identify patterns in data that signal underlying principles and rules.
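To make this concrete, consider the toy sketch below. It is an illustration of my own, using invented numbers rather than any real dataset: the “learning” consists of estimating a simple rule (a straight line) from observed data points, and the “prediction” consists of applying that learned rule to an input the algorithm has never seen.

```python
# A toy illustration (invented numbers, not a real dataset): "learning" here means
# estimating a simple rule from observed data and then using that rule to predict
# an outcome the algorithm has never seen.

data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.1), (5, 9.8)]  # (input, observed outcome)

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Least-squares estimates of slope and intercept: the "pattern" extracted from the data.
slope = sum((x - mean_x) * (y - mean_y) for x, y in data) / sum(
    (x - mean_x) ** 2 for x, _ in data
)
intercept = mean_y - slope * mean_x

def predict(x):
    """Apply the learned rule to a new, unseen input."""
    return slope * x + intercept

print(f"learned rule: outcome = {slope:.2f} * input + {intercept:.2f}")
print(f"prediction for input 6: {predict(6):.2f}")
```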

As you can easily see, algorithms are not only useful but powerful tools in a society interested in continuously improving and enhancing knowledge. Indeed, algorithms are on their way to serving such an important function in how we act and live in society that they will become as much a part of our social and work lives as other human beings. In other words, the ability of algorithms to analyze, work with and learn from external data means that algorithms today have reached a level where they can interact and partner with the outside (human) world.

The rise of algorithms in organizations

When you look around today and see what excites people about the future, it quickly becomes clear that the influence of our new hero (the algorithm in action) is rapidly growing, especially in domains where the potential for realizing significant cost savings is high. One such domain concerns our work life, where algorithms are increasingly becoming part of how organizations are managed.1 Although it may be a scary development for some of us, there are good reasons why algorithms are applied to a wide variety of problem-solving operations.2

Let us first look at the economic benefits. Current estimates show that the application of AI in business will add at least $13trn to the global economy in the next ten years. In a recent report by PwC, it was predicted that using AI at a larger scale – across industries and society – could boost the global economy by $15.7trn by 2030.3,4

Why do we expect AI to contribute in such enormous ways to the global economy? Mainly because algorithms are expected to have an impact on how businesses are managed and controlled (as indicated by 56% of the managers interviewed by Accenture) and will therefore facilitate the creation of a more interesting and effective work context (as indicated by 84% of the managers interviewed by Accenture).5,6 This enhancement in effectiveness will ensure economic growth. Indeed, surveys worldwide indicate that the adoption of algorithms in the work context will help businesses fulfil their potential and create larger market shares.7,8

For some, these numbers suggest that algorithms are steroids for companies wanting to perform better and faster. It is in any case a reality that companies today are developing new partnerships between machines and AI on the one hand, and humans on the other. Developing and promoting this kind of partnership also has an important implication for humankind. The new technology available to push companies’ productivity and performance to a higher level is bound to take steadily more autonomous forms that enable humans to offload parts of their jobs. Importantly, this development is not something that is likely to happen tomorrow. In fact, it has arrived already. AI is developing so fast that an increasing number of machines are already capable of autonomous learning. Indeed, AI has reached a level of development where it is capable of taking actions and making decisions that were previously considered possible only at the discretion of humans.

If this is the case, then it is no surprise that the availability of intelligent machines and their learning algorithms will have a significant impact on how work is executed and experienced. This reality is hard to deny, because the facts seem to be there. As mentioned earlier, Google DeepMind’s autonomous AI beat the world’s best Go player, and recently Alibaba’s algorithms have been shown to be superior to humans in the basic skills of reading and comprehension.9

If such basic human skills can be left to machines and those machines possess the ability to learn, what then will the future look like? This predicted (and feared?) change in the nature of work will be seen across a broad range of jobs and professions. It is already widely accepted that automation of jobs in the business world is happening. For example, algorithms are being employed to recruit new staff, decide which employees to promote, and manage a wide range of administrative tasks.10,11,12

But companies are not just investing in complex algorithms for passive administrative tasks such as hiring the best employees. Algorithms are also already being used for more active approaches. For example, the bank JPMorgan Chase uses algorithms to track employees and assess whether or not they act in line with the company’s compliance regulations.13 Organizations thus see the benefit of applying algorithms to the daily activities of their employees.

As another case in point, companies have set out to enable algorithms to track how satisfied employees feel, in order to predict the probability of them resigning. For any organization this type of data is important and useful in promoting effective management. After all, once the right kind of people are working in the organization, you want to do all you can to keep them. In that respect, an interesting study from the US National Bureau of Economic Research demonstrated that low-skill service-sector workers (where retention rates are low) stayed in the job 15% longer when an algorithm was used to judge their employability.14

Automation and innovation

Automation and the corresponding use of algorithms with deep learning abilities are also penetrating other industries. The legal sector is another area where many discussions are taking place about how and whether to automate services. Legal counsellors have started to use automated advisors to contest relatively small fines such as parking tickets.

The legal sector is also considering the use of AI to help judges go through the evidence collected to reach a verdict in court cases. Here, algorithms are expected to help present the evidence needed to make decisions in which the interests of different stakeholders are involved. The fact that such decisions, which weigh the interests of different stakeholders, may become automated should make us aware that automation in the legal sector introduces risks and challenges. Indeed, such use of algorithms may put autonomous learning machines well on the way to influencing fair decisions within the framework of the law. Needless to say, if questions about human rights and duties gradually become automated, we will enter a potentially risky era in which human values and priorities could be challenged.

Another important industry where technology and the use of automated learning machines are quickly becoming part of the ecosystem is financial services. Traders and those running financial and risk management work in an environment where digital adoption and machine learning are no longer the exception.15 Rather, in today’s financial industry, they seem to have become the default. In fact, the use of algorithms to, for example, manage risk analysis or provide personalized products based on a customer’s profile is unparalleled. It has reached the level where we can confidently say that banks today are technology companies first, and financial institutions second. It is no surprise that the financial industry is forecast to spend nearly $300bn on IT in 2021, up from about $260bn just three years earlier.16

It is not only that banks have embraced technology to the point that it has significantly transformed the workings of their industry. No, it also works the other way around: technology companies are now moving into the financial industry. Indeed, tech companies are becoming banks. Take recent examples such as Alibaba (BABA), Facebook (FB) and Amazon (AMZN); all are moving into providing financial services and products.

A final important area where the use of autonomous learning algorithms will make a big difference is healthcare.17 The keeping and administration of medical files is increasingly being automated to provide doctors with fast, interconnected access to information.18 Transforming the healthcare industry will also impact medical research, so that better results can be achieved in saving human lives.19 Doctors who make use of technology to detect disease and subsequently propose treatment will become more accurate and truly evidence-based. For example, in research examining how to increase cancer detection in images of lymph node cells, an AI-only approach had a 7.5% error rate and a human-only approach a 3.5% error rate. The combined approach, however, had an error rate of only 0.5% – an 85% reduction relative to the human error rate.20

Us versus them?

Putting all these developments together makes it clear that the basic cognitive skills and physical abilities that humans have always brought to the table are about to become a thing of the past. These abilities are vulnerable to becoming automated and optimized further by fast-processing, learning machines. It is this vision – widely advocated in the popular press – that makes many of us wonder where the limits of automation lie; if there are any. After all, if even the skills and abilities that are essential to what makes us human seem ready to be replaced by AI, and this new technology is able to engage in deep learning and thus continuously improve, what will be left for humans in the future?

This reflection is not a new one. In fact, it has been around for quite some time. Indeed, in 1965 British mathematician I.J. Good wrote, “An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” In all fairness, such speculation introduces several existential questions. And, it is those kinds of questions that make people very nervous today about the future of humanity in an ecosystem where technology that may overtake us has arrived. In fact, it introduces us to a potential conflict of interest that will make it hard for us to choose.

On one hand, we are clearly obsessed with the power of AI to bring many benefits to our organizations and society. On the other hand, this obsession also creates a moment of reflection that worries us. A reflection that confronts us with the realization that human limits can be overcome by technology; ultimately, this means that applying technology may render humans obsolete. In our pursuit of more profit and growth, and our desire for greater efficiency, we may be confronted with a sense of disappointment about what it actually means to be human.

This kind of reflective and critical thinking about humanity makes clear that although we fear being replaced, we do look at humans and machines as two different entities. We make a big distinction between humans as us and machines as them. Because of this sentiment, it is clear that the idea of we (humans and machines together) may be difficult to accept. So, if this is the case, how on earth can we talk about a partnership between humans and machines? If we think we are so different that becoming one is impossible, coexistence will be the best situation possible. But even coexistence is feared by many, because this may still lead to humans being replaced by the superior machine.

All these concerns point out that we consider humans to be actors that are limited in their abilities, whereas we regard machines as entities that can develop and reach heights that humans will ultimately be unable to reach. But is this a valid assumption? What does science say? Much of the research out there seems to provide evidence that this view may indeed be valid. Studies suggest that when we look at how people judge the potential of new technology, approach its functionality and predict how they will use it in the future, the conclusion seems to be that humans fear being outperformed. Why does science suggest such a conclusion?

Since the 1970s, scholars have been providing evidence that human experts do not perform as well as simple linear models at tasks like clinical diagnosis, forecasting graduate students’ success, and other prediction problems.21,22 Findings like these have led to the idea that algorithmic judgment is superior to expert human judgment.23 For example, research has shown that algorithms deliver more accurate medical diagnoses when detecting heart disease.24,25,26

Furthermore, in the world of business, algorithms prove better at predicting employee performance, the products customers want to buy, and identifying fake news and information.27,28 An overall analysis of all these effects (what is called a meta-analysis) even reveals that algorithms outperform human forecasters by 10% on average.29 Overall, the evidence suggests that it is (and will increasingly be) the case that algorithms outperform humans.

This scientific evidence, combined with our tendency to think of humans and machines as us versus them, places the question of whether AI will replace people’s jobs at center stage.30 This question is no longer a peripheral one. It dominates many discussions in business and society, to the extent that websites now exist where you can discover the likelihood of your job being automated in the next 20 years.

In fact, we do not even have to wait for this scenario to happen. For example, in 2018 online retailer Shop Direct announced the closure of warehouses because nearly 2,000 jobs had become automated. The largest software company in Europe, SAP, has also eliminated several thousands of jobs by introducing AI into their management structure.

The framework for today’s society is clearly dominated by the assumption that humans will be replaced by technology whenever possible (human-out-of-the-loop) and that it only makes sense for humans to be part of the business process when automation is not yet possible (contingent participation). Several surveys indicate that it is only a matter of time. For example, an Accenture study revealed that 85% of surveyed executives want to invest more extensively in AI-related technologies by 2020.31 Likewise, a PwC survey revealed that 62% of executives are planning to deploy AI in several management areas.32 Furthermore, a survey by Salesforce Research revealed that, in the service industry, 69% of organizations are actively preparing for AI-based service solutions to be applied. Finally, Yahoo Finance predicts that in 2040 our workforce “may be totally unrecognizable.”33

Why we think about replacing humans

Where does this obsession with replacing humans come from? Is it the human default that once we find a limitation – in this case, our own – we believe it must be eliminated and replaced? Is there simply no room for the weak? A matter of accepting that once a stronger villain arrives in town, the old (and weaker) one is replaced? If this is the case, then this kind of thinking will transform the discussion about the human-AI relationship into a zero-sum game. If one is better (and thus wins), then the other loses (and is eliminated). Where does the belief in this logic come from?

To answer this question, it is worthwhile to look at the distinction that the famous French philosopher René Descartes made between mind and body.34 The body allowed us to do physical work, but, with the industrial revolution taking place, we were able to replicate our physical strength by utilizing machines. The enormous advantage was that we could now work faster and create more growth and profit. Importantly, however, it also allowed us to free ourselves from physical labor and move our attention towards the power of our brain. This led to humans becoming more sophisticated and creative, and able to come up with new ways of dealing with reality. Our move towards the mind, and away from the body, meant that we submitted for the first time to the machine. With machines doing the mindless physical work, rendering the human body obsolete, we were then able to devote most of our time to work that requires the application of the mind.

In the 21st century, it is our mind that is being challenged by the technology revolution. Our mental capacity simply cannot compete with the speed at which algorithms process data, or with their ability to learn and optimize almost any outcome in unlimited ways. These developments mean that, as a society, we have entered yet another phase of great opportunities which can benefit and further our interests. This time, however, the opportunity is not the augmentation of our physical strength to bring material success, but the augmentation of our cognitive strength. When we use the idea of body and mind to look at these developments, we may well have reason to be afraid.

In the past, we became dependent on the machine to do our physical work. If the present and future follows the path of the past, does this mean that we will now also become dependent on technology to do the work of the mind? If we adopt a rational point of view, where we consider ourselves as primarily striving for optimization, this kind of dependence will definitely happen. We know that we live in a time where a new type of super mind – AI that goes well beyond the cognitive abilities of humans – has arrived. At the same time, we are being bombarded with news that the authentic human sense of intelligence is failing when we compare it to the efficiencies of artificial intelligence.

Obviously, it is somewhat of an irony that we have created this challenge ourselves. Beyond that, it is a cynical sentiment that reminds us the end may be near. In fact, if algorithms now replace the human mind (after the machine replaced the body), we may have nowhere else to run. Wasn’t it the case that there is only body and mind? If both are replaced, in which direction do humans move? Do we need to now think about whether the human race is needed at all? Is it time to ask ourselves where, if at all, we can use humans in the cycle of algorithms that we are creating?

As indicated earlier, in some industries (e.g. financial services, healthcare) automation seems to be rapidly becoming the dominant voice. But, looking to the future, it will not only be in those industries that humans risk becoming inferior to algorithms. Telling in this respect is the 2018 Deloitte Global Human Capital Trends survey and report of business and HR leaders. This survey found that 72% of leaders indicated that AI, robots and automation are quickly becoming the most important investment areas.

When innovating becomes leading

If body and mind can be replaced, man himself should be replaced. It sounds like science fiction, but all the signs seem to be there. So, if this is really happening, the next question to answer will be whether we submit to the machine and its corresponding technology.

In the volatile and uncertain business environment of today, this idea may not sound too crazy. Hasn’t it been suggested that the kind of leader needed to survive such circumstances is one who has superior data management and utilization skills? One who is able to produce specific cost-saving recommendations, and enables organizational efficiency and productivity? And, most importantly, is able to deliver all of this at lightning speed! Yes, from this point of view, ladies and gentlemen, we could argue that the demand for a new leader has arrived and it is not the human kind. In fact, as a society we have landed in a new industrial revolution – and this one is led by algorithms. Human leadership may not even survive the impact of AI. If so, will this change of leadership happen smoothly and without opposition?

Given all the benefits that our new automated leader brings us, resistance may not only be futile, but even non-existent. It should be, if we as humans react rationally. As rational beings we should strive for maximizing our own interests. And, as we can see it now, all the benefits coming along with the increase of automation can only create more efficient lives for us. So, our rationality says a big yes to this new leadership situation.

But it is not only our rationality that is at play. Emotions are likely to play a role as well. All these benefits also create a comfortable situation that humans will easily adjust to and may even become addicted to. And, once we become addicted, we will comply because it makes us happy. As a matter of fact, research shows that machines can trigger the reward centers in our brain (one of the reasons why humans have become so addicted to continuously checking their smartphones). The reward center releases dopamine, a neurotransmitter that creates a feeling of happiness. But, as with any addiction, humans run the risk of seeking these rewards ever more often. They want to maintain this feeling of happiness, so they will increasingly feel a need for more automation. Since our automated leader seems able to give us what we want, and as such make us addicted, human compliance is likely to follow. OK, it is clear humans will surrender. Autonomous algorithms are here to stay and – could it really be true? – will lead us.

But, before you close this book and accept the idea of an algorithm telling you tomorrow what to do, might I introduce you to another reality? A reality that brings a more complex view of leadership and the potential role that algorithms will play. Allow me to start with a first request: think about whether an optimizing leader really constitutes leadership. Is a leader simply the combination of a strong and smart person? Is leadership something that can be achieved by body and mind combined in one role? If so, then the smart machine of today is truly the winner. But I beg to differ. For the sake of the argument, let us take a quick look at how exactly algorithms learn, and whether this fits the leadership process as we know it in today’s (human) society.

Do limits exist for self-learning machines?

To understand how algorithms learn, it is necessary to introduce the English mathematician Alan Turing. Portrayed by actor Benedict Cumberbatch in the movie The Imitation Game, Turing is best known for deciphering the Enigma code used by the Germans during the Second World War. To achieve this, he developed an electro-mechanical machine called the Bombe. The fact that the Bombe achieved something that no human was capable of led Turing to think about the intelligence of machines.

This led to his 1950 article, ‘Computing Machinery and Intelligence,’ in which he introduced the now-famous Turing test, still considered today to be the crucial test for determining whether a machine is truly intelligent. In the test, a human interacts with another human and a machine. The participant cannot see the other human or the machine and can only rely on information about how the other, unseen party behaves. If the human is not able to distinguish the behavior of the other human from the behavior of the machine, it follows that we can call the machine intelligent. It is these behavioral ideas of Turing’s that still significantly influence the development of learning algorithms today.

The fact that observable behaviors form the input to learning is no surprise, as behaviorism was the dominant school of psychology in Turing’s time. This stream within psychology refrained from looking inside the human mind. The mind was considered humans’ black box (interestingly enough, the same is being said of AI nowadays), as it was not directly observable. For that reason, scientists back then argued that the mind should not be studied; only behaviors could be considered true indicators of what humans felt and thought.

To illustrate the dominance of this way of thinking, consider the following joke: two behaviorists walk into a bar. One says to the other: “You’re fine. How am I?” In a similar vein, today we assume that algorithms can learn by analyzing data in ways that identify observable patterns. Those patterns teach algorithms the rules of the game. Based on these rules, they make inferences and construct models that guide predictions. Thus, in a way, we could say that algorithms decide and advise on strategies based on the patterns observed in data. These patterns inform the algorithm what the common behavior is (the rule of the context of the data), and the algorithm subsequently adjusts to it.
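As a rough sketch of this idea (again an illustration of my own, with hypothetical situations and responses rather than data from any real system), such an algorithm simply tallies how people have responded in past situations and then reproduces the most common response. It captures the observable pattern, but nothing about why people behaved that way, and a situation it has never observed leaves it empty-handed.

```python
# A hypothetical sketch: "learning the rule of the context" as counting observed
# behaviors and imitating the most frequent one. Situations and responses are invented.
from collections import Counter, defaultdict

# Hypothetical observations: (situation, observed human response)
observations = [
    ("greeting", "smile"), ("greeting", "smile"), ("greeting", "nod"),
    ("complaint", "apologize"), ("complaint", "apologize"), ("complaint", "ignore"),
]

# "Training": tally responses per situation to extract the dominant pattern.
pattern = defaultdict(Counter)
for situation, response in observations:
    pattern[situation][response] += 1

def imitate(situation):
    """Reproduce the most frequently observed behavior for a known situation."""
    if situation not in pattern:
        return None  # an unseen situation: there is no observed pattern to fall back on
    return pattern[situation].most_common(1)[0][0]

print(imitate("greeting"))         # 'smile' (imitation of the observed pattern)
print(imitate("complaint"))        # 'apologize'
print(imitate("change of heart"))  # None: no rule was ever observed for this
```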

Algorithms thus act in line with the observable patterns in the data they are fed. These observable patterns (which reflect the behaviors Turing referred to), however, do not lead algorithms to learn what lies behind those patterns. In other words, they do not allow algorithms to understand the feelings and the deeper level of thinking, reflection and pondering that hide beneath the observable behaviors. This means that algorithms can perfectly imitate humans (hence the title of the movie) and pretend to be human, but can they really function in relationships in the way leaders must? Can algorithms, which supposedly display human (learned) behaviors, really survive and function in human social relationships?

Consider the following example. Google Duplex recently demonstrated AI having a flawless conversation over the phone while making a dinner reservation.35 The restaurant owner did not have a clue he was talking to an AI. But imagine what would happen if unexpected events occurred during such a conversation. (Note that the mere fact that you are able to imagine such a scenario already makes you different from the algorithm, which would never consider it.) What if the restaurant owner suddenly had a change of heart and told the AI that he does not want to work that evening, despite the fact that the restaurant is listed online as being open that same evening? Would the AI be able to take perspective and give a reasonable (human) response?

In all honesty, this seems less likely. It is one thing for an algorithm to know the behaviors that humans usually show and, based on those observations, to develop a behavioral repertoire for dealing with most situations. It is quite another to understand the meaning behind human behaviors and to respond to it in an equally meaningful way. And here lies the potential limitation of the algorithm as a leader. At this moment, an algorithm cannot understand the meaning of behavior in a given context. AI learns and operates in a context-free way, whereas humans have the ability to account for the situation in which behaviors are shown – and, importantly, we expect this skill from leaders. As Melanie Mitchell noted in her book Artificial Intelligence: A Guide for Thinking Humans: “Even today’s most capable AI systems have crucial limitations. They are good only at narrowly defined tasks and utterly clueless about the world beyond.”

As a side note, this logic of meaning and taking perspective is something that unfortunately seems to be forgotten by those saying that we have replaced Descartes’s body and mind, making humans less needed. Yes, Descartes identified the two separate entities of body and mind, but he also noted that they are connected. We still use this assumption today when we say a healthy mind makes for a healthy body. But what makes for the connection? What is the glue that holds mind and body so closely aligned? In philosophical terms we may say it is the soul. The soul that gives us passion, emotions and a sense of intuitive interpretation with respect to the things we see, do and decide. As such, we may be able to replace the body and the mind, but do the ones replacing us also have the soul to make the total entity work? If body and mind cannot connect, then leadership without heart is the consequence.

And, think about it, would you then simply comply and follow orders from an intelligent machine leader? Those who are big fans of Star Trek will know the character Data, an android who is trying to learn how to understand human emotion. In one episode, Data has to take over command of the starship USS Enterprise. This experience turned out to be a useful lesson for both the robot and the human crew in how important human emotions are to leadership.

Today, we have arrived in an era where this scenario may not be science fiction for too much longer. But with such futuristic views on leadership in sight, we also need to understand the kind of society and organizations we would like to see. How do we want to lead them? We need to come up with an answer to what leadership means to us and who should take up the leadership position, including assessing our own strengths and weaknesses.

1 Reeves, M. (2015). ‘Algorithms Can Make Your Organization Self-Tuning.’ Harvard Business Review. May 13. Retrieved from: https://hbr.org/2015/05/algorithms-can-make-your-organization-self-tuning

2 Andrews, L. (2019). ‘Public administration, public leadership and the construction of public value in the age of algorithm and big data.’ Public Administration, 97(2), 296-310.

3 Fountaine, T., McCarthy, B., & Saleh, T. (2019). ‘Building the AI-powered Organization.’ Harvard Business Review, July-August, 2-13.

4 Lehnis, M. (2018). ‘Can we trust AI if we don't know how it works?’ Retrieved from https://www.bbc.com/news/business-44466213

5 Accenture (2017). ‘AI as the new UI – Accenture Tech Vision.’ Retrieved from: https://www.accenture.com/t20171005T065832Z__w__/us-en/_acnmedia/Accenture/next-gen-4/tech-vision-2017/pdf/Accenture-TV17-Trend-1.pdf

6 Accenture (2018). ‘Realizing the full value of AI.’ Retrieved from: https://www.accenture.com/_acnmedia/pdf-77/accenture-workforce-banking-survey-report

7 Chui, M., Henke, M., Miremadi, M. (2018). ‘Most of AI’s Business Uses Will Be in Two Areas.’ Harvard Business Review. July 20. Retrieved from: https://hbr.org/2018/07/most-of-ais-business-uses-will-be-in-two-areas

8 McKinsey (2018). ‘Notes from the AI frontier: Applications and value of deep learning.’ Retrieved from: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning

9 Bloomberg (2018, January 15th). ‘Alibaba's AI Outguns Humans in Reading Test.’ Retrieved from https://www.bloomberg.com/news/articles/2018-01-15/alibaba-s-ai-outgunned-humans-in-key-stanford-reading-test

10 Gee, K. (2017). ‘In Unilever's Radical Hiring Experiment, Resumes Are Out, Algorithms Are In.’ The Wall Street Journal. Retrieved from https://www.wsj.com/articles/in-unilevers-radical-hiring-experiment-resumes-are-out-algorithms-are-in-1498478400

11 Glaser, V. (2014). ‘Enchanted Algorithms: How Organizations Use Algorithms to Automate Decision-Making Routines.’ Academy of Management Proceedings, 2014(1), 12938.

12 Hoffman, M., Kahn, L.B., & Li, D. (2017). ‘Discretion in hiring.’ NBER Working Paper No. 21709. Retrieved from: https://www.nber.org/papers/w21709?sy=709

13 Son, H. (2015). ‘JP Morgan algorithm knows you’re a rogue employee before you do.’ (8 April 2015). Retrieved from: https://www.bloomberg.com/news/articles/2015-04-08/jpmorgan-algorithm-knows-you-re-a-rogue-employee-before-you-do.

14 Hoffman, M., Kahn, L.B., & Li, D. (2017). ‘Discretion in hiring.’ NBER Working Paper No. 21709. Retrieved from: https://www.nber.org/papers/w21709?sy=709

15 Fethi, M.D., & Fotios, P. (2010). ‘Assessing bank efficiency and performance with operational research and artificial intelligence techniques: A survey.’ European Journal of Operational Research, 204(2), 189-198.

16 Greer, S., Lodge, G., Mazzini, J., & Yanagawa, E. (2018). ‘Global Tech spending forecast: Banking edition.’ 20 March 2018. Retrieved from: https://www.celent.com/insights/929209647

17 Patel, V.L., Shortliffe, E.H., Stefanelli, M., Szolovits, P., Berthold, M.R., & Abu-Hanna, A. (2009). ‘The coming of age of artificial intelligence in medicine.’ Artificial Intelligence in Medicine, 46(1), 5-17.

18 Leachman, S.A., & Merlino, G. (2017). ‘The final frontier in cancer diagnosis.’ Nature, 542, 36.

19 Bennett, C.C., & Hauer, K. (2013). ‘Artificial intelligence framework for simulating clinical decision-making: A Markov decision process approach.’ Artificial Intelligence in Medicine, 57(1), 9-19.

20 Wang, D., Khosla, A., Gargeya, R., Irshad, H., & Beck, A.H. (2016). ‘Deep learning for identifying metastatic breast cancer.’ arXiv, preprint arXiv:1606.05718. Copy at http://j.mp/2o6FejM

21 Dawes, R. M., (1979). ‘The robust beauty of improper linear models in decision making.’ American Psychologist, 34(7), 571-582.

22 Dawes, R. M., Faust, D., & Meehl, P. E. (1989). ‘Clinical versus Actuarial Judgment.’ Heuristics and Biases, 716-729.

23 Kleinmuntz, D. N., & Schkade, D. A. (1993). ‘Information displays and decision processes.’ Psychological Science, 4(4), 221-227.

24 Adams, I.D., Chan, M., Clifford, P.C., et al. (1986). ‘Computer aided diagnosis of acute abdominal pain: A multicentre study.’ British Medical Journal, 293, 800-804.

25 Beck, A. H., Sangoi, A. R., Leung, S., Marinelli, R. J., Nielsen, T. O., Van De Vijver, M. J., & Koller, D. (2011). ‘Systematic analysis of breast cancer morphology uncovers stromal features associated with survival.’ Science Translational Medicine, 3(108), 108ra113.

26 Grove, W. M., Zald, D. H., Lebow, B. S., Snitz, B. E., & Nelson, C. (2000). ‘Clinical versus mechanical prediction: A meta-analysis.’ Psychological Assessment, 12(1), 19-30.

27 Maidens, J., & Slamon, N.B. (2018). ‘Abstract 12591: Artificial intelligence detects pediatric heart murmurs with cardiologist-level accuracy.’ Circulation, 138(suppl_1).

28 Highhouse, S. (2008). ‘Stubborn Reliance on Intuition and Subjectivity in Employee Selection.’ Industrial and Organizational Psychology, 1 (3), 333-342.

29 Schweitzer, M.E., & Cachon, G.P. (2000). ‘Decision bias in the newsvendor problem with a known demand distribution: Experimental evidence.’ Management Science, 46(3), 404-420.

30 Frey, C. B., & Osborne, M. A. (2017). ‘The future of employment: how susceptible are jobs to computerisation?’ Technological Forecasting and Social Change, 114, 254-280.

31 Accenture (2017). ‘The promise of Artificial Intelligence: Redefining management in the workforce of the future.’ Retrieved from: https://www.accenture.com/no-en/insight-promise-artificial-intelligence

32 PwC (2019). ‘AI Predictions: Six AI priorities you can’t afford to ignore.’ Retrieved from: https://www.pwc.com/us/en/services/consulting/library/artificial-intelligence-predictions-2019?WT.mc_id=CT13-PL1300-DM2-TR1-LS4-ND30-TTA5-CN_ai2019-ai19-digpul-1&eq=CT13-PL1300-DM2-CN_ai2019-ai19-digpul-1

33 Salesforce Research (2019). ‘State of Service.’ Insights and trends from over 3,500 service leaders and agents worldwide. Retrieved from: https://www.salesforce.com/blog/2019/03/customer-service-trends.html

34 Hoffman, P. (1986). ‘The Unity of Descartes’ Man,’ The Philosophical Review 95, 339-369.

35 Google Duplex (2018). Retrieved from: https://www.youtube.com/watch?v=D5VN56jQMWM
