THE DANGERS OF AUTOMATION
Nicholas Carr wrote in The Glass Cage, “Automation severs ends from means. It makes getting what we want easier, but it distances us from the work of knowing.”11
There is enormous danger and damage to be done in distancing ourselves from knowing. It means potentially cutting ourselves off from the needs of clients if they are first interacting with bots screening them for services. It could mean using automation to send out many times more fundraising appeals and not listening to the complaints from current and prospective donors. It could mean hiding behind screens instead of stepping out to build stronger relationships with constituents. And it could mean allowing an insidious form of racism and sexism to take hold unabated inside your organization.
We tend to see work done by computers and robots as incapable of being swayed by emotions and therefore incapable of being racist, sexist, or otherwise biased or unfair. However, the code that powers smart tech was at some point created by people and carries forward their opinions, assumptions, and biases. When this code makes decisions that are discriminatory, we call it embedded bias. The renowned data scientist Cathy O'Neil says, “Algorithms are opinions embedded in code.”12
Embedded biases are very difficult to undo. Programmers make literally thousands of choices under the hood of smart tech that the rest of us can't see. Automation is increasingly being used to make vital, life-changing decisions about people. The choices that programmers (overwhelmingly white men) make, based on their own experiences and backgrounds, therefore carry ever-greater weight.
For instance, smart tech is increasingly used to screen applications for mortgages. It is illegal to ask questions about, say, race, in these applications, so programmers create “proxies,” or substitute questions, to build a profile of an applicant. A zip code, for example, could be used as a proxy for “safe neighborhood.” Safe generally means white, particularly for white programmers drawing on their own life experiences. In addition, data is needed to train smart tech systems. An automated mortgage screening process will be trained on enormous data sets of past mortgage application decisions. Black people were historically denied mortgages at astonishing rates and are therefore woefully underrepresented in these data sets. In this way, seemingly benign programming decisions, mundane proxies, and historic data create embedded biases against people of color that are difficult to see from the outside.
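To make this mechanism concrete, here is a minimal, hypothetical sketch in Python. The zip codes, approval counts, and the toy “model” are all invented for illustration; real mortgage screening systems are vastly more complex, but the pattern is the same: the proxy feature carries the history forward.

```python
# A hypothetical sketch of how a "neutral" proxy plus historical training
# data can reproduce bias. The zip codes, decisions, and the toy "model"
# below are all invented for illustration.

# Each historical record is (zip_code, approved). Suppose past lending
# practice approved applicants from zip "10001" far more often than
# applicants from zip "60621", for reasons unrelated to creditworthiness.
historical_decisions = [
    ("10001", True), ("10001", True), ("10001", True), ("10001", False),
    ("60621", False), ("60621", False), ("60621", False), ("60621", True),
]

def train(records):
    """'Train' by memorizing the historical approval rate per zip code."""
    totals, approvals = {}, {}
    for zip_code, approved in records:
        totals[zip_code] = totals.get(zip_code, 0) + 1
        approvals[zip_code] = approvals.get(zip_code, 0) + int(approved)
    return {z: approvals[z] / totals[z] for z in totals}

def screen(model, zip_code, threshold=0.5):
    """Approve only if this zip code was historically approved often enough."""
    return model.get(zip_code, 0.0) >= threshold

model = train(historical_decisions)
print(screen(model, "10001"))  # True: past approval begets future approval
print(screen(model, "60621"))  # False: past denial begets future denial
```

No one asked about race anywhere in this code, yet the outcome tracks the historical pattern exactly. That is the point of the proxy problem.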
Once bias is baked into smart tech, it stays there forever and becomes self-reinforcing over time. Nancy Smyth, former dean of the School of Social Work at the University at Buffalo, State University of New York, says, “Code is racist because society is racist.”13
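The self-reinforcing dynamic can be sketched the same way. Continuing the hypothetical example above (reusing its train, screen, and historical_decisions), each round of automated decisions is appended to the training data, and the invented approval gap between the two zip codes widens rather than closing.

```python
# Hypothetical continuation of the sketch above: feed each round of
# automated decisions back in as new "historical" data.
records = list(historical_decisions)
for round_number in range(3):
    model = train(records)
    for zip_code in ("10001", "60621"):
        # The model's decision today becomes training data tomorrow.
        records.append((zip_code, screen(model, zip_code)))
    rates = train(records)
    print(round_number, {z: round(rates[z], 2) for z in rates})
# The approval-rate gap grows each round (0.80/0.20, 0.83/0.17, 0.86/0.14)
# because the model keeps confirming the pattern it learned.
```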
In her book Race After Technology, Ruha Benjamin describes a “New Jim Code.” It is a take on the old Jim Crow laws that powered decades of institutional racism in the post-Reconstruction South. She writes, “The animating force of the New Jim Code is that tech designers encode judgments into technical systems but claim that the racist results of their designs are entirely exterior to the encoding process. Racism thus becomes doubled—magnified and buried under layers of digital denial.”14 She later writes, “… the outsourcing of human decisions is, at once, the insourcing of coded inequity.” We will explore the ethical use of smart tech throughout this book.