Prologue: A DEMOCRACY OF AVATARS OR AN AVATAR OF DEMOCRACY?
The Internet has opened up an entirely new world of human interconnection, with unprecedented, unimagined and unimaginable potential for the instant exchange of ideas, information, views, projects, protests and complaints. It has been viewed by some as an antidote to the fragmentation of society, the separation of social classes and categories, the isolation of individuals, geographical seclusion. An antidote to indifference, to laxity, to the feeling that the voice of the single individual cannot be heard and does not count.
New technologies have provided a megaphone and an ear for these individual voices. Not only can individuals find real-time information on any topic and follow and participate in online debates: they can initiate these discussions, they can join groups, they can meet like-minded people and create communities, they can become not only followers but also opinion leaders.
But are digital technologies an antidote to the modern crisis in public participation? Can they offer a solution to the crisis of public trust and governance by including citizens in deliberative processes at all levels? Do they ultimately improve democracy? In many ways they have increased, and even reinvented, public participation. Electoral processes have been transformed by social media. Campaigning has evolved, with platforms that can reach any person, anywhere, through an internet connection or through encrypted messaging services such as WhatsApp. Marginalised groups can be reached. Diasporas can be reached. Electoral concerns may be shared at a global level, and irregularities and fraud may thus be more easily exposed.
Digital technologies undoubtedly help to organise electoral processes (voters’ lists, registration procedures) more effectively and transparently. This proved invaluable during the Covid-19 pandemic. But digitally managed electoral processes are vulnerable to cyberattacks aiming to suppress voter turnout, tamper with election results, steal voter information, conduct cyberespionage for the purposes of coercion and manipulation, and publicly discredit individuals.
Digital technologies have also amplified the potential for abuse, hate speech, bullying (online, or offline through online coordination), disinformation and fake news. “Fake news” has existed since Johannes Gutenberg invented the printing press around 1439. In August 1835 the New York Sun newspaper boosted its sales after announcing that a living unicorn had been discovered on the moon: its readers initially believed it, and they did not stop buying the Sun even when the paper later confessed the hoax. Sensationalism sells well. There is fake news we are keen to believe. And there are true stories we find hard to believe.
Disinformation is a matter of business, but not only: it is also a matter of power and influence. Specifically, of influence on votes. Fake news affects people’s choices and their votes. When elections are close, fake news may determine who ultimately wins and who loses. The appetite for influence, and the desire and need to target the most suitable public, have grown accordingly.
But there is another aspect of digital technologies that troubles electoral processes: the potential clash between the business-minded private owners of the information highways and the public interest in ensuring free and fair elections. Micro-targeting of voters is a clear example: like consumers of any product, potential voters may be (openly or covertly) profiled and steered towards one candidate or the other, their opinion clouded or nudged without their realising it. Are we talking about profitable business, or about tampering with the fundamental freedom to form one’s opinion in order to cast a free vote?
There are several fundamental rights at issue, and particularly during an electoral process they need to be balanced against each other, for the sake of democracy: the freedom of expression and the freedom of the press; the right to vote and to be elected; the freedom of association and to create and belong to political parties; the freedom of assembly; the right to respect for one’s private life; the right of access to information; the right to data protection; commercial freedom; the right of access to a court. Just to name the most obvious.
The complexity of the human rights balancing exercise has become a feature of our globalised societies, where the number, nature and complexity of our interactions have become virtually endless. We are learning how to deal with such complexity. We aspire to the same level of human rights protection online and offline. We apply the same mechanisms. Is this realistic? We will need to find new avenues and make compromises. Traditional remedies risk being ineffective against digital wrongdoing within the short span of an electoral campaign. The standards against corruption in the funding of political parties and electoral campaigns, spending limits and campaign finance controls, and subsidies for campaign communications have limited applicability and efficacy in times of digital political advertising. The same is true for pre-poll blackouts, media regulation, and rules on political advertising, including impartiality requirements, subsidies and free airtime.
Through digital technologies, information spreads fast, is often unaccountable, and leaves traces even after it is removed: can the victims of hate speech turn to a court? Can the author of removed content turn to a court? Our judicial systems and rules are not equipped for this speed. And then there are the numbers. According to its Community Standards Enforcement Report (https://transparency.facebook.com/community-standards-enforcement#hate-speech), Facebook took action on, and in most cases removed, 22.1 million pieces of content for hate speech between July and September 2020. It is hard to imagine these cases ending in a court decision within a meaningful time. And what should the remedy be if those pieces of content were a legitimate form of electoral campaigning? How could an unsuccessful candidate measure and prove the damage of an unfair campaign of voter micro-targeting? How should voters measure and bring evidence of undue interference with their right to form an opinion in order to exercise their right to a free vote? These questions sound terribly serious and hollow at the same time: will any of these actions be attempted at some stage? To the best of my knowledge, this has not happened yet (to a visible extent, at least). It might not happen in the near future either. And yet, this is the digital environment. Greater digital technology literacy is certainly one of the antidotes. But this environment cannot be left totally unregulated.
The Council of Europe’s Venice Commission has elaborated a set of eight “principles for a fundamental rights-compliant use of digital technologies in electoral matters”1, which strive to encapsulate these concerns. These principles are as follows:
1. The principles of freedom of expression, implying a robust public debate, must be translated into the digital environment, in particular during electoral periods.
2. During electoral campaigns, a competent impartial Electoral Management Body (EMB) or judicial body should be empowered to require private companies to remove clearly defined third-party content from the Internet, based on electoral laws and in line with international standards.
3. During electoral periods, the open Internet and net neutrality need to be protected.
4. Personal data need to be effectively protected, particularly during the crucial period of elections.
5. Electoral integrity must be preserved through periodically reviewed rules and regulations on political advertising and on the responsibility of Internet intermediaries.
6. Electoral integrity should be guaranteed by adapting the specific international regulations to the new technological context and by developing institutional capacities to fight cyber threats.
7. The international cooperation framework and public-private cooperation should be strengthened.
8. The adoption of self-regulatory mechanisms should be promoted.
These principles, once duly endorsed by the political actors, may make a decisive contribution to the acknowledgment that digital technologies can only serve democracy if they are operated within the framework of its fundamental principles and rules.
Digital technologies have empowered every individual to express him or herself freely on the net, with virtually no limits. These digital avatars have a life of their own: they can expand and develop towards new forms of freedom and interaction, unleashing an appetite for public participation, leading to creative forms of democratic participation and to renewed enthusiasm for the public good, strengthening monitory democracy. But digital technologies can also manipulate these avatars, poison interactions, and stimulate hatred and division. Digital avatars may thus end up with only the illusion of being the actors of a new form of democracy. They could end up losing trust. The digital environment should be accountable to the real world, lest our digital avatars, our connected selves, remain confined in an avatar of democracy.
Simona Granata-Menghini
1. Adopted by the Venice Commission at its 125th Plenary Session, 11-12 December 2020, https://www.venice.coe.int/webforms/documents/?pdf=CDL-AD(2020)037-e.