
Facebook’s approach to content moderation receives flak from EU commissioners



For Haugen, Facebook presented content moderation as a false choice between censorship and free speech. She noted that non-content-based solutions existed and were more effective, but would require making the platform smaller and slower.


At Facebook, the content moderation team reports up through the same branch of the company as the public affairs team, an internal structure Haugen said has a chilling effect on content moderators when the author of the controversial content is a high-profile politician.




Facebook’s approach to content moderation slammed by EU commissioners



This balancing approach to free speech among European democracies is apparent from the content of the DSA. The text does include some elements worthy of praise. This includes greater transparency obligations on large social media platforms to lift the veil on their removal decisions and publish annual reports on content moderation and enforcement. This will allow users to better understand how content is recommended to them and how moderation decisions are made; users will also enjoy a right to reinstatement if platforms make mistakes. It is also a positive step that the DSA does not impose general monitoring obligations on social media platforms that would further increase the use of content-filtering algorithms to scan, flag, and remove supposedly illegal content.


The obvious advantage of combining IHRL with distributed content moderation is that it reserves centralized content moderation for the worst and most heinous content while providing users agency over the content they wish to see and engage with. This could serve as a brake on the censorship race to the bottom, where various governments and interest groups insist that speech they find particularly concerning should be prohibited and platforms find it difficult to resist, as they are ultimately more concerned with stakeholder management and public relations issues that affect their bottom line than with upholding principled free speech norms.
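To make the contrast concrete, here is a minimal sketch, assuming posts arrive already labelled (by classifiers or user reports), of what such a layered model could look like in code: a narrow centralized check applies to everyone, while each user opts into further filters of their own choosing. All names (CENTRAL_CATEGORIES, UserPreferences, visible_to) are invented for illustration and describe no actual platform's system.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, Set

# Hypothetical sketch only: a thin centralized layer removes the most egregious
# categories for everyone, while each user chooses which additional filters
# apply to their own feed. Category names and labels are invented and assume
# posts have already been labelled upstream.

CENTRAL_CATEGORIES: Set[str] = {"child_abuse", "terrorism", "credible_violent_threat"}

# User-selectable filters: filter name -> predicate over a post's labels.
OPTIONAL_FILTERS: Dict[str, Callable[[Set[str]], bool]] = {
    "hide_graphic_violence": lambda labels: "graphic_violence" in labels,
    "hide_nudity": lambda labels: "nudity" in labels,
    "hide_insults": lambda labels: "insult" in labels,
}

@dataclass
class UserPreferences:
    enabled_filters: Set[str] = field(default_factory=set)

def visible_to(user: UserPreferences, post_labels: Set[str]) -> bool:
    """Centralized removals apply to all users; optional filters only to those who enabled them."""
    if post_labels & CENTRAL_CATEGORIES:  # removed platform-wide
        return False
    return not any(OPTIONAL_FILTERS[name](post_labels)
                   for name in user.enabled_filters)

# The same post can be hidden for one user and visible to another.
cautious = UserPreferences(enabled_filters={"hide_graphic_violence"})
permissive = UserPreferences()
print(visible_to(cautious, {"graphic_violence"}))    # False
print(visible_to(permissive, {"graphic_violence"}))  # True
```

The point of the sketch is only that the centralized list stays small and universal, while everything else is a per-user choice rather than a platform-wide prohibition.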


Second, as the United States remains paralysed by partisanship and as the Chinese government aggressively pursues its own distinct approach to digital media based on very different values, the European Union is increasingly emerging as a global policy entrepreneur on digital issues, with influence that resounds far afield. As the UN Special Rapporteur on freedom of opinion and expression David Kaye has repeatedly pointed out, from the more active informal and formal regulation of online content to more robust competition and data protection policies, Europe will de facto regulate the global internet (Kaye 2019). As policymakers all over the world look to Europe for inspiration, this is a unique opportunity for the European Union and its member states to show leadership and demonstrate what truly democratic digital media policies can look like.


These complexities, coupled with the politically sensitive nature of intervening in a space that concerns public debate and involves fundamental rights, led the EU High Level Group on online disinformation to embrace the position also taken by a number of digital rights organisations: that interventions targeted at potentially problematic but often legal content and behaviour should (a) operate within a fundamental rights framework, and (b) avoid interventions targeted directly at content or expression, especially when those interventions are designed by the executive branch and other public authorities (High Level Expert Group on Fake News and Online Disinformation 2018). While Article 10 of the European Convention on Human Rights allows for various speech restrictions, those restrictions must meet the classic three-part test, where interferences with freedom of expression are legitimate only if they (a) are prescribed by law; (b) pursue a legitimate aim; and (c) are proportional and necessary in a democratic society. The European Court of Human Rights has ruled that the right to freedom of expression is not limited solely to truthful information, suggesting that the veracity of content alone may not be a sufficient justification for some approaches to countering disinformation.10 Refusing to directly regulate content (apart from when it is illegal) may seem cautious, but policymakers in fact have many options for making very significant interventions in this space.


The idea that a single effective and proportionate regulatory approach could be designed in such a way as to tackle every one of these matters is highly presumptuous and neglects the wide array of complex social factors underpinning the production and sharing of, and engagement with, such content.


Robert Gorwa is a doctoral candidate in the Department of Politics and International Relations, University of Oxford. His research on platform regulation, content moderation, and other transnational digital policy challenges has been recently published or is forthcoming in Information, Communication & Society, Big Data & Society, Internet Policy Review, and other academic journals. He is currently a fellow at the Weizenbaum Institute for the Networked Society in Berlin.


The world's largest social media companies removed less hate speech from their platforms in 2021 than in the previous year, according to the European Commission's annual review of the firms' content moderation activities, seen by POLITICO.


The European Union is in the middle of a political fight on overhauling how it approaches online content regulation, with the European Parliament working on its revised version of the Digital Services Act. Those proposals would impose hefty fines on companies that don't combat illegal material, including hate speech, as well as require greater transparency over how specific posts were displayed in people's social media feeds.


Consumer protection issues in online services include but extend beyond traditional privacy concerns:87 issues with fraud, scams, manipulation, discrimination, and systemic failures in content promotion and moderation have inflicted devastating individual and collective harms.


An opt-in approach offers a degree of future-proofing that may be difficult to provide through statutory definition alone. Allowing companies to generally opt in ensures that only those that consider themselves infrastructural and understand the requirements choose this model; this is expected to comprise a minority of online services overall. This approach may enable new companies to start with the explicit goal of competing as online infrastructure and would offer all infrastructural companies a strong defense against the technical, legal, and public relations costs resulting from good and bad faith demands for increased content moderation lower in the stack. While challenges exist around the incentives and consequences for infrastructure providers in and outside of the tier, those opting in would be regulated by an entity that prioritizes the goals of online infrastructure. Infrastructural firms outside the tier will have to deal with rules designed for broader online services or gatekeepers and deal with the business realities of any potential intermediary liability changes. Business customers will be able to exercise choice in determining which online service provider may best meet their infrastructural needs.


Content moderation is best defined as a series of practices with shared characteristics which are used to screen user-generated content, including posts, images, videos, or even hashtags, to determine what will make it onto or remain on a social media platform, website, or other online outlet (Roberts, 2019, 2016; Gerrard, 2018). The process often includes three distinct phases. First is editorial review, which refers to oversight imposed on content before it is made available, such as the ratings given to movies prior to their release (Gillespie, 2018). In the case of social media, editorial review often refers to the community standards set by social media platforms.


The last of the three phases of content moderation is community flagging (Gillespie, 2018). Here users report content they believe violates the Community Standards outlined by the company. Reported content is then manually reviewed by employees and a determination is made regarding whether it will be blocked, deleted, or remain on the site. Social media organizations often contract this work out to other organizations. Workers in these roles are dispersed globally across a variety of worksites, and the work itself is often carried out in secret by low-status workers paid very low wages (Roberts, 2016). Workers in these roles suffer panic attacks and other mental health issues as a result of the 1,500 violent, hateful, or otherwise troubling posts they review each week (Newton, 2019).
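As a purely illustrative sketch of that flag-then-review flow, the snippet below models reports entering a queue and a human reviewer recording one of the three outcomes named above (block, delete, or keep). Every name in it (FlagQueue, Report, Decision) is hypothetical and stands in for no platform's actual tooling; it also deliberately omits the contracted-labor conditions the paragraph describes.

```python
from collections import deque
from dataclasses import dataclass, field
from enum import Enum
from typing import Deque, Dict, Optional

# Hypothetical sketch of the community-flagging workflow described above:
# users report content, reports wait in a queue, and a human reviewer records
# one of three outcomes. Names are invented; this is no platform's real system.

class Decision(Enum):
    BLOCK = "block"
    DELETE = "delete"
    KEEP = "keep"

@dataclass
class Report:
    post_id: str
    reporter_id: str
    reason: str  # e.g. which Community Standard the reporter believes is violated

@dataclass
class FlagQueue:
    pending: Deque[Report] = field(default_factory=deque)
    decisions: Dict[str, Decision] = field(default_factory=dict)

    def flag(self, report: Report) -> None:
        """A user reports a post; the report joins the manual-review queue."""
        self.pending.append(report)

    def review_next(self, decision: Decision) -> Optional[Report]:
        """A human reviewer takes the oldest report and records an outcome."""
        if not self.pending:
            return None
        report = self.pending.popleft()
        self.decisions[report.post_id] = decision
        return report

# A single report moving through the queue.
queue = FlagQueue()
queue.flag(Report(post_id="p123", reporter_id="u42", reason="hate speech"))
reviewed = queue.review_next(Decision.DELETE)
print(reviewed.post_id, queue.decisions["p123"].value)  # p123 delete
```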


Content moderation scholarship faces an urgent challenge of relevance for policy formation. Emerging policies will be limited if they do not draw on the kind of expansive understanding of content moderation that scholars can provide.


Moreover, discussion tends to focus almost exclusively on the largest, US-based platforms. There are good, or at least understandable, reasons why this is so. These platforms are enormous, and their policies affect billions of users. Their size makes them desirable venues for bad-faith actors eager to have an impact. Their policies and techniques set a standard for how content moderation works on other platforms, they offer the most visible examples, and they drive legislative concerns. But the inordinate attention is also structural. Critics talk about Facebook and YouTube as stand-ins for the entire set of platforms. Journalists hang critical reporting on high-profile decisions, blunders, and leaks from the biggest players. Scholars tend to empirically study one platform at a time, and tend to choose large, well-known platforms where problems are apparent, where data will be plentiful, and that are widely used by or familiar to their research subjects.


And any policy enacted to regulate moderation or curb online harms,2 while it may reasonably have Facebook or YouTube in its sights, will probably in practice apply to all platforms and user-content services. In that case, the result could further consolidate the power of the biggest tech companies, those best able to manage the regulatory burdens. This has been a concern in related areas, for example privacy (Krämer & Stüdlein, 2019) and copyright protection (Samuelson, 2019; Romero-Moreno, 2019). If power asymmetries are to be challenged, we need to understand how different values are engineered into these mechanisms across a wider array of examples.

