Engaging Platforms in Open Scholarship/Models and Mechanisms of Platform Governance and Regulation

Approaches to Platform Governance

Black, Laura W., Howard T. Welser, Dan Cosley, and Jocelyn M. DeGroot. 2011. “Self-Governance Through Group Discussion in Wikipedia: Measuring Deliberation in Online Groups.” Small Group Research 42 (5): 595–634. https://doi.org/10.1177/1046496411406137.

Black et al. analyze how deliberative the online group discussions were that shaped Wikipedia’s policy against personal attacks. Using content and network analysis, the authors found that group members in these discussions provided a great deal of information and proposed solutions, but that other members usually replied by finding fault with those proposals. Despite this tendency, in 21% of the threads users built on one another’s solutions, asked for feedback, and responded to each other’s comments, demonstrating the potential of wiki environments for collaborative work and open deliberation. The authors conclude that the case of Wikipedia could serve as a model for other online communities seeking to collaboratively develop policies to govern themselves.

Broeders, Dennis, and Linnet Taylor. 2017. “Does Great Power Come with Great Responsibility? The Need to Talk About Corporate Political Responsibility.” In The Responsibilities of Online Service Providers, edited by Mariarosaria Taddeo and Luciano Floridi, 315–23. New York: Springer.

Dennis Broeders and Linnet Taylor argue that Online Service Providers (OSPs), such as Google and Facebook, have amassed political power that often surpasses that of some countries. This power allows OSPs to shape and even design the information environments of citizens and governments. The authors point out that OSPs also exert influence in international politics, whether intentionally or otherwise, through their power to organize and mediate access to information and speech. Despite this influence, however, OSPs operate with limited regulation or accountability for their political responsibilities. The authors advocate for a shift from traditional Corporate Social Responsibility (CSR) to a framework of Corporate Political Responsibility (CPR) that recognizes and addresses the political implications of OSPs. Situated within International Relations, Political Economy, and Technology Studies, the chapter calls for a more robust regulatory approach to hold OSPs accountable, particularly for their role in shaping information societies, and emphasizes the need for national and international frameworks to manage OSPs’ political influence.

Doctorow, Cory. 2023. The Internet Con: How to Seize the Means of Computation. New York: Verso.

Setting the tone for the remainder of the volume, Doctorow opens this short text with a provocation for readers: “This is a book for people who want to destroy Big Tech. It’s not a book for people who want to tame Big Tech. There’s no fixing Big Tech” (1). The tool Doctorow proposes readers use? Interoperability. Doctorow’s argument is that various kinds of interoperability (from reverse engineering to adopting formal standards) have immediate relevance and applicability for policy, legal, technical, social, and commercial responses. The book is organized into two parts: the first addresses what interoperability is, how it works, and methods for achieving it. Picking up where the first part ends, the second half focuses on mitigating some of the issues associated with interoperability, such as privacy, harassment, radicalization, and blockchain. Lastly, the book ends with suggestions for further reading, viewing, and listening for those interested in learning more.

Fenwick, Mark, Joseph A. McCahery, and Erik P. M. Vermeulen. 2019. “The End of ‘Corporate’ Governance: Hello ‘Platform’ Governance.” European Business Organization Law Review 20 (1): 171–199. https://doi.org/10.1007/s40804-019-00137-z.

The authors propose that a shift in corporate governance models has occurred, from traditional, hierarchical approaches to the flat, inclusive, accessible, and open approaches characterized by platforms. In the context of this article, the authors understand ‘accessible’ and ‘open’ to mean that a platform and its participants have a clear understanding of the people, processes, and systems involved. ‘Open’ can also refer to protocols and code that are open source (if a platform is decentralized) or to a level of transparency about a platform’s algorithms and code (if it is centralized). The authors argue that the role of regulators and policymakers should be to incentivize firms to adopt this newer approach to governance. However, the authors rely on examples of for-profit mega-platforms (e.g., Amazon, Facebook, Netflix) to make their argument and appear to assume that the best platforms are for-profit ones, ignoring the ethical implications of this framing; the needs and roles of communities beyond serving business interests; and issues with hyper-commercialization and the consolidation of power held by privately owned corporations. In addition, the authors do not address how common (and questionable) platform practices, such as locking in users, might shape the style of governance a platform adopts, or its success for that matter.

Fitzpatrick, Kathleen. 2018. Generous Thinking. Baltimore: Johns Hopkins University Press. https://generousthinking.hcommons.org.

Fitzpatrick ruminates on the current state of academia with a focus on dominant trends toward competition, individualism, and weakening public support. She argues that a substantial shift is required in order to reinstate public trust and build relationships with the larger communities that universities are a part of. Moreover, Fitzpatrick suggests that making scholarship available is a foundational step in collaborating with others, in line with the community engagement for which she advocates throughout the book. Overall, Fitzpatrick argues that such a transition requires an embrace of listening over telling, of care over competition, and of working with the public rather than in isolation and insulation—in short, it requires the generous thinking (and actions) of the book’s title.

Fuster Morell, Mayo. 2014. “Governance of Online Creation Communities for the Building of Digital Commons: Viewed through the Framework of Institutional Analysis and Development.” In Governing Knowledge Commons, edited by Brett M. Frischmann, Michael J. Madison, and Katherine J. Strandburg, 281–312. Oxford: Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199972036.003.0009.

Fuster Morell examines the governance of online creation communities (OCCs): communities of individuals that interact mainly via an online participation platform in order to build a common-pool resource. The author lists eight interrelated dimensions that determine governance in OCCs: 1) collective mission, 2) cultural principles and social norms, 3) platform design, 4) self-management of contributions, 5) formal rules or policies applied to community interaction, 6) licence, 7) decision-making and conflict resolution systems for community interactions, and 8) infrastructure provision. Fuster Morell analyzes infrastructure provision in greatest depth because it shapes several of the other governance dimensions. Infrastructure provision can be open or closed to community involvement, leading to four models of OCC governance (from least to most open): corporations, mission enterprises, representational foundations, and self-provision assemblies. The more closed models are able to generate larger communities but are less collaborative, while the foundation and enterprise models raise more collaborative, mid-sized communities. As infrastructure constitutes the means of production of common-pool resources, Fuster Morell concludes that OCCs only generate a commons if they have some form of community governance and the generated resources are openly accessible, or if both the resources and the infrastructure are available in open access.

Glass, Erin R. 2018. “Engaging the Knowledge Commons: Setting Up Virtual Participatory Spaces for Academic Collaboration and Community.” In Digital Humanities, Libraries, and Partnerships: A Critical Examination of Labor, Networks, and Community, edited by Robin Kear and Kate Joranson, 100–115. Kent, UK: Elsevier Science & Technology.

Glass introduces the concept of “participatory” digital commons and discusses some of the differences between these platforms and institutional repositories, on the one hand, and for-profit academic or non-academic social networking sites such as Academia.edu and ResearchGate, on the other. She maintains that participatory commons can help libraries to simultaneously engage new publics and strengthen existing mandates to preserve and share knowledge. Situating her work in relation to the fields of library studies, media studies, and scholarly communication, Glass also summarizes findings from two of her own commons-based projects, Social Paper and KNIT. She encourages libraries to embrace the participatory affordances of such projects, and she reminds researchers of the important role libraries can and will continue to play in supporting this kind of collaborative and engaged scholarship in the future.

Gorwa, Robert. 2019. “What Is Platform Governance?” Information, Communication & Society 22 (6): 854–871. https://doi.org/10.1080/1369118X.2019.1573914.

Gorwa seeks to connect governance scholarship with platform studies literature to develop a more precise understanding of platform governance, defined as the layers of governance relationships structuring interactions between key parties in today’s platform society. Gorwa distinguishes three perspectives on platform governance: platforms govern the online experience; platform companies can be governed as traditional multinational enterprises; and platforms are governed by local laws and can be scrutinized by third parties. While there is a growing understanding of how platforms govern (e.g., via content policy decisions, forms of user dependency, and algorithms), the question of how they should be governed remains open. Gorwa identifies three emergent governance models: 1) self-governance, the current model in which companies self-regulate, own, and operate the online “public” space while responding to third-party complaints about content; 2) external governance, a model in which government intervenes, usually around issues of privacy and data protection, intermediary liability protection, and competition and monopoly law; and 3) co-governance, a third way between the two previous models that considers increasing user participation in policy decisions and having external organizations perform oversight functions around platform companies. Given the rapid pace of change in the platform ecosystem, new models for digital governance will likely need to be developed. These new models should consider how to benefit the many rather than the few by following guiding principles such as rights-based legal approaches, data justice, democratic accountability, and corporate social responsibility.

Helberger, Natali, Jo Pierson, and Thomas Poell. 2018. “Governing Online Platforms: From Contested to Cooperative Responsibility.” The Information Society 34 (1): 1–14. https://doi.org/10.1080/01972243.2017.1391913.

Helberger et al. detail a conceptual framework for governing the public role of platforms based on an adequate allocation of responsibility to all key stakeholders: users, platforms, and governments. The authors explain this accountability issue by drawing on theories of the “problem of many hands,” in which different entities contribute in different ways to a problem, making it difficult to identify who is responsible for each action. To address this issue in digital platforms, the authors argue that all stakeholders share a cooperative responsibility to uphold public values. Platforms should, through their design, create the conditions that enable users to act responsibly. Users can pressure governments and platforms for better practices and regulations, while governments should create frameworks for shared responsibility that treat platforms and users as partners in regulation rather than its subjects. The article illustrates how this framework could apply in three cases: transparency and non-discrimination in ride-sharing platforms, contentious content on social media, and the diversity of platform content. To spread responsibility across the relevant stakeholders, the authors propose four key steps: 1) defining the public values at stake in each issue, 2) accepting that each stakeholder has a role to play in realizing these values, 3) reaching an agreement among stakeholders about how to advance the values, and 4) putting the agreement into action through regulations, codes of conduct, terms of use, and technology design.

Hensher, Martin, Katie Kish, Joshua Farley, Stephen Quilley, and Katharine Zywert. 2020. “Open Knowledge Commons versus Privatized Gain in a Fractured Information Ecology: Lessons from COVID-19 for the Future of Sustainability.” Global Sustainability 3. https://doi.org/10.1017/sus.2020.21.

In this intelligence briefing, Hensher, Kish, Farley, Quilley, and Zywert explore how the COVID-19 pandemic has revealed serious problems, both for the public and for researchers, with the ways that knowledge is currently produced, shared, protected, and monetized. They argue that open knowledge commons are essential to global efforts to address these problems and their root causes. In the process, their briefing bridges the fields of health and medical sciences, information studies, sustainability, and information policy, with particular attention to conversations about COVID-19 and intellectual property, knowledge scarcity, and misinformation. However, the authors are equally concerned with other problems that are already apparent alongside—or that may emerge in the wake of—COVID-19. They advocate for open, publicly funded research and knowledge commons as the most promising means of improving global responses to such crises.

Katzenbach, Christian, João Carlos Magalhães, Adrian Kopps, Tom Sühr, and Larissa Wunderlich. 2021. Platform Governance Archive (PGA). Alexander von Humboldt Institute for Internet and Society (HIIG). https://pga-interface.netlify.app/about.

The Platform Governance Archive (PGA) collates governance-related policies from several major platforms (Facebook, Instagram, Twitter, and YouTube) between 2007 and 2021, making it possible for interested viewers to compare policy changes across these platforms at a given point in time or over a time period. The PGA includes not only legalistic terms of service and privacy policies but also community guidelines, along with notation that identifies when policy changes occurred. This focus enables close studies of platform governance, particularly those interested in situating platforms within their socio-historical contexts.
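
As an illustration of the kind of longitudinal comparison a corpus like the PGA enables (this is a minimal sketch, not the PGA’s own tooling), the following Python snippet diffs two invented policy snapshots to surface added and removed clauses; all snapshot text and labels are placeholders.

```python
# Sketch: compare two dated snapshots of a platform policy.
# The policy texts below are invented placeholders, not PGA data.
import difflib

policy_2019 = """Users may not post content that harasses others.
Appeals may be submitted by email."""

policy_2021 = """Users may not post content that harasses others.
Automated systems may remove content before review.
Appeals may be submitted through the in-app form."""

# unified_diff yields added (+) and removed (-) lines between snapshots
diff = difflib.unified_diff(
    policy_2019.splitlines(),
    policy_2021.splitlines(),
    fromfile="community-guidelines@2019",
    tofile="community-guidelines@2021",
    lineterm="",
)
print("\n".join(diff))
```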

Kostakis, Vasilis. 2010. “Peer Governance and Wikipedia: Identifying and Understanding the Problems of Wikipedia’s Governance.” First Monday, March. https://doi.org/10.5210/fm.v15i3.2613.

Kostakis explores the governance issues on Wikipedia regarding the types of content the site should include. This internal struggle involves two visions: inclusionists (who support the development of a broad range of Wikipedia articles) and deletionists (who think Wikipedia should be more selective and avoid trivial topics). Based on interviews with ex-Wikipedians, the author finds a need to clarify Wikipedia’s rules and reform its conflict resolution process for content disputes, as this struggle over which content to include is detrimental to the project.

Mirghaderi, Leilasadat, Monika Sziron, and Elisabeth Hildt. 2023. “Ethics and Transparency Issues in Digital Platforms: An Overview.” AI 4 (4): 831–843. https://doi.org/10.3390/ai4040042.

This literature review identifies three areas in which digital platforms’ use of artificial intelligence (AI) lacks transparency. First, there is non-transparency about who contributes to platforms, such as how many platform workers there are and who counts as a platform worker. Second, there is non-transparency about the contributions and working conditions of platform workers. Third, there is non-transparency about how AI algorithms are developed, as they are currently built as proprietary and confidential black boxes that are often biased and susceptible to deception. To address these issues, the authors suggest regulating the status of platform workers, creating ethics codes around platform labour, developing self-regulatory agencies and public policies for platforms, and making AI training datasets openly available for bias evaluation.
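
As a hedged sketch of the bias evaluation that openly available training data would enable, the following Python snippet computes a simple demographic-parity gap over invented records; the dataset, group labels, and the choice of parity as the audit metric are assumptions for illustration, not drawn from the article.

```python
# Sketch: audit an open training dataset for group imbalance.
# Records and groups are invented for illustration.
records = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]

def positive_rate(group: str) -> float:
    rows = [r for r in records if r["group"] == group]
    return sum(r["label"] for r in rows) / len(rows)

# Demographic parity compares positive-label rates across groups; a large
# gap in the training data is one warning sign of encoded bias.
gap = abs(positive_rate("A") - positive_rate("B"))
print(f"positive-rate gap between groups: {gap:.2f}")  # 0.33 here
```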

Owen, Taylor, ed. 2019. Models for Platform Governance. CIGI Essay Series. Waterloo, Canada: Centre for International Governance Innovation. https://www.cigionline.org/static/documents/documents/Platform-gov-WEB_VERSION.pdf.

This collection gathers policy suggestions for improving and enforcing platform governance, particularly at the national and transnational levels, as current platform regulation has usually been limited despite platforms’ increasing social and economic importance. An alternative is a platform governance approach, which seeks to regulate platform content, data, and competition through national and international collaboration and coordination. Among the report’s recommendations are establishing social media councils that follow human rights standards, creating an international global governance framework that coordinates policies and regulations, promoting competition among platforms in addition to enforcing antitrust laws, holding the entire platform supply chain accountable, and establishing global standards to regulate platforms. In addition, some of the authors encourage more democratic and participatory mechanisms of platform governance that include civil society and actors in the broader online media ecosystem, such as cloud services, content delivery networks, and internet service providers.

Papadimitropoulos, Evangelos. 2021. “Platform Capitalism, Platform Cooperativism, and the Commons.” Rethinking Marxism 33 (2): 246–62. https://doi.org/10.1080/08935696.2021.1893108.

Papadimitropoulos reviews the concept of platform cooperativism, in which workers design, manage, and own platforms that promote decentralization, democratic co-ownership, and equitable value distribution. One criticism of platform cooperativism is that, because it works within the copyright system, it does not create, protect, or produce a commons. To overcome this limitation, the author analyzes the idea of open cooperativism, which combines platform cooperativism with commons-based peer production free of closed copyright licenses. Open cooperatives share characteristics such as multi-stakeholder governance models, contributions to immaterial (knowledge) commons, an orientation toward broader socioeconomic and political transformation, and the use of CopyFair licenses, which allow the commercialization of commons knowledge in exchange for rent or reciprocal contributions to the commons. Despite the advantages of open cooperativism, the author cautions that it is not only an economic or technological project but part of a broader political struggle for autonomy and radical democratic practices.

Schneider, Nathan. 2024. Governable Spaces: Democratic Design for Online Life. University of California Press. https://doi.org/10.1525/luminos.181.

Schneider explores how to design more democratic spaces for everyday online interactions. He criticizes the current design of online spaces for its ideological origins in homesteading and its implicit feudalism, in which founders and community admins hold most of the power while users lack an effective voice in decision-making. Because of these undemocratic origins and features, Schneider finds that online spaces deteriorate everyday democratic skills. As an alternative, he calls for new forms of self-governance to be implemented in the design of online spaces, as illustrated by two case studies that use technologies as democratic mediums: the movement working toward police abolition and the communities surrounding the Ethereum cryptocurrency blockchain. To conclude, Schneider advocates for policy design that cultivates governable spaces as sites of problem-solving for challenges such as social media communities, platform-mediated work, and network infrastructure. These spaces use social and technical infrastructures to enable participants to make and enact decisions through transparent processes, but they require the design of participative technologies and of governmental and organizational policies that support self-governance over undemocratic alternatives.

Scholz, R. Trebor. 2023. Own This! How Platform Cooperatives Help Workers Build a Democratic Internet. London and New York: Verso.

Own This! could be read as a call to action, blueprint, and critique of digital platforms and their structuring of labour and life. From the outset, Scholz argues that the labour required to sustain digital platforms is typically provided by migrant workers who receive little pay and virtually no protections or benefits, all while billions of dollars accumulate in the “bank accounts of Silicon Valley executives” (2). Viewing this issue as intertwined with efforts to reclaim public control over the Internet, its apps, and its protocols, Scholz presents ‘platform cooperatives’ as an alternative. Platform cooperatives blend platforms with the principles of cooperatives (namely, shared ownership and democratic decision-making) in order to gain greater control over the provision of their labour and secure fairer, safer, and more equitable working conditions. The book has eight chapters, including the introduction and epilogue. The first two chapters make the case for cooperatives, with a concerted effort to highlight real-world examples throughout; they also offer an introduction to and review of the principles underpinning cooperatives that may be helpful to those with only a passing familiarity with the concept. The third and fourth chapters consider issues of scale and definitions of value from the perspective of platform cooperatives. Scholz’s discussion of expanding theories of social good beyond economic indicators to others that include collective and individual well-being, as well as equity and sustainability, may resonate with readers interested in discussions of scholarly impact and the value of open access research. The fifth chapter considers partnerships, including those between unions and cooperatives, and the sixth focuses on data: data commons, data democracy, and liberating data from private interests. The last chapter presents a vision of a near future (2035) in which cooperatives thrive and cities form alliances. In the epilogue, Scholz addresses the question of how to start a platform cooperative, offering practical tips and points for further consideration.

Scholz, Trebor. 2016. Platform Cooperativism: Challenging the Corporate Sharing Economy. New York: Rosa Luxemburg Stiftung. https://rosalux.nyc/wp-content/uploads/2020/11/RLS-NYC_platformcoop.pdf.

Scholz explores platform cooperativism as an alternative to the exploitative labour and ownership relations of the sharing economy, characterized by platforms such as Airbnb, Uber, and Amazon Mechanical Turk. Scholz outlines three parts of platform cooperativism: 1) it involves using the technology of platforms with an ownership model centred around democratic values; 2) it promotes solidarity between workers and consumers; 3) it reframes ideas of innovation and efficiency to benefit all. The guiding principles of platform cooperativism include decent pay for workers, data transparency, portable worker benefits, rejection of excessive workplace surveillance, and a right to log off. As this model represents technological, cultural, political, and social changes, Scholz concludes that platform co-ops depend on relationships with other cooperatives, funding schemes, lawyers, workers, and designers who are committed to the open commons and to envisioning a society that is not centred around shareholder enterprises.

Srivastava, Lina. 2024. “Building Community Governance for AI.” Stanford Social Innovation Review, March 4, 2024, sec. Technology. https://ssir.org/articles/entry/ai-building-community-governance.

Starting from the position that artificial intelligence can offer real social benefits, Srivastava makes the case for community-led approaches to governance as an alternative to the consolidation of power by tech giants. Viewing AI as a communal resource, Srivastava argues, is a necessary first step: it shifts the focus to prioritizing the collective good over private profits, for instance by redistributing the benefits of AI across communities instead of sequestering them among a limited few. Getting there, though, requires regulatory environments strong enough to implement and protect public oversight while addressing issues of abuse, fraud, data accessibility, and privacy; safeguards that ensure cultural, civil, and human rights are respected; and updated governance frameworks that emphasize accountability, transparency, ethics, community, and intentionality. In addition, Srivastava offers examples of collectives and cooperatives engaged in this work, including Promising Trouble, Careful Industries, the Technology Salon, Black in AI, the Cyber Collective, and the Distributed AI Research Institute.

Zhang, Jun. 2024. “Unlocking the Social Value of Platform Cooperatives.” Platform Cooperativism Consortium (blog). September 11, 2024. https://platform.coop/blog/unlocking-the-social-value-of-platform-cooperatives/.

This blog post focuses on how social value is created within digital platform cooperatives. Platform cooperativism positions itself as an alternative to for-profit mega-platforms, drawing on insights from the broader cooperative movement; in essence, the approach seeks to create platforms that are owned by the people who use them. Typically, cooperatives seek social value over economic value, unlike most profit-driven companies that must prioritize their shareholders’ best interests above all else (interests that may include the needs of a community only when or if doing so benefits them). But, as Zhang rightly asks, what exactly does ‘social value’ mean in the context of platform cooperativism? Zhang reasons that it begins with looking beyond traditional metrics of business success to consider other benefits. For instance, cooperatives often build relationships with local community organizations and public institutions to create a support network for members, contributing to the capacities of members and their communities. In addition, platform cooperatives often form strategic alliances and networks that allow them to scale up. These alliances often cross borders, creating global networks with more power to influence policies and regulations that support the cooperative movement and its members. Further, members of a cooperative often develop a strong sense of shared identity, which can empower them to participate in advocacy and movement-building, as well as in group decision-making about the platform’s direction. While not necessarily framed this way by Zhang, it is worth noting that the ability of platform cooperatives to fulfill their potential relies, in part or in whole, on supportive institutional frameworks and policies that understand what platform cooperatives are and are not. Without this support, platform cooperatives may struggle to secure funding, stay afloat, build credibility, or navigate regulatory concerns.

Misinformation as a Platform Governance Issue

Avieson, Bunty. 2022. “Editors, Sources and the ‘Go Back’ Button: Wikipedia’s Framework for Beating Misinformation.” First Monday, November. https://doi.org/10.5210/fm.v27i11.12754.

Avieson explains how the editorial framework developed by the Wikipedia community succeeded in preventing misinformation on the site during the COVID-19 pandemic. The author attributes this success to three key factors: the volunteer labour performed by editors, the emphasis on credible sources, and the technological affordances of the Wikipedia platform. Editors write content, correct grammar, oversee newcomers, and decide which content is included or deleted. Sources for medical entries are required to be peer-reviewed or to come from medical institutions. Finally, the technological affordances of Wikipedia, such as the revert function and automated bots, provide a structural advantage for removing misinformation. These three factors, alongside Wikipedia’s lack of a commercial agenda, ensured the provision of factual information during the pandemic, demonstrating the enduring possibilities of participatory culture.
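
To illustrate the structural advantage of the revert affordance (a toy model only, not MediaWiki’s implementation), the following Python sketch shows how retaining a full revision history makes a bad edit cheap to undo; the article text and class design are invented.

```python
# Sketch: a page that keeps every revision, so vandalism is one step
# from being undone and the undo itself stays auditable.
class Article:
    def __init__(self, text: str):
        self.history = [text]  # every revision is retained

    @property
    def current(self) -> str:
        return self.history[-1]

    def edit(self, new_text: str) -> None:
        self.history.append(new_text)

    def revert(self) -> None:
        # Restoring the previous revision is itself recorded as a new
        # revision, so the bad edit remains visible in the history.
        if len(self.history) > 1:
            self.history.append(self.history[-2])

page = Article("Vaccines undergo clinical trials before approval.")
page.edit("Vaccines are untested.")  # misinformation introduced
page.revert()                        # one action restores the prior text
print(page.current)                  # the original sentence is back
```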

Danaher, John, Michael J. Hogan, Chris Noone, Rónán Kennedy, Anthony Behan, Aisling De Paor, Heike Felzmann, et al. 2017. “Algorithmic Governance: Developing a Research Agenda through the Power of Collective Intelligence.” Big Data & Society 4 (2). https://doi.org/10.1177/2053951717726554.

In 2017, when Danaher et al. published this article, one of the most important shifts in the design of algorithms was taking place: a move from ‘top-down’ algorithms (whose rulesets are exhaustively defined by programmers) to ‘bottom-up’ machine-learning algorithms (which are essentially trained to develop their own rules). The implications of this shift remain profound today, but at the time the authors noted how bottom-up machine-learning algorithms decrease the transparency of algorithmic governance systems, especially when these algorithms are integrated into governance structures that are already opaque. To dig into questions about algorithmic governance systems (particularly those that use bottom-up algorithms), the authors convened a multidisciplinary group of scholars who collectively identified priorities for a research agenda in light of these shifts, as well as barriers to legitimate and effective algorithmic governance. These barriers include a lack of understanding among government officials, public servants, and publics of how algorithmic governance structures are constructed; a prevailing over-optimism among stakeholders and politicians that can lead them to rush to adopt such governance systems without thinking through the potential consequences; a lack of awareness on the part of technical experts regarding their own implicit biases and how biases affect coding processes; and a recurring gap between public (social-driven) and private (profit-driven) interests that often leads the private sector to resist deeper engagement with questions of ethics, bias, and consequences, because such engagement might slow progress and, in turn, limit commercial success.
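
The following Python sketch, with invented rule terms, example messages, and labels, makes the top-down versus bottom-up contrast concrete: a rule the programmer can read directly, next to a toy learned model whose effective rules live in weights rather than in human-readable conditions, which is the transparency loss the authors highlight.

```python
# "Top-down": the programmer enumerates the rules explicitly, so the
# decision logic is fully inspectable.
BANNED_TERMS = {"scam", "fraud"}

def flag_top_down(message: str) -> bool:
    return any(term in message.lower() for term in BANNED_TERMS)

# "Bottom-up": the rules are induced from labeled examples. Even in this
# toy word-count model, the effective ruleset is a table of weights
# rather than readable conditions.
def train_bottom_up(examples: list[tuple[str, bool]]) -> dict[str, float]:
    weights: dict[str, float] = {}
    for text, is_bad in examples:
        for word in text.lower().split():
            weights[word] = weights.get(word, 0.0) + (1.0 if is_bad else -1.0)
    return weights

def flag_bottom_up(message: str, weights: dict[str, float]) -> bool:
    score = sum(weights.get(w, 0.0) for w in message.lower().split())
    return score > 0

weights = train_bottom_up([
    ("free money guaranteed", True),
    ("meeting notes attached", False),
    ("guaranteed winnings claim now", True),
    ("lunch at noon", False),
])
print(flag_top_down("this is a scam"))              # True, via an explicit rule
print(flag_bottom_up("guaranteed money", weights))  # True, via learned weights
```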

Digital Citizens Alliance. 2017. “Trouble in Our Digital Midst: How Digital Platforms Are Being Overrun by Bad Actors and How the Internet Community Can Beat Them at Their Own Game.” https://www.digitalcitizensalliance.org/clientuploads/directory/Reports/Trouble-in-Our%20Digital-Midst%20Report-June-2017.pdf.

In April 2017, the Digital Citizens Alliance (a non-profit, consumer-oriented coalition based in Washington, DC) conducted a poll of American consumers to determine their level of trust in online services and platforms. The poll, whose results are summarized in this report, found that public trust in these services and platforms has decreased significantly, despite recent steps taken by Facebook, Google, Twitter, and other major players to combat bad actors, fake news, and other issues that continue to plague digital communities. The authors assert that digital platforms must adopt a holistic approach, working with cybersecurity experts, law enforcement agencies, and civil rights and consumer protection groups to proactively restore trust and make the Internet, including their own websites, a safer place. Although the poll and accompanying report focus on American consumers, their call for solutions to issues of digital security, safety, and trust is relevant in other national contexts as well.

Duffy, Brooke Erin, and Colten Meisner. 2023. “Platform Governance at the Margins: Social Media Creators’ Experiences with Algorithmic (in)Visibility.” Media, Culture & Society 45 (2): 285–304. https://doi.org/10.1177/01634437221111923.

Duffy and Meisner analyze the experiences of algorithmic (in)visibility among 30 social media creators, particularly those from historically marginalized identities and/or stigmatized content genres. Based on in-depth interviews with the creators, the authors present a typology of platforms’ invisibility mechanisms: human-enacted bans and violations, automated bans and suspensions, bias and discrimination, and shadowbans (the informal and undisclosed concealment of certain content and creators). As workarounds, the creators tend to engage in four types of platform visibility practices, depending on whether they comply with or resist the invisibility mechanisms: suppression (self-censorship), experimentation, resignation, and circumvention to evade algorithmic detection. The creators’ experiences suggest that platform companies allocate visibility in inconsistent and biased ways. Drawing on Michel Foucault’s ideas, the authors conclude that platforms act as moral arbiters that punish certain creators and content as an exercise of institutional power, wherein governance happens through processes of discipline and normalization.

Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven: Yale University Press.

Gillespie examines the policies and practices of content moderation on social media platforms, arguing that moderation is an essential part of what platforms are because it constitutes what users see. Platforms perform this moderation to remove and filter offensive, vile, or illegal content posted on their sites. While U.S. telecommunications law has shielded platforms from liability for the content they host, platforms still have an interest in moderating in order to protect their brand image and regulate problematic content and behavior. This task requires different strategies and forms of labour: moderation before content is posted (which few platforms perform), community flagging of objectionable content reviewed by crowdworkers, and automatic detection of problematic content through artificial intelligence. These strategies have distinct and shared downsides, such as the difficulty of scaling manual moderation, the exploitative labour conditions of crowdworkers, the misuse of flagging features, and a lack of transparency and accountability about what is considered problematic. While Gillespie acknowledges the sociotechnical complexity of content moderation, he criticizes the current model for handing private companies the power to set the boundaries of free speech. In response, he suggests shifting the discussion about content moderation from the missteps of platforms to a more expansive examination of their responsibilities to the public.

Gorwa, Robert, Reuben Binns, and Christian Katzenbach. 2020. “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance.” Big Data & Society 7 (1): 2053951719897945. https://doi.org/10.1177/2053951719897945.

Gorwa et al. discuss the complexities and ethical issues surrounding algorithmic content moderation, focusing on its use by commercial mega-platforms such as Facebook and YouTube. The authors begin by introducing the technical mechanisms that enable platforms to moderate the large volumes of content uploaded to their sites every day: matching, which uses hashes (transforming a known example into a unique string of data, or ‘hash,’ that identifies the underlying content, and then matching the hashes of new uploads against the hash of the known example), and classification, a categorization process typically assisted by trained algorithms in which new uploads are assessed for ‘fit’ into certain categories, such as content containing words associated with hate speech. Gorwa et al. note that several challenges arise from the use of automation in both matching and classification.
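
A minimal Python sketch of the two mechanisms may help; the hash list and keyword markers are invented placeholders, exact cryptographic hashing stands in for the perceptual hashing (e.g., PhotoDNA) used in practice, and a simple keyword screen stands in for a trained classifier.

```python
import hashlib

# Matching: known prohibited items are stored as hashes; each upload is
# hashed and checked against the list.
KNOWN_PROHIBITED_HASHES = {
    hashlib.sha256(b"previously identified prohibited file").hexdigest(),
}

def matches_known_content(upload: bytes) -> bool:
    return hashlib.sha256(upload).hexdigest() in KNOWN_PROHIBITED_HASHES

# Classification: new content is assessed for "fit" into a category.
# Real systems use trained models; placeholder tokens stand in here.
HATE_SPEECH_MARKERS = {"slur1", "slur2"}

def classify_as_hate_speech(text: str) -> bool:
    return any(marker in text.lower().split() for marker in HATE_SPEECH_MARKERS)

print(matches_known_content(b"previously identified prohibited file"))  # True
print(matches_known_content(b"a brand-new upload"))                     # False
print(classify_as_hate_speech("an ordinary comment"))                   # False
```

Note how the sketch also surfaces the limits the authors discuss: exact matching misses any altered copy, and the classifier’s category boundary is only as good as whatever it was trained (or, here, hard-coded) to detect.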

Ovadya, Aviv. 2019. “What Is Credibility Made Of?” Tow Report. Columbia Journalism Review. https://www.cjr.org/tow_center_reports/ovadya-credibility-journalism-ocasio.php/.

Ovadya focuses on developing frameworks for evaluating credibility. According to Ovadya, two approaches inform assessments of credibility: a) evidence chains, which consider each link in a chain of evidence, whether it can be verified, and, upon verification, whether it supports the claim being made; and b) reputation networks, in which an individual decides whether to trust a claim based on whether it was made by a person, institution, or site that they trust. The two approaches are not mutually exclusive and can interact in ways that complicate the process. For instance, Ovadya gives the example of “Researcher X,” who is seen as credible and has made reasoned arguments using evidence chains that check out, but who now publishes with Institution Y, which is known to have published lies about the topic Researcher X writes about. This might be further complicated if Institution Y is associated with a country that has been shown to influence content for its own geopolitical motivations. Significant challenges impede progress in providing meaningful assessments of credibility. One is the necessity of involving humans in the process, which is time-consuming and expensive; organizations may spend hundreds of dollars to check a single fact. Whether an organization or company takes on this cost depends on several factors, including its aims, business model, revenue, and, probably, the consequences, impact, or importance associated with a given fact (and therefore the need to check it). Automated methods might be one solution; however, most of these also involve humans, who must train a machine-learning system using training data with sufficient “ground truth” examples. In addition, even where automated methods are developed and adequate to evaluate and flag inadequate credibility signals, it is only a matter of time before “non-credible agents” adapt to these methods, eventually rendering them obsolete.
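
As a hedged illustration of how the two approaches might combine into a single estimate, the following Python sketch scores a claim from its evidence chain and the evaluator’s trust in its source; the data structure, trust values, and 0.6/0.4 weighting are invented for illustration, not taken from Ovadya’s report.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source: str  # who made the claim
    evidence_verified: list[bool] = field(default_factory=list)  # one flag per link in the chain

# Reputation network: prior trust an evaluator assigns to each source.
TRUST = {"Researcher X": 0.9, "Institution Y": 0.2}

def credibility(claim: Claim) -> float:
    # Evidence chain: a chain is only as strong as its weakest link,
    # so one unverified link collapses this component to zero.
    chain_ok = 1.0 if claim.evidence_verified and all(claim.evidence_verified) else 0.0
    reputation = TRUST.get(claim.source, 0.5)  # unknown sources get a neutral prior
    return 0.6 * chain_ok + 0.4 * reputation

strong_claim = Claim("finding A", "Researcher X", [True, True, True])
tainted_claim = Claim("finding A", "Institution Y", [True, True, True])
print(credibility(strong_claim))   # 0.96
print(credibility(tainted_claim))  # 0.68
```

The second claim scores lower despite an identical, fully verified evidence chain, mirroring Ovadya’s Researcher X and Institution Y example of the two approaches interacting.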