Journal of Libertarian Studies
P-ISSN 2643-4601 • E-ISSN 2643-4598
Articles
Vol. 29, Issue 2, 2025 • February 05, 2026 CDT

The Misinformation Challenge: How Information Freedom Promotes Democratic Discourse

James Kennon Rice, MSc
Keywords: classical liberalism, decentralized governance, democratic discourse, free expression, misinformation
CC BY 4.0 • https://doi.org/10.35297/001c.155344
Rice, James Kennon. 2026. “The Misinformation Challenge: How Information Freedom Promotes Democratic Discourse.” Journal of Libertarian Studies 29 (2): 176–208. https://doi.org/10.35297/001c.155344.


Abstract

The contemporary discourse surrounding misinformation has generated unprecedented calls for government intervention in information markets and content regulation. While acknowledging legitimate concerns about false information’s potential harms, this article argues that top-down regulatory approaches fundamentally misunderstand both the nature of democratic discourse and the most effective means of promoting information quality. Drawing on classical liberal principles and empirical evidence, I demonstrate that decentralized information governance systems, market competition, and robust protection of free expression provide superior frameworks for addressing misinformation challenges while preserving the foundational elements of freedom within a democratic society. My research examines the historical pattern of moral panics surrounding new communication technologies, evaluates the empirical evidence on misinformation’s effects, and presents the case for distributed, bottom-up approaches to information quality that strengthen rather than constrain democratic discourse and the ideals of liberty.

The digital age has ushered in an era of unprecedented information abundance, connectivity, and democratized communication. Alongside these transformative social changes, however, there has emerged what some may characterize as a crisis of misinformation that threatens the very foundations of democratic society. From concerns about foreign election interference to disputes over public health science during the COVID-19 pandemic, false or misleading content has become a central preoccupation of policymakers, technology companies, and civil society organizations (Tucker et al. 2018; Persily 2017).

This perceived crisis has generated calls for extensive government regulation of online speech, expansion of content moderation by private platforms, and the creation of new institutions designed to determine what the truth is and enforce that version of the truth (Helberger, Pierson, and Poell 2018; Gorwa 2019; Wardle and Derakhshan 2017). Proposals range from requiring platforms to remove false content to establishing government bodies with authority to designate official versions of contested facts. New legislative initiatives represent attempts to systematize control over online discourse, threatening fundamental values of free speech at an unprecedented scale (Gillespie 2020; Suzor 2019).

Yet this regulatory response to the very real issue of fake information, however well intentioned, fundamentally misconceives both the nature of democratic discourse and the most effective means of promoting information quality in free societies. Classical liberal economics, together with the intellectual tradition that underpins libertarianism, is grounded in principles of individual autonomy, market competition, and the marketplace of ideas, and it offers a more promising framework for understanding and addressing these acute misinformation challenges (Mill [1859] 2012; F. A. Hayek 1945; Popper 1945; Berlin 1969). Murray Rothbard argues that freedom of information is inseparable from individual liberty, contending that state intervention in speech or knowledge markets inevitably leads to coercion and intellectual stagnation. His libertarian framework views open information exchange as both a moral right and an economic necessity (Rothbard 1978).

The concept of a marketplace of ideas, refined by generations of liberal theorists, rests on the premise that free and open debate provides the most reliable mechanism for distinguishing truth from falsehood over time (Mill [1859] 2012; Scanlon 1972; Schauer 1982). This principle does not naively assume that all ideas are equally valid or that truth always prevails in an openly competitive market of ideas. Rather, it recognizes that alternative systems—particularly those involving government or concentrated private power exercising force to dictate “truth”—pose even greater risks to democratic legitimacy and liberty than misinformation itself (Berlin 1969; Baker 1989). While misinformation presents genuine challenges in democratic discourse, the solution lies not in top-down regulatory interventions but in strengthening bottom-up information ecosystems and preserving the fundamental liberal principle that robust debate and free expression provide the most effective and scientific means of distinguishing truth from falsehood. This is my main thesis; it challenges both the apocalyptic and authoritarian narratives surrounding misinformation, revealing the negative implications of previously proposed solutions involving censorship and enforcement. I advocate for approaches that enhance democratic resilience through laissez-faire market corrections rather than onerous control.

Contemporary concerns about misinformation must be understood within the broader historical context of moral debates surrounding new, general-purpose technologies that have political implications. From the printing press to radio, television, and now social media, each major innovation in communications and information dissemination has generated fears about its potential to corrupt, mislead, or destabilize society (Eisenstein 1980; Starr 2004; Postman 1985). While digital platforms present new challenges in terms of scale and speed, the fundamental dynamics of competition for recognition within the information landscape as well as the adaptation of democratic processes to new means of communication and creation remain consistent with cyclical historical patterns of perceived crisis.

This article proceeds through several stages of analysis. First, I examine the historical context of moral trepidation toward information technologies, demonstrating that initial alarm is often followed by social adoption of technology and the recognition that the alarm was an overreaction. Second, I provide an assessment of misinformation’s actual measured effects, distinguishing between initial exposure to false information and subsequent effects on beliefs and individual behavior. Third, I present a case for community-led information governance that acknowledges misinformation challenges while preserving democratic principles. Fourth, I propose a range of market-based and civil society solutions to misinformation that use the mechanisms of competition and distributed knowledge. Finally, I summarize policy approaches that strengthen information freedom while addressing legitimate concerns about information quality.

Historical Context: Moral Panics and Information Technologies

The contemporary response to digital misinformation follows a historical pattern in which new communication technologies provoke moral panic—seen most recently in the association of misinformation with populism, extremism, and social exclusion. AI has been rapidly changing this landscape, representing a new frontier of knowledge creation in the digital era. While AI does generate fake content, the value it will create in the near future will far outweigh the threat from AI-generated misinformation. Understanding the pattern of knowledge creation endemic to new technologies such as AI is essential for maintaining a deeper, modern perspective on current challenges and avoiding policy responses that, in hindsight, often appear disproportionate and counterproductive (Cohen 1972; Goode and Ben-Yehuda 1994).

Common Characteristics of Information Panics

Historical episodes of censorship, the curtailment of media rights, and government oversight of entertainment, communication, and public discourse share several common characteristics that illuminate contemporary misinformation discourse. First, instances of censorship typically begin with anecdotal evidence of harm that is then generalized into broad claims about a deep and systemic societal threat portending the moral degradation of culture. Second, the erosion of previously secure communicative civil liberties rests on claims that new technologies threaten the rational operation and functioning of society and that new media directly manipulate behavior in ways that traditional media do not (Cohen 1972). Third, moral crises surrounding information technologies consistently underestimate human agency and adaptability. They portray audiences—particularly those most vulnerable in society (children, the poor, the disabled, etc.)—as passive victims of media influence rather than as active citizens who derive significant meaning through their social relationships and cultural frameworks (Jenkins 2006). Finally, these political divisions, exacerbated by technological change, generate demands for control-oriented regulatory intervention that often proves unnecessary as society naturally adapts to new forms of technology over time.[1]

Most importantly, the dominant political agenda consistently fails to account for the self-correcting mechanisms that emerge within markets and information ecosystems. AI users will develop their own quality standards just as television audiences became more sophisticated consumers and internet users developed informal networks for identifying reliable sources. The catastrophic effects predicted for any new technology rarely materialize in the forms foretold, while the overwhelming benefits of new technologies, including AI-generated content, become increasingly apparent over time (Shirky 2008; Benkler 2006).

The Misinformation Issue as a New Digital Paradigm

Contemporary concerns about misinformation exhibit all the characteristics that typically occur during significant transformations in media dissemination and economic value creation. Anecdotal examples of false information—a conspiracy theory leading to political violence and riots or a medical hoax causing false reports, diagnoses, and harmful treatments of health problems—are generalized into claims about systematic threats to democracy and public health (Wardle and Derakhshan 2017; Freelon and Wells 2020). Social media platforms are often characterized as uniquely manipulative technologies that bypass critical thinking and create unprecedented vulnerabilities to false information. These platforms have amplified such concerns by making previously private conversations, political positions, and philosophical beliefs fully visible to the general public (Tufekci 2017; S. T. Roberts 2019). Activities that once occurred only within small groups or face-to-face settings now leave digital traces that can be aggregated, analyzed, and presented as evidence of systematic ideological and ethical problems. The effect of this high personal visibility creates the impression that false beliefs and harmful communication have increased dramatically when, in reality, they may simply have become more observable.

The uproar over misinformation exhibits novel characteristics that distinguish it from previous episodes of paradigm change in the media or economy. Unlike earlier transformations that focused primarily on interpersonal communication, sharing, and broadcasting, debates over misinformation now center on the creation of entirely new content by artificial agents and anonymous online actors who may or may not be real. This expanded scope has legitimized regulatory approaches that would have been unthinkable in the past, including direct government involvement in determining the truth or falsehood of political claims (Klonick 2018; Balkin 2018). Additionally, the global nature of digital platforms has created an international dimension to misinformation that complicates traditional approaches to media regulation. When platforms operate across jurisdictions with different political and ethical norms and cultural systems, questions arise about whose standards should apply and how democratic values can be preserved in globally integrated information systems premised on freedom of expression (Flew 2021; Kaye 2019).

Empirical Assessment of the Misinformation Threat

While misinformation undoubtedly exists and can cause specific harms in particular contexts, research suggests that its aggregate effects on democratic outcomes and public welfare are more limited and contested than popular discourse implies. It is also worth noting that only a small fraction of the reams of data on the internet is misinformation—and, although this may change with AI-generated content, the majority of AI agents have not been proven to have ill intentions.[2] The perceived scale of the misinformation threat is significant, but its measurable effects on political behavior, health outcomes, and social stability are not enough to imminently weaken democratic societies (Lazer et al. 2018).

Measuring the Impact of Misinformation

Quantifying the effects of misinformation presents a few challenging methodological issues that have troubled researchers. First, defining misinformation requires normative and sometimes subjective judgments about the validity of claims whose truth or falsehood may be contested, evolving, or context dependent—even from the perspective of experts, scientists, and policymakers (Freelon and Wells 2020). Scientific understanding evolves and changes as knowledge is proven or disproven, and political facts usually involve a degree of interpretation that is not easily placed within an objective true or false categorization.

Second, measuring exposure to misinformation—how much misinformation is “out there”—is different from measuring changes in beliefs or behavior as direct results of misinformation. A wide range of studies have both pointed to and theorized about the prevalence of misinformation on social media platforms, but they often fail to conclusively demonstrate that users actually believe or act upon false claims about political, social, or cultural matters (Guess, Nagler, and Tucker 2019). Even when exposure to misinformation is established without question, the pathway from exposure to action and altered political or physical behavior is not as straightforward (in most cases) as it is with other forms of influence such as advertising (Grinberg et al. 2019).

Research has shown that the effects of misinformation on electoral outcomes and democratic participation have, in the past, been quite limited. Studies of the 2016 US presidential election found that while political misinformation circulated widely on social media, exposure was highly concentrated, targeting users who already held strong partisan commitments and many who were already members of fringe communities (Allcott and Gentzkow 2017). The most willing and gullible consumers of false political information were typically already committed supporters of particular candidates, suggesting that misinformation reinforced rather than created political divisions across party lines and leadership preferences (Vosoughi, Roy, and Aral 2018).

More fundamentally, research suggests that social media use in general has smaller effects on political attitudes than commonly assumed. Boxell, Gentzkow, and Shapiro (2017) found that political polarization—whether over ideas or moral values—grew most rapidly among demographic groups with the lowest rates of internet and social media use, suggesting that online echo chambers alone cannot explain the broader patterns of political division we see within society. Similarly, studies of platform algorithms find limited evidence that systematically tailored or even directly targeted political ads significantly increase political polarization compared to, for example, geographic location and existing patterns of partisan media consumption (Bakshy, Messing, and Adamic 2015; Dubois and Blank 2018).

The Robustness of Democratic Institutions

In the context of this evidence, I argue that democratic systems possess robust, self-regulating mechanisms that limit misinformation’s potential for systematic harm. Electoral competition creates strong incentives for opposing candidates to identify and publicize their opponents’ false claims. Media organizations, despite their individually distinct biases and limitations, maintain professional standards that force them to distinguish truth from falsity. Civil society organizations, academic institutions, and other knowledge-producing social structures provide corroborative sources of information and analysis such that truth often wins.

Historical evidence supports the robustness of democratic institutions in the face of these kinds of information challenges. American democracy survived and ultimately strengthened during periods of extensive misinformation, including during the yellow journalism era of the 1890s, the McCarthyist paranoia of the 1950s, and conspiracy theories surrounding events such as the Kennedy assassination (Balkin 2018). While these episodes caused real harms and distortions, democratic institutions adapted and developed improved mechanisms for understanding and reacting to the proliferation of contested information.

The decentralized nature of information systems within democratic societies—characterized by multiple centers of power all vying for political influence—provides additional resilience against systematic manipulation. Unlike authoritarian systems that rely on centralized control of information, democracies benefit from having competing sources, fact-checking institutions, and investigative journalism, and they often rely on the general principle that no single actor can control the information environment (Drezner 2021). This decentralized media environment creates redundancy and institutionalizes error correction mechanisms that make comprehensive and sustained deception difficult—barring a global onset of novel authoritarianism (Benkler, Faris, and Roberts 2018).

Cognitive and Social Factors in Information Processing

Research in cognitive psychology and political science suggests that individuals are more sophisticated at processing information than the current narratives around misinformation assume (Kahneman 2011; van der Linden and Roozenbeek 2024). While people certainly use mental shortcuts and exhibit various biases, they also employ contextual cues, assess the credibility of sources, and draw on social relationships for verification when evaluating information (Kahan 2017; Pennycook and Rand 2019).

The concept of motivated reasoning—the tendency to process information in ways that confirm existing beliefs—is often cited as evidence of human susceptibility to misinformation. But motivated reasoning also serves as a defense mechanism against false information that conflicts with well-established beliefs or comes from untrusted sources (Lodge and Taber 2013). People are not passive recipients of information but active interpreters who filter new claims through their own existing knowledge structures and social relationships (Nyhan and Reifler 2010). Social networks provide additional verification mechanisms that mitigate the impact of misinformation. When false information circulates through social media, it often encounters challenges from other users who possess contradictory knowledge or different perspectives. These informal, ground-up fact-checking processes, while imperfect, create distributed systems for identifying and correcting false claims (Mendoza, Poblete, and Castillo 2010).

The significance of phenomena such as echo chambers and filter bubbles is consequently often overstated in popular discourse. While social media algorithms do generate hyperpersonalized content, leading to isolated users and communities sharing sometimes extreme common values, research suggests that most people encounter diverse information sources and opposing viewpoints through their social networks (Flaxman, Goel, and Rao 2016). Even individuals who exhibit strong partisan bias in the composition of their networks typically consume some information from sources that challenge their preexisting beliefs, though they may interpret these alternative facts through their own partisan lens (Bail et al. 2018).

Case Studies in Misinformation Impact

Specific political cases in which misinformation caused significant social harm reveal complex causal relationships between information consumers and creators. The 2016 US election interference, attributed to Russian state-sponsored bots and media infiltrators, involved genuine attempts to spread false information, but research suggests the actual impact on vote choice was minimal (Eady et al. 2023). This is reinforced by the 2024 US election, in which misinformation played an arguably much less significant role, yet the outcome was the same as in 2016. While foreign interference undoubtedly represented a serious violation of democratic sovereignty, evidence for changed electoral outcomes, specifically from misinformation, remains limited.

The COVID-19 infodemic provides another case study. Misinformation about the spread and threat of COVID as well as the scale, rollout, and effectiveness of vaccines circulated widely and certainly influenced health outcomes. I argue that the relationship between COVID misinformation exposure and health outcomes during the pandemic was in fact mediated by numerous confounding factors including access to healthcare, economic circumstances, political identity, and trust in institutions (Loomba et al. 2021). These myriad causal contributors arguably obscure the effects of misinformation as a direct influence. Misinformation exacerbated these factors but was not the root cause. Countries with minimal social media penetration also experienced significant challenges with their COVID-19 responses—some due to apprehension toward science and intervention, others due to economics and logistics—suggesting that information quality was one determinant of public health outcomes but not the primary one. Misinformation is unquestionably a key factor in these cases, but ideology, ignorance, and incompetence are often more systemic and tangible causes.

Finally, during the pandemic, political polarization over vaccine mandates and public health measures reflected deeper disagreements about government authority, individual liberty, and risk assessment rather than simply being a symptom of increased misinformation. Many vaccine-hesitant individuals possessed accurate information about vaccine efficacy and safety but weighed these benefits differently against concerns about personal autonomy or distrust of companies and governments participating in what might be called the pharmaco-industrial complex (Troiano and Nardi 2021).

Beyond Top-Down Regulation: A Case for Decentralized Information Governance

There are very real challenges posed by false and misleading information—both for democracy and personal autonomy in the age of AI—but effective responses must work with rather than against the grain of democratic discourse and capitalist, individualist society. Many regulatory approaches that seek to harness the power of markets to control information and public opinion face inherent limitations that make such legislation practically ineffective and draw moral and social opprobrium. I argue that decentralized governance systems for the management of information offer superior alternatives within the capitalist system. Such innovations could utilize distributed knowledge, preserve democratic accountability, and strengthen the marketplace of ideas through sharing, debate, and rhetoric—a skill and vocation our society seems to have forgotten.

Acknowledging the Real Challenges of Misinformation

A serious analysis of misinformation must begin by acknowledging that fake or malicious information can cause real harms in specific contexts. As we have seen with COVID-19, and vaccine skepticism in general, medical misinformation can lead people to reject effective treatments and embrace dangerous, unproven, and downright harmful healthcare alternatives. In addition, financial misinformation can cause economic losses—through poor advice on investment, gambling, or increased susceptibility to scams—harming vulnerable individuals. Political misinformation, too, can increase social tensions and undermine trust in democratic institutions—as has been shown time and again in the philosophical literature (Lewandowsky et al. 2012; Wardle and Derakhshan 2017).

I do not mean to argue that these harms are merely theoretical; they clearly have measurable effects on individual welfare and the social and economic status of groups and societies. I claim that the real challenge for democratic societies is to develop proportionate responses that address these harms without creating greater risks to free expression and democratic governance. This strain of practical, measured governance and policymaking, in a time of challenges to power coming from an unwilling public and accelerating technological change, requires moving beyond dismissive attitudes that ignore the real effects of misinformation. But it also requires deftly countering apocalyptic narratives that exaggerate catastrophic or systematic threats of misinformation and their political corollaries, which hasten the rise of injunctions on freedom and liberty of speech, thought, and action (Pennycook and Rand 2019).

The complexity of the information environments faced by policymakers in democratic societies means that simple solutions promoted as ubiquitous fixes on the platforms we use every day—solutions such as removing false content or labeling disputed claims—often prove inadequate or counterproductive when faced with dire threats to social order. It is undeniable that information—and misinformation—exists within social and political environments of debate, disagreement, and dialogue that shape the meaning and reception of language and media. Content that is factually accurate may be misleading when taken out of context, while technically false claims may reveal important truths about social conditions or institutional failures (Phillips 2018). A timely example is the debate the present article engages in: I show that, insofar as it increases antidemocratic censorship, AI is an even more sinister threat to personal liberty than the present realities of misinformation (Jack 2017).

Finally, the boundaries between misinformation, shifts in public opinion, choices of taste or interpretation, and the impact of political and social advocacy (or activism) are often blurred in democratic discourse. Political communication—including misinformation, whatever its source—necessarily involves selective presentation of facts, value judgments about the significance of the information being communicated, and predictions about the present and future state of the political arena—all of which defy the ability of any one actor to legitimate every claim.[3] Attempting to regulate misinformation without constraining legitimate political debate is so challenging that ideological neutrality—that is, fairly deciding whose discourse counts as fake news—is practically impossible to maintain (Kaye 2019).

The Limits of Centralized Approaches

The premise I put forward is that government regulation of online, platform-based content is unsuitable as a primary response to misinformation. Determining truth and falsehood requires substantive judgments that government officials are poorly positioned to make, given that most legislative and executive power is exercised along partisan lines. Particularly in contested political domains, misinformation regulation all too often looks like the persecution of political enemies and ideas. Democratic legitimacy in the information age, or in the age of the knowledge economy, depends on the ability of everyday citizens to evaluate competing claims and choose among alternative perspectives—not government authorities determining correct beliefs and disseminating them to a willing and culpable public (Post 2009). The people have accumulated a significant amount of power in modern societies, through activism, decentralization, and the delegation of science and social science to institutions (Gillespie 2018).

The practical challenges of content regulation (by humans or AI), on the scale made possible by technology and social media platforms, compound evidence for the principled objection to the centralized regulation of content—particularly for content that concerns social outcomes and issues. Social media platforms process billions of posts daily in dozens of languages across multiple cultural and political contexts. Even human-driven content moderation systems lack the context necessary to enable fine-grained and careful judgments about the truth or falsehood of singular claims regarding political and social issues. Human reviewers cannot possibly examine more than a small fraction of posted content with appropriate care (S. T. Roberts 2019). Giving up some of this control to automated systems can help, but the black box nature of AI moderation leaves content creators without viable solutions or remedies when their media is taken down unjustly (Gorwa, Binns, and Katzenbach 2020).

I argue that even well-intentioned content moderation efforts, including direct a priori psychological efforts to counter misinformation—so-called prebunking (a twist on debunking)—systematically favor certain mainstream perspectives over others. These mainstream institutional sources are generally treated as more credible than alternative voices, potentially silencing legitimate dissent and minority viewpoints that raise valid objections or refutations. As a general example, content moderation guidelines developed in Western contexts may be inappropriate when applied globally, yet tech platforms still face pressure to apply consistent, Western-grounded standards across diverse societies with different social norms or ethical value systems (Suzor 2019; McGuire 1964).

A further problem in content regulation overseen by governments or centrally directed platforms is mission creep, a potentially serious phenomenon that threatens the foundations of free society. Policies initially designed to address clear cases of harmful misinformation tend to expand over time to cover broader categories of unwanted speech.[4] Terms like misinformation, harmful content, and threats to democracy are inherently elastic and can be distorted and twisted to justify censorship of legitimate but unpopular viewpoints (Candeub 2021). This is not a new argument, but it is not the prevailing view within academia or the social policy community. Closer scrutiny of issues such as this could yield greater autonomy, freedom, and choice in the types of information we consume, create, and value.

The Promise of Distributed Information Systems

Community-driven approaches to ensuring the quality of online information offer several advantages over centralized government or elite-dominated regulation. Local communities possess contextual knowledge that distant regulators grounded in metropolitan values of global citizenship may lack, including an understanding of cultural difference or heritage and background information about the land, healing, or past values that were less mediated by technology. Decentralized management of information and misinformation can more easily address the specific needs and concerns of disadvantaged communities, including rural communities as well as cities in developing nations that often do not have control over information ecosystems or data-intensive technologies. Thus, local knowledge could enable more accurate assessments of information credibility and relevance than top-down systems can provide. It could also make the management of community values more humane, ethical, and personal for everyone involved (Ostrom 1990; Benkler 2006).

Distributed verification systems for fact-checking with the goal of countering the threat of misinformation harness the aggregated knowledge of many individuals rather than relying on the expertise of a few designated authorities. Wikipedia provides a successful model of how independent agents can produce high-quality, collectively valued information through collaboration, peer review, and transparent discussion about inclusion and exclusion in the creation of valuable content. The internet can be a discursive space where people come together and sort out substantive differences without worrying about giving offense or scoring short-term political points. While Wikipedia is not perfect, research consistently shows that its scope is more comprehensive and undoubtedly more informative than traditional encyclopedias since it offers both broader coverage and faster, more accurate updates when culture or events change (Giles 2005; Greenstein and Zhu 2012).

We have seen that community-based fact-checking initiatives demonstrate the untapped potential for distributed information verification systems. Local authors and influencers, including both subject matter experts and engaged citizens, can collaborate to investigate claims relevant to their communities while maintaining connections to broader networks of knowledge and expertise. Such initiatives often possess greater credibility with community members than distant and institutionalized fact-checking organizations or platform governance initiatives overseen by technologists with varying levels of political ties (Graves 2016). It is by countering the threat of misinformation in ways that add value, rather than silence voices, that progress can be made toward a truly equitable information ecosystem. With AI, this mandate only becomes more urgent, since AI shifts the creation of epistemic content toward scientific and digital unknowns (Kraut and Resnick 2012).

Strengthening Democratic Discourse through Freedom

Free expression is not only an individual right but a crucial mechanism for the management of political division, social difference, and moral diversity in democratic societies. In such societies, when false information circulates freely, it inevitably will encounter challenges from those with knowledge of the truth. This contestation process, while messy and inefficient in its current form, will ultimately provide more reliable long-term accuracy than systems that simply suppress disputed claims (Schauer 1982). Because AI will enable the democratization of expertise but also endanger originality and suppress creativity, free expression, including the possibility for social rebellion, must be protected.

The marketplace of ideas functions most effectively when it operates through open competition of diverse perspectives to the advantage of a social whole—rather than acting as a curated selection of preapproved viewpoints. Censorship, even when well intentioned,[5] distorts the competition between contesting, opposing, and even polemic viewpoints by removing certain ideas from consideration before they can be evaluated through democratic processes of discourse and rhetorical argumentation. This distortion may prevent the discovery of important truths through scientific competition and, through the development of a watchtower state, impair the correction of widely accepted falsehoods (Scanlon 1972).

Protecting the space where controversial and potentially false speech can be expressed also serves important democratic values beyond truth-seeking. Disputes over moral, social, and political values—and over facts within science itself—reinforce the epistemological principle that citizens rather than authorities should determine which ideas deserve consideration, thus preserving opportunities for minority viewpoints to challenge majority opinions on political and scientific issues. The aim of this reasoning is to prevent the establishment of a hegemonic, antidemocratic orthodoxy immune to criticism and warranted revision (Baker 1989). The point is not that we should be extremists—I do not dispute that violent and abjectly evil content should be taken down—but that the expression of values should not be shot down by a politically engineered trend or even by popular censorship (Wineburg et al. 2016).[6]

The ultimate goal should be to create information environments that reward accuracy, transparency, and good-faith engagement (Potter 2004). We need, as a society, to be committed to maintaining openness to diverse perspectives and the possibility of an ongoing revision of accepted beliefs—what is true today may be refuted tomorrow. This open-endedness of cultural discourse requires institutional changes that cannot be achieved through regulation alone. They must emerge from communities’ own commitments to political virtues and the democratic values that have driven capitalist society to its present heights of truth, justice, and equality.

Market-Based Solutions and Civil Society Responses

Competitive markets and voluntary, peaceful associations between and within groups of citizens provide powerful mechanisms for improving democratic access to information and steadfastly preserve freedom of expression through the protection of autonomy. Markets harness economic incentives—the intrinsic forces that drive them toward and beyond equilibrium—to establish professional standards of managerial and scientific creativity and to apply social pressure that encourages the publication and release of accurate information. Markets that require government intervention in scientific and productive decisions will be outcompeted by more efficient mechanisms and platforms that seamlessly enable information transactions and transmissions. Deeply efficient information markets also enable ongoing innovation in technologies such as AI that could open a new frontier of creativity and artistic expression without the threat of misinformation. Forms of institutional organization maintaining top-down regulation cannot match the power of embedded structural and transactional liberty founded on historically salient human values (F. A. Hayek 1945; Friedman 1962).

Competitive Information Markets

Platform competition can create natural incentives for the protection of information based on truth, fact, and logical scientific deduction by rewarding creators and distributors who provide surplus value derived from technological progression. If individuals or users consistently encounter false or misleading information on specific platforms, they will inevitably—given resources and free choice—migrate to alternative media (perhaps more mainstream or trusted outlets) that offer better curation, fact-based verification, or increased opportunities for discussion—where silence in the face of moral or scientific misinformation is the exception, not the norm. The competitive pressure this enables encourages platforms to develop more effective approaches to maintaining a discourse grounded in truth, within their respective information ecosystems and without the need to explicitly mandate specific solutions (Thierer 2014; Mill [1859] 2012).

Systems that are grounded in community trust and verification are often based on the reputation of the platform as well as of the user base, and they allow users to make informed choices about information sources based on the past credibility of the specific or general institutional structure. Academic journals use peer review and citation metrics to signal quality. News organizations build reputations for accuracy and reliability in their reporting and social engagement—online, in print, and in person—over time. Social media platforms could enhance their participation in natural online reputation verification mechanisms by providing better information about the identity of users, composition of groups, and the nature of associations, firms, and NGOs so that citizens are aware of the track records of sources they value and rely on. This would enable users to customize their information ecosystem using both individual (private) and crowd-sourced credibility assessments of factual or political statements (Resnick et al. 2000; Hirschman 1970).
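To make the reputation mechanism concrete, the following minimal sketch (in Python, purely illustrative) shows how a platform might blend a user's private credibility rating for a source with crowd-sourced assessments. The data model, field names, and weights are assumptions introduced for this example, not any existing platform's API.

```python
from dataclasses import dataclass
from statistics import mean

# Illustrative sketch only: the data model and weights below are assumptions,
# not any platform's actual reputation API.

@dataclass
class SourceRating:
    source_id: str
    personal_score: float | None   # the user's own 0-1 credibility rating, if any
    crowd_scores: list[float]      # 0-1 ratings contributed by other users

def blended_credibility(rating: SourceRating, personal_weight: float = 0.6) -> float:
    """Blend a user's private assessment with the crowd-sourced signal.

    If the user has no personal rating, fall back entirely on the crowd;
    if nobody has rated the source, return a neutral 0.5 prior.
    """
    crowd = mean(rating.crowd_scores) if rating.crowd_scores else 0.5
    if rating.personal_score is None:
        return crowd
    return personal_weight * rating.personal_score + (1 - personal_weight) * crowd

# Example: a user who trusts their own judgment more than the crowd's.
example = SourceRating("example-outlet", personal_score=0.9, crowd_scores=[0.4, 0.6, 0.7])
print(round(blended_credibility(example), 2))  # 0.77
```

The point of the sketch is simply that credibility signals can remain user-tunable and decentralized rather than being fixed by a single authority.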

Market segmentation in the online information space is crucial to competition and innovation, and it enables users to access novel features of new technologies on different platforms. These technologies enable the curation of favored content as well as the curation—intentional or random, natural or artificial—of the online discursive space in which users and consumers exist. Algorithmic curation allows the tastes of community members to differ dramatically across socioeconomic or personality-based demarcations and for platforms to serve everyone—ideally but not in practice—equally. Some individuals might prefer heavily moderated environments with extensive fact-checking while others prefer minimal intervention and maximum openness to diverse viewpoints—including those that pose alternative interpretations of substantive debates and flashpoints in modern culture. Allowing platforms to specialize, rather than requiring all platforms to adopt identical policies, better serves ideological, ethical, and moral diversity as well as inclusive, participatory legislation and policymaking (Anderson 2006; Buchanan and Tullock 1962).

Innovation in fact-checking technologies—now predominantly managed with AI—can provide accelerating and nonlinear improvements in the ability to assess information credibility that can be rapidly rolled out on technology platforms to millions or billions of users instantaneously. As a controversial example, blockchain systems could create records of information provenance that are stable across millennia (Werbach 2018)—or as long as the digital computer remains our dominant technological innovation. In turn, AI already helps technologists identify synthetic and fabricated media and works in the background autonomously to flag and take down large swaths of potentially false claims and send off other segments for human review. Cryptographic systems—including systems based on quantum architectures—could enable anonymous and secure verification of sources, allowing people to share information without fear of automated retribution by bots or doxxing if they share unpopular opinions on sensitive political topics. These technological solutions emerge most visibly in competitive markets. Regulated market structures that use force to dictate and mandate specific transactions could fail to allow for the environments necessary for the emergence of these technologies (Stigler 1971).
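As a rough illustration of the provenance idea discussed above, the following sketch shows the core bookkeeping of an append-only record in which each entry commits to a cryptographic hash of the content and to the previous record. It is a hedged, minimal example of the underlying idea, not a blockchain implementation, and the field names are assumptions.

```python
import hashlib
import json
import time

# Illustrative sketch: content is fingerprinted with a cryptographic hash and
# each record chains to the previous one, in the spirit of the append-only
# provenance ledgers discussed above. Field names are hypothetical.

def fingerprint(content: bytes) -> str:
    """Return a stable SHA-256 fingerprint of a piece of content."""
    return hashlib.sha256(content).hexdigest()

def append_record(ledger: list[dict], content: bytes, source: str) -> dict:
    """Append a provenance record that commits to the previous record's hash."""
    prev_hash = ledger[-1]["record_hash"] if ledger else "0" * 64
    record = {
        "content_hash": fingerprint(content),
        "source": source,
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["record_hash"] = fingerprint(json.dumps(record, sort_keys=True).encode())
    ledger.append(record)
    return record

ledger: list[dict] = []
append_record(ledger, b"Original article text", source="jls.mises.org")
append_record(ledger, b"A later revision of the text", source="jls.mises.org")
print(len(ledger), ledger[-1]["prev_hash"] == ledger[0]["record_hash"])  # 2 True
```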

Civil Society and Counterspeech

Professional journalists, academic researchers, and civil society organizations are often well positioned to provide crucial counterspeech that challenges false or misleading information through investigation, analysis, outreach, and public education. These institutions and groups of proactive, socially aware economic agents operate with greater independence from the political pressures to which government agents are subject. They also institutionalize and maintain professional standards of conduct and practice that distinguish them from purely partisan sources (Christians et al. 2009). While these decentralized associations, aimed at creating a more equitable balance of political and social power, are important for democratized information ecosystems, they are often not enough (Polanyi 1962).

Academic institutions—universities, schools, research centers, and nonprofit think tanks—play essential roles in producing and disseminating reliable knowledge through social and political research, public outreach and awareness building, and education. The commitment of universities and think tanks to academic freedom and freedom of information serves as a beacon for intellectual diversity—although sometimes imperfectly in practice—but also provides important protection for controversial research and minority viewpoints. Supporting academic independence and public access to research, science, and innovation strengthens the ability of society and the public in general to distinguish reliable from unreliable information (Menand 2010). While these institutions are a driving force behind the protection of fundamental rights such as freedom of thought, expression, and the press, they may not be enough in an age of technological upheaval and rapid acceleration of capability and computation. In Man, Economy, and State, Rothbard argues that the voluntary exchange of ideas, like the market exchange of goods, maximizes social welfare and knowledge efficiency, while state monopolization of information undermines spontaneous order and free inquiry (Rothbard 1962).

Citizen journalism and grassroots verification networks focused on undermining and mitigating threats to the integrity of an intelligent and informed society demonstrate what individuals without the backing of the deep state are capable of. Institutionalized and undoubtedly politicized fact-checking also contributes to the degradation of reliable online information. When platforms, tech companies, and even governments empower proactive citizen groups with appropriate tools, incentives, and awareness, individuals can bring decentralized critical thought to bear on verification. Social media platforms could facilitate digital integrity by providing better access to source materials and documentation, creating collaborative tools for journalistic and scientific investigation, and promoting the visibility of high-quality citizen journalism and science (Allan and Thorsen 2013; Kosseff 2019).

Technological Solutions

Transparency initiatives originating in AI and data science already enable users, researchers, and civil society organizations to better understand how platforms curate and recommend content. Rather than mandating specific algorithmic approaches, transparency in the digital age provides information that enables informed user choice and provides the framework for ongoing social research into the effects of different information processing systems (Diakopoulos 2016). These technological solutions to the problem of protecting data quality—for example, for text-based data such as articles, books, and online posts—enable democratic systems based on the primacy of individual agency to competitively jockey for prominence on global platforms. This section briefly expands on these prospects.

User control over algorithmic AI systems—including those that will be present in the aftermath of artificial general intelligence (the “singularity”)—is another promising way to steer autonomous systems in a direction that preserves the independence of individual creativity and volition while mitigating widespread concerns about redundancy of human intelligence and threats to freedom and autonomy. In theory, tech platforms could provide users with choices about how content is ranked, what factors influence AI-based recommendations, and how disputed information is processed and displayed. This approach respects users as capable decision-makers while enabling experimentation with different approaches to information curation (Rader, Cotter, and Cho 2018). Because the capabilities of the systems we are building are unknown to us—they are emergent from the ensemble of invented techniques—we must prepare for every eventuality of our relationship to information, creativity, and invention (Berlin 1969).
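A minimal sketch of what user-controlled ranking could look like follows. The feature names and default weights are hypothetical and serve only to show how ranking preferences might be exposed to users rather than fixed centrally.

```python
from dataclasses import dataclass

# Hypothetical sketch of user-controlled feed ranking: the features and weights
# are illustrative assumptions, not a description of any platform's algorithm.

@dataclass
class Post:
    post_id: str
    recency: float              # 0-1, newer is higher
    source_credibility: float   # 0-1, e.g. from a reputation system
    engagement: float           # 0-1, normalized likes/shares

DEFAULT_WEIGHTS = {"recency": 0.3, "source_credibility": 0.5, "engagement": 0.2}

def rank_feed(posts: list[Post], weights: dict[str, float] | None = None) -> list[Post]:
    """Order posts by a weighted score the user can tune."""
    w = weights or DEFAULT_WEIGHTS
    def score(p: Post) -> float:
        return (w["recency"] * p.recency
                + w["source_credibility"] * p.source_credibility
                + w["engagement"] * p.engagement)
    return sorted(posts, key=score, reverse=True)

# A user who cares mostly about credibility and barely about engagement:
feed = rank_feed(
    [Post("a", 0.9, 0.2, 0.9), Post("b", 0.4, 0.9, 0.1)],
    weights={"recency": 0.2, "source_credibility": 0.7, "engagement": 0.1},
)
print([p.post_id for p in feed])  # ['b', 'a']
```

The design choice worth noting is that the weights belong to the user, not the platform, which is the sense of "user control" intended above.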

Decentralized platforms and the protocols that guide them have the unique ability to reduce concerns about concentrated control over information while enabling ongoing innovation in platform design. Technologies such as blockchain and peer-to-peer networking have made it possible to create social media systems that no single entity controls—although the innovations that would operationalize these capabilities are yet to be put into practice. While these systems face technical and adoption challenges, they offer the potential in the long run to preserve freedom of expression while enabling community-driven governance through entrepreneurship and the disruption of static and obsolete institutions (Schneider 2021). I hope to have argued in the present article for the ability of novel systems and social structures to fundamentally change the ways we view information. These technologies can not only empower us but also shape our perception and relationships with each other and the universe.

Finally, as a technical note, interoperability within technological ecosystems, driven by principles of libertarian and classically liberal politics and ideology, could ensure that users are not locked into particular platforms and can shift the way they leverage social connections and content—that is, through technological alternatives enabled by democratically aligned systems—if they become dissatisfied with current ecosystems. This kind of technical exchange, developed as an objectively competitive market for information and connectivity services, enhances the pressure to innovate within platform governance while reducing concerns about monopolistic control over digital communication. As liberals of one sort or another, we prioritize choice, the protection of freedoms, and the universal accessibility of fundamental truths about politics and the natural world. The proposals in this section could provide substantial progress toward the achievement of these milestones (Tiebout 1956).

Professional Standards and Industry Self-Regulation

In this final part of this section, I examine industry associations and professional organizations that ultimately provide mechanisms for developing and enforcing standards without government intervention. The tech industry has its own unique professional ethics codes—similar to those in journalism, medicine, and engineering—that emphasize accuracy, transparency, and service to the profession rather than purely commercial considerations (Martin 2019). These types of stipulations protect technologists and technocrats from ethical and moral missteps when considering launches of fundamentally new and innovative modes of human communication, human–computer interaction, and the mitigation of digital threats from misinformation, hate speech, or intolerance toward diverse lifestyles. It may come as a surprise that I argue for the protection of diversified expression, but while it is true that classical liberals and libertarians argue that truth and the good are objective in an ideal sense, I believe that individual ideals about how to pursue these objectives, the conclusions drawn from them, and the untrodden paths pursued along the way are fundamental to achieving the ideal liberal society (F. Hayek 1960).

In addition to industry groups, multistakeholder and multilateral initiatives within government—including international associations such as the UN—bring together the managers and users of platforms, leaders and contributing members of civil society organizations, professional academics, and other relevant parties, including the laity, to develop consensus approaches to issues surrounding misinformation and disinformation online. I argue that, despite their elitist image in the culture, these collaborative processes produce more legitimate and effective solutions to the problem of accessing reliable information than either industry self-regulation (a commonly proposed libertarian solution) or government mandates, because they incorporate diverse perspectives and expertise into existing democratic institutions (Raymond and DeNardis 2015). Neither approach is an optimal solution—instead, what is needed is a hybrid mixture of self-regulating government (strong restrictions on power) and private industry mandates moderated by independent consensus-based institutional collaborations.

International cooperation among democratic countries promotes shared norms and standards for protecting the freedom to consume and profit from information while resisting (not without exception) authoritarian approaches to the control of discourse and access to scientific findings. Today’s democratic societies share fundamental commitments to free expression and open debate that distinguish them from authoritarian systems, and these shared values provide a foundation for cooperation by preserving the integrity of digital information globally (Diamond and Schell 2019). As social scientists, we must make sure that our proposals drive forward debate in democratic and liberal theory, because if creative stagnation becomes the status quo, whatever liberal spaces remain for the incubation of new ideas will succumb to regulation—a dire prospect indeed.

Reforming the Regulatory Approach

This section argues that while market and civil society institutions provide the primary mechanisms for addressing the challenge of misinformation in the present, government policy plays a minimally supporting yet important role—and will continue to do so—by creating favorable conditions for market approaches to regulation. Government intervention should enable and foster rather than replace private and community responses to misinformation. If liberals want to avoid regulatory interventions that constrain legitimate speech or distort information markets, then preserving information markets that rely on the freedom to communicate and transmit information is critical to maintaining legitimate governance institutions.

Protecting Platform Immunity and Editorial Discretion

Section 230 of the Communications Decency Act protects online platforms from being held liable for user-generated content. At the same time, the act preserves platforms’ ability to moderate content according to their own standards. This framework allows platforms to experiment with different approaches to moderating pervasive malicious content, including misinformation, while protecting free expression by preventing government authorities from mandating speech restrictions (Kosseff 2019). Because the act allows for differentiated management protocols toward unwanted content, freedom of speech is more or less protected in the law, but in practice, platforms still retain too much power.

Proposals to weaken or eliminate Section 230 protections would likely reduce rather than enhance the quality of information online by creating incentives for excessive censorship, on the one hand, and complete abandonment of content moderation, on the other. Platforms facing potential liability for user content would likely choose broad removal of potentially harmful content, such as misinformation and hate speech, rather than making individualized and perspicuous judgments about the harmfulness of content and the maliciousness of the content creator’s intent. This overcensorship would threaten the freedom of individuals to disseminate legitimate speech and simultaneously fail to protect the public against genuinely harmful content (Goldman 2019). Because AI is already shaping the content we view online, adequate rules and practices, agreed upon by the community, for the uniform management of misinformation and unethical content online are necessary.

Finally, international cooperation should focus on protecting the principles of freedom by which modern platforms operate globally rather than seeking to harmonize restrictive content standards across different legal systems by capitulating to the most restrictive regime. Democratic countries working together can effectively resist authoritarian demands for the removal of expressive content and should seek to respect different cultural approaches by balancing free expression with values such as dignity and common morality (Kaye 2019). These kinds of values and measured restrictions enable maximum freedom under democratic governance with the minimum possible exercise of restrictive limits on the freedom of expression—online or off.[7]

Transparency without Censorship

International and domestic public policy can support the quality of information online by requiring transparency from platforms about their algorithms, content policies, and enforcement practices without mandating specific content decisions and practices, which would threaten the liberal character of free market regulation in the online forum. In addition to the work of regulators, transparency on the part of platforms about forward guidance concerning future restrictions and frameworks enables informed user choices, facilitates academic research, and provides accountability for platform decisions while preserving editorial discretion (Diakopoulos 2016). This type of disclosure also protects vulnerable online actors whose content may not be malicious but is regarded, in one form or another, as “fringe.”

Auditing the functionality of automated algorithms could help identify potential biases or problematic effects of content recommendation systems without prescribing particular solutions that may be perceived as top-down. Independent researchers often serve the public interest by examining platform algorithms—through so-called red teaming—for evidence of systematic discrimination, manipulation, or other harmful or socially concerning patterns while leaving platforms free to address any identified problems through their own methods (Barocas and Selbst 2016). Removing the political pressure on platforms to conform with a specific ideology, conceptual framing, or morally predetermined concept of what is right is a stepping stone to a truly decentralized discursive online ecosystem built on the tenets of economic and social liberty. Researchers need access to the vast corpus of data available on the internet.

As tech platforms develop and deploy generative AI, researchers not affiliated with for-profit firms must have access to open-source models that can be trained and run on independent yet still comprehensive datasets. Such access could facilitate ongoing study of how platforms affect the reliability, dispersion, and allocation of quality information online—particularly as that information increasingly entrenches political discourse among elites and produces asymmetrical social outcomes. Academic researchers currently face significant barriers to accessing the platform data necessary for rigorous empirical analysis. Government policy could require platforms to provide structured access to researchers while protecting user privacy and commercial interests (King and Persily 2020). Ability and training aside, more people with access to more data and more algorithms should pave the way for a horizontal—that is, nonhierarchical—information ecosystem that provides a basis for replicable studies of the effects of data and information on political society at a global scale.

Public requirements promoting the egalitarian dissemination of information aligned with the public interest could oblige platforms, along with the owners and managers of data and knowledge, to provide current information about how content moderation activities, social engineering processes, and policy changes, whether internal or external, affect the constituents that political bodies serve and the consumers who rely on and trust these platforms for valued information. Institutionalizing transparency and information integrity gives civil society organizations, journalists, and users the freedom and capacity both to monitor and to shape their relationship to the content and services that platforms provide. This type of knowledge commons would, guided by libertarian principles, hold companies accountable for their decisions through mechanisms such as civil suits and tort actions, without requiring government preapproval of specific platform decisions (Klonick 2018). Knowledge commons are uniquely suited to this role because they enable multisided platforms where transactions and exchanges can take place with minimal friction (Evans and Schmalensee 2016).

Supporting Media Literacy and Counterspeech

Investments in media literacy through public education campaigns, when combined with a focus on critical thinking skills, provide defenses against misinformation that are more sustainable than content restrictions. This educational approach enhances the capacity of citizens to independently evaluate the information they find and consume online while preserving their autonomy as they work toward their own conclusions about contested issues (Potter 2004). While education and critical thinking are not perfect solutions—thinking can be corrupted, jammed, or scrambled through sensory overload—they remind citizens that intellectual ownership comes from awareness and from action rooted in learning about and evaluating evidence, however powerful or limited one’s epistemic vantage point may be.

The protection of the freedom to access information, along with additional research funding to study misinformation and the effects of platform competition on the public, can improve our social and economic understanding of the phenomena that define the digital age. Both can also inform public debate about appropriate responses and about resistance to illiberal policies targeting freedom of expression. Government agencies such as the US National Science Foundation have changed their posture, deprioritizing research that provides empirical evidence about misinformation’s effects. This shift will leave the public and the academic community to rely on anecdotal evidence or theoretical speculation about the effects and true nature of the social and political space in which the knowledge economy exists (Lazer et al. 2018). The dismantling of government support for science under Donald Trump—recently (as of July 2025) pulling out of UNESCO—can only harm the understanding and mitigation of misinformation.[8]

International scientific exchange programs facilitate cooperation among democratic countries in developing effective approaches to misinformation challenges and coordinating responses to authoritarian and populist information manipulation. These programs support scientific and civil society organizations, promote the independence of journalists and researchers, and often champion community-based efforts in conflicts with government agencies (Diamond and Schell 2019). The role of science in the fight against online misinformation is crucial: the research community’s work—both quantitative and qualitative—fosters a continuity in intellectual discourse that partly compensates for the defects in current social networks that cause misinformation to spread, whether covertly and slowly or publicly and virally.

Resisting Authoritarian Models

Democratic countries face pressure from populists and from activists on the extreme left and right to adopt authoritarian approaches to controlling communication in response to growing concerns over misinformation. Many modern populist regimes have treated misinformation as a political tool of the left, wielded to diminish right-wing and populist policies. But adopting such a narrow approach to the problems of misinformation legitimizes such politics globally and suppresses criticism of repressive government propaganda and censorship (Polyakova and Meserole 2019). Content regulation models developed in authoritarian contexts, such as China’s social credit system and Russia’s foreign agent laws, fundamentally contradict democratic values and reject or stymie efforts to address misinformation at the international level. Democratic responses to these international challenges to communication and the diffusion of knowledge should consistently uphold commitments to the fundamental freedoms of constitutional liberty, including due process and limited government power (M. E. Roberts 2018).

Likewise, international institutions and multilateral agreements that emphasize the protection of freedom of information help coordinate the maintenance of vital, healthy, and robust online communities. Organizations such as the United Nations, the European Union, and regional bodies, including banking and development institutions, often consider proposals for international coordination when the issue of misinformation arises. These organizations must weigh the viability of policies that could be used to justify authoritarian content control if the wrong decision is made or the wrong interpretation of the law is taken as the true will of the people (Kaye 2019). In such cases, trade agreements and diplomatic pressure should be used to resist authoritarian demands for content removal or for platform compliance with repressive, centralized information policies. By coordinating responses to authoritarian and populist information manipulation, democracies and the institutions that sustain them can avoid onerous and systemically flawed measures that compromise their own democratic values, including the protection of free and open press organizations, whether journalistic, academic, or otherwise (Diamond and Schell 2019).

Conclusion: Toward Information Freedom

The challenges posed by misinformation in democratic societies are not insurmountable within liberal democratic and libertarian frameworks. The appropriate response to rising digital ecosystem corruption is not to abandon foundational commitments to free expression and open debate—by, for example, employing “cancel culture” censorship—but to strengthen the institutions and practices that make democratic discourse effective in the first place while preserving space for dissent, allowing for collaborative digital solutions, and accepting the reality that there must be an ongoing revision of accepted beliefs (Milton [1918] 1999; Rauch 2021).

Synthesis of Arguments

This article has argued that misinformation represents a legitimate concern requiring a thoughtful political response. Top-down regulatory approaches, however, are both insufficient and potentially counterproductive or even harmful. The historical pattern of moral panic surrounding new communication technologies suggests that today’s alarm, although responding to genuine challenges, may overstate the perceived threats to communication and fall short of a measured assessment of the more fundamental threats to democratic governance. I have shown that the empirical evidence on misinformation’s effects indicates that the perceived threats are often exaggerated or unfounded and that measurable impacts on democratic outcomes are largely absent. While false information can cause specific harms in particular contexts, research suggests that its aggregate influence on electoral behavior, psychological and public health outcomes, and social or political stability is more limited than popular discourse assumes. I have argued that democratic institutions possess resilience mechanisms that allow them to adapt to information challenges over time (Cohen 1972; Wood and Porter 2019; Orben and Przybylski 2019).

Community-driven approaches to the governance of our shared information space offer superior alternatives to centralized regulation by utilizing distributed and diffuse knowledge, thereby preserving democratic accountability. These mechanisms work with instead of against the flow of legally bounded free expression. Market competition, civil society organizations, and technological innovation provide mechanisms for improving information quality that do not require government intervention in content decisions. The policy framework I have outlined here emphasizes the protection of platform integrity and independence, promotes transparency in the face of widespread censorship, and supports effective solutions such as counterspeech and media literacy while resisting authoritarian models of information control. These approaches strengthen democratic discourse and enhance the capacity of ordinary citizens to evaluate the information they consume and preserve the individual autonomy they need to ponder politically and scientifically contested issues (Ostrom 2010; Benkler 2002).

Policy Recommendations

Based on this analysis, several specific policy recommendations emerge for democratic societies seeking to address misinformation challenges while preserving information freedom. First, governments should resist proposals to weaken platform immunity protections or create government authorities with the power to determine truth and falsehood in political discourse. Section 230 and similar protections in other countries provide essential safeguards for both platform innovation and free expression. Second, transparency requirements should focus on providing information that enables user choice and academic research rather than mandating specific content moderation approaches. Platforms should be required to disclose their algorithms, content policies, and enforcement statistics while retaining discretion over particular content decisions. This transparency enables accountability without government preapproval of speech restrictions.

Third, public investment should prioritize media literacy education, support for fact-checking organizations and investigative journalism, and research into misinformation’s effects. These approaches enhance societal capacity to address information challenges through education and counterspeech rather than censorship. Government funding should be structured to maintain editorial independence and avoid political interference in content decisions. Fourth, international cooperation should focus on protecting information freedom globally rather than harmonizing content restrictions across different legal systems. Democratic countries should coordinate their resistance to authoritarian information manipulation while respecting different cultural approaches to balancing free expression with other values. Trade agreements and diplomatic pressure should be used to resist authoritarian demands for platform compliance with repressive information policies (Brandeis 1914; Ananny and Crawford 2018; Hobbs 2010; Ash 2016; Sen 1999).

Fifth, antitrust enforcement should address concerns about concentrated platform power through structural remedies rather than content regulation. Promoting competition in social media markets provides solutions to concerns about platform power that are more sustainable than attempts to regulate platform speech decisions directly. Interoperability requirements and data portability standards could enhance competition while preserving user choice (Katz and Shapiro 1994; Bork 1978).

The Stakes for Liberal Democracy

The stakes of the misinformation challenge extend beyond national boundaries to the global contest between democratic and authoritarian models of governance. How democratic societies respond to misinformation influences the credibility of democratic values worldwide and affects the ability of democratic countries to resist authoritarian information manipulation. Adopting authoritarian approaches legitimizes similar practices by repressive governments while undermining the principled foundation for international human rights advocacy. Ultimately, the choice is not between perfect information and dangerous falsehood but between imperfect freedom and imperfect control. Democratic societies must choose whether to trust their citizens with the responsibility of evaluating competing claims and participating in debates about contested issues, on the one hand, or to delegate this responsibility to government authorities or private corporations with their own interests and limitations, on the other (Tocqueville [1835] 2000; Madison [1787] 1961).

The classical liberal tradition offers clear guidance on this choice. From John Stuart Mill’s defense of free expression to Friedrich Hayek’s analysis of distributed knowledge and Karl Popper’s philosophy of the open society, liberal thinkers have consistently argued that human fallibility makes it essential to preserve space for criticism, dissent, and revision of accepted beliefs. This tradition does not naively assume that freedom always produces optimal outcomes, but it maintains that the alternatives are worse. The misinformation challenge provides an opportunity to reaffirm and strengthen these foundational commitments while adapting them to contemporary circumstances. Rather than abandoning liberal principles in response to new technologies and challenges, democratic societies should demonstrate their continued relevance by developing innovative approaches that harness the benefits of digital communication while addressing its genuine risks (Nozick 1974; Oakeshott 1962).

This requires moving beyond apocalyptic narratives that exaggerate misinformation’s threats as well as dismissive attitudes that ignore its real effects. The goal should be to develop proportionate responses that acknowledge the complexity of information environments in democratic societies while preserving the openness and adaptability that enable democratic self-governance. The path forward requires patience, intellectual humility, and confidence in democratic processes even when they produce outcomes that seem suboptimal in the short term. It requires investing in education and civil society institutions rather than seeking quick fixes through content regulation. Most importantly, it requires maintaining faith in ordinary citizens’ capacity to participate meaningfully in democratic discourse when provided with appropriate tools, education, and institutional support (Rauch 1993).

Information freedom is not a luxury that democratic societies can afford only in times of stability and consensus. It is precisely when information environments become contested and confusing that preserving space for open debate becomes most important. The test of democratic commitment to free expression comes not when protecting popular speech but when defending the rights of those who challenge conventional wisdom, question authority, or advance unpopular viewpoints. The misinformation challenge will ultimately be resolved not through perfect policies or technologies but through the ongoing commitment of democratic citizens to engage thoughtfully with competing claims, support institutions that promote accuracy and accountability, and resist the temptation to silence those with whom they disagree. This commitment cannot be mandated by law or enforced by algorithms but must emerge from democratic culture and the understanding that information freedom serves the long-term interests of democratic society even when it sometimes seems to impede immediate goals (Hamilton [1788] 1961).


  1. Dealing with the ethical status of technology remains an issue that, in many countries, spans ideological and political divides within a given culture. This is why I argue that the problems posed by misinformation are uniquely political.

  2. I am excluding militarized and weaponized AI, which, ideally, will remain under human control.

  3. It is estimated that social media users in 2025 generate roughly half a billion tweets, over 95 million Instagram shares, and around 34 million TikTok videos every day—a colossal amount of information.

  4. This is not a problem isolated to misinformation: public budgets expand over time, even without proof of concrete and measurable outcomes.

  5. That is, even when intended to protect or nurture the vulnerable, young, infirm, or weak.

  6. This may seem like a paradox—how can we have community-based moderation but no popular censorship? The difference lies in the virality of censorship, on the one hand, and the stability of moderating truth claims based in knowledge and the methods of scientific and moral inquiry, on the other.

  7. Note that this theory requires democratic systems. Any other system would require a fundamental shift in how we think about the legal boundaries of legitimacy and legislation, and many of the principles expounded here may not apply.

  8. There is an argument to be made, in line with the themes of the present article, that defunding centralized misinformation research will result in more access and less government control. This is true, so long as private organizations and scholars pursue the subject and field with adequate intellectual power and resources. Independent work on the subject must be a check both on increasingly bureaucratic tech platforms and on governments themselves.

Submitted: August 19, 2025 CDT

Accepted: November 03, 2025 CDT

References

Allan, Stuart, and Einar Thorsen. 2013. Citizen Journalism: Global Perspectives. New York: Peter Lang.
Allcott, Hunt, and Matthew Gentzkow. 2017. “Social Media and Fake News in the 2016 Election.” Journal of Economic Perspectives 31 (2): 211–36. https:/​/​doi.org/​10.1257/​jep.31.2.211.
Ananny, Mike, and Kate Crawford. 2018. “Seeing without Knowing: Limitations of the Transparency Ideal and Its Application to Algorithmic Accountability.” New Media and Society 20 (3): 973–89. https:/​/​doi.org/​10.1177/​1461444816676645.
Anderson, Chris. 2006. The Long Tail: Why the Future of Business Is Selling Less of More. New York: Hyperion.
Ash, Timothy. 2016. Free Speech: Ten Principles for a Connected World. New Haven, Conn.: Yale University Press.
Bail, Christopher A., Lisa P. Argyle, Taylor W. Brown, John P. Bumpus, Haohan Chen, Michael B. F. Hunzaker, Jaemin Lee, Marcus Mann, Friedolin Merhout, and Alexander Volfovsky. 2018. “Exposure to Opposing Views on Social Media Can Increase Political Polarization.” Proceedings of the National Academy of Sciences 115 (37): 9216–21. https:/​/​doi.org/​10.1073/​pnas.1804840115.
Baker, C. Edwin. 1989. Human Liberty and Freedom of Speech. Oxford: Oxford University Press.
Bakshy, Eytan, Solomon Messing, and Lada Adamic. 2015. “Exposure to Ideologically Diverse News and Opinion on Facebook.” Science 348 (6239): 1130–32. https:/​/​doi.org/​10.1126/​science.aaa1160.
Balkin, Jack M. 2018. “Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation.” UC Davis Law Review 51:1149–1210. http:/​/​hdl.handle.net/​20.500.13051/​4699.
Barocas, Solon, and Andrew D. Selbst. 2016. “Big Data’s Disparate Impact.” California Law Review 104 (3): 671–732. https:/​/​doi.org/​10.15779/​Z38BG31.
Benkler, Yochai. 2002. “Coase’s Penguin, or, Linux and ‘The Nature of the Firm.’” Yale Law Journal 112 (3): 369–446. https:/​/​doi.org/​10.2307/​1562247.
———. 2006. The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven, Conn.: Yale University Press.
Benkler, Yochai, Robert Faris, and Hal Roberts. 2018. Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. New York: Oxford University Press. https:/​/​doi.org/​10.1093/​oso/​9780190923624.001.0001.
Berlin, Isaiah. 1969. “Two Concepts of Liberty.” In Four Essays on Liberty, 118–72. Oxford: Oxford University Press.
Bork, Robert H. 1978. The Antitrust Paradox: A Policy at War with Itself. New York: Basic Books.
Boxell, Levi, Matthew Gentzkow, and Jesse M. Shapiro. 2017. “Greater Internet Use Is Not Associated with Faster Growth in Political Polarization among US Demographic Groups.” Proceedings of the National Academy of Sciences 114 (40): 10612–17. https:/​/​doi.org/​10.1073/​pnas.1706588114.
Brandeis, Louis D. 1914. Other People’s Money, and How the Bankers Use It. New York: Frederick A. Stokes.
Buchanan, James M., and Gordon Tullock. 1962. The Calculus of Consent: Logical Foundations of Constitutional Democracy. Ann Arbor: University of Michigan Press.
Candeub, Adam. 2021. “Reading Section 230 as Written.” Journal of Free Speech Law 1 (1): 139–74.
Christians, Clifford G., Mark Fackler, Kathy Richardson, Andrew Kuypers, and Robert H. Woods Jr. 2009. Media Ethics: Cases and Moral Reasoning. Boston: Allyn and Bacon.
Cohen, Stanley. 1972. Folk Devils and Moral Panics: The Creation of the Mods and Rockers. London: MacGibbon and Kee.
Diakopoulos, Nicholas. 2016. “Accountability in Algorithmic Decision Making.” Communications of the ACM 59 (2): 56–62. https:/​/​doi.org/​10.1145/​2844110.
Diamond, Larry, and Orville Schell, eds. 2019. Chinese Influence and American Interests: Promoting Constructive Vigilance. Stanford, Calif.: Hoover Institution Press.
Drezner, Daniel W. 2021. The Ideas Industry: How Pessimists, Partisans, and Plutocrats Are Transforming the Marketplace of Ideas. Oxford: Oxford University Press.
Dubois, Elizabeth, and Grant Blank. 2018. “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media.” Information, Communication and Society 21 (5): 729–45. https:/​/​doi.org/​10.1080/​1369118X.2018.1428656.
Eady, Gregory, Jonathan Nagler, Andrew Guess, January Zilinsky, and Joshua A. Tucker. 2023. “How Many People Live in Political Bubbles on Social Media? Evidence from Linked Survey and Twitter Data.” Sage Open 9 (1). https:/​/​doi.org/​10.1177/​2158244019832705.
Eisenstein, Elizabeth L. 1980. The Printing Press as an Agent of Change. Cambridge: Cambridge University Press. https:/​/​doi.org/​10.1017/​CBO9781107049963.
Evans, David S., and Richard Schmalensee. 2016. Matchmakers: The New Economics of Multisided Platforms. Boston: Harvard Business Review Press.
Flaxman, Seth, Sharad Goel, and Justin M. Rao. 2016. “Filter Bubbles, Echo Chambers, and Online News Consumption.” Public Opinion Quarterly 80 (S1): 298–320. https:/​/​doi.org/​10.1093/​poq/​nfw006.
Flew, Terry. 2021. Digital Communication: An Introduction. New York: Oxford University Press.
Freelon, Deen, and Chris Wells. 2020. “Disinformation as Political Communication.” Political Communication 37 (2): 145–56. https:/​/​doi.org/​10.1080/​10584609.2020.1723755.
Friedman, Milton. 1962. Capitalism and Freedom. Chicago: University of Chicago Press.
Giles, Jim. 2005. “Internet Encyclopedias Go Head to Head.” Nature 438 (7070): 900–901. https:/​/​doi.org/​10.1038/​438900a.
Gillespie, Tarleton. 2018. Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media. New Haven, Conn.: Yale University Press. https:/​/​doi.org/​10.12987/​9780300235029.
———. 2020. “Content Moderation, AI, and the Question of Scale.” Big Data and Society 7 (2): 1–5. https:/​/​doi.org/​10.1177/​2053951720943234.
Goldman, Eric. 2019. “Why Section 230 Is Better than the First Amendment.” Notre Dame Law Review Reflection 95 (1): 33–46. https:/​/​doi.org/​10.2139/​ssrn.3351323.
Goode, Erich, and Nachman Ben-Yehuda. 1994. Moral Panics: The Social Construction of Deviance. Oxford: Blackwell.
Gorwa, Robert. 2019. “What Is Platform Governance?” Information, Communication and Society 22 (6): 854–71. https:/​/​doi.org/​10.1080/​1369118X.2019.1573914.
Gorwa, Robert, Reuben Binns, and Christian Katzenbach. 2020. “Algorithmic Content Moderation: Technical and Political Challenges in the Automation of Platform Governance.” Big Data and Society 7 (1): 1–15. https:/​/​doi.org/​10.1177/​2053951719897945.
Graves, Lucas. 2016. Deciding What’s True: The Rise of Political Fact-Checking in American Journalism. New York: Columbia University Press. https:/​/​doi.org/​10.7312/​grav17506.
Greenstein, Shane, and Feng Zhu. 2012. “Is Wikipedia Biased?” American Economic Review 102 (3): 343–48. https:/​/​doi.org/​10.1257/​aer.102.3.343.
Grinberg, Nir, Kenneth Joseph, Lisa Friedland, Briony Swire-Thompson, and David Lazer. 2019. “Fake News on Twitter during the 2016 US Presidential Election.” Science 363 (6425): 374–78. https:/​/​doi.org/​10.1126/​science.aau2706.
Guess, Andrew, Jonathan Nagler, and Joshua Tucker. 2019. “Less than You Think: Prevalence and Predictors of Fake News Dissemination on Facebook.” Science Advances 5 (1): 1–8. https:/​/​doi.org/​10.1126/​sciadv.aau4586.
Hamilton, Alexander. (1788) 1961. “Federalist, No. 51.” In The Federalist Papers, by Alexander Hamilton, James Madison, and John Jay, edited by Clinton Rossiter, 320–25. New York: Signet Classics. https:/​/​doi.org/​10.4159/​harvard.9780674332133.
Hayek, F. A. 1945. “The Use of Knowledge in Society.” American Economic Review 35 (4): 519–30. https://www.jstor.org/stable/1809376.
———. 1960. The Constitution of Liberty. Chicago: University of Chicago Press.
Helberger, Natali, Jo Pierson, and Thomas Poell. 2018. “Governing Online Platforms: From Contested to Cooperative Responsibility.” Information Society 34 (1): 1–14. https:/​/​doi.org/​10.1080/​01972243.2017.1391913.
Hirschman, Albert O. 1970. Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States. Cambridge, Mass.: Harvard University Press.
Hobbs, Renee. 2010. Digital and Media Literacy: A Plan of Action. Washington, D.C.: Aspen Institute.
Jack, Caroline. 2017. Lexicon of Lies: Terms for Problematic Information. New York: Data and Society Research Institute. https:/​/​doi.org/​10.69985/​KMPZ3134.
Jenkins, Henry. 2006. Convergence Culture: Where Old and New Media Collide. New York: New York University Press.
Kahan, Dan M. 2017. “Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition.” Cultural Cognition Project Working Paper Series No. 164, Yale Law School, New Haven, Conn., June 27. https:/​/​doi.org/​10.2139/​ssrn.2973067.
Kahneman, Daniel. 2011. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux.
Katz, Michael L., and Carl Shapiro. 1994. “Systems Competition and Network Effects.” Journal of Economic Perspectives 8 (2): 93–115. https:/​/​doi.org/​10.1257/​jep.8.2.93.
Kaye, David. 2019. Speech Police: The Global Struggle to Govern the Internet. New York: Columbia Global Reports. https:/​/​doi.org/​10.2307/​j.ctv1fx4h8v.
King, Gary, and Nathaniel Persily. 2020. “A New Model for Industry–Academic Partnerships.” PS: Political Science and Politics 53 (4): 703–9. https:/​/​doi.org/​10.1017/​S1049096519001021.
Klonick, Kate. 2018. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review 131 (6): 1598–1670. https:/​/​www.jstor.org/​stable/​44865879.
Kosseff, Jeff. 2019. The Twenty-Six Words That Created the Internet. Ithaca, N.Y.: Cornell University Press. https:/​/​doi.org/​10.7591/​9781501735783.
Kraut, Robert E., and Paul Resnick. 2012. Building Successful Online Communities: Evidence-Based Social Design. Cambridge, Mass.: MIT Press. https:/​/​doi.org/​10.7551/​mitpress/​8472.001.0001.
Lazer, David M. J., Matthew A. Baum, Yochai Benkler, Adam J. Berinsky, Kelly M. Greenhill, Filippo Menczer, Miriam J. Metzger, et al. 2018. “The Science of Fake News.” Science 359 (6380): 1094–96. https:/​/​doi.org/​10.1126/​science.aao2998.
Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook. 2012. “Misinformation and Its Correction: Continued Influence and Successful Debiasing.” Psychological Science in the Public Interest 13 (3): 106–31. https:/​/​doi.org/​10.1177/​1529100612451018.
Lodge, Milton, and Charles S. Taber. 2013. The Rationalizing Voter. Cambridge: Cambridge University Press. https:/​/​doi.org/​10.1017/​CBO9781139032490.
Loomba, Sahil, Alexandre de Figueiredo, Simon J. Piatek, Kristen de Graaf, and Heidi J. Larson. 2021. “Measuring the Impact of COVID-19 Vaccine Misinformation on Vaccination Intent in the UK and USA.” Nature Human Behavior 5 (3): 337–48. https:/​/​doi.org/​10.1038/​s41562-021-01056-1.
Madison, James. (1787) 1961. “Federalist, No. 10.” In The Federalist Papers, by Alexander Hamilton, James Madison, and John Jay, edited by Clinton Rossiter, 77–84. New York: Signet Classics.
Martin, Kirsten. 2019. “Ethical Implications and Accountability of Algorithms.” Journal of Business Ethics 160 (4): 835–50. https:/​/​doi.org/​10.1007/​s10551-018-3921-3.
McGuire, William J. 1964. “Inducing Resistance to Persuasion: Some Contemporary Approaches.” Advances in Experimental Social Psychology 1:191–229. https:/​/​doi.org/​10.1016/​S0065-2601(08)60052-0.
Menand, Louis. 2010. The Marketplace of Ideas: Reform and Resistance in the American University. New York: W. W. Norton.
Mendoza, Marcelo, Barbara Poblete, and Carlos Castillo. 2010. “Twitter under Crisis: Can We Trust What We RT?” In Proceedings of the First Workshop on Social Media Analytics, 71–79. New York: Association for Computing Machinery. https:/​/​doi.org/​10.1145/​1964858.1964869.
Mill, John Stuart. (1859) 2012. On Liberty. Cambridge: Cambridge University Press.
Milton, John. (1918) 1999. Areopagitica, and Other Political Writings of John Milton. Indianapolis: Liberty Fund.
Nozick, Robert. 1974. Anarchy, State, and Utopia. New York: Basic Books.
Nyhan, Brendan, and Jason Reifler. 2010. “When Corrections Fail: The Persistence of Political Misperceptions.” Political Behavior 32 (2): 303–30. https:/​/​doi.org/​10.1007/​s11109-010-9112-2.
Oakeshott, Michael. 1962. Rationalism in Politics and Other Essays. London: Methuen.
Orben, Amy, and Andrew K. Przybylski. 2019. “The Association between Adolescent Well-Being and Digital Technology Use.” Nature Human Behavior 3:173–82. https:/​/​doi.org/​10.1038/​s41562-018-0506-1.
Ostrom, Elinor. 1990. Governing the Commons: The Evolution of Institutions for Collective Action. New York: Cambridge University Press. https:/​/​doi.org/​10.1017/​CBO9780511807763.
———. 2010. “Beyond Markets and States: Polycentric Governance of Complex Economic Systems.” American Economic Review 100 (3): 641–72. https:/​/​doi.org/​10.1257/​aer.100.3.641.
Pennycook, Gordon, and David G. Rand. 2019. “Lazy, Not Biased: Susceptibility to Partisan Fake News Is Better Explained by Lack of Reasoning than by Motivated Reasoning.” Cognition 188:39–50. https:/​/​doi.org/​10.1016/​j.cognition.2018.06.011.
Persily, Nathaniel. 2017. “The 2016 US Election: Can Democracy Survive the Internet?” Journal of Democracy 28 (2): 63–76. https:/​/​doi.org/​10.1353/​jod.2017.0025.
Phillips, Whitney. 2018. The Oxygen of Amplification: Better Practices for Reporting on Extremists, Antagonists, and Manipulators Online. New York: Data and Society Research Institute. https:/​/​doi.org/​10.69985/​WGTI7516.
Polanyi, Michael. 1962. “The Republic of Science: Its Political and Economic Theory.” Minerva 1 (1): 54–73. https:/​/​doi.org/​10.1007/​BF01101453.
Polyakova, Alina, and Chris Meserole. 2019. Exporting Digital Authoritarianism: The Russian and Chinese Models. Foreign policy brief. [Washington, D.C.]: Brookings Institution. https:/​/​www.brookings.edu/​wp-content/​uploads/​2019/​08/​FP_20190827_digital_authoritarianism_polyakova_meserole.pdf.
Popper, Karl. 1945. The Open Society and Its Enemies. London: Routledge.
Post, Robert. 2009. Democracy, Expertise, and Academic Freedom: A First Amendment Jurisprudence for the Modern State. New Haven, Conn.: Yale University Press.
Postman, Neil. 1985. Amusing Ourselves to Death: Public Discourse in the Age of Show Business. New York: Penguin.
Potter, W. James. 2004. Theory of Media Literacy: A Cognitive Approach. London: Sage. https:/​/​doi.org/​10.4135/​9781483328881.
Rader, Emilee, Kelley Cotter, and Janghee Cho. 2018. “Explanations as Mechanisms for Supporting Algorithmic Transparency.” In CHI 2018: Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–13. New York: Association for Computing Machinery. https:/​/​doi.org/​10.1145/​3173574.3173677.
Rauch, Jonathan. 1993. Kindly Inquisitors: The New Attacks on Free Thought. Chicago: University of Chicago Press.
———. 2021. The Constitution of Knowledge: A Defense of Truth. Washington, D.C.: Brookings Institution Press. https:/​/​doi.org/​10.5040/​9780815750376.
Raymond, Mark, and Laura DeNardis. 2015. “Multistakeholderism: Anatomy of an Inchoate Global Institution.” International Theory 7 (3): 572–616. https:/​/​doi.org/​10.1017/​S1752971915000081.
Resnick, Paul, Ko Kuwabara, Richard Zeckhauser, and Eric Friedman. 2000. “Reputation Systems.” Communications of the ACM 43 (12): 45–48. https:/​/​doi.org/​10.1145/​355112.355122.
Roberts, Margaret E. 2018. Censored: Distraction and Diversion inside China’s Great Firewall. Princeton: Princeton University Press. https:/​/​doi.org/​10.23943/​9781400890057.
Roberts, Sarah T. 2019. Behind the Screen: Content Moderation in the Shadows of Social Media. New Haven, Conn.: Yale University Press. https:/​/​doi.org/​10.12987/​9780300245318.
Rothbard, Murray N. 1962. Man, Economy, and State: A Treatise on Economic Principles. Princeton, N.J.: D. Van Nostrand.
———. 1978. For a New Liberty: The Libertarian Manifesto. New York: Macmillan.
Scanlon, Thomas. 1972. “A Theory of Freedom of Expression.” Philosophy and Public Affairs 1 (2): 204–26. https:/​/​www.jstor.org/​stable/​2264971.
Schauer, Frederick. 1982. Free Speech: A Philosophical Enquiry. Cambridge: Cambridge University Press.
Schneider, Nathan. 2021. “Modular Politics: Toward a Governance Layer for Online Communities.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW1): e16. https:/​/​doi.org/​10.1145/​3449090.
Sen, Amartya. 1999. Development as Freedom. Oxford: Oxford University Press.
Shirky, Clay. 2008. Here Comes Everybody: The Power of Organizing without Organizations. New York: Penguin.
Starr, Paul. 2004. The Creation of the Media: Political Origins of Modern Communications. London: Basic Books.
Stigler, George J. 1971. “The Theory of Economic Regulation.” Bell Journal of Economics and Management Science 2 (1): 3–21. https:/​/​doi.org/​10.2307/​3003160.
Suzor, Nicolas. 2019. Lawless: The Secret Rules That Govern Our Digital Lives. New York: Cambridge University Press. https:/​/​doi.org/​10.1017/​9781108666428.
Thierer, Adam. 2014. Permissionless Innovation: The Continuing Case for Comprehensive Technological Freedom. Fairfax, Va.: Mercatus Center.
Tiebout, Charles M. 1956. “A Pure Theory of Local Expenditures.” Journal of Political Economy 64 (5): 416–24. https:/​/​doi.org/​10.1086/​257839.
Tocqueville, Alexis de. (1835) 2000. Democracy in America. Edited and translated by Harvey C. Mansfield and Delba Winthrop. Chicago: University of Chicago Press.
Troiano, G., and A. Nardi. 2021. “Vaccine Hesitancy in the Era of COVID-19.” Public Health 194:245–51. https:/​/​doi.org/​10.1016/​j.puhe.2021.02.025.
Tucker, Joshua A., Andrew Guess, Pablo Barberá, Cristian Vaccari, Alexandra Siegel, Sergey Sanovich, Denis Stukal, et al. 2018. Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. Menlo Park, Calif.: Hewlett Foundation. https:/​/​doi.org/​10.2139/​ssrn.3144139.
Tufekci, Zeynep. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. New Haven, Conn.: Yale University Press.
van der Linden, Sander, and Jon Roozenbeek. 2024. The Psychology of Misinformation. Cambridge: Cambridge University Press.
Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. “The Spread of True and False News Online.” Science 359 (6380): 1146–51. https:/​/​doi.org/​10.1126/​science.aap9559.
Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making. Strasbourg, France: Council of Europe.
Werbach, Kevin. 2018. The Blockchain and the New Architecture of Trust. Cambridge, Mass.: MIT Press. https:/​/​doi.org/​10.7551/​mitpress/​11449.001.0001.
Wineburg, Sam, Sarah McGrew, Joel Breakstone, and Teresa Ortega. 2016. Evaluating Information: The Cornerstone of Civic Online Reasoning. Palo Alto, Calif.: Stanford University.
Wood, Thomas, and Ethan Porter. 2019. “The Elusive Backfire Effect: Mass Attitudes’ Steadfast Factual Adherence.” Political Behavior 41 (1): 135–63. https:/​/​doi.org/​10.1007/​s11109-018-9443-y.
