
Artificial Intelligence’s Erosion of American Democracy 

Written by Veronica Barber, MSc Political Theory 

 

Artificial intelligence (AI) refers to software systems designed to replicate aspects of human intelligence. Originally defined as computerized intelligence capable of mimicking human cognition, AI today serves as an umbrella term encompassing a broad range of technologies such as machine learning, deep learning, and natural language processing (NLP). These systems are fueled by massive datasets drawn from publicly available web content, company data, commercial data providers, and even synthetically generated information. As AI becomes increasingly capable of shaping political discourse and state power, it is important to draw attention to its potential repercussions for American democracy. Democracy rests on autonomy, privacy, and access to trustworthy information; these preconditions enable citizens to participate in deliberation and civic duties. Yet they are threatened by the subtle transformations of advancing technologies, which are shaping an environment in which citizens become less able to fulfill the requirements of democratic life. 

 

Trust in the democratic process began to decline after Russia’s interference in the 2016 U.S. presidential election. During the same election, Cambridge Analytica demonstrated the consequences of data-driven political influence by using intimate Facebook data to micro-target and manipulate American voters. In 2016, the United States fell from a “full democracy” to a “flawed democracy” in the “Democracy Index,” and its ranking has continued to decline, primarily due to the “erosion of confidence in government and public institutions.” Trust in U.S. government institutions is falling, and some citizens have begun to question the viability of democracy itself. These preexisting vulnerabilities may amplify AI’s potential to damage democratic processes. 

 

Since 2016, AI-driven personalization, surveillance capitalism, generative content, chatbots, and large-scale data collection have further eroded democratic trust. These systems discreetly shape what Americans see, share, and believe, often without citizens realizing it. This discussion thus underscores a central principle: autonomy, privacy, and trustworthy information are the lifeblood of democracy, and citizens must retain them. When AI disrupts these conditions, the result is not technological progress but democratic erosion. 

 

AI and Autonomy 


Democratic practice relies on autonomy. Citizens must make decisions grounded in personal values rather than beliefs that are coerced, deceived, or manipulated by external forces. AI increasingly challenges this democratic foundation through algorithmic personalization and behavioral prediction. “Black box” systems obscure the processes through which information is curated and presented, shaping perceptions and influencing behavior without transparency. One immediate consequence is the rise of “echo chambers,” in which algorithms expose individuals only to content aligned with their past preferences. Within these chambers, exposure is limited to the narrow perspectives a user has previously engaged with, which ultimately shrinks the intellectual openness required in a pluralistic nation such as the United States. “Hyper-personalization” amplifies these effects. AI systems construct detailed psychological and behavioral profiles, delivering content tailored to predicted preferences. Citizens may perceive their decisions as freely made, but algorithms subtly influence their choices, interests, and emotions. This leads to manipulation (“nudging” users toward social validation and certain actions), cognitive offloading (outsourcing one’s judgment and weakening one’s ability to think), and loss of control or agency. These “sophisticated manipulation technologies” create the illusion of autonomy while steering behavior according to predictive models. 
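
The feedback loop behind echo chambers can be made concrete with a deliberately toy sketch in Python. The catalog, viewpoint scores, and ranking rule below are invented for illustration, and no real platform works this simply; the point is only that a ranker optimizing for similarity to past engagement narrows exposure on its own.

# A toy model (not any platform's real algorithm) of the echo-chamber loop:
# a ranker that always serves the items closest to a user's past engagements,
# so the range of viewpoints the user sees narrows over time.

catalog = [i / 100 for i in range(101)]  # items spread across a 0..1 "viewpoint" axis

def recommend(history, k=5):
    """Rank the whole catalog by closeness to the user's average past engagement."""
    center = sum(history) / len(history)
    return sorted(catalog, key=lambda item: abs(item - center))[:k]

history = [0.7]  # the user once engaged with a single 0.7-leaning item
for _ in range(10):
    # The user engages with whatever is served, so the profile self-reinforces.
    history.extend(recommend(history))

print(f"viewpoints seen span {min(history):.2f}-{max(history):.2f} of a 0.00-1.00 axis")

After ten rounds the user has only ever seen items within a few hundredths of the starting viewpoint: nothing in the loop ever pushes exposure outward.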

 

Recent research highlights the scale of this threat. A 2024 study on GPT-4 found that the large language model (LLM) equaled or exceeded humans in persuasive ability across multiple topics. When provided with personal information about its conversational partner, GPT-4 was 64 percent more persuasive than humans without access to such data. Humans who were given access to the same personal information performed worse than those without it, suggesting that AI leverages personalization more effectively to boost rhetorical impact. This research exposes a democratic concern: if AI can out-persuade citizens, then political actors, institutions, and civic deliberation risk being displaced by machine-driven influence. The long-term danger is that the unchecked corrosion of autonomy could produce an engineered perception of private and civic life. Democracy could then shift from genuine self-rule into a sequence of prompted choices and manufactured consent, in which citizens’ decisions are shaped by the algorithms they encounter rather than by their own reasoned judgment. 

 

AI and Privacy 


From fingerprints and facial recognition to geolocation data, web beacons, pixel tags, and biometrics, citizens’ data is continuously captured, analyzed, and traded, fueling both surveillance capitalism and government surveillance. These massive datasets, which create detailed profiles of individuals’ habits, preferences, beliefs, and relationships, are compiled and stored, often without express consent or transparency. AI does not merely store this data; it interprets and acts upon it. Technologies such as machine learning transform raw surveillance data into powerful tools of prediction and unregulated control. Algorithms can now rank and sort individuals to determine creditworthiness, employment potential, or likelihood of criminal activity. This predictive governance allows institutions to anticipate and influence citizen behavior, reducing the people they are supposed to represent to data points rather than participants in democratic life. 


In 2024, the U.S. House of Representatives’ Bipartisan Artificial Intelligence Task Force issued a “Report on Artificial Intelligence,” mentioning the term “privacy” 156 times. The report highlighted AI’s capacity to exacerbate privacy harms, noted Americans’ limited remedies for these harms, and recommended strengthening existing federal and state privacy laws. The lack of regulation has created opportunities for government and corporate actors to exploit data in ways that undermine democracy. For example, the Department of Government Efficiency (DOGE) has been accused of weaponizing sensitive data against citizens. A National Labor Relations Board whistleblower alleged that DOGE operatives systematically “exfiltrated” large volumes of sensitive government records, using them to target individuals based on housing, tax, and benefits information. Citizens who provide data in good faith may see it weaponized without consent or recourse, as when the Trump administration gave deportation officials the immigration statuses of Medicaid enrollees. These practices may violate the Fourth Amendment’s protections against unreasonable searches and infringe upon First Amendment freedoms of speech and association. AI-powered surveillance threatens privacy through its ability to analyze massive datasets, such as medical records, while training on historical data can reinforce the societal inequalities already present in credit scoring and insurance pricing. The opacity of these systems exacerbates the harm, because citizens rarely know when their data is being used, how it is interpreted, or the consequences of algorithmic inferences. 

 

In this context, privacy is not simply about secrecy or concealment; it is about having control over personal information and the interpretations derived from it. When AI systems claim authority to interpret our data, they compromise individual democratic participation. A society in which citizens cannot meaningfully manage their own information is one in which democratic accountability is weakened and citizens’ trust betrayed. When privacy is thwarted, so is the capacity of citizens to participate effectively and freely. 

 

AI and Access to Trustworthy Information 


Perhaps the most immediate threat AI poses to American democracy is its capacity to undermine access to trustworthy information. Generative AI, especially, hides in plain sight, producing deepfakes and altered soundbites that can distort public understanding. For voters, these manipulations threaten the fundamental capacity to monitor elected officials and hold them accountable. A healthy democracy depends on citizens making informed decisions in free and fair elections; yet AI-generated propaganda can be as persuasive and credible as human-authored content. Exposure to a continuous stream of targeted misinformation can skew perceptions of policy and political performance, thereby destabilizing the mechanisms of accountability. 

 

A 2023 study examining GPT-3 highlights this challenge. Researchers analyzed “organic” (human-authored) tweets and “synthetic” tweets generated by GPT-3, both accurate and misleading, across topics including vaccines, climate change, and 5G. Among 697 participants, results showed that people could not reliably distinguish synthetic from organic content. Moreover, GPT-3’s accurate tweets were recognized as true more often than human-authored accurate tweets, while its false tweets were harder to detect than human misinformation. Participants also processed GPT-3 content faster and judged it with higher confidence. Humans outperformed GPT-3 at detecting true information (~78% vs. ~64%), but both were roughly equivalent (~90%) in identifying disinformation. The implication is that GPT-3, and more powerful models, can seamlessly blend credible and manipulative content and flood the digital ecosystem. Synthetic information, and a person’s inability to distinguish it from organic information, undermines the capacity to access accurate, verifiable, and trustworthy information. Furthermore, as synthetic content circulates at scale, the shared factual foundation essential for civic discourse erodes. Democratic accountability cannot function without organic, trustworthy information, and it certainly cannot function without reasonably informed citizens. 

 

Generative AI amplifies these risks through hyper-realistic media. Text-to-image algorithms and deepfake tools can convincingly replicate the voice and mannerisms of political figures. During the 2024 U.S. presidential race, AI-generated deepfakes circulated widely, portraying candidates in fabricated scenarios and attributing false statements to them. From 2022 to 2023, over 15 billion images were generated via text-to-image AI, with approximately 34 million new creations daily on platforms such as OpenAI’s DALL-E 2. While not all are used maliciously, the sheer volume demonstrates the potential to overwhelm the digital ecosystem with synthetic content. Global assessments attest to the severity of these challenges: the International Panel on the Information Environment reported that AI contributed harm in 69% of analyzed election cases, confirming a widespread pattern of damage to public knowledge. Over time, this erosion undermines trust not only in the media but in government institutions themselves. When citizens can no longer discern truth from fabrication, democratic mechanisms such as representation and electoral responsiveness become performative instead of substantive. The ultimate consequence is that when the line between truth and falsehood blurs, trust evaporates. 

 

Mitigating Threats to Democracy 


The central question surrounding AI regulation is not merely whether the U.S. federal government should regulate emerging technologies, but how authority should be distributed among federal, state, and private actors. Currently, AI governance in the United States is a fragmented patchwork of constitutional protections (e.g., Carpenter v. United States), federal agency action (e.g., the FTC acting against discriminatory AI systems), and state-level initiatives (e.g., California’s CCPA). This hybrid model seems the most defensible path. Federal legislation should establish baseline protections that safeguard autonomy, privacy, and trustworthy information. Agencies should be empowered to enforce transparency, nondiscrimination, and accountability with agility. States should be allowed to experiment with more protective measures tailored to local contexts. Such a model would respect the U.S. constitutional structure while preventing corporations or individual states from unilaterally determining AI’s societal impact. The question then becomes which protections a federal baseline should include. 

 

Autonomy requires that citizens retain the ability to determine when and how information shapes their beliefs. Federal rules should therefore require AI systems to incorporate human-in-the-loop oversight for high-risk decisions and to restrict manipulative designs such as behavioral micro-targeting and addictive engagement algorithms. Citizens should also have mechanisms to contest adverse automated outcomes. Biden’s 2023 Executive Order on AI promoted human oversight and safety standards, but the 2025 Executive Order “Removing Barriers to American Leadership in AI” shifted priorities toward deregulation, emphasizing industrial competition over democratic safeguards. Without statutory protections, corporations retain extensive power to shape civic behavior through opaque algorithms. 
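
What human-in-the-loop oversight could look like in practice is suggested by the following minimal sketch. The risk categories, confidence threshold, and names are invented for illustration and are not drawn from any statute or deployed system.

from dataclasses import dataclass

# Hypothetical sketch of human-in-the-loop oversight: automated decisions in
# high-risk categories, or made with low model confidence, are queued for a
# human reviewer instead of taking effect automatically. All names and the
# 0.9 threshold are invented.

HIGH_RISK_CATEGORIES = {"benefits_denial", "loan_rejection", "watchlist_flag"}

@dataclass
class Decision:
    category: str            # what kind of decision the model made
    model_confidence: float  # the model's self-reported confidence, 0..1
    outcome: str             # the action the model proposes

def route(decision: Decision) -> str:
    """Decide whether the outcome applies automatically or goes to a person."""
    if decision.category in HIGH_RISK_CATEGORIES or decision.model_confidence < 0.9:
        return "human_review"  # a person confirms, amends, or rejects it
    return "auto_apply"

print(route(Decision("benefits_denial", 0.97, "deny")))  # human_review
print(route(Decision("spam_filter", 0.95, "filter")))    # auto_apply

The design choice the sketch makes visible is that the gate is defined by the stakes of the decision, not by the model's accuracy: even a highly confident model never applies a high-risk outcome on its own.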

 

Comprehensive federal privacy legislation is critical to prevent the misuse of personal data by government and private actors. In the meantime, states have emerged as de facto guardians of privacy. California’s CCPA and CPRA provide residents with the right to opt out of data sales, correct personal information, and limit the use of sensitive data. Utah and Colorado are developing narrower, but growing, safeguards. Federal interventions, building on these state-level regulations, could take the form of data minimization (collecting only what is necessary), purpose limitation (preventing secondary, harmful uses of personal data), and redress (empowering citizens to know when and why their data is used and to seek remedies for violations). Comparative democracies like Estonia demonstrate the effectiveness of citizen-controlled data architectures: Estonian citizens can securely access their government data and see who has accessed it, ensuring accountability and trust. The U.S. could adopt similarly strong privacy guardrails to prevent digital innovation from eroding democratic participation. 

 

Ensuring access to reliable information is essential for democratic legitimacy. Federal regulation should establish risk-based standards modeled on frameworks such as the EU AI Act. High-risk AI systems should undergo external auditing, maintain transparency, and implement provenance tracking for synthetic media that affects elections. Platforms should be held accountable for the content flows that shape civic perception. While voluntary initiatives like the NIST AI Risk Management Framework provide guidance, their enforcement is limited. Without enforceable rules, AI systems remain capable of manipulating public understanding. A resilient democratic framework will require the U.S. to enact federal guardrails that protect autonomy, privacy, and trustworthy information while allowing states to strengthen protections and preventing corporations from holding unilateral power over the public sphere. This hybrid governance approach balances innovation with accountability, ensuring that AI serves society rather than undermining it. 
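
As a rough illustration of what provenance tracking might involve, the sketch below lets a publisher bind a signature to a media file’s exact bytes so that later alteration is detectable. It is a drastically simplified toy: real provenance standards such as C2PA use public-key certificates rather than a shared secret, and the key and file contents here are invented.

import hashlib
import hmac

# Toy provenance check: the publisher signs a hash of the media bytes, and the
# same key later verifies that the bytes are unchanged. Real systems use
# public-key signatures so verifiers never hold the signing secret.

SIGNING_KEY = b"publisher-signing-key"  # invented placeholder, not a real key

def sign_media(media_bytes: bytes) -> str:
    """Return a provenance tag bound to these exact bytes."""
    digest = hashlib.sha256(media_bytes).digest()
    return hmac.new(SIGNING_KEY, digest, "sha256").hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """True only if the bytes are unchanged since signing."""
    return hmac.compare_digest(sign_media(media_bytes), tag)

original = b"frame data of a campaign video"
tag = sign_media(original)
print(verify_media(original, tag))               # True: provenance intact
print(verify_media(original + b" edited", tag))  # False: content was altered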

 

Conclusion 


Human autonomy, data privacy, and access to trustworthy information are fundamental preconditions for a functioning democracy. Technologies that once promised empowerment now challenge the foundations of democratic life, and the question becomes whether democracy can remain resilient under this transformation. Still, the outlook on artificial intelligence is not entirely grim. As with previous technological revolutions, fear is both natural and warranted, yet AI holds meaningful promise for a better future, so long as it enables more responsive governments and properly empowers citizens to participate more meaningfully, as Estonia’s example shows. These benefits can be realized only if they are accompanied by guardrails ensuring that AI systems are accurate, transparent, and aligned with democratic norms. 

 

Sources

 

① Cole Stryker & E. Kavlakoglu, “What Is Artificial Intelligence (AI)?”, IBM.

② “Definitions”, Information Commissioner’s Office.

③ Marci A. Hamilton, “The Russian Meddling in the 2016 Election: The Internet Meets the Democratic System”, Verdict, 9 November 2017.

④ Joey Westby, “‘The Great Hack’: Cambridge Analytica is just the tip of the iceberg”, Amnesty International, 24 July 2019.

⑤ “Democracy Index”, Our World in Data, September 2025.

⑥ “Public Trust in Government: 1958–2024”, Pew Research Center, 24 June 2024.

⑦ Patrizia Attar & Anna Kohler, “Looking inside the black box—semantic investigations on a frequently used expression beyond AI”, Frontiers, 5 September 2025.

⑧ Karl Manheim & L. Kaplan, “Artificial Intelligence: Risks to Privacy and Democracy”, The Yale Journal of Law and Technology, 2019.

⑨ F. Salvi et al., “On the Conversational Persuasiveness of GPT-4”, Nature Human Behaviour, 2025.

⑩ Mehrdad Kordi, “Surveillance Capitalism: The Transformation of Raw Online Data into Valuable Assets by High-Tech Companies – Is AI Governance a Threat or a Solution to Our Privacy Concerns?”, The Palgrave Handbook of Sustainable Digitalization for Business, Industry, and Society, January 2024.

⑪ Suresh Venkatasubramanian, “How AI and Surveillance Capitalism are Undermining Democracy”, 21 August 2025.

⑫ Nicole Turner Lee, Paul Resnick, & Genie Barton, “Algorithmic bias detection and mitigation: Best practices and policies to reduce consumer harms”, Brookings, 22 May 2019.

⑬ “Bipartisan House Task Force Report on Artificial Intelligence”, 118th Congress, December 2024.

⑭ Stephanie K. Pell, Josie Stewart, & Brooke Tanner, “Privacy under siege: DOGE’s one big, beautiful database”, Brookings, 25 June 2025.

⑯ Kimberly Kindy & Amanda Seitz, “Trump administration gives personal data of immigrant Medicaid enrollees to deportation officials”, AP, 14 June 2025.

⑰ Blake Murdoch, “Privacy and artificial intelligence: challenges for protecting health information in a new era”, BMC Medical Ethics, 15 September 2021.

⑱ Chidimma Umeaduma & Adedapo Adeniyi, “AI-Powered Credit Scoring Models: Ethical Considerations, Bias Reduction, and Financial Inclusion Strategies”, International Journal of Research Publication and Reviews, March 2025.

⑲ Phil Swatton & Margaux Leblanc, “What are deepfakes and how can we detect them?”, The Alan Turing Institute, 7 June 2024.

⑳ Sam Stockwell, “How can we stop AI-enabled threats damaging our democracy?”, The Alan Turing Institute, 29 May 2024.

㉑ Giovanni Spitale, Nikola Biller-Andorno, & Federico Germani, “AI model GPT-3 (dis)informs us better than humans”, Science Advances, 28 June 2023.

㉒ Thalia Khan, “What Role Did AI Play in the 2024 U.S. Election?”, Partnership on AI, 4 November 2024.

㉓ Alina Valyaeva, “People Are Creating an Average of 34 Million Images Per Day. Statistics for 2024”, Everypixel Journal, 2024.

㉕ “Carpenter v. United States”, Supreme Court of the United States, October Term 2017.

㉗ “California Consumer Privacy Act (CCPA)”, State of California, 13 March 2024.

㉙ “Removing Barriers to American Leadership in Artificial Intelligence”, The White House, 23 January 2025.

㉚ “The California Privacy Rights Act of 2020”, State of California, November 2020.

㉛ “S.B. 149 Artificial Intelligence Amendments”, State of Utah, 2024.

㉜ “SB 24-205: Consumer Protections for Artificial Intelligence”, State of Colorado, 2024.

㉝ Shreya Ghimire, “Why Estonia is Europe’s Digital Powerhouse: A Study in E-governance Transformation”, Frost & Sullivan Institute, 18 September 2025.

㉞ “The EU Artificial Intelligence Act”, Future of Life Institute, 2025.

㉟ “AI Risk Management Framework”, NIST, 2023.
