Abstract
This study examines the role of Large Language Models (LLMs) in the dissemination of propaganda and misinformation, focusing on recent corporate and state-led efforts to manipulate public perception. The rise of sophisticated LLMs, such as GPT-4, has transformed digital communication, enabling rapid content generation that can be both beneficial and detrimental. While these models enhance information dissemination across various sectors, they also present risks as tools for spreading disinformation. The literature review highlights how advanced technologies, including AI, have intensified the spread of false narratives, particularly during significant political events, such as the 2016 U.S. presidential election. This study utilizes a qualitative approach, combining SWOT analysis with inductive research, to explore the implications of LLMs in propaganda dissemination. Key findings indicate that LLMs can overwhelm traditional fact-checking mechanisms, facilitate targeted messaging, and erode internet power distance in authoritarian regimes. The paper concludes with recommendations for developing robust regulatory frameworks, advancing detection technologies, and enhancing public education on AI literacy to mitigate the potential harms of LLM-driven propaganda while leveraging their capabilities for positive societal impact.
Introduction
In this study, researchers assess the role of Large Language Models (LLMs) in recent corporate and state-led efforts to propagandize the public through disinformation and unethical conditioning tactics. The increasing sophistication and widespread use of LLMs in generating and disseminating information have transformed the landscape of digital communication. LLMs like GPT-4o can produce coherent and contextually relevant text, making them valuable tools in various sectors, including education, healthcare, and customer service (Floridi & Cowls, 2022). However, this technological advancement has also raised concerns about the role of LLMs in the spread of propaganda and misinformation. The ability of these models to generate vast amounts of content quickly and tailor messages to specific demographics presents a double-edged sword: while they can disseminate valuable information efficiently, they can also be exploited by bad actors looking to manipulate public sentiment (Huang & Wang, 2021).
Literature Review
The proliferation of misinformation and propaganda facilitated by digital technologies has been the subject of extensive research. For instance, Anderson and Rainie (2017) explored how advanced online tools, including artificial intelligence, magnify the spread of false narratives at unprecedented scale, making it easier for bad actors to manipulate public perception. During the United States’ 2016 presidential election, false news stories on Facebook outperformed legitimate news sources in terms of engagement, with the top twenty fake stories generating over 8.7 million shares, reactions, and comments (Allcott & Gentzkow, 2017). Extremist echo chambers use social media to spread fake news stories and manipulated content, hoping to sway electoral outcomes and public sentiment (Törnberg, 2018).
The integration of AI, particularly LLMs, into this ecosystem intensifies the challenges due to their ability to produce convincing and personalized content at scale. OpenAI’s GPT-3, for example, has 175 billion parameters, enabling it to generate highly coherent and contextually relevant text, which can be misused to create sophisticated propaganda (Bender et al., 2021).
Xenophobic biases in LLM training data propagate to end users, leading to the unintended amplification of stereotypes and misinformation. Sheng et al. (2019) revealed that language models could generate biased and toxic content, with sixty-six percent of outputs reflecting negative stereotypes when prompted with certain demographic descriptors. The lack of contextual understanding further exacerbates this issue, as LLMs may generate content that appears coherent but lacks factual accuracy or an ethical foundation (Vincent, 2022).
In states with centralized, oppressive governments, the use of AI for propaganda has reduced internet power distance, enabling regimes to directly influence public discourse without intermediaries (Sedova et al., 2021). The Chinese government employs over two million internet commentators, known as the “Fifty Cent Party,” to manipulate public sentiment (King et al., 2017). The deployment of AI amplifies this effort, allowing for the automated generation of pro-government content at an unprecedented rate (Feldstein, 2019).
The ethical and legal frameworks governing AI technologies lag behind their rapid advancement, necessitating policies that address the misuse of LLMs in disseminating propaganda while balancing innovation and freedom of expression (O’Neil, 2022). Public education on AI literacy is also crucial in empowering individuals to critically evaluate the information they encounter (Gunkel, 2021).
Methodology
This study utilizes a qualitative approach, combining SWOT analysis with an inductive research methodology in which researchers collect secondary data, identify patterns, and develop theories to support their conclusions. The SWOT analysis evaluates the strengths, weaknesses, opportunities, and threats of LLMs in the context of propaganda dissemination. The inductive research draws on case studies from recent political campaigns and global events to identify patterns and emergent themes related to the influence of LLMs on public sentiment and the loss of internet power distance. Secondary data from recent academic journals, industry reports, and reputable news sources were collected, ensuring a comprehensive understanding of the probable implications of AI.
SWOT Analysis of LLMs in the Dissemination of Propaganda
The SWOT analysis provides a structured evaluation of the internal and external factors affecting the use of LLMs for spreading propaganda:
Strengths
LLMs offer significant advantages to those seeking to disseminate propaganda. Their ability to rapidly generate large volumes of content enables swift message propagation before counter-narratives or fact-checking can occur. They can tailor content to specific demographics by personalizing messages based on user data such as age, location, interests, and online behavior, enhancing their effectiveness in influencing target audiences. Additionally, their dual-use nature allows them to produce both factual information and misinformation, making them versatile tools for either truth dissemination or manipulation, depending on the user’s intent.
Efficiency and Speed of Content Generation. LLMs can generate large volumes of content rapidly. For instance, an AI system can produce up to 20,000 words per hour, significantly outpacing human capabilities (Hao, 2018). This feature has been exploited during political campaigns to disseminate narratives swiftly (Huang & Wang, 2021). Such efficiency enables the propagation of messages before counter-narratives or fact-checking can occur, influencing public opinion in real time.
Customization and Targeting Capabilities. The ability of LLMs to tailor content to specific demographics enhances their effectiveness as propaganda tools. With access to user data, LLMs can personalize messages based on age, location, interests, and online behavior. Facebook’s advertising platform, for example, allows targeting based on over ninety-eight personal data points (Zuboff, 2020). By analyzing such data, LLMs can create personalized messages that resonate more deeply with target audiences.
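To make the mechanism concrete, the sketch below shows why such targeting is cheap to automate: a single template is filled with profile fields to produce a distinct prompt per audience segment. The profiles, the template, and the `llm.generate` call named in the comment are hypothetical illustrations, not any platform’s actual pipeline.

```python
# Mail-merge-style sketch: one core message is re-framed per audience
# segment by filling a prompt template with profile fields. The profiles
# and template are hypothetical; a real pipeline would pass each prompt
# to an LLM API (the `llm.generate` call in the comment is a placeholder).
profiles = [
    {"age": 67, "city": "Tampa", "interest": "retirement savings"},
    {"age": 19, "city": "Austin", "interest": "student loan debt"},
]

TEMPLATE = (
    "Rewrite this announcement for a {age}-year-old reader in {city} "
    "who cares about {interest}: 'The new policy takes effect May 1.'"
)

for profile in profiles:
    prompt = TEMPLATE.format(**profile)
    print(prompt)  # in practice: message = llm.generate(prompt)
```

Because the marginal cost of each additional variant is effectively zero, the same underlying claim can be restated thousands of times in registers tuned to each segment.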
Dual-Use Nature. LLMs can generate both accurate information and misinformation. This duality makes them powerful instruments that can be leveraged for truth dissemination or manipulation, depending on the intent of the user (Floridi & Cowls, 2022). For example, GPT-3 has been used to write news articles indistinguishable from those written by humans, raising concerns about its potential for misuse (Brown et al., 2020).
Weaknesses
Despite their advanced capabilities, LLMs exhibit significant weaknesses that can contribute to the unintentional spread of misinformation and propaganda. They are susceptible to inherent biases present in their vast training datasets, leading to prejudiced or misleading outputs without deliberate intent. Additionally, LLMs lack genuine comprehension and contextual understanding, relying solely on pattern recognition. This limitation can result in contextually inappropriate or even dangerous content, such as incorrect medical advice. Furthermore, the sophistication of LLM-generated text makes it challenging for both users and experts to distinguish between authentic and manipulated messages, complicating efforts to detect and mitigate AI-generated propaganda.
Susceptibility to Biases in Training Data. LLMs are trained on vast datasets that may contain inherent biases. Studies have shown that models like GPT-2 and GPT-3 can produce outputs that are racist, sexist, or otherwise biased in fifteen to nineteen percent of cases when prompted with certain keywords (Bender et al., 2021). This can lead to the unintentional propagation of prejudiced or misleading content.
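As a rough illustration of how such audits are run, the sketch below samples completions for demographic prompts in the style of Sheng et al. (2019) and scores each one with an off-the-shelf toxicity classifier. The model choices (GPT-2 and `unitary/toxic-bert`) and the 0.5 threshold are assumptions for illustration, not the instruments used in the cited studies.

```python
# Sketch of a prompted bias audit: sample completions for demographic
# prompts (after Sheng et al., 2019) and score each one for toxicity.
# GPT-2, unitary/toxic-bert, and the 0.5 threshold are illustrative
# assumptions, not the instruments used in the cited studies.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
toxicity = pipeline("text-classification", model="unitary/toxic-bert")

groups = ["woman", "man", "Black person", "White person"]
prompts = [f"The {group} worked as" for group in groups]

samples_per_prompt = 5
flagged = 0
for prompt in prompts:
    completions = generator(prompt, max_new_tokens=30, do_sample=True,
                            num_return_sequences=samples_per_prompt,
                            pad_token_id=50256)
    for completion in completions:
        top = toxicity(completion["generated_text"])[0]
        if top["label"] == "toxic" and top["score"] > 0.5:
            flagged += 1

total = len(prompts) * samples_per_prompt
print(f"{flagged}/{total} completions flagged as toxic")
```

Comparing the flag rate across demographic groups, rather than the absolute rate, is what surfaces the skew the cited studies describe.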
Lack of Contextual Understanding. Despite their advanced capabilities, LLMs lack genuine comprehension and contextual awareness. They rely on pattern recognition rather than understanding. This limitation can result in outputs that are contextually inappropriate or misleading. For example, an LLM might generate medical advice that is incorrect or dangerous because it cannot verify facts or understand the implications (Marcus & Davis, 2020).
Difficulty in Detecting Manipulative Content. The sophistication of LLM-generated content poses challenges in distinguishing between authentic and manipulated messages. Deepfake text is harder to detect than deepfake images or videos, with detection algorithms achieving only around seventy-three percent accuracy (Ippolito et al., 2020). Both users and experts may struggle to identify AI-generated propaganda, complicating efforts to mitigate its impact.
Opportunities
Despite the challenges posed by LLMs, significant opportunities exist to mitigate their misuse in propaganda dissemination. Advances in detection and transparency tools offer promising solutions; emerging technologies can identify and flag AI-generated content, helping to combat misinformation. Regulatory developments are also underway, with legislative bodies increasingly recognizing the need for oversight. Initiatives like the European Union’s proposed AI Act aim to establish guidelines for transparency, checks and balances, and accountability regarding AI technologies. Additionally, enhancing public education on AI can empower individuals to critically assess information. By fostering media literacy and critical thinking skills, educational initiatives can reduce the public’s susceptibility to propaganda.
Advances in Detection and Transparency Tools. Emerging technologies in AI transparency and content detection offer promising avenues for combating AI-driven propaganda. For instance, OpenAI developed a tool called the “GPT-2 Output Detector,” which achieved a ninety-five percent accuracy rate in distinguishing between human- and AI-generated text under certain conditions (Solaiman et al., 2019). Improved algorithms and tools can help identify and flag misleading content generated by LLMs.
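A minimal sketch of querying such a detector follows, assuming the RoBERTa-based GPT-2 output detector released alongside Solaiman et al. (2019) remains available through Hugging Face’s transformers library; the checkpoint name and its output labels are assumptions about that public release, and its reported accuracy should not be expected to hold for text from newer models.

```python
# Sketch: score a passage with the RoBERTa-based GPT-2 output detector
# released alongside Solaiman et al. (2019). The checkpoint name is the
# Hugging Face mirror of that release (an availability assumption), and
# its accuracy on text from newer models is not guaranteed.
from transformers import pipeline

detector = pipeline("text-classification",
                    model="openai-community/roberta-base-openai-detector")

passage = ("The city council voted unanimously last night to approve "
           "a budget that includes new funding for road repairs.")

result = detector(passage)[0]
# Returns a label (human- vs. machine-written) with a confidence score;
# the exact label strings depend on the checkpoint's configuration.
print(f"label={result['label']}, score={result['score']:.2f}")
```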
Regulatory Developments. Legislative bodies are increasingly recognizing the ethical and legal challenges posed by LLMs. The European Union’s proposed AI Act aims to regulate AI technologies, including provisions for transparency and accountability (European Commission, 2021). This awareness is leading to the development of stronger governance and regulatory frameworks aimed at overseeing the use of these technologies.
Public Education on AI. Enhancing public understanding of AI-generated content can empower individuals to critically assess information. Surveys indicate that sixty-two percent of internet users are unaware of deepfake technologies (Europol, 2020). Educational initiatives can reduce the susceptibility of the public to propaganda by fostering media literacy and critical thinking skills (Gunkel, 2021).
Threats
When misused, LLMs pose several significant threats in propaganda dissemination. They can automate and scale misinformation campaigns, rapidly spreading false information that endangers public health and undermines democratic processes. The erosion of internet power distance is another concern: governments can directly infiltrate personal information spaces with tailored content, undermining the internet’s role as a platform for open discourse and allowing authoritarian entities greater control over public perception. Additionally, the decentralized and anonymous nature of LLM deployment makes it difficult to hold malicious users accountable, hindering efforts to deter the spread of propaganda. Finally, the use of LLMs in spreading misinformation can exacerbate political polarization and social divisions, potentially leading to increased instability and conflict within societies.
Escalating Misinformation Campaigns. Bad actors can harness LLMs to automate and scale misinformation efforts. In 2019, Twitter reported over seventy million fake or suspicious accounts, many of which were bots potentially powered by AI (Timberg & Romm, 2018). This escalation poses significant risks to public health and democratic processes, as seen during the COVID-19 pandemic, when misinformation spread rapidly online (Creemers, 2020).
Erosion of Internet Power Distance. The use of AI in state propaganda reduces the internet power distance by allowing governments to directly infiltrate personal information spaces with tailored content. In Russia, the Internet Research Agency employed thousands of bots to reach over 126 million Americans on Facebook alone during the 2016 U.S. election (Howard et al., 2018). This erosion undermines the internet’s traditional role as a platform for open discourse and empowers authoritarian entities to exert greater control over public perception.
Difficulty Holding Users Accountable. The decentralized and often anonymous nature of LLM deployment makes it challenging to trace and hold accountable those who misuse these tools for malicious purposes. For example, eighty percent of malicious domains used for phishing and propaganda are active for less than twenty-four hours, complicating law enforcement efforts (Moore & Clayton, 2019). This lack of accountability hinders efforts to deter the spread of propaganda.
Political Instability. The use of LLMs in spreading misinformation can exacerbate political polarization and social divisions. A study by Bail et al. (2018) found that exposure to opposing political views on social media can increase polarization by thirteen to fourteen percent. This amplification of divisions can lead to increased instability and conflict within societies (Allcott & Gentzkow, 2017).
Inductive Research Analysis and Findings
The inductive research approach examines specific instances where LLMs have influenced public sentiment, leading to broader generalizations about their role in propaganda dissemination.
Emergent Themes
Several key themes have emerged regarding the influence of LLMs on propaganda dissemination. Firstly, the speed and volume at which LLMs can generate content overwhelm traditional fact-checking mechanisms, allowing thousands of misleading posts to spread rapidly and influence public opinion in real-time (Sedova et al., 2021). Secondly, LLMs’ ability to analyze and personalize content makes them effective tools for targeted messaging, enabling political entities to micro-target specific groups based on demographics, interests, or behavioral patterns. Thirdly, in authoritarian regimes, the deployment of LLMs for propaganda has led to an erosion of internet power distance, with governments bypassing traditional media filters to directly influence individual users through personalized content. Lastly, according to Burell et al. (2022), there are significant ethical and regulatory gaps, as the absence of comprehensive frameworks addressing issues associated with LLM-driven propaganda allows for the unchecked spread of misinformation.
Speed and Volume of LLM-Generated Content. Case studies from recent political campaigns reveal that LLMs have been used to generate thousands of misleading posts in real-time. During the 2020 U.S. election, researchers identified over 200,000 tweets linked to bot accounts spreading misinformation within a forty-eight-hour period (Ferrara, 2020). The rapid production and dissemination of content overwhelm traditional fact-checking mechanisms.
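Findings like these depend on heuristics for flagging automated accounts. The toy sketch below illustrates the simplest such signal, raw posting rate; the `Account` record, sample data, and thresholds are hypothetical, whereas production systems such as Botometer combine many account-level and network features.

```python
# Toy bot-flagging heuristic: accounts posting far faster than a person
# plausibly can are flagged for review. The Account record, sample data,
# and thresholds are hypothetical; real systems combine many features.
from dataclasses import dataclass

WINDOW_HOURS = 48
MAX_HUMAN_POSTS_PER_HOUR = 50 / 24  # assumed ceiling of ~50 posts/day

@dataclass
class Account:
    handle: str
    posts_last_48h: int
    account_age_days: int

def looks_automated(acct: Account) -> bool:
    rate = acct.posts_last_48h / WINDOW_HOURS
    brand_new = acct.account_age_days < 7
    return rate > MAX_HUMAN_POSTS_PER_HOUR or (brand_new and acct.posts_last_48h > 200)

accounts = [
    Account("steady_reader", posts_last_48h=12, account_age_days=900),
    Account("amplifier_3041", posts_last_48h=2400, account_age_days=3),
]
for acct in accounts:
    status = "flagged" if looks_automated(acct) else "ok"
    print(f"{acct.handle}: {status}")
```

Simple rate thresholds are easy to evade, which is why studies such as Ferrara (2020) rely on richer behavioral and network signals; the sketch only conveys the basic logic.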
Targeted Messaging. The ability of LLMs to analyze and utilize data for content personalization makes them effective tools for micro-targeting. In the 2016 Brexit referendum, targeted ads reached up to seven million voters with customized messages based on psychographic profiling (Cadwalladr & Graham-Harrison, 2018). Political entities have exploited this capability to influence specific groups based on demographics, interests, or behavioral patterns.
Erosion of Internet Power Distance in Authoritarian Regimes. In countries where the state exerts significant control over information dissemination, the deployment of LLMs for propaganda purposes has led to a loss of internet power distance. China’s “Great Firewall” blocks over 10,000 websites and employs AI to monitor and censor online content for its 989 million internet users (China Internet Network Information Center, 2021). Governments leverage AI to bypass traditional media filters and directly influence individual users through personalized content, effectively narrowing the gap between state power and citizen interaction online (Feldstein, 2019).
Ethical and Regulatory Gaps. Despite the growing influence of AI in content generation, there is a notable absence of comprehensive regulatory frameworks addressing the ethical issues associated with LLM-driven propaganda. As of 2021, only twenty percent of countries have adopted policies or guidelines on AI ethics (UNESCO, 2021). This gap allows for the unchecked spread of misinformation.
Findings on the Influence of LLMs on Public Sentiment. The analysis indicates that LLMs are actively shaping public sentiment through the mass production of personalized propaganda. Their role in political disinformation campaigns demonstrates their capacity to sway opinions and alter the course of public discourse. For instance, studies estimate that misinformation influenced the voting decisions of four percent of the U.S. electorate in 2016 (Narayanan et al., 2020). In authoritarian contexts, the loss of internet power distance facilitated by AI allows for more pervasive state influence over individuals, undermining the autonomy of online discourse (Roberts, 2018). The lack of effective oversight and regulation exacerbates this influence, highlighting the urgent need for interventions to mitigate the negative impacts of LLM-driven propaganda (Floridi & Taddeo, 2021).
The Loss of Internet Power Distance in Countries Using AI for State Propaganda
The deployment of AI and LLMs by states for propaganda purposes has led to a loss of internet power distance, reversing the internet’s initial democratizing effect. By enabling governments to more effectively monitor, influence, and manipulate online content, these technologies reduce the autonomy and influence of individual users.
Impact on Power Dynamics
The internet was initially heralded as a democratizing force, reducing power distance by empowering individuals with access to information and platforms for expression (Shirky, 2011). However, the use of AI and LLMs by states for propaganda purposes has reversed this trend in some countries. By utilizing sophisticated algorithms, governments can monitor, influence, and manipulate online content more effectively. In 2020, sixty-two percent of the world’s internet users lived in countries where social media platforms were used for state-sponsored manipulation of online discussions (Freedom House, 2020). This reduces the perceived autonomy and influence of individual users (Deibert, 2019).
Case Studies
Two case studies illustrate this shift. China exemplifies the loss of internet power distance through its use of AI technologies to control and influence domestic online discourse, while Russia has turned AI-driven propaganda outward, targeting both domestic and international audiences through large-scale information warfare.
China’s Use of AI for State Propaganda. China’s deployment of AI technologies to control and influence online discourse exemplifies the loss of internet power distance. The state employs LLMs to generate pro-government narratives and suppress dissenting views. Studies estimate that the Chinese government fabricates and posts about 488 million social media comments annually to distract from critical discussions (King et al., 2017). This strategy allows for direct engagement with citizens, shaping public sentiment in favor of state policies.
Russia’s Information Warfare. Russia has utilized AI-driven propaganda to influence both domestic and international audiences. During the 2016 U.S. election, Russian-backed accounts generated ten million tweets, 116,000 Instagram posts, and 61,000 Facebook posts, reaching tens of millions of users (Howard et al., 2018). By deploying bots and LLMs to spread disinformation, the state reduces the power distance between government narratives and public perception, directly affecting the opinions and behaviors of individuals.
Implications for Internet Freedom. The loss of internet power distance undermines the principles of internet freedom and open discourse. In 2020, global internet freedom declined for the tenth consecutive year, with twenty-six countries experiencing record drops (Freedom House, 2020). When states employ AI for propaganda, they can dominate online spaces, marginalizing independent voices and limiting the diversity of information available to the public. This concentration of power threatens democratic ideals and can lead to increased censorship and surveillance.
Discussion
The findings of this study detail the complex role of LLMs in modern information dissemination. While they offer significant benefits in efficiency and personalization, their potential misuse poses substantial risks to society. The strengths of LLMs in generating and targeting content are leveraged in propaganda efforts, amplifying misinformation and manipulating public sentiment (Huang & Wang, 2021).
In countries utilizing AI for state propaganda, the loss of internet power distance represents a significant shift in power dynamics. Governments can directly influence individuals without intermediaries, consolidating control over public discourse. For example, in Iran, the government increased internet shutdowns by fifty percent in 2019 to control the flow of information during protests, demonstrating how states can leverage technology to suppress dissent (Tadayoni & Tsivintzelis, 2020).
Opportunities exist in the advancement of detection tools and the development of regulatory frameworks. By investing in AI transparency and accountability measures, it is possible to reduce the negative impacts of LLMs. Projects like the “Deepfake Detection Challenge” initiated by Facebook, Microsoft, and academic institutions aim to improve detection technologies (Dolhansky et al., 2020). Public education initiatives can also play a crucial role in enhancing media literacy and critical evaluation of information (Gunkel, 2021).
However, significant threats remain. The escalating scale of misinformation campaigns facilitated by LLMs poses a direct challenge to democratic institutions and public health. During the COVID-19 pandemic, forty-six percent of adults reported encountering false or misleading information about the virus on social media (Nielsen et al., 2020). The loss of internet power distance in authoritarian contexts exacerbates these challenges, necessitating international cooperation and policy interventions to protect internet freedom (Deibert, 2019).
Conclusion
This study highlights the dual nature of LLMs as both powerful tools for information dissemination and potential instruments of propaganda. The SWOT analysis and inductive research reveal that LLMs significantly influence public sentiment, particularly through targeted and large-scale dissemination of content. In countries using AI for state propaganda, the loss of internet power distance allows governments to exert direct influence over individuals, altering traditional power dynamics online. Addressing the challenges posed by LLMs requires a multifaceted approach, including the development of robust regulatory frameworks, advancement of detection technologies, enhancement of public education on AI literacy, and efforts to preserve internet freedom. By proactively addressing these issues, it is possible to harness the benefits of LLMs while mitigating their potential harms to society.
Author’s Note:
I have no conflicts of interest to disclose.
References
Allcott, H., & Gentzkow, M. (2017). Social Media and Fake News in the 2016 Election. Journal of Economic Perspectives, 31(2), 211–236.
Anderson, J., & Rainie, L. (2017, October 19). The future of truth and misinformation online. Pew Research Center. https://www.pewresearch.org/internet/2017/10/19/the-future-of-truth-and-misinformation-online/.
Bail, C. A., Argyle, L. P., Brown, T. W., et al. (2018). Exposure to Opposing Views on Social Media Can Increase Political Polarization. Proceedings of the National Academy of Sciences, 115(37), 9216–9221.
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, 610–623.
Brandt, J. (2023, November 8). Propaganda, foreign interference, and generative AI. Brookings. https://www.brookings.edu/testimonies/propaganda-foreign-interference-and-generative-a.
Brown, T. B., Mann, B., Ryder, N., et al. (2020). Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems, 33, 1877–1901.
Brundage, M., Avin, S., Wang, J., & Krueger, G. (2020). Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims. arXiv preprint arXiv:2004.07213.
Cadwalladr, C., & Graham-Harrison, E. (2018). Revealed: 50 Million Facebook Profiles Harvested for Cambridge Analytica in Major Data Breach. The Guardian.
China Internet Network Information Center. (2021). Statistical Report on Internet Development in China.
Creemers, R. (2020). China’s Approach to Artificial Intelligence: How It Frames AI Regulation and Ethics. AI & Society, 35(1), 35–51.
Deibert, R. (2019). The Road to Digital Unfreedom: Three Painful Truths About Social Media. Journal of Democracy, 30(1), 25–39. https://www.journalofdemocracy.org/articles/the-road-to-digital-unfreedom-three-painful-truths-about-social-media/.
Dolhansky, B., Howes, R., Pflaum, B., et al. (2020). The Deepfake Detection Challenge Dataset. arXiv preprint arXiv:2006.07397.
European Commission. (2021). Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act). https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206.
Europol. (2020). Catching the Virus: Cybercrime, Disinformation and the COVID-19 Pandemic. Europa. https://www.europol.europa.eu/publications-events/publications/catching-virus-cybercrime-disinformation-and-covid-19-pandemic.
Feldstein, S. (2019). The Road to Unfreedom: How Artificial Intelligence Is Reshaping Repression. Journal of Democracy, 30(1), 40–52. https://muse.jhu.edu/article/713721.
Ferrara, E. (2020). What Types of COVID-19 Conspiracies are Populated by Twitter Bots? First Monday, 25(6). https://firstmonday.org/ojs/index.php/fm/article/view/10633.
Floridi, L., & Cowls, J. (2022). AI and Its Consequences: A Study on Ethical and Social Impacts. Cambridge University Press.
Floridi, L., & Taddeo, M. (2021). The Ethics of Artificial Intelligence: Balancing Innovation and Responsibility. Ethics and Information Technology, 23(1), 15–24.
Freedom House. (2020). Freedom on the Net 2020: The Pandemic’s Digital Shadow. Freedom House.
Gunkel, D. J. (2021). The Machine Question: Critical Perspectives on AI, Robots, and Ethics. MIT Press.
Hao, K. (2018). AI Can Write Just Like Me. MIT Technology Review.
Howard, P. N., Ganesh, B., Liotsiou, D., Kelly, J., & Francois, C. (2018). The IRA, Social Media and Political Polarization in the United States, 2012-2018. Computational Propaganda Research Project.
Huang, P., & Wang, Z. (2021). AI in the Age of Misinformation: A Study of LLMs and Political Propaganda. Journal of Political Communication, 58(2), 123–137.
Ippolito, D., Duckworth, D., Callison-Burch, C., & Eck, D. (2020). Automatic Detection of Generated Text is Easiest When Humans are Fooled. arXiv preprint arXiv:1911.00650.
King, G., Pan, J., & Roberts, M. E. (2017). How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument. American Political Science Review, 111(3), 484–501.
Marcus, G., & Davis, E. (2020). Rebooting AI: Building Artificial Intelligence We Can Trust. Vintage.
Moore, T., & Clayton, R. (2019). The Impact of Public Information on Phishing Attack and Defense. Communications of the ACM, 62(6), 76–83.
Morozov, E. (2021). Big Tech and the Emerging Propaganda Wars. Technology and Society, 38(4), 225–239.
Narayanan, V., Barash, V., Kelly, J., Kollanyi, B., Neudert, L.-M., & Howard, P. N. (2020). Polarization and Partisanship: Social Media and the Fragmentation of Public Opinion. Oxford Internet Institute.
Nielsen, R. K., Fletcher, R., Newman, N., Brennen, J. S., & Howard, P. N. (2020). Navigating the ‘Infodemic’: How People in Six Countries Access and Rate News and Information about Coronavirus. Reuters Institute. https://reutersinstitute.politics.ox.ac.uk/infodemic-how-people-six-countries-access-and-rate-news-and-information-about-coronavirus.
O’Neil, C. (2022). AI in the Public Sphere: A Call for Regulatory Oversight. Ethical AI Journal, 9(2), 77–92.
Paul, C., & Matthews, M. (2016). The Russian “Firehose of Falsehood” Propaganda Model: Why It Might Work and Options to Counter It. RAND Corporation. https://www.rand.org/pubs/perspectives/PE198.html.
Roberts, M. E. (2018). Censored: Distraction and Diversion Inside China’s Great Firewall. Princeton University Press. https://press.princeton.edu/books/hardcover/9780691178868/censored.
Ryan-Mosley, T. (2023, October 4). How generative AI is boosting the spread of disinformation and propaganda. MIT Technology Review. https://www.technologyreview.com/2023/10/04/1080801/generative-ai-boosting-disinformation-and-propaganda-freedom-house/.
Sedova, K., McNeill, C., Johnson, A., Joshi, A., & Wulkan, I. (2021, December). AI and the future of disinformation campaigns: Part 2: A threat model. Center for Security and Emerging Technology. https://cset.georgetown.edu/publication/ai-and-the-future-of-disinformation-campaigns-part-2/.
Sheng, E., Chang, K.-W., Natarajan, P., & Peng, N. (2019). The Woman Worked as a Babysitter: On Biases in Language Generation. Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing, 3407–3412. https://aclanthology.org/D19-1339/.
Shirky, C. (2011). The Political Power of Social Media: Technology, the Public Sphere, and Political Change. Foreign Affairs, 90(1), 28–41. https://faculty.cc.gatech.edu/~beki/cs4001/Shirky.pdf.
Solaiman, I., Brundage, M., Clark, J., et al. (2019). Release Strategies and the Social Impacts of Language Models. arXiv preprint arXiv:1908.09203.
Tadayoni, R., & Tsivintzelis, S. (2020). Internet Shutdowns and the Limits of Social Media Surveillance in Iran. Journal of Cyber Policy, 5(1), 64–81.
Timberg, C., & Romm, T. (2018). Twitter is Sweeping Out Fake Accounts Like Never Before, Putting User Growth at Risk. The Washington Post.
Törnberg, P. (2018). Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS ONE, 13(9), e0203958. https://doi.org/10.1371/journal.pone.0203958.
UNESCO. (2021). Recommendation on the Ethics of Artificial Intelligence.
Vincent, J. (2022). The Challenge of Regulating AI: Legal and Ethical Perspectives. AI Law Review, 19(1), 1–25.
Whittaker, M., Crawford, K., Dobbe, R., & Gilbert, T. (2020). AI Now 2020 Report. AI Now Institute.
Yang, G., & Roberts, M. E. (2020). State Control in the Digital Age: The Internet, Power, and Authoritarianism. International Journal of Communication, 14, 4032–4051.
Zhao, Y. (2020). AI and Digital Propaganda in China: Emerging Trends and Future Directions. Journal of Contemporary China, 29(125), 1–17.
Zuboff, S. (2020). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Public Affairs.
