AI in War and Propaganda: Anna Mysyshyn on Disinformation, Democracy, and Digital Governance
Anna Mysyshyn stands at the crossroads of law, technology, and global governance: a Ukrainian legal scholar whose expertise in AI policy, cybersecurity, and digital governance places her at the forefront of some of today’s most pressing challenges. With a Ph.D. in Law from Ivan Franko National University of Lviv and an LL.M. in Innovation, Technology, and Law from the University of Edinburgh, Anna’s academic credentials are as impressive as her practical achievements.
As the Director and Co-Founder of the Institute of Innovative Governance, she leads initiatives to foster digital inclusion and secure digital transitions. Her career spans international platforms, from working with the United Nations and UNDP in Ukraine to serving as a fellow in the Canadian Parliament. Most recently, as a research fellow at the German Marshall Fund of the United States, Anna has focused on the application of advanced technologies in the war in Ukraine, adding a timely and poignant dimension to her already remarkable career.
Scott Douglas Jacobsen: AI has rapidly transformed the landscape of propaganda. How is this technological evolution reshaping its use in today’s political and social contexts?
Anna Mysyshyn: Focusing on the Ukrainian situation, the rapid advancement of AI technologies has significantly enhanced the ability to generate and disseminate disinformation and propaganda on a massive scale and at unprecedented speed. The advent of generative AI, deepfakes, and voice-cloning technologies has dramatically transformed the landscape of information warfare and general information dissemination.
Emerging technologies, particularly generative AI, are widely utilized in informational warfare to spread propaganda and disinformation. Russia, for instance, deploys false narratives through highly sophisticated and interconnected networks. These networks include AI-generated content disseminated via traditional state-controlled media, social media platforms, and other technological mediums. Despite being a country with significant economic challenges, Russia has capitalized on these technologies to amplify its influence.
Jacobsen: Do these emerging forms of information warfare offer a cost-effective strategy for states or other actors?
Mysyshyn: This represents a relatively low-cost but highly impactful form of warfare. Before its full-scale invasion of Ukraine, Russia had already invested over $9 billion in propaganda campaigns, primarily through digital platforms and traditional media outlets such as newspapers. However, with the emergence of generative AI, especially after the boom that followed platforms like OpenAI’s ChatGPT in 2022, propaganda has evolved into a hybrid format.
This modern approach combines traditional media with advanced AI tools to confuse audiences, erode trust, and manipulate public perception of political figures and situations. By employing generative AI, propaganda becomes not only faster and cheaper to produce but also more convincing and harder to detect, posing a significant threat to information integrity and democratic resilience.
What makes this even more concerning is the scalability of AI-driven propaganda. With tools capable of generating thousands of variations of the same disinformation narrative, actors like Russia can target specific demographics with tailored messaging. These campaigns exploit existing social and political divisions, creating a ripple effect that destabilizes societies.
A critical challenge today is detecting AI-generated propaganda. These hybrid methods show that AI technologies are not only widely accessible but also increasingly persuasive to the general public.
Jacobsen: In terms of impact, how effective are these AI-driven tools? Do they lead to significant shifts in public opinion, or are their effects more subtle and insidious?
Mysyshyn: AI technologies enable Russian propagandists to craft highly targeted and emotionally charged narratives that are difficult to differentiate from authentic content. Platforms such as TikTok, often viewed as harmless entertainment spaces, are increasingly used to spread harmful disinformation. This is particularly effective because many people consume information on social media without fact-checking tools or sufficient media literacy skills to verify what they encounter.
Since people are inclined to trust the information they read or see in the media and are often unaware of the extent to which AI can fabricate content, the impact of disinformation becomes even more significant. This highlights the urgent need for enhanced fact-checking resources and improved media literacy to counter the rising influence of AI-driven propaganda.
Unfortunately, people often believe everything they see and read due to low media literacy skills. Russia understands this and is increasingly disseminating information using a mixed approach. They combine real, factual information with AI-generated, fake narratives. This combination easily confuses individuals because they may read one publication that contains truthful information but then encounter a second one – AI-generated and presenting a false narrative, which they might also perceive as true. This mix of techniques makes it easier to mislead individuals lacking media literacy or fact-checking skills.
The effectiveness of these tools lies in their dual impact, combining immediate and long-term effects. In the short term, they can change public perception, especially when deployed during war or political instability. Fabricated videos or AI-generated “official” statements can rapidly erode trust in public institutions, fuel polarization, or incite unrest. However, their more insidious and enduring impact becomes evident over time. Disinformation campaigns work gradually to weaken societal cohesion, erode trust in democratic institutions, and amplify social divisions.
The cumulative effect is that the public becomes increasingly confused and skeptical of all information sources, fostering an environment where truth is devalued and irrelevant.
Jacobsen: You referenced generational differences and AI tools tailored to these variations. Could you delve deeper into what sets these apart?
Mysyshyn: Yes, indeed. Media literacy skills are critical core competencies, especially given generational differences and the rise of generative AI tools. As AI technologies become more sophisticated and accessible, the ability to critically evaluate and verify information is essential for navigating the modern media landscape.
For younger generations, who are digital natives, media literacy involves understanding how algorithms and AI shape the content they encounter on platforms like TikTok, Instagram, or YouTube. Many are unaware that tailored content is designed to capture attention and provoke emotional responses. Teaching them to question authenticity and recognize manipulation is vital for building resilience against disinformation.
For older generations, media literacy requires addressing their trust in traditional media formats. This demographic is particularly vulnerable to AI-generated content mimicking authoritative sources, such as deepfake videos or fabricated news articles. Developing their ability to identify such fabrications is crucial to countering the spread of false narratives.
What’s particularly concerning is how generative AI tools exploit the unique habits of each generation. Younger audiences are targeted through short, visually engaging content on social media, while older audiences are influenced by AI-driven material that reinforces existing trust in traditional media. Addressing these tailored approaches requires generationally nuanced media literacy strategies to equip all individuals with the tools to discern fact from fiction.
Jacobsen: What distinguishes misinformation from disinformation, particularly in their intent and impact?
Mysyshyn: Disinformation refers to deliberately false or misleading information spread to deceive or manipulate, while misinformation is incorrect information shared without malicious intent. For example, Russian propaganda often uses disinformation to manipulate public opinion by spreading false narratives about the war in Ukraine. However, misinformation can also occur when individuals with low media literacy or even major media outlets share misleading content without fact-checking. In both cases, spreading false information can have harmful effects, even if the intent differs.
Jacobsen: In what ways should information warfare be conceptualized as a legitimate form of modern warfare?
Mysyshyn: Information warfare is a form of warfare because it targets societal trust, cohesion, and decision-making processes, often intending to destabilize or weaken an adversary. While it lacks the physical devastation of traditional warfare, its effects can be equally profound, especially in highly polarized or vulnerable societies. AI technologies have amplified these impacts, transforming information warfare into a sophisticated tool for manipulation and disruption.
In a paper I wrote for the German Marshall Fund, I examined how Russia has weaponized generative AI, deepfakes, and voice-cloning technologies to erode trust, destabilize Ukraine, and influence international perceptions of the war.
For example, AI-generated deepfake videos, such as the one depicting President Volodymyr Zelensky announcing Ukraine’s surrender, spread rapidly on social media and caused widespread confusion, even after being debunked. Similarly, altered audio tracks created with voice-cloning tools have been used to fabricate messages from Ukrainian leaders, sowing discord and demoralization.
These disinformation campaigns are designed to weaken Ukraine internally and undermine international support, particularly from Western allies. By spreading manipulative narratives, such as fabricated stories of corruption, inefficiency, or infighting, they seek to create skepticism abroad about the legitimacy and effectiveness of Ukrainian leadership.
This erosion of trust can reduce public support for aid and military assistance, which is vital for Ukraine’s defense efforts. Information warfare’s objectives align with traditional military goals: to weaken the enemy and disrupt its strategies.
Jacobsen: What strategies should democratic societies adopt to counter these evolving threats effectively?
Mysyshyn: Democratic societies can address the threat of AI-driven information warfare through a multifaceted approach that includes education, technology, policy, and collaboration. Public education, particularly media literacy, must equip individuals with the skills to recognize and counter disinformation.
In 2023, our Institute of Innovative Governance developed a guide and conducted lectures on AI and disinformation at leading Ukrainian universities. Initiatives like StopFake, Nota Yenota, and various government-led programs have strengthened Ukraine’s efforts to build media literacy and societal resilience. These programs emphasize core critical thinking strategies, such as questioning sources, verifying information, and analyzing biases, which are essential in helping individuals navigate the modern information landscape.
Developing trust in media is equally critical. Societies must support independent journalism and fact-checking initiatives that prioritize transparency and accountability. For example, Detector Media has played a vital role in Ukraine, fostering trust by exposing disinformation and providing verified, reliable news. Similarly, public awareness campaigns must focus on promoting trustworthy media outlets and encouraging audiences to engage critically with the content they consume. Trust in media is a cornerstone of societal cohesion, especially during war or political instability.
Investing in advanced detection tools is another crucial step. Ukrainian organizations such as Osavul, Let’s Data, and Mantis Analytics, and international companies like Originality.ai and OpenOrigins, have played key roles in developing technologies to detect and debunk deepfakes and AI-generated propaganda quickly and effectively. These tools counter disinformation campaigns that exploit emerging technologies to spread fabricated narratives designed to mislead or destabilize.
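To give a sense of how such detection can work at the simplest level, one common family of techniques (not specific to any of the organizations named above) scores how statistically predictable a passage is under an open language model; unusually low perplexity is one weak hint that text may be machine-generated. The Python sketch below is purely illustrative, using the small open-source GPT-2 model, and the flagging threshold is a made-up placeholder rather than anything used in production:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Small open model for illustration; real detectors use stronger models and many more signals.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return the model's perplexity for `text`; lower means the text is more
    'predictable' to the model, which is one weak signal of machine generation."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        # Passing the input ids as labels makes the model return the mean cross-entropy loss.
        loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

if __name__ == "__main__":
    sample = "Officials confirmed the statement was released earlier today."
    score = perplexity(sample)
    # The threshold below is hypothetical, for illustration only.
    print(f"perplexity = {score:.1f}", "-> worth a closer look" if score < 20.0 else "-> no flag")
```

Real detection systems combine many such signals with metadata, provenance checks, and human review; no single statistic is reliable on its own.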
By combining media literacy, critical thinking, trust-building in media, and cutting-edge technological solutions, democratic societies can build resilience against the growing threat of AI-driven information warfare. Ukraine’s proactive approach demonstrates how these strategies can be implemented effectively to protect domestic and international audiences from manipulation.
Jacobsen: How are autocratic regimes leveraging these technologies to pose new and unique challenges to the free world?
Mysyshyn: Well, these regimes exploit technological innovations to wage information warfare, conduct cyberattacks, and surveil populations both domestically and abroad, creating significant risks for open societies. Russia has weaponized AI to create and disseminate deepfakes, voice clones, and other forms of fabricated content.
Autocratic regimes also pose a technological challenge by exporting surveillance tools to suppress dissent and monitor citizens. China, for instance, has developed sophisticated facial recognition surveillance systems that track individuals’ movements, online behavior, and even emotional responses. These tools are being exported to other autocratic states, enabling a global spread of authoritarian control mechanisms that undermine freedoms and human rights.
Cyberattacks are another dimension of this threat. Autocracies increasingly use advanced cyber capabilities to target critical infrastructure in democracies, including energy grids, financial systems, and public health databases. The United States, Europe and other democracies face a dual challenge: protecting their values while countering autocracies’ misuse of emerging technologies.
Jacobsen: Could these same technologies be harnessed to empower dissenters and dissidents within authoritarian regimes?
Mysyshyn: Yes, these technologies can empower dissenters and dissidents in less free countries by providing tools for secure communication, spreading information, and documenting abuses. They also play an increasingly important role in accountability and justice, particularly in wartime scenarios. Technologies based on blockchain provide a decentralized and tamper-proof means of recording evidence of human rights abuses.
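To illustrate the tamper-evidence idea in the simplest terms (a conceptual sketch, not the design of any specific evidence-collection project), each new record can embed a cryptographic hash of the previous record, so altering any earlier entry invalidates every entry that follows. A minimal Python example with hypothetical field names:

```python
import hashlib
import json
import time

GENESIS = "0" * 64  # placeholder hash for the first record's predecessor

def _digest(record: dict) -> str:
    # Canonical JSON so the same content always produces the same hash.
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_record(chain: list, description: str, evidence_sha256: str) -> None:
    """Append an evidence entry that commits to the hash of the previous entry."""
    record = {
        "timestamp": time.time(),
        "description": description,
        "evidence_sha256": evidence_sha256,   # hash of the photo/video file itself
        "prev_hash": chain[-1]["hash"] if chain else GENESIS,
    }
    record["hash"] = _digest(record)          # seal the record's own content
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every link; editing any earlier record breaks all later hashes."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        prev = chain[i - 1]["hash"] if i else GENESIS
        if rec["prev_hash"] != prev or rec["hash"] != _digest(body):
            return False
    return True

chain = []
append_record(chain, "photo of damaged school", hashlib.sha256(b"<image bytes>").hexdigest())
append_record(chain, "witness statement", hashlib.sha256(b"<audio bytes>").hexdigest())
print(verify(chain))                    # True
chain[0]["description"] = "edited"      # any tampering...
print(verify(chain))                    # ...now prints False
```

Production systems go further, adding digital signatures, distributed replication, and anchoring to a public ledger so that no single party can quietly rewrite the record.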
Additionally, AI-enhanced tools can assist in verifying, categorizing, and securely storing such data. Communication platforms such as Signal, powered by advanced encryption technologies, have become lifelines for activists and defenders. To maximize the empowering potential of these technologies, democratic societies and international organizations must support secure, open-source tools, invest in training for activists, and push back against the misuse of technology by authoritarian regimes. These efforts and ongoing innovation can help level the playing field for dissenters fighting for freedom.
Jacobsen: Finally, in the face of blatant and absurd narratives—like labeling Ukraine’s Jewish president as a neo-Nazi—what tools and resources does Ukraine need most urgently to counter such misinformation?
Mysyshyn: Ukraine needs a comprehensive strategy to combat misinformation, combining technological innovation, public education, media collaboration, and international support. The sheer absurdity of certain disinformation only highlights its manipulative intent and potential to mislead, regardless of how outrageous it may seem.
These narratives often exploit preexisting biases, emotional responses, and gaps in media literacy, making them surprisingly effective. Once again, this emphasizes the crucial need for critical thinking and diligent fact-checking – because, in a world saturated with disinformation, questioning the narrative is not just a skill but a responsibility.
Jacobsen: Thank you for the opportunity and your time, Anna.
Mysyshyn: You’re very welcome! It was a pleasure. Thank you for your time as well.