The Unseen Battlefield: How AI Models Could Be the Next Cyber Weapons
02.13.2025
AI, as external intelligence, is only as dangerous as the hands that wield it.
As global competition over artificial intelligence escalates, the geopolitical rivalry between the United States and China—particularly under the Trump administration—could push AI development into a new era of digital warfare. Both nations increasingly treat AI as a strategic asset, and any perceived hostility between them could spill over into their most advanced AI models—OpenAI’s ChatGPT and China’s DeepSeek.
Unlike traditional technological rivalries driven by market forces, this AI arms race is shaped by national security concerns and the fear of technological dominance. If relations deteriorate further, AI models could become indirect combatants in cyber conflicts—weaponized to spread misinformation, disrupt digital infrastructure, or even infiltrate rival systems. What was once a competition over semiconductor access could morph into an AI-driven cyber arms race, where intelligence models are caught in the crossfire.
A troubling reality lies at the heart of this emerging conflict: the growing dependence on AI-driven decision-making without adequate oversight. AI’s ability to process vast amounts of information in real time is often mistaken for infallibility, leading policymakers, business leaders, and security professionals to over-rely on machine-generated intelligence. The greatest threats in this scenario stem from low-probability yet high-impact events. A single catastrophic AI-triggered event—an algorithm-driven market crash, a large-scale misinformation campaign, or a cyber intrusion too sophisticated to detect—could send shockwaves through global economies and security systems. Yet decision-makers frequently assume AI will self-regulate or remain benign, a dangerously complacent attitude that only heightens the risk of an AI-induced disaster.
While much of the discourse around AI focuses on cost and efficiency, policymakers must confront a far more pressing concern: the possibility of AI models directly engaging in cyber warfare. These systems are no longer passive tools; they are evolving forms of external intelligence capable of operating at speeds and complexities beyond human comprehension. If weaponized against one another, the consequences could be catastrophic—disrupting economies, undermining governments, and corroding the foundation of digital trust worldwide.
Artificial intelligence is no longer just an instrument of convenience—it is an autonomous form of intelligence that absorbs, processes, and generates knowledge at an unprecedented scale. While this capability fuels innovation, it also introduces new and largely unseen threats, particularly in cybersecurity. As models like ChatGPT and DeepSeek push the boundaries of machine intelligence, the prospect of AI-driven hacking is no longer a distant science-fiction scenario but an imminent reality—one that could fundamentally reshape cyber warfare, misinformation campaigns, and privacy breaches in the digital age.
Unlike human intelligence, which is constrained by cognitive limits and the speed of thought, AI models trained on vast datasets possess near-limitless recall and computational efficiency. This dual-edged capability makes AI both a powerful guardian of cybersecurity and a potent weapon in the wrong hands. The threat is no longer limited to individual hackers penetrating systems but extends to AI-powered attacks that can adapt, learn, and outmaneuver traditional cyber defenses at unprecedented speeds. If AI systems were programmed to infiltrate one another—an AI hacking an AI—the consequences could range from manipulated information ecosystems to large-scale infrastructure failures.
One of the most immediate threats is AI-powered misinformation. Deepfake videos, synthetic voices, and AI-generated text can fabricate convincing political statements, impersonate executives, or forge fraudulent financial reports. AI can mass-produce misleading content faster than fact-checkers can debunk it, making it increasingly difficult to separate truth from manipulation. If left unchecked, this could trigger global instability—financial markets could plummet due to fabricated news, elections could be swayed by AI-generated propaganda, and social trust could erode to the point where objective reality becomes nearly indistinguishable from fiction.
Another alarming concern is AI-enhanced cyberattacks. Traditional hacking relies on human expertise, but AI can automate phishing attacks, exploit system vulnerabilities, and orchestrate sophisticated cyber intrusions at a scale no human hacker could match. AI can scan billions of lines of code in seconds, identifying weaknesses in banking networks, national security infrastructure, and rival AI systems. As AI autonomy increases, the possibility of one system breaching another—analyzing its defenses, corrupting its outputs, or even disabling it entirely—becomes a real danger. Governments and corporations could find themselves locked in an invisible cyber arms race, where the battlefield is no longer physical but digital, and where AI-driven attacks occur at a pace humans struggle to monitor in real time.
The sectors most vulnerable to AI-driven cyber threats rely heavily on digital trust: financial institutions, governments, media organizations, and individual consumers. Banks and stock markets could be manipulated through AI-generated misinformation, wiping out billions in economic value within minutes. Governments that fail to secure their AI systems risk compromising their intelligence, potentially shifting the balance of global diplomacy and security. The media, already battling waves of misinformation, faces an even greater challenge—verifying facts in an era when AI-generated falsehoods are produced at breakneck speed. And at the center of this storm is the average user, whose personal data, financial stability, and fundamental understanding of reality are all at risk from an AI-driven disinformation crisis.
So, what can be done? The first step is awareness. Users must recognize that not all digital content—text, video, or voice—is authentic. Governments and corporations must implement stringent verification systems and enforce AI transparency requirements to ensure that machine-generated content is distinguishable from human-created material. The second step is cybersecurity readiness. Organizations must invest in AI-based security solutions capable of countering AI-driven threats, ensuring that artificial intelligence serves as a defensive shield rather than an offensive weapon. Finally, there must be a concerted effort toward global AI regulation—an international framework that prevents any single entity, whether a nation-state, corporation, or rogue actor, from using AI unchecked to destabilize the digital landscape.
AI, as external intelligence, is only as dangerous as the hands that wield it. Left unregulated, it risks becoming the very threat it was meant to mitigate. The time to act is now—before external intelligence devolves into external chaos, eroding digital trust and leaving the world in a state where reality itself is up for debate.
Mohammad Ibrahim Fheili is currently serving as an Executive in Residence with the Suliman S. Olayan School of Business (OSB) at the American University of Beirut (AUB), and as a Risk Strategist and Capacity Building Expert with a focus on the financial sector. He has served at a number of financial institutions in the Levant region and has advised the Union of Arab Banks and the World Union of Arab Bankers on risk and capacity building. He taught economics, banking, and risk management at Louisiana State University (LSU) in Baton Rouge and the Lebanese American University (LAU) in Beirut, and received his university education at LSU’s main campus in Baton Rouge, Louisiana.