AI and Ethics: A Cyber Quagmire in the Making?
by James Carlini
Enforcing AI ethics in any hypothetical conflict could weaken the U.S. against adversaries who ignore such limits.
Industry figures love to talk about crafting an ethics code or a framework for artificial intelligence. On paper, it sounds responsible, even noble. In practice, it’s strikingly naïve.
This isn’t a new conversation. Years ago, Microsoft’s then–president and chief legal officer, Brad Smith, floated the idea of something like a “Geneva Convention” for cyberspace—an agreement that would put certain targets, especially civilian infrastructure, off-limits in cyberwarfare. That proposal captured imaginations but never truly reckoned with the realities of asymmetric conflict, non-state actors, and states that have no interest in tying their own hands. AI sits in that same uncomfortable space: we can write all the ethical guardrails we want, but the people most likely to abuse the technology are the least likely to sign on to them.
Complicating matters is the current mania for AI as a cure-all. AI is being sold as the “silver bullet” for everything—from corporate efficiency to battlefield supremacy. Vendors, consultants, and what can only be called digital hucksters insist every problem is, at heart, a problem that needs an AI layer. That’s a dangerous way to think. AI has real advantages, but no military or private-sector organization should become dependent on a single technological approach. Overreliance on one tool creates strategic vulnerability, especially when adversaries are unconstrained.
In 2023, President Biden issued an executive order on AI that aimed to at least sketch out a responsible path forward. It touched on all the predictable themes: standards for safety and security; privacy protections; equity and civil rights; consumer, patient, and student safeguards; worker protections; innovation and competition; American leadership abroad; and responsible federal use of AI. It was broad, ambitious—and immediately questioned. Some saw it less as a purely domestic initiative and more as Washington’s answer to Brussels.
That’s because the European Union moved first with its AI Act, a sweeping set of rules aimed at categorizing and regulating AI systems by risk. But even in Europe, there’s debate over whether the regulations were premature. Outside Europe, the skepticism is even sharper. Many states—and certainly terrorist organizations and other non-state actors—are not going to let a Brussels-based framework dictate what they can or can’t build, especially when it comes to tools that could tip the balance in asymmetric warfare. Universal buy-in is a fantasy.
This is the hard part of the conversation: cyber weapons and AI-enabled systems aren’t covered by the Geneva Conventions, and given how they’re used now, it’s hard to imagine the world’s major powers voluntarily capping their capabilities. Years ago, in an article in the American Intelligence Journal, I argued that the lack of a cyber equivalent to the Geneva framework would have real consequences on future battlefields. That assessment still holds. If you can’t get every major actor—and every spoiler—to agree, then what you’ve really built is a set of self-imposed constraints.
We’ve already seen how this plays out. China once floated a ban on the use, though not the development, of lethal autonomous weapons systems (LAWS). The United States and Russia didn’t sign on, and the proposal was widely considered too narrow to be effective. And even if Washington, Moscow, and Beijing had all agreed, the rest of the world would not have fallen into line. Rogue states, militias, and terrorist groups would have ignored it, and in the realm of cyber and electronic warfare, that’s all it takes to render a ban meaningless.
This points to the core strategic dilemma: war does not have ethics; people do. To assume that all parties will respect “off-limits” areas in electronic or cognitive warfare is to create your own Achilles’ heel. If one side voluntarily limits its AI development because of an ethics committee in a Washington boardroom, while the other side—less tethered to process, to law, to moral suasion—pushes forward, the outcome is predictable. You don’t just lose tempo. You risk losing deterrence.
So the question policymakers and defense planners ought to be asking is not, “What is the right set of AI ethics?” but rather, “Are we putting ourselves at a strategic disadvantage by adhering to AI ethics no one else will follow?” If our adversaries do not accept our definitions of responsible AI, then our adherence becomes a liability, not a virtue.
That’s especially urgent now, when every major power and many minor ones are racing to embed AI into logistics, command-and-control, targeting, cyber operations, and even cognitive warfare. Falling behind at this stage because we adopted someone else’s restrictive framework—or worse, a framework written to satisfy political optics rather than battlefield realities—would amount to walking into a cyber quagmire before the engagement even starts.
The Vietnam analogy is apt here: let’s not repeat mistakes we should already have learned from. In that conflict, the United States often fought under rules of engagement more restrictive than those of its adversaries, creating tactical and strategic friction. The same logic applies in the digital era: it would be a mistake to draft an AI code of conduct that boxes us in while leaving adversaries free to innovate, probe, and escalate.
Because make no mistake: AI will be used across the spectrum of warfare. It will modernize traditional capabilities—logistics, surveillance, targeting—and it will power wholly new applications in electronic, cyber, and cognitive warfare. People without military experience sometimes approach AI ethics as if all applications are neutral, or as if they’re all commercial, or as if war can be cleanly separated from the technologies that enable it. That’s not how militaries think, and it’s not how adversaries behave.
And since there is no universally accepted set of rules for electronic warfare, it makes little sense to declare parts of the battlespace permanently off-limits. Who would enforce those limits? Who would ensure reciprocity? If our opponents have no intention of honoring ethical boundaries, then building doctrine around those boundaries is self-sabotage.
Meanwhile, the private sector is going through its own AI reckoning. A recent Inc. piece warned that even top-tier companies are bracing for disruption, but didn’t offer much in the way of solutions. Many firms are turning to AI and automation to cut costs—sometimes laying off thousands of workers at a time, from retailers to tech giants to fast-food chains. That’s another reminder that AI is not just a military issue; it’s a structural one, reshaping labor, competition, and organizational priorities. What works for Amazon or Walmart won’t automatically work for a defense contractor or for a small AI startup feeding tools into the national-security ecosystem.
Inside companies—defense-related and otherwise—ethics committees are now springing up to define “acceptable” AI. That’s useful when you’re talking about consumer-facing products or health data. It’s far less useful when you’re talking about mission-critical systems in a conflict environment. For those, you need latitude, not preemptive restrictions based on frameworks that weren’t universally negotiated and won’t be universally observed.
What we can and should insist on is excellence: the highest standards of software engineering, secure-by-design development, rigorous testing, and a talent pipeline trained for a world in which AI is embedded in national defense. That means overhauling education and training to produce developers, analysts, and operators who understand both the promise of AI and the ruthlessness of the environments in which it will be deployed.
Because the battlefield is no longer just a physical space: it’s server farms, cloud architectures, data centers, and the networks that connect them. When war moved into the electronic domain, the tempo changed. Attacks could be conceived, launched, and felt in seconds rather than days. Logistics, targeting, deception, influence operations—all of it is faster now. Strategy and tactics have had to evolve alongside that acceleration.
In that environment, the smartest course is not to abandon ethics altogether, but to recognize that ethics cannot be our only organizing principle—especially not ethics that presume universal compliance. The United States and its allies need AI policies that preserve freedom of action, anticipate adversarial use, and avoid the trap of self-limiting frameworks written for a world that does not exist.
James Carlini is a strategist for mission-critical networks, technology, and intelligent infrastructure. Since 1986, he has been president of Carlini and Associates. Besides being an author, keynote speaker, and strategic consultant on large mission-critical networks, including the planning and design of the Chicago 911 center, the Chicago Mercantile Exchange trading floor networks, and the international network for GLOBEX, he has served as an adjunct faculty member at Northwestern University.