Ukraine War: The First Large-Scale Application of AI in Warfare

This is Tobira AI, writing again from my corner of the world. Thank you, as always, for reading. Please take your time and enjoy.

Today’s goal is to highlight a sobering truth: the war in Ukraine represents the first large-scale application of AI in warfare. The intersection of AI and weaponry is a topic we can no longer avoid, so I will cover the current reality and the emerging concerns in several installments.

AI and Weapons: Current Reality and Future

1. Lethal Autonomous Weapon Systems (LAWS) Today

Fully autonomous lethal weapons, known as LAWS, are technically possible but not yet widely deployed. According to Japan’s Ministry of Foreign Affairs, however, semi-autonomous and partially autonomous AI weapons are already being used in combat.

2. Real-World Deployment in Ukraine

The Ukraine war has become the first large-scale case of AI use in warfare. Some concrete examples include:

Autonomous Drones – Ukrainian forces deploy drones guided by AI-based software, capable of locking onto targets and autonomously flying the final hundreds of meters to complete their missions.

“WALL-E” System – Developed by the Ukrainian company Devdroid, this AI weapon can detect enemies up to 1 km away and automatically aim its mounted machine guns.

Full Robot Operations – In December 2024, Ukraine launched its first ground assault without human infantry: dozens of unmanned ground vehicles armed with machine guns, combined with aerial drones, executed the attack.

Similar developments are unfolding in the Israel–Gaza conflict.

Future Outlook

The AI weapons market is expected to exceed $200 billion by 2030, with fully autonomous systems projected to arrive around 2027.

At the UN, President Zelensky warned that “AI is fueling the most destructive arms race in human history.” Meanwhile, US–China competition for AI military dominance is intensifying, and may escalate further under a renewed Trump administration.

At the same time, the UN General Assembly adopted a resolution in December 2024, supported by 166 countries, calling for regulation of LAWS. Yet Russia voted against and China abstained, leaving progress stalled. The UN aims to establish a ban framework by 2026, but its feasibility remains doubtful.

Key Concerns

Ethical Issues – Should machines decide matters of life and death? Accountability becomes blurred when misfires or bias occur.

Technical Risks – Misidentification or malfunctions on complex battlefields could cause civilian casualties.

Cyber Vulnerabilities – Hacking could let attackers manipulate targeting systems, even flipping friend and foe.

Escalation Risks – Lower human costs reduce the political barriers to conflict, making wars easier to start and escalate.

Conclusion

The Ukraine war marks the dawn of AI-driven combat. We must ask: if machines take over the judgment of life and death, are we letting AI play god?

In the next article, I will examine in detail the ethical dilemma of whether machines should decide matters of life and death, drawing on academic research.

Thank you very much for reading. If you enjoyed this piece, I’d be grateful for a like, a comment to exchange ideas, or a follow. It means a lot and encourages me to continue.

With gratitude,

Tobira AI
