The Race for AI Soldiers: Which Military Will Deploy Autonomous Robots First?

From armed robot dogs to unmanned tanks, the world's major militaries are racing to integrate increasingly intelligent and autonomous machines into their ranks, raising urgent ethical and strategic questions about the future of warfare.

On a dusty training ground, a unit of soldiers advances alongside an unfamiliar teammate: a four-legged robot, moving with an uncanny, animal-like gait, carrying supplies. In another scenario, a similar machine is fitted with a remotely operated light weapon system, its sensors scanning for targets designated by a human operator. This is not a sci-fi film set; it is a recent demonstration by the People’s Liberation Army (PLA) of China, showcasing its testing of armed “robot dogs.” These images, which circulated globally in late 2024, served as a stark, visual announcement: the era of AI-powered ground robots on the battlefield is transitioning from laboratory prototypes to tactical evaluation.

China is not alone. In the United States, the Department of Defense is investing billions in programs like the Robotic Combat Vehicle (RCV) and collaborative drone swarms under the Replicator Initiative. Russia, despite economic constraints, has deployed rudimentary unmanned ground vehicles in Ukraine and boasts of developing AI-guided armored vehicles like the Marker. A fundamental shift is underway in military affairs, driven by artificial intelligence, robotics, and autonomy. This article provides an empirical analysis of the key developers leading this charge, their distinct technological approaches, the projected timelines for deploying increasingly intelligent systems, and the profound implications of introducing machines that can decide, move, and potentially use lethal force with minimal human oversight.

The core objective for modern militaries is to achieve what strategists call “human-machine teaming” or Manned-Unmanned Teaming (MUM-T). The goal is not necessarily to create a fully autonomous humanoid soldier, but to develop an ecosystem of intelligent machines that augment human warfighters. These systems are designed to perform the “dull, dirty, and dangerous” tasks: persistent reconnaissance, logistics under fire, breaching fortifications, and clearing urban areas. By taking humans out of the most vulnerable roles, commanders hope to reduce casualties. By leveraging AI’s speed in data processing and target recognition, they aim to accelerate the “observe, orient, decide, act” (OODA) loop to a pace no human force can match.

However, this pursuit sits on a spectrum of autonomy. Current leading systems are largely human-supervised or human-delegated, where a human approves targets or sets high-level rules of engagement. The race now is toward greater conditional autonomy, where machines can perform complex missions (like a squad of drones clearing a building) within a pre-defined boundary, and eventually toward highly autonomous systems capable of adapting to dynamic combat environments. The nations at the forefront of this race are approaching it with different strategies, resources, and ethical postures.

The Contenders: Strategies and Systems

United States: Ecosystem Integration and Ethical Frameworks
The U.S. approach is characterized by large-scale institutional programs, a strong private-sector tech ecosystem, and a stated (though debated) commitment to “responsible” AI. Its lead is less about flashy individual robots and more about integrating autonomy into a vast, networked force.

  • Key Programs: The Army’s Robotic Combat Vehicle (RCV) program is developing light, medium, and heavy unmanned tanks to accompany infantry and manned armored vehicles. Companies like Textron, QinetiQ, and Oshkosh have built functional prototypes. The Air Force’s Skyborg program and the Navy’s Ghost Fleet Overlord effort aim to create AI “brains” for loyal wingman drones and unmanned surface vessels. The most ambitious is the Pentagon’s Replicator Initiative, which seeks to field thousands of attritable, autonomous drones across domains within roughly two years to counter an adversary’s mass.
  • AI Focus: The U.S. excels in the core AI/ML algorithms for perception, navigation in GPS-denied environments, and collaborative swarm behaviors. A major focus is on human-machine interface and ensuring a human remains “in-the-loop” or “on-the-loop” for lethal decisions, as outlined in DoD Directive 3000.09.
  • Timeline: Expect platoon-level integration of RCVs and operational drone swarms by 2026-2028. Fully autonomous combat missions without human intervention are not an official near-term goal.

China: Rapid Prototyping and Strategic Ambition
China’s military-civil fusion strategy allows it to leverage rapid commercial advances in drones, robotics, and AI from companies like DJI, Unitree (maker of the robot dog), and China North Industries (NORINCO). The PLA’s approach appears more willing to publicly test armed platforms and discuss operational concepts for intelligent warfare.

  • Key Systems: The armed robot dog demonstrations are the most visceral example. China has also shown a wide array of unmanned ground vehicles (UGVs), from the small Sharp Claw to the larger Type 59 tank conversion. Its CETC corporation has demonstrated frighteningly effective drone swarms capable of coordinated attacks. In the air, the GJ-11 Sharp Sword stealth combat drone shows advanced capabilities.
  • AI Focus: Chinese military writings emphasize “unmanned, intelligent” warfare as central to overtaking the U.S. They invest heavily in AI for command decision-making (“AI commanders”), electronic warfare, and saturation attacks with smart munitions. There is less public discussion of ethical constraints compared to the West.
  • Timeline: China is likely capable of large-scale deployment of armed UGVs and sophisticated drone swarms by 2027-2030, a date it has set for modernizing its military. Integration into brigade-level forces is probable within this decade.

Russia: Asymmetric Innovation and Battlefield Testing
Russia’s efforts are more constrained by resources but are hardened by direct combat experience in Ukraine and Syria. They prioritize cost-effective, rugged systems that provide an immediate asymmetric advantage.

  • Key Systems: Russia has used the Uran-9 UGV (with mixed success) in Syria and deployed smaller kamikaze drones and UGVs extensively in Ukraine. The much-hyped Marker UGV is a modular, AI-driven platform for testing autonomy and has reportedly been sent to Ukraine for experimental use. The Platform-M and Scorpion are smaller combat UGVs.
  • AI Focus: Russian doctrine, as seen in practice, leans toward remotely operated systems and autonomous loitering munitions (like the Lancet drone) with final target confirmation by a human operator. Their stated goal for AI is to create “fully robotic units,” but current technology lags behind U.S. and Chinese sophistication.
  • Timeline: Russia will continue to field and refine lower-cost, task-specific autonomous munitions and UGVs in the near term (2024-2026). A mature, integrated robotic force is a longer-term prospect dependent on economic and technological factors.

Other Notable Players:

  • Israel: A world leader in autonomous defense systems (Iron Dome) and lethal loitering munitions (Harop). Its expertise is in tight sensor-to-shooter loops and counter-drone warfare.
  • South Korea: Fields the SGR-A1 sentry gun, which can operate autonomously, and is developing UGVs for the heavily fortified DMZ.
  • Turkey: Has achieved notable success with its Kargu rotary-wing kamikaze drone, which manufacturers claim can operate autonomously in swarm mode with target selection via facial recognition.

The “When”: Projected Timelines and the Autonomy Threshold

Predicting exact deployment dates is difficult, but trends are clear. Logistics and reconnaissance robots are already in service today. Armed, remotely operated systems are in advanced testing and limited deployment now. The next five years (2025-2030) will see the proliferation of:

  1. Manned-Unmanned Teaming at the Company Level: Infantry squads accompanied by armed robot dogs or small UGVs for scouting and fire support. Tank platoons partnered with robotic wingmen.
  2. Operational Drone Swarms: Battalions deploying hundreds of small, collaborative drones for surveillance, electronic attack, or kinetic strikes.
  3. AI-Enhanced Command Systems: Algorithms recommending courses of action to human commanders based on fused intelligence data.

The deployment of fully autonomous systems authorized to use lethal force without a human in the decision chain remains the most contentious horizon. While technology may permit it by the end of this decade, political, legal, and ethical barriers are immense. The United Nations Convention on Certain Conventional Weapons (CCW) holds ongoing discussions on Lethal Autonomous Weapon Systems (LAWS), but a binding treaty seems distant. The U.S., China, and Russia all resist preemptive bans, arguing AI can make warfare more precise and lawful. The most likely scenario is a gradual, creeping autonomy where the human role becomes more supervisory and attenuated, raising the risk of accountability gaps and accidental escalation.

Implications: The Future Battlefield and Global Stability

The integration of AI combat robots will transform warfare. It will enable new tactics like pervasive sensing and constant harassment. It may favor the defense, making assaults prohibitively costly against AI-directed swarms. It could also lower the threshold for conflict, as politicians may be more willing to commit machines than human troops. An “AI kill chain” that operates at machine speed risks accidental engagements and rapid, uncontrollable escalation, especially if two AI-powered forces clash directly.

The race itself is a source of instability. A nation fearing its adversary is near a breakthrough in autonomous swarms may feel compelled to strike first. There is also a profound risk of proliferation; the underlying technology for a smart drone is increasingly commercial and dual-use. Non-state actors will eventually access these capabilities.

Ultimately, the world stands at a strategic inflection point. The nations developing these systems are not just racing for a tactical edge; they are competing to set the norms and rules for the next epoch of armed conflict. Whether this leads to a more precise, casualty-averse form of warfare or a dehumanized, destabilizing one will depend on the choices made in laboratories and defense ministries today. The march of the machines has begun, and its destination will be written by the humans who build and, hopefully, control them.



References

  1. U.S. Department of Defense. (2023). Department of Defense Directive 3000.09: Autonomy in Weapon Systems. https://www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf
  2. The State Council Information Office of the People’s Republic of China. (2021). China’s Military Strategy. http://english.www.gov.cn/archive/whitepaper/202105/26/content_WS60aeab7ec6d0df57f98db001.html
  3. Congressional Research Service. (2024). U.S. Ground Forces Robotics and Autonomous Systems (RAS) and Artificial Intelligence (AI): Considerations for Congress. https://crsreports.congress.gov/product/pdf/R/R47757
  4. Russian Ministry of Defence. (2023). “Marker robotic platform to be tested in special operation zone” (TASS Report). https://tass.com/defense/1719259
  5. United Nations Office at Geneva. (2024). Group of Governmental Experts on Lethal Autonomous Weapons Systems (LAWS). https://meetings.unoda.org/meeting/121792/
  6. Kallenborn, Z. & Bleek, P. C. (2022). “Swarm Wars: A Guide to Drone Swarms.” Center for the Study of Weapons of Mass Destruction, National Defense University. https://ndupress.ndu.edu/Media/News/Article/3022215/swarm-wars/
  7. Sayler, K. M. (2023). “Emerging Military Technologies: Background and Issues for Congress.” Congressional Research Service. https://crsreports.congress.gov/product/pdf/R/R47599
  8. The PLA’s Science of Military Strategy (2020 edition), as analyzed by think tanks such as the Center for Strategic and International Studies (CSIS). https://www.csis.org/analysis

Boreal Times Newsroom

Boreal Times Newsroom represents the collective editorial work of the Boreal Times. Articles published under this byline are produced through collaborative efforts involving editors, journalists, researchers, and contributors, following the publication’s editorial standards and ethical guidelines. This byline is typically used for institutional editorials, newsroom reports, breaking news updates, and articles that reflect the official voice or combined work of the Boreal Times editorial team. All content published by the Newsroom adheres to our Editorial Policy, with a clear distinction between news reporting, analysis, and opinion.