NATIONAL HARBOR, Md. (AP) — The U.S. military is employing artificial intelligence to pilot small spy drones on special operations missions and help Ukraine in its war with Russia. It’s also tracking soldiers’ health, predicting when Air Force planes need maintenance and helping keep an eye on rivals in space.
Now, to keep up with China, the Pentagon wants to deploy thousands of relatively cheap, expendable, AI-enabled autonomous vehicles by 2026. The ambitious effort, called Replicator, aims to “jumpstart a long-overdue technological shift for the U.S. military toward smaller, smarter, cheaper and more numerous platforms,” Deputy Secretary of Defense Kathleen Hicks said in August.
While funding is uncertain and details are hazy, Replicator is expected to accelerate tough decisions about which AI technologies are mature and trustworthy enough to be deployed, including in weapons systems.
There’s little debate among scientists, industry experts and Defense Department officials that the U.S. will have fully autonomous lethal weapons within a few years. Officials insist that humans will always be in control, but experts say advances in data-processing speeds and machine-to-machine communication will inevitably relegate humans to an oversight role.
That’s especially true if, as expected, lethal weapons are deployed en masse in drone swarms. Many countries are working on them, but China, Russia, Iran, India and Pakistan have not signed the U.S.-led pledge for the responsible use of military AI.
The LongShot air-launched drone, being developed by General Atomics in collaboration with the Defense Advanced Research Projects Agency and intended to work alongside manned Air Force jets, is displayed at the Air & Space Forces Association’s Air, Space & Cyber Conference, Wednesday, Sept. 13, 2023, in Oxon Hill, Md. Pentagon planners envision such drones being used in “human-machine teams” to overwhelm opponents. But to be deployed in combat, developers will need to prove that the AI technology is reliable and trustworthy enough. (AP Photo/Alex Brandon)
It is unclear whether the Pentagon is currently formally assessing any fully autonomous lethal weapons system for deployment, as a 2012 directive requires. A Pentagon spokesperson declined to comment.
Paradigm Shift
Replicator highlights the enormous technological and personnel challenges the Department of Defense faces in procurement and development at a time when the AI revolution promises to change how wars are fought.
“The Department of Defense has struggled to adopt AI developments that have come from recent breakthroughs in machine learning,” said Gregory Allen, a former senior Pentagon AI official now at the Center for Strategic and International Studies think tank.
The Department of Defense has more than 800 unclassified AI-related projects in its portfolio, many of which are still in the testing phase. In general, machine learning and neural networks help humans gain insights and create efficiencies.
“The AI that the Pentagon has today is being used a lot to augment humans,” said Missy Cummings, director of the Robotics Center at George Mason University and a former Navy fighter pilot. “It’s not running in a vacuum. People are using AI to get a better understanding of the fog of war.”
Space, the new frontier of war
One area where AI-assisted tools are tracking potential threats is the latest frontier in the arms race: space.
Lisa Costa, the U.S. Space Force’s chief technology and innovation officer, said at an online conference this month that China envisions using AI, including on satellites, to “determine who is and is not an adversary.”
The United States is aiming to catch up.
The Space Force’s operational prototype, called “Machina,” autonomously monitors more than 40,000 objects in space, coordinating the collection of thousands of data points each night with a worldwide network of telescopes.
Machina’s algorithms marshal the telescope sensors; computer vision and large language models tell them which objects to track; and the AI choreographs the data collection, drawing instantly on datasets of celestial mechanics and physics, Col. Wallace “Rhett” Turnbull of Space Systems Command said at a conference in August.
Another Space Force AI project will analyze radar data to detect enemy missile launches, he said.
Maintenance of aircraft and soldiers
AI’s predictive capabilities are also helping the Air Force keep its planes flying, forecasting the maintenance needs of more than 2,600 aircraft, including B-1 bombers and Black Hawk helicopters.
The machine learning models identify potential failures dozens of hours in advance, said Tom Siebel, CEO of C3 AI, the Silicon Valley firm that holds the contract. C3’s technology also models missile trajectories for the U.S. Missile Defense Agency and identifies insider threats in the federal workforce for the Defense Counterintelligence and Security Agency.
Among the health-related efforts is a pilot project tracking the fitness of the Army’s entire 3rd Infantry Division, more than 13,000 soldiers, using predictive modeling and AI to help reduce injuries and improve performance, said Maj. Matt Visser.
Support for Ukraine
In Ukraine, AI provided by the Pentagon and its NATO allies is helping to thwart Russian aggression.
NATO allies are sharing intelligence from satellites, drones and human-collected data, some of which is aggregated with software from U.S. contractor Palantir. Some data comes from Maven, a Defense Department path-finding AI project that’s now largely managed by the National Geospatial-Intelligence Agency, said officials including retired Air Force Gen. Jack Shanahan, the Defense Department’s first AI director.
Maven began in 2017 as an effort to process video from drones over the Middle East in support of U.S. Special Forces fighting ISIS and al-Qaida, and now aggregates and analyzes a variety of sensor and human-generated data.
AI has also helped the U.S.-created Ukraine Security Assistance Group coordinate the logistics of military support from a 40-nation coalition, Pentagon officials said.
All-Domain Command and Control
To survive on today’s battlefield, troops need to be small, nearly invisible and fast-moving because exponentially expanding sensor networks allow anyone to “see anywhere on the planet at any time,” Gen. Mark Milley, then chairman of the Joint Chiefs of Staff, said in a June speech. “And what you see, you can shoot.”
The Pentagon has prioritized the development of an intertwined battle network, called Joint All-Domain Command and Control, that aims to automate the processing of optical, infrared, radar and other data across the armed services to connect warfighters faster. But the challenge is vast and riddled with bureaucracy.
Christian Brose, a former staff director for the Senate Armed Services Committee who now works for the defense technology company Anduril, is among the military reform advocates who believe “we may have won to some extent.”
“The debate may not be about whether this is the right thing to do, but rather about how to do it, and on the accelerated timeline that would be required,” he said. Brose’s 2020 book, “The Kill Chain,” argues for the urgent need to retool to match China in the race to develop smarter, cheaper networked weapons systems.
To that end, the U.S. military is pushing hard on “human-machine teaming.” Dozens of uncrewed aircraft and ships currently monitor Iranian activity. U.S. Marines and Special Forces also use Anduril’s autonomous Ghost mini-copters, sensor towers and counter-drone technology to protect U.S. forces.
Industry advances in computer vision have been essential: Shield AI’s software lets drones fly without GPS, communications or even a remote pilot. That capability is key to its Nova quadcopter, which U.S. Special Forces use to scout buildings in conflict zones.
On the horizon: The Air Force’s “Loyal Wingman” program intends to pair manned aircraft with autonomous drones. An F-16 pilot, for example, could send drones out to scout, draw enemy fire or attack targets. Air Force leaders are aiming for a debut later this decade.
The race to full autonomy
The Loyal Wingman timeline doesn’t quite mesh with Replicator’s, which many consider overly ambitious. The Pentagon’s vagueness about Replicator may be partly intended to keep rivals guessing, but it also suggests planners are still figuring out capabilities and mission goals, said Paul Scharre, a military AI expert and author of “Four Battlegrounds.”
Anduril and Shield AI, each backed by hundreds of millions of dollars in venture capital funding, are among the companies vying for the contract.
Shield AI’s chief technology officer, Nathan Michael, estimates that the company could have an autonomous swarm of at least three unmanned aerial vehicles ready within a year using the V-BAT aerial drone. The U.S. military currently uses the AI-free V-BAT on Navy ships, in counter-drug missions, and to support Marine Expeditionary Units, according to the company.
It will be a while before larger swarms can be reliably fielded, Michael said: “It’s all crawl, walk, run, unless you’re setting yourself up for failure.”
Shanahan, the Pentagon’s first AI chief, currently trusts only purely defensive weapons systems, like the Phalanx missile defense system on ships, to operate autonomously. He worries less about autonomous weapons making their own decisions and more about systems that don’t work as advertised or that end up killing noncombatants or friendly forces.
The department’s current chief digital and AI officer, Craig Martell, is determined not to let that happen.
“Regardless of the autonomy of the system, there will always be a responsible agent who understands the limitations of the system, is well trained on the system, has a reasonable amount of confidence in when and where it can be deployed, and is always accountable,” said Martell, who previously led machine learning at LinkedIn and Lyft. “That’s never going to go away.”
Asked when AI will be reliable enough for lethal autonomy, Martell said it’s pointless to generalize. He trusts the adaptive cruise control in his car, for example, but not the technology that’s supposed to keep it from changing lanes. “As the person in charge, I wouldn’t deploy this in anything other than very constrained situations,” he said. “Now apply that to the military.”
Martell’s office is evaluating potential use cases for generative AI (it has even created a special task force for that purpose) and is focused on testing and evaluating AI as it’s developed.
An urgent challenge is recruiting and retaining the talent needed to test AI technologies, said Jane Pinelis, a principal AI engineer at the Johns Hopkins University Applied Physics Laboratory who previously served as chief of AI assurance in Martell’s office. The Defense Department can’t compete on salaries: Computer science PhDs with AI-related skills can earn more than the military’s top generals and admirals.
A recent National Academy of Sciences report on Air Force AI also noted that testing and evaluation standards are immature.
Does that mean the U.S. will be forced to deploy less-than-fully-compliant autonomous weapons in the future?
“We’re still operating under the assumption that we have time to do this as rigorously and as diligently as we can,” Pinelis said. “When the time comes for us to act and we’re not prepared, I think someone will be forced to make a decision.”