Artificial Intelligence and war: 5 things to know about Maven Smart System
Artificial intelligence is again at the center of a wartime dispute, this time over who carries responsibility when software helps shape strike decisions. Palantir’s UK and Europe head, Louis Mosley, has argued that the decisive power remains with militaries, not the company supplying the tool. That debate matters now because scrutiny has intensified over Maven Smart System, a Palantir-built platform developed for the Pentagon that helps personnel process large volumes of intelligence and recommend targets. The question is no longer whether the system can speed decisions. It is whether speed can be separated from accountability.
Why the debate over Artificial Intelligence in targeting has sharpened
The latest concerns focus on the use of Palantir’s AI-powered defense platform during the war with Iran. Experts have warned that when a system helps plan attacks, there may be little time for meaningful verification of its output. That warning carries weight because errors in target selection can have grave consequences, including the risk of civilians being hit. Mosley pushed back on the idea that the platform acts as an automated targeting engine. He described it as a support tool meant to help military personnel synthesize information faster than they could manually.
The core dispute is not about whether the system is used, but how much judgment remains in human hands. Mosley said there is always a human in the loop and that the ultimate decision sits with the military organization. That position places the policy burden squarely on military customers, not the company building the software.
What Maven Smart System is designed to do
Maven Smart System grew out of Project Maven, a Pentagon initiative launched in 2017. Its purpose is to speed up military targeting decisions by bringing together large amounts of data, including intelligence reports, satellite images and drone imagery. The system analyzes that material and can then recommend targets. It can also suggest the level of force to use, depending on the personnel and hardware available, including aircraft.
That capability explains why the platform has drawn attention. By compressing a process that once required manual review, it gives commanders a faster picture of the battlefield. But that same speed is exactly what worries critics. They argue that the faster the recommendation arrives, the shorter the window for scrutiny. In a combat environment, that may create pressure to trust the output rather than challenge it. In that sense, Artificial Intelligence becomes less a replacement for command judgment than a force multiplier for existing military routines.
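In software terms, the recommendation-versus-decision distinction running through this debate is a human-in-the-loop gate: the system may rank and suggest, but it cannot act without explicit sign-off. The sketch below is purely illustrative, written for this article rather than drawn from Maven’s code, and every name in it (Recommendation, recommend, execute) is hypothetical; it shows only the shape of the pattern.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """One candidate the software surfaces; a suggestion, not a decision."""
    target_id: str
    confidence: float     # the model's confidence in the identification
    suggested_asset: str  # e.g. which available aircraft could carry it out

def recommend(fused_reports: list[dict]) -> list[Recommendation]:
    """Rank candidates from fused data, standing in for the analysis
    step described above."""
    ranked = sorted(fused_reports, key=lambda r: r["score"], reverse=True)
    return [Recommendation(r["id"], r["score"], r["asset"]) for r in ranked]

def execute(rec: Recommendation, approved_by: str | None) -> str:
    """The human-in-the-loop gate: without a named approver, nothing proceeds."""
    if approved_by is None:
        return f"{rec.target_id}: held, awaiting human approval"
    return f"{rec.target_id}: approved by {approved_by}"

reports = [
    {"id": "site-A", "score": 0.91, "asset": "aircraft-1"},
    {"id": "site-B", "score": 0.64, "asset": "aircraft-2"},
]
recs = recommend(reports)
print(execute(recs[0], approved_by="reviewing-officer"))  # a human signed off
print(execute(recs[1], approved_by=None))                 # no sign-off: held
```

The critics’ point fits the same frame: the gate only matters if the approver has the time and context to use it, and that is precisely what speed erodes.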
The tension between speed and verification
Since the war with Iran began in February, the US has reportedly used Maven to plan strikes across the country. From 28 February onward, the US has launched more than 11,000 strikes against Iran, with many of the targets reportedly identified by Maven. Those figures underscore why analysts are asking whether accelerated targeting can coexist with careful verification. Prof Elke Schwarz of Queen Mary University of London said the prioritization of speed and scale, combined with the use of force, leaves very little time for meaningful verification of targets to ensure civilians are not accidentally included.
Mosley stressed that the platform should be understood as a guide for military personnel rather than an autonomous targeting system. He argued that the software helps officers synthesize vast amounts of information that previously would have been handled one item at a time. Yet the concern remains that a support tool can become operationally influential when commanders are under pressure and time is short. That is where the debate over Artificial Intelligence turns from technical design to battlefield governance.
Expert warnings and the limits of company responsibility
The Pentagon’s decision in February to phase out Anthropic’s Claude AI system, which helps power Maven, added another layer to the controversy. Anthropic had refused to allow its AI to be used in autonomous weapons and surveillance, and Palantir says alternatives can replace it. That change shows how quickly the technology stack around defense systems can shift, even as the same strategic concerns remain.
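How can one vendor’s model be removed without rebuilding the platform around it? A common software answer is an abstraction layer between the system and whichever model it calls. The sketch below is illustrative only and assumes nothing about Maven’s actual architecture; ModelBackend, ClaudeBackend and ReplacementBackend are hypothetical names invented for this example.

```python
from typing import Protocol

class ModelBackend(Protocol):
    """Anything that can summarize reports; the platform codes to this
    interface, not to a specific vendor."""
    def summarize(self, reports: list[str]) -> str: ...

class ClaudeBackend:
    def summarize(self, reports: list[str]) -> str:
        return f"Claude summary of {len(reports)} reports (placeholder)"

class ReplacementBackend:
    def summarize(self, reports: list[str]) -> str:
        return f"Alternative summary of {len(reports)} reports (placeholder)"

def build_briefing(model: ModelBackend, reports: list[str]) -> str:
    # Because the caller depends only on the interface, one vendor's model
    # can be phased out and another swapped in without redesigning the system.
    return model.summarize(reports)

reports = ["report 1", "report 2"]
print(build_briefing(ClaudeBackend(), reports))
print(build_briefing(ReplacementBackend(), reports))
```

That kind of design explains why Palantir can present the swap as routine, and also why the strategic questions persist: changing the backend changes nothing about how the output is used.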
Mosley’s central argument is clear: militaries determine the policy framework that governs who makes what decision, and that, he said, is not the company’s role. Adm Brad Cooper, head of the US military in the Middle East, has offered a contrasting operational view, saying AI systems help officers sift through vast amounts of data in seconds so leaders can cut through noise and make smarter decisions faster than the enemy can react. Set against warnings from researchers such as Schwarz, those positions mark the fault line in the debate: one side emphasizes efficiency, the other verification and accountability.
Regional and global implications of AI-enabled warfare
The implications extend beyond one system or one conflict. If military customers set the rules for how Artificial Intelligence is used in targeting, then the standards can vary by state, by battlefield and by command culture. That makes oversight harder to compare and harder to enforce. It also raises questions about what happens when speed becomes a strategic advantage and caution becomes a delay.
For defense planners, the attraction is obvious: software can process more data than human teams can manage alone. For critics, the danger is just as obvious: faster recommendations may narrow the space for independent review. The present debate over Maven suggests that the future of war may be shaped not only by who owns the technology, but by how much trust militaries are willing to place in it.
As Artificial Intelligence becomes more embedded in military decision-making, the unanswered question is whether the promise of faster action can ever be matched by equally fast accountability.