02 Sep 2023

Autonomous Military Systems Must Operate as Commanded

We must be able to trust these systems...

Words by Alex Carruthers

Robustness is crucial for autonomous military systems.

To protect our way of life, the primary purpose of military innovation is to achieve and maintain operational advantage over our adversaries.

In the modern age, this practically translates to advanced operational systems and hyperintelligent software to manage these systems in an increasingly autonomous way.

Human-machine teams can increase capacity, enhance capability, and facilitate rapid and complex decision making.

This article was originally published on the DSEI event blog: https://www.dsei.co.uk/news/robustness-crucial-autonomous-military-systems


This purpose must be balanced with respect for human rights, such as privacy. A secondary, inhibitory force therefore works against innovation: the motivation to uphold human rights and to act responsibly on the global stage. These two motivations are in constant tension, because raw innovation and assurance pull in opposite directions. This theme of balancing priorities is not unique to military operations. Do you want it strong or accurate? Fast or reliable? Large or flexible? And so on.


In artificial intelligence (AI) terms, ‘AI Robustness’ research is the means of delivering human control when AI systems operate autonomously, because it ensures AI behaves reliably under diverse, sometimes adversarial conditions. Robustness could be seen as a compliance requirement that stifles military innovation; however, we show that prioritising robustness in military systems and operations offers numerous operational advantages. Simply put, robustness in AI is a force enabler.
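To make this concrete, below is a minimal sketch of one of the most basic robustness checks: measuring how a classifier’s accuracy degrades as its inputs are corrupted. The model, imagery, and noise levels are placeholder assumptions for illustration only, not any particular fielded system.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Placeholder classifier and evaluation data: stand-ins for a real
    # trained model and operationally representative imagery.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    model.eval()
    images = torch.rand(256, 3, 32, 32)
    labels = torch.randint(0, 10, (256,))

    def accuracy_under_noise(sigma: float) -> float:
        """Accuracy when inputs are corrupted by Gaussian noise of scale sigma."""
        with torch.no_grad():
            noisy = (images + sigma * torch.randn_like(images)).clamp(0.0, 1.0)
            preds = model(noisy).argmax(dim=1)
        return (preds == labels).float().mean().item()

    # A robust system's accuracy degrades gracefully; a brittle one collapses.
    for sigma in (0.0, 0.05, 0.1, 0.2, 0.4):
        print(f"noise sigma={sigma:.2f}  accuracy={accuracy_under_noise(sigma):.3f}")

Tracking a curve like this over time turns ‘robustness’ from an abstract virtue into a quantifiable property that can be compared across systems and releases.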


The war in Ukraine is accelerating the deployment of autonomous operations and the involvement of private companies, who have their own visions for the future of war and exert their own pressure. We conclude that the field of AI Robustness research sits at the forefront of autonomous military systems.

 

Image: Adding recognition and tracking to Parrot Anafi USA quadcopters, used by the MOD, which will integrate with comms via a soldier's mesh radio.

An autonomous military future is necessary because if you don’t build it, your adversaries will. It is an arms race.

Former national security advisor Lord Mark Sedwill has made headlines recently for stating as much: "For this country to punch our weight in military terms over the next 25 to 50 years we should really be investing in this area and that also means being able to invest probably in a way that has a higher risk appetite than traditionally government investment.”

Software firm Palantir's Alex Karp told the BBC: "The race is on - the question is do we stay ahead, or do we cede the lead?". AI in military markets is expected to grow from $9 billion to $39 billion within five years. Artificial Intelligence is inevitable.

AI acts as an accelerant for all other technologies, and so the side which deploys the most effective solution will gain an asymmetric advantage over their adversary.

"Future conflicts may be won or lost on the speed and efficacy of the AI solutions employed.”


AI Robustness is required to remain compliant with impending international standards.

Whilst the European Regulation on Artificial Intelligence (‘the AI Act’) compels AI creators to build robust and reliable systems, it “mentions in a footnote that military AI uses are not within its scope”, as CEPS observes in an article whose title says it all: ‘Why the EU must now tackle the risks posed by military AI’.

 

As of June 2023, there is no unanimous Western framework for the use of AI-enabled systems; this gap, amid general fear of the “terrible harm” AI could do to humanity, has led the UK to host the first global summit on AI.

 

Prioritising AI Robustness is the primary mechanism for ensuring compliance with these as-yet theoretical future regulations, because it directly enhances the predictability and reliability of autonomous weapon systems. Critics, however, would suggest this may not be enough of a driver. Whilst Mark Sedwill says that “It’s important to have the guardrails in place”, he is quoted in the same article as saying that “incredible times call for incredible actions…”.

 

Image: automated counter-drone tests performed at the NPSA C-UAS test site.


Private companies exert their own vision of the military future.

The reality of war today is that AI-driven autonomous systems are already in use. In Ukraine, older efforts used modified console controllers to pilot drones. Now, AI-enabled recognition systems are powering reconnaissance missions. Fallen soldiers are being remotely identified and triaged with behaviour recognition, and even enemy soldiers are being matched to social media data.

 

Image: an uncrewed Royal Navy test submarine automatically tracking vessels, fusing computer vision and sensors.

 

The Netherlands has gifted Ukraine Seafox drones that autonomously detect mines; Palantir claims to have supplied Ukraine with AI solutions to improve targeting systems; and Zvook offers Ukraine a missile detection system that uses AI to monitor a network of acoustic sensors.

 

Closer to home, deep tech start-ups like Vizgard are focusing on developing autonomous capabilities. Vizgard have run tests on an uncrewed test submarine for DASA and showcased unmanned ground and air vehicles intelligently coordinating as they home in on a target.

 

Are we to believe that uniform military standards of robustness are applied across the array of systems provided by multiple service providers?

 

Alex Kehoe, CEO of Vizgard, captures the initiative private companies must take: “Now, AI companies worth their salt need to put mechanisms in place to ensure that their tech doesn't fall over when it matters most…the responsibility has shifted to private companies proving and quantifying the anticipated real-world performance."

 

The successful use of these AI-enabled tools signals that AI is already enhancing military effect. But to whose robustness standards are these systems being built, and to what extent will they remain usable in a changing operational environment? Whilst the military does not enforce its own robustness standards, private companies delivering AI-enabled military capability will be forced to set their own.

 

A lack of universal robustness standards means different tools come with different levels of built-in trust. This lack of discipline could undermine the trustworthiness of AI tools in general.

 

One thing is for sure: autonomous military operations will rise, and they will be heavily influenced by the private companies involved. In lieu of a dedicated international military framework for AI, it is best we review autonomous operations through the lens of self-benefit.

In a military context, AI robustness is self-justifying.

The fact is that robustness provides advantage:

  1. Information advantage (knowing more than your enemy),
  2. Decision advantage (deciding before your enemy), and
  3. Operational advantage (acting before the enemy).

 

Reliability is a key feature of robustness. At the most basic level, safe and effective military operations depend on reliable information and on systems that can handle a diverse range of situations under extreme pressure. Most obvious is the military need to resist adversarial attacks. The UK and her adversaries will compete to deceive each other’s systems and to extract or manipulate each other’s information. We must be able to withstand these efforts.
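For illustration, the sketch below implements the Fast Gradient Sign Method (FGSM), a classic white-box attack that nudges every input pixel in the direction that most increases the model’s loss. The toy model and data are placeholders; a genuine evaluation would attack the deployed model with operationally representative data.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder
    loss_fn = nn.CrossEntropyLoss()

    def fgsm(x: torch.Tensor, y: torch.Tensor, epsilon: float) -> torch.Tensor:
        """Shift each pixel by +/- epsilon in the direction that most
        increases the model's loss on the true labels."""
        x = x.clone().requires_grad_(True)
        loss_fn(model(x), y).backward()
        return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

    x = torch.rand(64, 3, 32, 32)    # placeholder images
    y = torch.randint(0, 10, (64,))  # placeholder labels
    for eps in (0.0, 0.01, 0.03, 0.1):
        adv = fgsm(x, y, eps)
        with torch.no_grad():
            acc = (model(adv).argmax(dim=1) == y).float().mean().item()
        print(f"epsilon={eps:.3f}  accuracy={acc:.3f}")

How quickly accuracy collapses as the attack budget (epsilon) grows is one standard measure of adversarial resistance.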

 

The defence operating environment is – by definition – a dynamic and evolving battlefield, and so military use-cases must contend with unknowns. Robust AI systems may handle incomplete data better and recognise when they do not have the data they need.
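One simple, illustrative mechanism for recognising missing information is abstention: the system declines to act when its predictive uncertainty is too high. In the hypothetical sketch below, a prediction is deferred to a human when its entropy approaches that of a random guess; the threshold is invented for illustration.

    import math
    import torch
    import torch.nn.functional as F

    def should_abstain(logits: torch.Tensor, fraction: float = 0.5) -> torch.Tensor:
        """Flag inputs whose predictive entropy exceeds a fraction of the
        maximum possible entropy: a crude 'not enough information' signal."""
        probs = F.softmax(logits, dim=1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
        max_entropy = math.log(logits.shape[1])  # entropy of a uniform guess
        return entropy > fraction * max_entropy

    # Placeholder logits: one confident prediction, one near-uniform one.
    logits = torch.tensor([[9.0, 0.1, 0.2],    # confident -> act
                           [0.4, 0.5, 0.45]])  # uncertain -> defer to a human
    print(should_abstain(logits))  # tensor([False,  True])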

 

If operators do not trust the systems, then they cease to be useful, no matter how ‘innovative’ they are. Robust systems are better able to adapt to a rapidly changing environment, keeping up with the needs and situations of their users. Often, lives are at stake. AI systems must operate as they are commanded.

 

The virtues of adversarial resistance, reliability, trustworthiness, adaptability, and self-awareness of missing information are fundamental in the context of autonomous military operations. Truly, these are the virtues we would expect of any soldier assigned to manage operations! If the military is to maintain effective human control over operations, then we must be able to trust that these virtues can be delegated to autonomous control systems in a meaningful way.

There is a sequencing problem: innovation, then robustness.

We believe there is a sequencing issue in the military’s adoption of AI. The same is true across the commercial landscape, but that is beside the point. The competitive pressure for any advantage naturally expedites the development, trial, and use of technologies. Whilst militaries are more effective than most organisations at delivering rushed projects, moving fast does expose an organisation to unanticipated consequences and system vulnerabilities.

 

The sequencing problem is that robustness is considered after the fact – after use-case analysis, after early innovation, after system tests, after autonomous tool development. Surely, if a system does not hold to robustness values, then its practical value for the military is zero? If so, then are these not priorities to be held alongside innovation in the earliest exploratory phases?

 

Do we build a bridge before designing it? Do we design a bridge without knowing anything about the materials we may build it with, or what kind of bridge is needed? Why would AI be any different? AI where robustness is considered only after the build is destined to be among the 90% of systems that never make it to deployment.
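In software terms, holding robustness alongside innovation can be as simple as a robustness gate in the test suite from day one. The sketch below is hypothetical: it assumes the accuracy_under_noise helper from the earlier sketch lives in a robustness_checks module, and the tolerance is invented for illustration.

    # tests/test_robustness.py: a hypothetical CI gate.
    from robustness_checks import accuracy_under_noise  # hypothetical module

    def test_accuracy_degrades_gracefully_under_noise():
        clean = accuracy_under_noise(sigma=0.0)
        corrupted = accuracy_under_noise(sigma=0.2)
        # Fail the build if moderate corruption wipes out most performance.
        assert corrupted >= 0.8 * clean

The exact threshold matters less than the sequencing: the gate exists before the system is fielded, so robustness regressions are caught during development rather than after deployment.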

 

The push for rapid innovation in military tech can lead to trade-offs with robustness. Rushed F-35s were plagued by software problems; the US Army’s flagship multi-source intelligence system DCGS-A had “poor reliability” (see archive); the USS Gerald R. Ford aircraft carrier suffered delays from the use of new technologies; the UK Ajax armoured vehicle programme suffered a “litany of issues” and injured soldiers; and so on. AI-enabled autonomous systems could transform warfare, but a lack of sufficient guardrails could cause collateral damage, inefficient operations, frustrated military users, or worse.

 

Slow regulation, the absence of a unified Western agreement on AI in the military, the war in Ukraine, and the influence of private companies could mean the premature deployment of systems. Therefore, the argument for robustness guardrails may rest on the self-serving gains that robust systems bring.

Conclusion

The widespread adoption of autonomous military AI systems is necessary and inevitable due to competitive pressure. The rapid pace of this innovation race, the involvement of private companies, and the urgency of current conflicts could lead to compromised robustness in deployed AI systems.

 

Whilst universal frameworks do not yet exist, retaining effective human control is necessary to remain compliant with any future universal military standards. In practice, ‘AI Robustness’ is the field of AI research that achieves this. Robust AI systems are how human control is effectively delegated to our autonomous systems. Just as we trust our alarm clocks to reach through time and wake us up, so too must we trust our autonomous systems to operate exactly as we would want them to.

 

If the likely future need for regulatory compliance is not motivation enough, then there is a self-serving motivation for AI Robustness, too. Robustness ensures that our AI operates as commanded, so we can trust it as an extension of our own decision making. Robustness ensures our systems operate reliably under tough, diverse conditions. Robustness helps us avoid unintended consequences and expensive or frustrating problems.

 

The virtues of robust AI systems and the military’s need for discipline in all things are perfectly aligned.

 

In the military, a robust AI system is the minimum requirement for viable autonomous weapons systems. A failure of robustness means a failed weapons system. Robustness is therefore the first step in autonomous weapons development, not an afterthought.

Who are Advai?

Advai is a UK-based deep tech AI start-up that has spent several years working with UK government and defence to understand, test, and validate AI. Its tooling derives KPIs throughout the AI lifecycle, enabling data scientists, engineers, and decision makers to quantify risks and deploy AI in a safe, responsible, and trustworthy manner.
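As a purely hypothetical illustration of that idea, and not Advai’s actual tooling, lifecycle robustness KPIs could be captured in a structure like the following, so that risk is quantified at each stage rather than asserted:

    from dataclasses import dataclass

    @dataclass
    class RobustnessKPIs:
        stage: str                   # e.g. "training", "pre-deployment", "in-service"
        clean_accuracy: float        # accuracy on unmodified evaluation data
        corrupted_accuracy: float    # accuracy under noise, weather, sensor faults
        adversarial_accuracy: float  # accuracy under a bounded adversarial attack
        abstention_rate: float       # fraction of inputs the system declines to act on

        def clears_floor(self, floor: float = 0.7) -> bool:
            """A crude go/no-go: every accuracy KPI must clear the floor.
            The 0.7 floor is an invented example, not a real standard."""
            return min(self.clean_accuracy,
                       self.corrupted_accuracy,
                       self.adversarial_accuracy) >= floor

    kpis = RobustnessKPIs("pre-deployment", 0.94, 0.81, 0.73, 0.05)
    print(kpis.clears_floor())  # True: every accuracy KPI clears the 0.7 floor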

If you would like to discuss this in more detail, please reach out to [email protected]