14 May 2024

Advai’s Day Out Teaching the Military How to Exploit AI Vulnerabilities

"It’s in this moment where the profound importance of adversarial AI really clicks. The moment when a non-technical General can see a live video feed, with a small bounding box following their face, identifying them, and pictures the enemy use-case for such a technology.

Then, a small amount of code is run and in a heartbeat the box surrounding their face disappears.

Click."

Read more about our day with the UK Ministry of Defence…

Words by
Alex Carruthers

Advai’s Day Out Teaching the Military How to Exploit AI Vulnerabilities

Introduction

In an increasingly digitised world, warfare is moving into the electromagnetic spectrum. Armed forces must prepare for the battlespace of the future by training in cyber warfare – the invisible fight of exploitation across that spectrum.

Defence Cyber Marvel (DCM) is a training exercise organised by the Army Cyber Association to enable personnel to develop the skills needed to compete effectively in the cyber domain.

The competition tests and develops participants’ skills in stopping potential cyber-attacks against allied forces in a real-world scenario.

This year marked the third time the UK hosted this global effort, with over 40 teams from across Defence and Government, including Army, Navy, RAF, Department for Work and Pensions, DE&S, Met Police and NCA, as well as international partners including South Korea, United States Army, Japan, India, Georgia and more.

This year’s scenario involved the discovery of an enemy laptop. 

Analysis of the hard-drive revealed object detection and face recognition models the enemy could use to automate the identification of allied bases, machinery and forces.

These are the types of artificial intelligence models that can enable automatic recognition capabilities and, dangerously, autonomous weaponry.

The challenge to contestants: how can we turn this into our own military advantage?

Niche expertise is required to ensure the competition keeps its participants ahead of the latest threats. Much of Advai’s research is experimental in nature, and therefore cutting edge.

To enable our clients to shore up their AI systems against such vulnerabilities, we have developed methodologies that analyse AI points of failure – a kind of crash testing that uncovers weaknesses.

This is primarily achieved by algorithmically ‘attacking’ AI models, using optimisation techniques to home in on weaknesses that would otherwise be impossible for a human to detect. It’s this expertise in ‘adversarial AI’ that Advai were invited to bring to the event.
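To make the idea concrete, here is a minimal sketch of that kind of gradient-based ‘crash testing’. It is illustrative only – it uses the classic Fast Gradient Sign Method in PyTorch against an off-the-shelf ImageNet classifier rather than Advai’s own tooling, the image path and perturbation budget are placeholder assumptions, and ImageNet normalisation is omitted for brevity.

```python
# Minimal sketch of gradient-based adversarial "crash testing" (FGSM-style).
# Illustrative only – a stand-in for the optimisation-based attacks described above.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
image = preprocess(Image.open("example.jpg")).unsqueeze(0)  # hypothetical input image

image.requires_grad_(True)
logits = model(image)
label = logits.argmax(dim=1)  # the model's current prediction

# Compute the loss with respect to the model's own prediction, then nudge every
# pixel a tiny amount (epsilon) in the direction that hurts the model most.
loss = torch.nn.functional.cross_entropy(logits, label)
loss.backward()

epsilon = 2.0 / 255.0  # perturbation small enough to be hard to see
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

A single gradient step like this is the crudest version; in practice the search is iterative, but the principle – following the gradient towards failure – is the same.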


Advai’s regular work is to identify vulnerabilities so they can be patched, not exploited.

The Advai technical team therefore found it exciting to flip that perspective, and were delighted to pass on their expertise in this unique and meaningful military context.

[Image: Faster R-CNN]

The Technology

The incremental technical advances of artificial intelligence are often too nuanced to make headlines – few people keep track of subtle improvements in a gradient-descent optimiser, for example.

It’s not easy to visualise maths. But seeing is believing, no?

Captain Abbygayle Kirtley took to the stage with a live demonstration of our technology.

In real time, a model was spun up to recognise a few of the audience members. Then, using Advai’s adversarial methodology, she optimised a filter overlay for the image that prevented the model from recognising them.

It’s in this moment where the profound importance of adversarial AI really clicks.

The moment when a non-technical General can see a live video feed, with a small bounding box following their face, identifying them, and pictures the enemy use-case for such a technology.

Then, a small amount of code is run and in a heartbeat the box surrounding their face disappears. Click.

The bizarre thing about this technology is that it’s invisible to the human eye.

From a human perspective, the image doesn’t change.

Two images can look identical and yet an object detector sees the person in one, and in the other, nothing.

Subtle, mathematically calculated arrays of pixels combine in such an optimised way as to elicit a targeted result:

– now a person is a tree,

– and now they’ve disappeared.
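For readers curious how such a ‘vanishing’ overlay might be produced, the sketch below illustrates the general recipe: iteratively optimise a small, norm-bounded perturbation that drives a detector’s confidence scores towards zero. It is illustrative only – the live demonstration used Advai’s own tooling against a face-recognition model, whereas this sketch uses an off-the-shelf torchvision Faster R-CNN detector, and the image path, step count and perturbation budget are assumptions.

```python
# Minimal sketch of a "vanishing" attack on an object detector.
# Illustrative only – a generic stand-in, not the tooling used in the demonstration.
import torch
import torchvision
from torchvision.io import read_image

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
).eval()

image = read_image("person.jpg").float() / 255.0     # hypothetical input, CHW in [0, 1]
delta = torch.zeros_like(image, requires_grad=True)  # the invisible "filter overlay"
epsilon = 4.0 / 255.0                                # imperceptibility budget per pixel
optimizer = torch.optim.Adam([delta], lr=1e-2)

for step in range(100):
    optimizer.zero_grad()
    perturbed = (image + delta).clamp(0.0, 1.0)
    detections = detector([perturbed])[0]
    scores = detections["scores"]
    if scores.numel() == 0 or scores.max() < 0.5:
        break                                        # nothing left for the detector to see
    # Push every remaining detection's confidence towards zero.
    scores.sum().backward()
    optimizer.step()
    with torch.no_grad():
        delta.clamp_(-epsilon, epsilon)              # stay within the invisible budget

print("confident detections remaining:", int((detections["scores"] > 0.5).sum()))
```

Constraining the overlay to a few greyscale levels per pixel is what keeps it invisible to a human while still being devastating to the model.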


Conclusion: The world has changed, warfare is ever-changing, and so too must training change.

It is important for military personnel to understand that arrays of pixels, painted together mathematically and precisely, are in fact attack vectors in a military context.

Advai researchers conduct novel attacks on all manner of AI systems and have never once failed to find an exploitable weakness.

Yet, the rate at which AI – harbouring undiscovered flaws – is being deployed into commercial and government ecosystems is startling.

AI is impressive.

People are understandably impressed with code that can write them apology poems using words that rhyme with conciliatory.

The engineering feat behind these systems is just as astounding as it is bamboozling.

Yet here a human cognitive bias reveals itself: a halo effect leads us to assume that because these models appear superhumanly intelligent, they must also be impervious to error.

We simply expect them to be cleverer than they are because they are so clever at other things. When new vulnerabilities are discovered, it’s often surprising because their flaws are inhuman and unintuitive.

A military drive to seek the advantage of AI models is natural, and necessary to remain competitive in the international arena.

However, even more so than in the commercial and civilian domains, the burden of reliability and control under real-world circumstances is paramount.

The truth is that AI vulnerabilities are plentiful and strange, and systems in which AI models are granted any degree of autonomy – or are even responsible for framing or selecting information – must be evaluated intensively.

The complexity of AI systems means that failure can result from any number of stakeholders, architecture decisions, supply-chain aspects, assets and data, or interfaces, or from any interaction between these things.

Such failures can occur innocuously and perniciously, too.

AI Assurance techniques are therefore not a simple quality-control mechanism that sits at the end of the production line. The underlying ‘adversarial AI’ methodologies that provoke and reveal failures must be built into the very fabric of AI development, and governance must exist across every stage, person and asset.

Playing our small part in this meaningful and mammoth challenge is our privilege.

Further reading

Read our two opinion pieces on the AI Act: