12 Sep 2023

Exhibiting at DSEI 2023


Words by Alex Carruthers
Categories: Exhibitions

https://www.dsei.co.uk/

Inhuman intelligences make inhuman mistakes.

We're at DSEI today, communicating the importance of our AI assurance techniques to military stakeholders. These techniques are crucial for validating AI systems so that they can be trusted in high-stakes defence applications.

Our stand's hero image depicts an aerial view of military vehicles, comically mislabelled as things like 'Jar of Pickles', 'Umbrella' or 'Alpaca'.

This is intentionally ridiculous.

Our aim is to convey to non-technical military stakeholders that the mistakes AI systems make are counterintuitive: a system can be incredible at one advanced task, yet get tripped up by something a human would find obvious. They make inhuman mistakes.

So: you need inhuman assurance methods!

Our cutting-edge #adversarialAI methods and advanced #redteaming testing environments pinpoint exactly where these vulnerabilities lie, and our dashboard enables development teams to communicate these #robustness insights to non-technical leadership.
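For technically minded readers, the core idea behind adversarial testing can be sketched in a few lines. The snippet below applies the Fast Gradient Sign Method (FGSM), a standard adversarial-attack technique, to a toy logistic classifier; the model, weights and inputs are invented for illustration and are not Advai's actual tooling.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x by at most eps per feature, in the direction that
    increases the cross-entropy loss of the model p = sigmoid(w.x + b)."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w           # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

# A toy 'vehicle vs. not-vehicle' classifier (hypothetical weights)
# and an input it labels confidently as 'vehicle'.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -1.0, 1.0])     # w @ x + b = 4.5 -> confident 'vehicle'

x_adv = fgsm_perturb(x, w, b, y=1.0, eps=2.0)
print(sigmoid(w @ x + b) > 0.5)    # True: original input classed as vehicle
print(sigmoid(w @ x_adv + b) > 0.5)  # False: a small nudge flips the label
```

A bounded nudge to every input feature flips a confident prediction, which is precisely the kind of failure mode an adversarial test suite hunts for at scale.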

If you're attending, come say hi at H1-428.

Who are Advai?

Established in 2020, Advai is a leading UK Deep Tech specialist focussed on AI Safety and Security. We test and evaluate Artificial Intelligence and Machine Learning systems, enabling our customers to assure their Large Language Models, Computer Vision and other AI-enabled technologies for deployment in business-critical or regulated environments. 

Our tooling stress-tests, measures and improves AI robustness and real-world performance, finding reliable operating boundaries and creating early-warning systems to predict natural or adversarial issues. As one of the most successful companies to have come through the Defence and Security Accelerator, we work with both the UK Ministry of Defence and a range of safety-conscious enterprises.

Advai is a proud partner of the UK Government’s Frontier AI Taskforce, the research unit behind the world's first global AI Safety Summit.

If you would like to discuss this in more detail, please reach out to [email protected]