Inhuman intelligences make inhuman mistakes.
We're at DSEI today, making the case for AI assurance to military stakeholders. Our techniques validate AI systems so they can be trusted in high-stakes defence applications.
Our stand's hero image depicts an aerial view of military vehicles, comically mislabelled as things like 'Jar of Pickles', 'Umbrella' or 'Alpaca'.
This is intentionally ridiculous.
Our aim is to convey to non-technical military stakeholders that the mistakes AI systems make are counterintuitive: a system can be incredible at one advanced task yet get tripped up by something a human would think is obvious. They make inhuman mistakes.
So: you need inhuman assurance methods!
Our cutting-edge #adversarialAI methods and advanced #redteaming test environments pinpoint exactly where these vulnerabilities lie, and our dashboard helps dev teams communicate these #robustness insights to non-technical leadership.
If you're attending, come say hi at H1-428.