Journal


11 Sep 2024

A Look at Advai’s Assurance Techniques as Listed on CDEI

In the absence of standardisation, it falls to present-day adopters of #ArtificialIntelligence systems to select the most appropriate assurance methods themselves.

Here's an article about a few of our approaches, with some introductory commentary on the UK Government's drive to promote transparency across the #AISafety sector.

Tags: AI Safety, AI Robustness, Language Models, Trends and Insights, AI Assurance, Adversarial Attacks, AI Governance, AI Ethics, AI Compliance, AI Risk, Case Study


14 May 2024

Advai’s Day Out Teaching the Military How to Exploit AI Vulnerabilities

"It’s in this moment where the profound importance of adversarial AI really clicks. The moment when a non-technical General can see a live video feed, with a small bounding box following their face, identifying them, and pictures the enemy use-case for such a technology.

Then, a small amount of code is run and, in a heartbeat, the box surrounding their face disappears.

Click."

Read more about our day with the UK Ministry of Defence…

Tags: AI Safety, AI Robustness, Adversarial Attacks, Computer Vision, Defence, Case Study


18 Apr 2024

Uncovering the Vulnerabilities of Object Detection Models: A Collaborative Effort by Advai and the NCSC

Object detectors can be manipulated: the car is no longer recognised as a car; the person is no longer there. As the use of these detection systems becomes increasingly widespread, their resilience to manipulation becomes increasingly important.

The purpose of this work is both to demonstrate the vulnerabilities of these systems and to show how such manipulations might be detected and ultimately prevented.

In this blog, we recount our technical examination of five advanced object detectors' vulnerabilities, carried out with sponsorship and strategic oversight from the National Cyber Security Centre (NCSC).

Tags: AI Safety, AI Robustness, Adversarial Attacks, Computer Vision, Defence, Case Study


15 Mar 2023

Assuring Computer Vision in the Security Industry

Advai assessed an AI object detection system's performance, security, and robustness, identifying imbalances in the training data and model vulnerabilities to adversarial attacks. Recommendations included augmenting the training data, handling edge cases, and securing the AI's physical container.

Tags: Computer Vision, AI Governance, AI Assurance, Adversarial Attacks, AI Robustness, Case Study, AI Risk
