Scenario

A client operating a conversational agent struggled to improve performance across all question domains and could not deploy the system. How could they identify where deployment was possible, and what should they prioritise?

The Challenge

  1. Benchmarking - which question classes are most unreliable and most susceptible to variations in how users phrase and spell their questions?

  2. Mitigation - how do we mitigate underperforming classes?

  3. Deployment - can we deploy the model to a production environment?


Advai’s Capability

  1. Benchmarking - Advai's library of NLP stress tests and dictionaries can identify common failure modes in NLP systems (see the sketch after this list).

  2. Optimisation - Common dictionaries and spelling mistakes can be identified, incorporated into the training pipeline, and used to increase overall model performance.

  3. Deployment - Underperforming classes can be routed to manual intervention and flagged for model retraining, so that the system can be partially deployed and the base level of automation increased.
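
Advai's actual stress-test library is proprietary; the sketch below is a minimal, hypothetical illustration of the benchmarking idea in Python. It assumes a `model` callable that maps an utterance to a predicted question class and a labelled `test_set` of (utterance, class) pairs, and it perturbs each utterance with character-level typos to measure, per class, how often the prediction survives.

    import random
    import string
    from collections import defaultdict

    random.seed(0)

    def typo_perturb(text, rate=0.1):
        # Inject random character-level typos (drop, substitute, duplicate)
        # to simulate the spelling noise seen in real user messages.
        chars = list(text)
        for i in range(len(chars)):
            if chars[i].isalpha() and random.random() < rate:
                op = random.choice(["drop", "sub", "dup"])
                if op == "drop":
                    chars[i] = ""
                elif op == "sub":
                    chars[i] = random.choice(string.ascii_lowercase)
                else:
                    chars[i] = chars[i] * 2
        return "".join(chars)

    def benchmark_robustness(model, test_set, n_variants=20):
        # For each labelled utterance, count how often the prediction on a
        # perturbed variant matches the prediction on the clean input.
        survival = defaultdict(list)
        for utterance, question_class in test_set:
            baseline = model(utterance)
            kept = sum(model(typo_perturb(utterance)) == baseline
                       for _ in range(n_variants))
            survival[question_class].append(kept / n_variants)
        # Average survival rate per class; low scores flag unreliable classes.
        return {cls: sum(v) / len(v) for cls, v in survival.items()}

Perturbed variants that flip the prediction are also natural candidates for training-set augmentation, which is one way the misspellings surfaced in step 2 can be folded back into the training pipeline.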


Result

  1. By understanding the strengths and weaknesses of the language being processed, the NLP model can be made more robust to changes in terminology.

  2. The data scientist is able to proactively identify issues before deployment, increasing the velocity of model improvement and producing a more robust model.

  3. Because the customer knows which parts of their AI models perform well or poorly, they are able to automate the triage process effectively, as sketched below, and release value from the project in production.
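
One way to realise the partial deployment and triage automation described above is confidence-gated routing. The sketch below is illustrative only: it assumes a hypothetical `model_with_confidence` callable returning a (class, confidence) pair, and the class names and threshold are placeholders to be set from the benchmarking results.

    # Hypothetical: classes that scored well in the robustness benchmark.
    RELIABLE_CLASSES = {"order_status", "opening_hours"}
    # Hypothetical operating point, chosen from validation data.
    CONFIDENCE_THRESHOLD = 0.85

    def triage(utterance, model_with_confidence):
        # Automate only when the predicted class benchmarked as robust and
        # the model is confident; otherwise hand off to a human agent.
        predicted_class, confidence = model_with_confidence(utterance)
        if (predicted_class in RELIABLE_CLASSES
                and confidence >= CONFIDENCE_THRESHOLD):
            return ("automated", predicted_class)
        return ("human_review", predicted_class)

Messages routed to human review can be logged and fed into retraining, so the set of reliable classes grows and the base level of automation rises over time.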


You can trust robust AI.
