05 May 2021

AI: Unknown Decision Space is holding us back

Decision Space sits at the core of AI and helps explain issues such as bias and why AI performs poorly in the wild compared to the lab.  It also helps shed light on recent trends towards slower adoption of AI.  Here, we run through the basics.

Words by
David Sully

HOW CAN YOU MAKE DECISIONS IN EMPTY SPACE?

A recent fascination of mine has been the concept of Decision Space in AI systems.  It sits at the heart of so many of the opportunities and issues that AI currently faces, yet it doesn’t seem to be talked about much.

The ‘Decision Space’ of an AI system represents all the possible decisions that are available to that model – i.e., every theoretical decision from every possible input that could ever come into it, including all of the completely impractical ones.


The first problem is right there – all possible inputs and decisions.  Even disregarding the completely irrational ones, there are an enormous number of possible inputs.  How is it possible to understand all of them?  The answer: it isn’t.  Even for a simple model, the number of different possibilities is almost unfathomable.  For modern AI systems, no matter how much data you collect, it will only ever fill a tiny fraction of the Decision Space.
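
To put rough numbers on that, here is a back-of-the-envelope sketch.  The 28×28 greyscale image and the ten-million-example dataset are assumptions chosen purely for illustration – real systems take far richer inputs than this.

```python
# Back-of-the-envelope: how many distinct inputs could a tiny image model ever see?
# Assumed example: a 28x28 greyscale image with 256 intensity levels per pixel.
import math

pixels = 28 * 28                      # 784 pixels in the image
levels = 256                          # possible values per pixel
digits = pixels * math.log10(levels)  # number of decimal digits in 256 ** 784

print(f"Distinct possible images: roughly 10^{digits:.0f}")  # ~10^1888
print("Images in a very large labelled dataset: ~10^7")
# Even ten million labelled examples cover a vanishingly small fraction of that space.
```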

The second problem is lack of explainability.  We don’t yet understand how state-of-the-art AI systems make their decisions.  They are black boxes.  There are some very clever people looking at the problem, but incredible as it sounds, we have invented something that we can’t currently unpick.

So, you have a black box with an unquantifiable number of ways it could make a decision.  Now imagine you are a company wanting to put that black box into a business-critical decision-making function for which you could be held accountable… no wonder we are seeing AI adoption slow due to perceived risk.

When More Isn’t Enough

The way a Data Scientist tries to tame this problem is with training data.  They collect as much as they can afford (in time, money or capacity) and label it into ‘classes’, so each picture is correctly labelled as a cat, helicopter, ice cream, etc.  This is done thousands or millions of times.  All of this training data is fed into the AI and it produces an output.  If the output is wrong, the AI is corrected and learns again, until it can reliably label cats, helicopters and ice creams.  What’s happening in this process is that each of those inputs is filling out the ‘known’ Decision Space.  At the end of the process, you have a trained AI with great accuracy and lightning-quick decision-making.  It is amazing at identifying the things it was trained on.
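
As a minimal sketch of that loop – using scikit-learn and synthetic stand-in data rather than real labelled pictures – the process looks something like this:

```python
# A minimal sketch of the supervised training loop described above.
# The synthetic features stand in for real labelled pictures of cats,
# helicopters and ice creams.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in for features extracted from thousands of labelled pictures (3 classes).
X, y = make_classification(n_samples=3000, n_features=64, n_informative=10,
                           n_classes=3, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Fitting the model is the repeated 'correct it until the labels come out right' step,
# and it is what fills out the known Decision Space.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("Accuracy on data that looks like the training data:",
      round(accuracy_score(y_test, model.predict(X_test)), 3))
```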

But the issue here is that no matter how many pictures are gathered, they are still just a fraction of all the possible pictures of cats in different poses, locations, sizes, shapes, furriness, shaved (hopefully not), etc.  The unknown Decision Space is still far, far more than the known Decision Space.


Taking the Plunge

And this is where so many AI issues lie.  Every time the AI developer or data scientist collects new test pictures and throws them at a trained AI, it seems to work as it should.  But then new stuff comes in, and the new stuff sits outside of the known Decision Space.  It’s a cat in a completely different position, or with a different colour of fur.  All of a sudden, the AI throws a wobbly.  It doesn’t think the picture is a cat – instead it classifies it as an ice cream.
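
A toy, self-contained illustration of that failure mode (the set-up below is assumed for demonstration, not taken from any real system): train a simple classifier on one region of input space, then hand it a point far outside anything it has seen.

```python
# A toy illustration of a model answering confidently far outside its training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Training data: two classes drawn from regions the model gets to 'know'.
X_train = np.vstack([rng.normal(0.0, 1.0, size=(500, 2)),
                     rng.normal(4.0, 1.0, size=(500, 2))])
y_train = np.array([0] * 500 + [1] * 500)

model = LogisticRegression().fit(X_train, y_train)

# A new input far from anything the model saw in training.
novel_input = np.array([[60.0, -5.0]])
confidence = model.predict_proba(novel_input)[0].max()

print("Predicted class:", model.predict(novel_input)[0])
print("Model 'confidence':", round(confidence, 4))
# The model still gives an answer, usually with near-total confidence, even though this
# point lies deep in the unknown Decision Space where it has no real evidence either way.
```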

This is what is happening when AI ‘goes wrong’.  AI bias, for example, creeps in when training data fails to fill out an important part of the Decision Space – perhaps the developer forgot or ran out of relevant data (or money to buy data).  Or perhaps suitable data simply never existed before – a breed of cat never seen before, perhaps?  The reason for a trading model failing?  It received something lying in the unknown Decision Space.  Your medical AI suddenly gave the all-clear to an obviously ill patient?  Yep, you guessed it – unknown Decision Space.  In some cases, a human can identify the issue, but in many, the input looks normal and there is no obvious reason why it has thrown the AI off.  (AI doesn’t perceive information the way a human does – something for another blog on another day.)
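
To make the bias point concrete, here is a small, assumed sketch (the groups and numbers are invented for the example): one group dominates the training set, another barely appears, and the model’s errors concentrate on the group it never properly saw.

```python
# A toy sketch of bias caused by a gap in the training data: one group is barely
# represented, so the model's mistakes concentrate on that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, offset):
    """Two-feature samples for one group; the true label depends on the first feature."""
    X = rng.normal(offset, 1.0, size=(n, 2))
    y = (X[:, 0] > offset).astype(int)
    return X, y

# Group A dominates the training data; group B is almost missing from it.
X_a, y_a = make_group(2000, offset=0.0)
X_b, y_b = make_group(20, offset=5.0)
model = LogisticRegression(max_iter=1000).fit(np.vstack([X_a, X_b]),
                                              np.concatenate([y_a, y_b]))

# Evaluate on fresh data from each group separately.
for name, offset in [("group A", 0.0), ("group B", 5.0)]:
    X_test, y_test = make_group(1000, offset)
    accuracy = (model.predict(X_test) == y_test).mean()
    print(f"Accuracy on {name}: {accuracy:.1%}")
# Overall accuracy can look fine while hiding a model that has barely learned group B.
```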


There are multiple problems at work here – human mistakes, the limitations of historic data, biases (in human designers or in the data itself), lack of representative data, etc.  All of them will leave holes and it’s currently almost impossible to identify where they are without taking the plunge and deploying the AI in the real world.

Once you step outside the boundaries of the training data, weird (and bad) things start happening with AI.  And because there is no explainability, we don’t know why.

Which is why I feel that what we do at Advai is so important.  Our stress tests of AI systems help us understand how robust a system is and identify issues with its unknown Decision Space.  We can then work on resolving those issues.  That way, as the owner or developer of an AI, you don’t have to rely solely on collecting infinite quantities of data.  Do the best you can, then come to us, use our platform and make it better.
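
To give a flavour of what stress testing means in general (this is a generic, assumed sketch of the concept, not Advai’s platform or methodology): perturb the test inputs by increasing amounts and watch how quickly the model’s answers start to flip.

```python
# A generic toy illustration of stress-testing a model's robustness:
# nudge each test input with growing amounts of noise and measure how many
# predictions change. (Assumed sketch only; not any particular vendor's method.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
baseline = model.predict(X_test)

rng = np.random.default_rng(0)
for noise in [0.0, 0.5, 1.0, 2.0, 4.0]:
    perturbed = X_test + rng.normal(scale=noise, size=X_test.shape)
    flipped = np.mean(model.predict(perturbed) != baseline)
    print(f"noise level {noise}: {flipped:.1%} of predictions changed")
# Answers that flip under small, plausible perturbations point to fragile regions
# of the model's Decision Space.
```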