15 Aug 2023

A Robust Day for AI: Trust, Robustness, and Europe's AI Act

The European Parliament recently approved its draft legislation for the AI Act.

 

What does this mean for AI innovation and the businesses behind it? Read our robustness perspective on the evolving regulatory landscape.

Words by
Alex Carruthers

What happened?


 

The European Parliament recently approved its draft legislation for the AI Act.

 

The Guardian quoted Thierry Breton, the European commissioner for the internal market: "AI raises a lot of questions socially, ethically, economically. But now is not the time to hit any 'pause button'. On the contrary, it is about acting fast and taking responsibility."

 

The AI Act takes a 'fundamental rights approach', focusing on the impact of AI's use rather than regulating any specific technology. The legal reasoning is to create a set of standards that will stand the test of time. European Parliament President Roberta Metsola refers to it as "legislation that will no doubt be setting the global standard for years to come."

 

This law isn't immediate. Even if agreed upon by the end of the year, it won't come into effect until 2026. In the meantime, the EU plans to encourage tech companies to voluntarily live up to these new standards. In simple terms, the EU is saying: ‘We need clear rules for AI. Let's all agree on what's acceptable and put this into practice by 2026.’


What’s the context?

The AI Act arrives on the coattails of a group of industry executives and AI leaders calling for a six-month pause on AI development, and of the AI giants themselves (Microsoft, OpenAI, and Google) calling for AI regulation.

 

Interestingly, existing EU privacy legislation is creating friction enough for these companies already. Google’s Bard was not authorised to enter the Irish market and OpenAI had hurdles to jump to enter the Italian market (with a lite version). These hurdles will be molehills next to the mountainous – but achievable – challenges the AI Act will present.

 

It isn't an AI post without quoting Elon Musk: "things are getting weird, and they're getting weird fast". However, contrary to Elon's often apocalyptic view, EU Commissioner Margrethe Vestager says there are bigger (or at least more immediate) problems than the end of the world and human extinction, and that focusing on responsible AI today is the practical thing to do.

 

Her major point is that existential risks are a future ‘maybe,’ but discrimination is a real problem today.

“Probably [the risk of extinction] may exist, but I think the likelihood is quite small…. If it’s a bank using it to decide whether I can get a mortgage or not, or if it’s social services on your municipality, then you want to make sure that you’re not being discriminated [against] because of your gender or your colour or your postal code.”

The other element that is less spoken of is the lack of clarity. While regulations are sometimes perceived as a barrier, in some cases they can give certainty. A large bank, for example, may hold back until it has confidence that the AI it is investing in is not going to be banned in 6-12 months' time, or get it in trouble with the authorities. In an ideal situation, regulation could provide the clarity that unleashes further investment.

What’s the AI Act again?

The Artificial Intelligence Act (europa.eu) sets out "a technology-neutral definition of AI systems in EU law" and lays down "a classification for AI systems with different requirements and obligations tailored on a 'risk-based approach'":

 

  • AI systems with 'unacceptable' risks are banned;
  • Use-cases categorised as 'high-risk' must be authorised before their release and are subject to ongoing obligations;
  • 'Limited risk' use-cases are subject to light transparency obligations.
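The tiered obligations above can be sketched as a simple lookup. This is a hypothetical illustration only, not the Act's actual legal taxonomy; the use-case names and tier assignments below are assumptions loosely based on the categories described in this post:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright
    HIGH = "high"                  # authorisation before release, ongoing obligations
    LIMITED = "limited"            # light transparency obligations
    MINIMAL = "minimal"            # no specific obligations

# Hypothetical mapping of example use-cases to tiers; the Act's real
# annexes are far more detailed and legally nuanced.
USE_CASE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "real-time biometric identification": RiskTier.UNACCEPTABLE,
    "cv screening for employment": RiskTier.HIGH,
    "exam scoring in education": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def obligations_for(use_case: str) -> str:
    """Return a one-line summary of the obligations attached to a use-case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.MINIMAL)
    return {
        RiskTier.UNACCEPTABLE: "banned",
        RiskTier.HIGH: "authorisation before release, ongoing obligations",
        RiskTier.LIMITED: "transparency obligations",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]
```

The point of the sketch is the shape of the regime: compliance starts with classifying the use-case, and everything downstream follows from that tier.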

 

The Act clearly tries to meet a need for regulation without overly inhibiting AI innovation. After all, it is still a critical technology battleground in a global and geo-politically tense arena.

 

The vast majority of use-cases will be considered risk-free. But models used for social scoring and biometric identification will be banned, and models affecting safety, health, education, or employment will be subject to strict oversight.

What do we think?

It's "a good step towards the governance of responsible AI," says Chris Jefferson, CTO and Co-Founder of Advai. "It means understanding the importance of use case and risk and knowing how to measure your model for the EU AI Act principles."

 

David Sully, CEO of Advai, adds: "The EU has history of successfully applying regulations to Big Tech. When it comes to AI regulation, they also have first-mover advantage. GDPR has impacted tech companies outside the geographic boundaries of the EU, and there is a chance the EU AI Act will be similarly influential. There is going to be a big challenge in getting regulators, AI companies and large enterprises all talking at the same level and understanding one another. That's where the deep tech research that companies like Advai are conducting on areas such as security and robustness will become so important."

 

It's not easy to conform to regulations, and the AI Act will be no exception. It will require focus at every stage of the AI production line. Companies that use AI will need to be 'compliance-centric', because a single mistake could be costly: violations will draw fines of up to 6 per cent of a company's annual global revenue. There are benefits to taking this approach, though. Understanding the AI you are building and implementing can only improve it. By some estimates, up to 87% of AI projects fail. In some ways, the AI Act can be seen as encouraging best practice, by asking that the head knows what the hands are doing.
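To put that penalty ceiling in concrete terms, here is a back-of-the-envelope sketch. The revenue figure is purely illustrative, and the 6 per cent rate is the ceiling discussed in this post, not legal advice:

```python
def max_fine(annual_global_revenue: float, rate: float = 0.06) -> float:
    """Maximum exposure under a penalty ceiling of up to 6% of
    annual global revenue (the figure quoted for the draft Act)."""
    return annual_global_revenue * rate

# Illustrative only: a company with EUR 500m annual global revenue
exposure = max_fine(500_000_000)  # EUR 30m maximum exposure
```

Even at modest revenues, the ceiling is large enough that compliance failures dwarf the cost of building robustness testing into the pipeline up front.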

 

David continues, “From a robustness perspective, the AI Act could be a positive.  Regulation may not sound fun, but Robust, Safe and Secure AI is the way to start sifting out what is realistically deployable now, versus aspirational – it forces us to start to ask and address the hard questions in AI, rather than simply hoping stuff will work and then being surprised when something odd happens.”


From 'Best Practice and Principles' to Standards

No doubt, the AI Act will be a burden. This is a new age for AI regulation, standards, and quality. It may prove an overstep; if so, that is a problem, because it is always harder to rescind regulation than to make it.

 

But at least it comes with a degree of predictability. A major part of a business' challenge is attempting to predict what governments will do. The Act offers a degree of certainty, leaving no question about how companies need to behave when handling the technology.

 

At Advai, we've spent three years helping large organisations understand and develop Robust, Trustworthy AI. Our position has always been that developing Safe, Secure and Robust AI may be hard, but it delivers the greatest return for the business. The same technology can be applied directly to regulatory requirements.

 


 

See more in our blog post 'What is Robust AI'.

 

 

AI we can trust.   

 

Trust, and avoiding abuse, have to be core. You likely interact with some form of AI every day: for navigation, weather predictions, music playlists or internet searches. This is seamless, and it demonstrates the benefits to be found. You trust Google to navigate you to dinner with your friend.

The key to this though is that, as we give AI greater active control, we need to ensure our trust is well-placed, and that the system is Robust to changing circumstances or abuse by malicious actors.  

Who are Advai?

Advai is a deep tech AI start-up based in the UK. We have spent several years working with UK government and defence to understand and develop tooling for testing and validating AI, deriving KPIs throughout the AI lifecycle so that data scientists, engineers, and decision makers can quantify risks and deploy AI in a safe, responsible, and trustworthy manner.

If you would like to discuss this in more detail, please reach out to [email protected]

Useful Resources