17 Mar 2023

What is the NIST AI Risk Management Framework?

Welcome to the "What is...?" series: bite-size blogs on all things AI.

In this instalment we explore the what, where and why of the NIST AI Risk Management Framework (AI RMF). What is it? Where is it used? Why is it important?

Words by
Chris Jefferson

The AI RMF is intended to provide a resource to organizations designing, developing, deploying, or using AI systems to manage risks and promote trustworthy and responsible development and use of AI systems. Although compliance with the AI RMF is voluntary, given regulators’ increased scrutiny of AI, the AI RMF can help companies looking for practical tips on how to manage AI risks.

The AI RMF is divided into two parts: (I) Foundational Information and (II) Core and Profiles. Part I addresses how organizations should consider framing risks related to their AI systems, including:

  1. Understanding and addressing the risk, impact and harm that may be associated with AI systems.
  2. Addressing the challenges for AI risk management, including those related to third-party software, hardware and data.
  3. Incorporating a broad set of perspectives across the AI life cycle.

Part I also describes trustworthy AI systems, including characteristics such as validity and reliability, safety, security and resilience, accountability and transparency, explainability and interpretability, privacy-enhanced, and fairness with harmful bias managed.

Part II describes features to address risks associated with the use and deployment of AI systems. These features include:

  1. Governance: a culture of risk management;
  2. Mapping: context is recognized and risks identified;
  3. Measurement: identified risks are assessed, analysed or tracked; and
  4. Management: risks are prioritized and acted upon based on projected impact.
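As a purely illustrative sketch (not part of the AI RMF itself, and with class and field names of our own invention), the four core functions can be pictured as stages acting on a simple risk register:

```python
from dataclasses import dataclass, field

@dataclass
class Risk:
    """A single identified AI risk, e.g. biased training data."""
    description: str
    context: str              # Map: where in the AI lifecycle it arises
    severity: int = 0         # Measure: assessed impact (0-10)
    likelihood: int = 0       # Measure: assessed probability (0-10)
    mitigation: str = ""      # Manage: planned response

    @property
    def priority(self) -> int:
        # Manage: prioritise by projected impact
        return self.severity * self.likelihood

@dataclass
class RiskRegister:
    """Govern: maintaining the register at all embodies a documented
    risk-management culture and process."""
    risks: list = field(default_factory=list)

    def map_risk(self, description: str, context: str) -> Risk:
        # Map: recognise the context and record the risk
        risk = Risk(description, context)
        self.risks.append(risk)
        return risk

    def measure(self, risk: Risk, severity: int, likelihood: int) -> None:
        # Measure: assess and track the identified risk
        risk.severity, risk.likelihood = severity, likelihood

    def manage(self) -> list:
        # Manage: act on risks in order of projected impact
        return sorted(self.risks, key=lambda r: r.priority, reverse=True)
```

For example, a team might map "biased training data" during data collection, measure it as high severity and likelihood, and then see it surface first when the register is sorted for action. Real RMF adoption is of course an organisational process, not a data structure; the sketch only mirrors how the four functions feed one another.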

Where is it used?

The AI RMF can be applied wherever AI forms part of a business. Because the framework is voluntary, adopting it is not required; importantly, though, adoption demonstrates that a business is considering the risks associated with AI and building a governance process to mitigate them. The reputational impact can be very positive for an organisation, as it demonstrates a responsible approach to AI while also providing a resilient and robust framework for managing AI systems over their lifecycle.

This fundamentally promotes trust. The framework also encourages a diverse set of stakeholders, maximising diversity of thought and knowledge of risks. This matters because it means any system built on the framework is considered at every point in its lifecycle, from development through to deployment and maintenance.

Likely early adopters of this framework include:

  1. Government;
  2. Defence;
  3. Finance;
  4. Policing and security;
  5. Healthcare;
  6. Safety-critical applications.

The Framework is designed to equip organizations and individuals – referred to as AI actors – with approaches that increase the trustworthiness of AI systems, and to help foster the responsible design, development, deployment, and use of AI systems.

Why is it important?

The main reason the AI RMF is important is that it is a methodical framework for managing AI risk, drawing on global standards and guidance from ISO, IEEE, and the OECD. It will likely form the foundation for future standards, which may in turn become a route to certification of de-risked AI systems.

Additionally, governments and regulators are looking to design AI regulation that will protect society from adverse uses of AI and manage its risks at a policy level.

Future regulation is likely to borrow heavily from this risk framework and to align with emerging standards more broadly.

Who are Advai?

Advai is a deep-tech AI start-up based in the UK. We have spent several years working with UK government and defence to understand and develop tooling for testing and validating AI, deriving KPIs throughout the AI lifecycle so that data scientists, engineers, and decision makers can quantify risks and deploy AI in a safe, responsible, and trustworthy manner.

If you would like to discuss this in more detail, please reach out to [email protected]
