https://cetas.turing.ac.uk/sites/default/files/2024-01/01.17.2024_assurance_report.pdf
Expert Contributor to the Alan Turing Institute's report on national security uses of Generative AI.
The report arrives at a recommended System Card template, designed to meet the high standards of the national security sector. It also offers excellent insight for private-sector organisations seeking the advantages of external AI vendors.
In the report, AI assurance is defined as the portfolio of processes required to evaluate and communicate, iteratively throughout the AI lifecycle, the extent to which a given AI system:
- Does everything it says it is going to do, and nothing it shouldn’t do.
- Complies with the values of the deploying organisation.
- Is appropriate to the specific use case and envisioned deployment context.
The report compared the strengths and weaknesses of three methods for documenting AI system properties.
As a result, they propose that an ideal method for documenting AI system properties should:
- Balance the level of detail.
- Be interpretable to non-experts.
- Ensure a consistent structure.
- Accommodate context-specific flexibility.
- Build on industry practice while pushing for greater transparency.
- Clarify integration with other processes (e.g. legal and procurement).
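To make the shape of such a template concrete, here is a minimal sketch of a System Card captured as a structured record. The field names below are our own illustrative assumptions, not the template recommended in the report.

```python
from dataclasses import dataclass, field


@dataclass
class SystemCard:
    """Illustrative System Card record; field names are assumptions, not the report's template."""
    system_name: str
    vendor: str
    intended_use: str                                           # the specific use case and deployment context
    out_of_scope_uses: list[str] = field(default_factory=list)  # what the system should not do
    evaluation_summary: str = ""                                # plain-language summary for non-experts
    evaluation_details: dict = field(default_factory=dict)      # fuller detail for expert reviewers
    values_alignment: str = ""                                  # compliance with the deploying organisation's values
    legal_and_procurement_notes: str = ""                       # integration with legal and procurement processes
    last_reviewed: str = ""                                     # supports iterative review across the AI lifecycle


# Example of a card with a consistent structure but context-specific content.
card = SystemCard(
    system_name="Example triage assistant",
    vendor="Example vendor",
    intended_use="Summarising open-source reports for analyst review",
    out_of_scope_uses=["Fully automated decision-making"],
    evaluation_summary="Meets agreed accuracy and robustness thresholds for the stated use case.",
)
print(card)
```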
Check out our LinkedIn post and download the summary:
https://www.linkedin.com/posts/advai_assurance-for-third-party-ai-vendors-activity-7155516946447286272-pGhS?utm_source=share&utm_medium=member_desktop