03 Oct 2023
Superintelligence alignment and AI Safety
OpenAI recently unveiled their 'Introducing Superalignment' initiative 📚, with the powerful statement: “We need scientific and technical breakthroughs to steer and control AI systems much smarter than us.”
We couldn’t agree more. As our Chief Researcher @Damian Ruck says, “No one predicted Generative AI would take off quite as fast as it has. Things that didn’t seem possible even a few months ago are very much possible now.”
We’re biased, though; AI Safety and Alignment represents everything we believe in and have been working on for the last few years. That might mean preventing bias, ensuring security, or maintaining privacy. Or it could mean avoiding a totally different and unforeseen consequence.
How do we meet the challenge of steering AI systems? With AI.
“There’s no point if your guardrail development isn’t happening at the same speed as your AI development.”
07 Jul 2023
Biased Age Estimation Algorithms
Biased age estimation is a great example of algorithmic #discrimination. Such #AI algorithms are therefore unfit for use. Right?
Well, their use looks set to happen anyway. With multiple US federal bills and the UK’s Online Safety Bill seeking to legislate online age verification, improving the robustness of these systems is becoming increasingly urgent.
21 Apr 2021
Beyond Jeopardy: How Can We Trust Medical AI?
In 2011, just two days after Watson beat two human champions at Jeopardy!, IBM announced that their brilliant Artificial Intelligence (AI) would be turning its considerable brainpower towards transforming medicine. Stand aside, Sherlock: Dr. Watson is on the case.