AI Safety Institute (AISI)
Basic Information
- UK AISI (now AI Security Institute):
- Official Website: https://www.aisi.gov.uk/
- Established: November 2023 (after the AI Safety Summit)
- Renamed to AI Security Institute in 2025
- Status: The largest government AI safety research team globally
- US AISI (now CAISI):
- Affiliation: NIST (National Institute of Standards and Technology)
- Renamed to Center for AI Standards and Innovation in 2025
- Type: Government-led AI safety research institution
Product Description
The AI Safety Institute is a government-led AI safety research institution responsible for understanding the capabilities and impacts of advanced AI, as well as developing and testing risk mitigation measures. The UK's AI Safety Institute (renamed AI Security Institute in 2025) is the first of its kind globally, followed by similar institutions in the US, Japan, and other countries. The UK and the US have signed a landmark agreement to jointly test advanced AI models.
Core Functions
UK AI Security Institute (2026)
- Model Testing: Conduct safety assessments on cutting-edge AI models
- Threat Modeling: Evaluate the potential contributions of advanced AI to criminal activities
- Reasoning Monitoring: Collaborate with Google DeepMind to research monitoring techniques for AI "chain-of-thought" reasoning
- Social-Emotional Misalignment: Study the ethical implications of AI's social-emotional misalignment
- Economic System Impact: Explore the potential impacts of AI on economic systems
- Safety Cases: Develop safety case frameworks for more advanced models
- Research Funding: Support external AI safety research through grants
US CAISI
- Standard Setting: Participate in the NIST AI Risk Management Framework
- Evaluation Methods: Develop methodologies for assessing AI systems
- 2025 Policy Changes: Strategic adjustments under the Trump administration
Key Research Areas
- Risks of loss of control and autonomy in advanced AI systems
- Assessment of AI applications in crime and malicious use
- Model alignment and safety evaluation methods
- Explainability and auditability of AI systems
- International AI safety cooperation frameworks
UK-US Collaboration
- Joint testing of advanced AI models
- Sharing research findings and methodologies
- Exchange of expert talent
- Co-development of evaluation standards
Global Expansion
- Establishment of similar institutions in Japan, South Korea, and other countries
- Global coordination driven by the 2023 AI Safety Summit
- Information-sharing network among AISIs worldwide
Relationship with OpenClaw
AISI's safety evaluation methods and standards can be adopted by OpenClaw to assess and improve the safety of its AI agents. AISI's research on the risks of autonomous AI systems is particularly relevant to platforms like OpenClaw that host autonomous AI agents.