MIRI - Machine Intelligence Research Institute

Non-profit research organization in the United States

Basic Information

  • Name: Machine Intelligence Research Institute (MIRI)
  • Official Website: https://intelligence.org/
  • Founder: Eliezer Yudkowsky (2000)
  • Type: Non-profit research organization in the United States
  • Headquarters: Berkeley, California
  • History: Formerly known as the Singularity Institute for Artificial Intelligence

Product Description

MIRI is one of the earliest research institutions in the world to focus on existential risks from AI. Founded in 2000 under its former name, the Singularity Institute for Artificial Intelligence, it has been dedicated to understanding and addressing the critical challenges that artificial superintelligence might pose. MIRI is known for its in-depth theoretical work on AI alignment and for its early warnings about AI risk. In 2024, MIRI announced a major strategic shift, moving its focus from technical alignment research to policy advocacy.

2024 Strategic Transformation

Reasons for Transformation

  • MIRI concluded that AI alignment research, including the field as a whole, is progressing too slowly
  • It judged technical research "extremely unlikely" to succeed before a catastrophe occurs
  • It therefore decided to shift its focus from technical research to policy solutions

New Direction

  • Policy Advocacy: Now the organization's primary activity
  • International Coordination: Calls for international coordination to pause superintelligence development
  • Pause Order: Advocates for a temporary halt to the development of increasingly general AI
  • Governance Framework: Researches international agreement frameworks to prevent premature superintelligence development
  • Technical Alignment Research: Continues but on a significantly reduced scale

Historical Contributions

  • Early Definition of AI Alignment Problems: Began research in this area before it was widely recognized
  • Decision Theory Research: Formalized AI decision-making problems
  • Logical Uncertainty: Studied the mathematical foundations of AI systems
  • AIXI and General Intelligence Theory: Theoretical framework research
  • Talent Cultivation: Many AI safety researchers are associated with MIRI

Key Figures

  • Eliezer Yudkowsky: Founder, one of the most influential thinkers in AI safety
      • Author of works such as *Rationality: From AI to Zombies*
      • In 2023, published an op-ed in TIME magazine urging governments to intervene and halt large-scale AI development

2026 Activities

  • Continues policy research and advocacy
  • Plans to offer research fellowships in 2026 to support technical governance research
  • Publishes reports on international agreement frameworks
  • Limited technical alignment research

Controversies

  • Criticized for being overly pessimistic and alarmist
  • Effectiveness of technical alignment research questioned
  • Strategic transformation sparked internal and external debates
  • Some consider the call for a pause unrealistic

Business Model

  • Non-profit organization
  • Relies on donations from individuals and foundations
  • Major donors have included Peter Thiel and Open Philanthropy

Relationship with OpenClaw

MIRI's research reminds OpenClaw that it must take the risks of AI agent autonomy seriously. Although OpenClaw's current capabilities are far from the superintelligence level that MIRI is concerned about, its ability to autonomously execute tasks already requires careful safety design.
