Machine Intelligence Research Institute (MIRI)
The Machine Intelligence Research Institute (MIRI) is a nonprofit research organization focused on ensuring that advanced artificial intelligence has a positive impact on humanity. Founded in 2000 as the Singularity Institute for Artificial Intelligence, it adopted its current name in 2013.
Overview
MIRI was one of the earliest organizations dedicated to AI safety research. The institute focuses on foundational mathematical research aimed at understanding and solving the core technical challenges of building AI systems whose goals remain aligned with human values.
Research Focus
Agent Foundations
Research on the mathematical foundations of rational agency, including decision theory, logical uncertainty, and embedded agency.
Alignment Theory
Work on corrigibility, value learning, and other theoretical frameworks for building AI systems that remain aligned with human values.
AI Forecasting
Research on predicting AI development trajectories and potential risks.
Key Contributions
- Early advocacy for AI safety as a research priority
- Development of the concept of corrigibility
- Research on logical uncertainty and decision theory
- Training and supporting AI safety researchers
Notable Publications
- "Corrigibility" (Soares et al., 2015)
- "Agent Foundations for Aligning Machine Intelligence with Human Interests" (Soares & Fallenstein, 2014)
- "Embedded Agency" (Demski & Garrabrant, 2019)
Key People
- Eliezer Yudkowsky - Co-founder, Research Fellow
- Nate Soares - Executive Director