Eliezer Yudkowsky

Person · Researcher · Writer
Role: Research Fellow, MIRI
Known For: AI Safety Advocacy, LessWrong
Founded: MIRI

Eliezer Yudkowsky is an AI safety researcher and writer who co-founded the Machine Intelligence Research Institute (MIRI). He is one of the earliest and most influential advocates for taking AI existential risk seriously.

Career

MIRI (2000-present)

Yudkowsky co-founded the Singularity Institute for Artificial Intelligence in 2000, making it one of the first organizations dedicated to AI safety research; it was renamed the Machine Intelligence Research Institute (MIRI) in 2013. He continues to serve there as a Research Fellow.

LessWrong

In 2009, Yudkowsky founded LessWrong, a community blog focused on rationality that became an influential hub for discussions of AI safety and effective altruism.

Key Contributions

  • AI Risk Awareness: Pioneered public discourse on existential risk from AI
  • Coherent Extrapolated Volition: Proposed CEV, an approach to value alignment that would aim an AI at what humanity would want under idealized conditions of knowledge and reflection, rather than at its current stated preferences
  • Rationality Writing: Extensive writings on human reasoning and decision-making
  • Corrigibility: Co-authored early foundational work on corrigibility, the problem of designing AI systems that accept correction and shutdown rather than resisting them

Views

Yudkowsky holds a pessimistic view of humanity's prospects for surviving the development of superintelligent AI. He has argued that current approaches to AI safety are insufficient and that alignment is fundamentally difficult.

He has been critical of large AI labs for what he regards as insufficient attention to safety, and has called for a halt to frontier AI development, most prominently in a March 2023 TIME op-ed arguing that a pause alone would not go far enough.

Writing

  • "Rationality: From AI to Zombies" (2015) - Collection of essays on rationality
  • "Harry Potter and the Methods of Rationality" - Popular fanfiction exploring rationalist themes
  • Extensive posts on LessWrong and the AI Alignment Forum

Last updated: November 27, 2025