Nick Bostrom

Role: Director, Future of Humanity Institute (former)
Known For: Superintelligence: Paths, Dangers, Strategies
Institution: Oxford University
Field: Philosophy, Existential Risk

Nick Bostrom is a Swedish philosopher known for his work on existential risk, the anthropic principle, and superintelligence. He founded the Future of Humanity Institute (FHI) at Oxford University and authored the influential book Superintelligence: Paths, Dangers, Strategies.

Career

Future of Humanity Institute

Bostrom founded FHI in 2005, establishing one of the first academic research centers dedicated to studying existential risks, including those from advanced AI. The institute became a leading hub for AI safety and global catastrophic risk research until its closure in 2024.

Superintelligence

Published in 2014, Superintelligence became a landmark work in AI safety discourse. The book argues that superintelligent AI could pose an existential threat to humanity and explores potential control problems and safety strategies.

Key Concepts

  • Orthogonality Thesis: intelligence and final goals are orthogonal; almost any level of intelligence can be combined with almost any final goal, so high intelligence does not imply benevolent goals
  • Instrumental Convergence: sufficiently intelligent agents tend to pursue certain instrumental subgoals, such as self-preservation and resource acquisition, regardless of their final goals
  • Treacherous Turn: an AI might behave cooperatively while weak, then pursue its actual objectives once it is powerful enough to do so safely

Other Contributions

  • Simulation Argument - the trilemma that at least one of three propositions holds, one being that we almost certainly live in a computer simulation
  • Astronomical Waste - an argument that delayed technological development carries an enormous opportunity cost in potential value
  • Analysis of global catastrophic and existential risks
  • Contributions to transhumanist philosophy
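The probability claim behind the Simulation Argument can be sketched with the simple fraction from Bostrom's 2003 paper (notation paraphrased here; see the paper for the full derivation):

    f_sim = (f_p × N) / (f_p × N + 1)

where f_p is the fraction of human-level civilizations that reach a posthuman stage capable of running ancestor-simulations, N is the average number of ancestor-simulations such a civilization runs, and f_sim is the resulting fraction of observers with human-type experiences who are simulated. Unless f_p × N is very small, f_sim is close to 1, which yields the trilemma: almost no civilizations reach posthumanity, or almost none of those run ancestor-simulations, or we are almost certainly living in a simulation.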

Influence

Bostrom's work has significantly influenced how technologists, policymakers, and researchers think about advanced AI risks. His concepts are foundational to the AI alignment field.

Last updated: November 28, 2025