Future of Humanity Institute

Type: Academic Research Institute
Founded: 2005
Location: Oxford, UK
Founder: Nick Bostrom
Status: Closed (2024)

The Future of Humanity Institute (FHI) was a multidisciplinary research institute at the University of Oxford, focused on big-picture questions about humanity's future, including existential risks from advanced AI.

History

FHI was founded in 2005 by philosopher Nick Bostrom within Oxford's Faculty of Philosophy. It was one of the first academic institutions to treat existential risk from AI as a serious research topic.

In 2024, FHI closed after years of administrative friction with the university. Many of its researchers moved to other organizations focused on AI safety and existential risk.

Research Areas

  • Existential risk analysis
  • AI safety and alignment
  • Macrostrategy and global priorities
  • Human enhancement ethics
  • Simulation argument and anthropics

Notable Contributions

  • Early framing of AI existential risk
  • Development of existential risk studies as an academic field
  • Training many researchers now working in AI safety
  • Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies

Legacy

Despite its closure, FHI's influence continues through its alumni network and the research paradigms it established. Many leading AI safety researchers either worked at FHI or were influenced by its work.

Last updated: November 28, 2025