About Alignment Wiki

Alignment Wiki is an encyclopedia of AI alignment research, theories, organizations, and key figures. Our goal is to provide neutral, well-sourced information about the field.

Mission

As AI systems become more capable, it becomes increasingly important to ensure they remain beneficial and aligned with human values. Alignment Wiki aims to make the field more accessible by documenting:

  • Technical approaches to AI alignment
  • Key researchers and their contributions
  • Organizations working on AI safety
  • Foundational papers and their insights
  • Open problems and ongoing debates

Editorial Policy

Alignment Wiki strives to be neutral and encyclopedic. We present the field's different perspectives fairly and avoid advocating for any particular approach. All content should be well-sourced and verifiable.

Contributing

Alignment Wiki is a moderated wiki. Anyone can suggest edits, which are reviewed by moderators before publication. This approach maintains quality while allowing broad participation.

To contribute:

  1. Create an account
  2. Navigate to any article
  3. Click "Suggest Edit" to propose changes
  4. A moderator will review your suggestion

Contact

For questions, suggestions, or issues, please reach out through GitHub or submit feedback via the edit suggestion system.

License

Content on Alignment Wiki is available under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0).