Ilya Sutskever
Ilya Sutskever is a leading AI researcher and co-founder of Safe Superintelligence Inc. (SSI). He was previously co-founder and Chief Scientist at OpenAI, where he led research efforts on GPT models.
Career
Early Research
Sutskever studied under Geoffrey Hinton at the University of Toronto and was a co-author of the influential AlexNet paper (2012), which demonstrated the power of deep convolutional networks and sparked the modern deep learning era.
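The paper's central idea, learning image features end to end with stacked convolutional layers feeding a classifier head, can be illustrated with a much smaller network. The PyTorch sketch below is a hypothetical, scaled-down example, not the original AlexNet; its layer sizes and the 32x32 input shape are illustrative assumptions.

```python
# Minimal sketch of an AlexNet-style convolutional classifier (illustrative
# layer sizes; not the original architecture from the 2012 paper).
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # learn local image filters
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                             # downsample 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                  # (N, 64, 8, 8) for 32x32 RGB inputs
        return self.classifier(x.flatten(1))  # class logits

# Usage: classify a batch of four 32x32 RGB images.
logits = TinyConvNet()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```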
OpenAI (2015-2024)
As Chief Scientist at OpenAI, Sutskever oversaw the development of the GPT series of language models. With Jan Leike, he co-led the Superalignment team, a research effort focused on ensuring that superintelligent AI systems remain aligned with human values.
Safe Superintelligence Inc. (2024-present)
In 2024, Sutskever departed OpenAI to co-found SSI, a company devoted exclusively to developing safe superintelligence. This safety-first focus reflects his growing concern about alignment challenges.
Views on AI Safety
Sutskever has emphasized that:
- Superintelligence is coming sooner than many expect
- Alignment research is critically important
- Current approaches may be insufficient for superintelligent systems
- Safety and capabilities research should proceed together
Key Contributions
- AlexNet (with Krizhevsky and Hinton)
- Sequence-to-sequence learning (with Vinyals and Le)
- GPT architecture development
- Superalignment research program