Note: the following content is not necessarily endorsed by the experts listed in the AISCC database.
In May 2023, several hundred AI experts from industry, academia and nonprofit organisations signed a statement declaring that “mitigating the risk of extinction from AI should be a global priority”. But why do so many people consider this a real risk? Here are some of our favourite introductory materials on the topic:
- An Overview of Catastrophic AI Risks (Center for AI Safety; see the corresponding paper here)
- How rogue AIs may arise (Yoshua Bengio)
- Compilation: Why Might Misaligned, Advanced AI Cause Catastrophe? (AI Safety Fundamentals)
- Is Power-Seeking AI an Existential Risk? (Joseph Carlsmith)
- Why AI alignment could be hard with modern deep learning (Ajeya Cotra)
- Compilation: Arguments why AI safety might be less important (Jakub Kraus)
AISCC experts who can talk about this in more detail:
- Ajeya Cotra, Open Philanthropy
- Joseph Carlsmith, Open Philanthropy
- Dan Hendrycks, Center for AI Safety
- Markus Anderljung, Centre for the Governance of AI