What is AI Alignment?
As an emerging field, AI Alignment has many competing definitions. Broadly, it is a research field that tackles two questions: how do we ensure that the development of advanced artificial intelligence benefits humanity, and how do we avoid catastrophic failures while building advanced AI systems?
Starter resources
Short Article: The case for taking AI seriously as a threat to humanity by Kelsey Piper, Vox
Article: Preventing an AI-related catastrophe by Ben Hilton, 80,000 Hours
Video: Intro to AI Safety by Rob Miles
Report: Benefits & Risks of Artificial Intelligence by Ariel Conn, Future of Life Institute
Syllabus: AGI Safety Fundamentals Curriculum by Richard Ngo, OpenAI
More: Lots of Links from AI Safety Support
How can I work on this problem?
A career in AI Alignment may be one of the most impactful ways to spend your working hours. As a university group, much of our focus is on preparing students to pursue such careers.
Starter resources
Career guide: Guide to working in AI policy and strategy from 80,000 Hours
Career guide: Guide to pursuing a career in technical AI safety research from 80,000 Hours
Career guide: Your biggest opportunity to make a difference: our guide to what makes for a high-impact career by 80,000 Hours
More: Lots of Links from AI Safety Support