AI will change the world as we know it

Let’s ensure those changes are positive.

Our Mission

Stanford AI Alignment (SAIA) is a research community at Stanford University and an initiative of the Stanford Existential Risks Initiative (SERI). Our mission is to help mitigate the full range of AI risks by building the AI safety community at Stanford, conducting AI safety research, and accelerating students into high-impact careers in AI safety.

Why AI Safety

AI systems have improved at a staggering rate. While AI can benefit humanity, its rapid development presents risks we have never had to confront before. From deepfakes that skew our perception of reality, to autonomous weapons, to the misuse of AI to create novel pathogens and nerve agents, AI poses a wide spectrum of dangers. And what happens when we build AI systems more intelligent than we are? AI may pose catastrophic or even existential risks.

As AI becomes increasingly integrated into our daily lives, we must ensure that these systems are aligned with human values and goals, and that regulations and strategies are in place to mitigate the full range of harms they can cause. AI safety is still a young and small field: only a few thousand people worldwide work on it full-time.

Let’s change that.