SAIA Fall 2024 Programs

Governance and Humanities in Alignment

  • This reading group is open to the public and meets to discuss recent AI policy and governance issues. The goal is to gain a better understanding of current problems in AI governance and of how different stakeholders are attempting to solve them.

  • SAIA leaders will run a weekly seminar covering the risks, proposals, and needs relevant to ensuring that advanced AI systems are built and used safely. It is based on the AI Safety Fundamentals Governance course designed by experts. Our fellowship is application-based and runs once per quarter.

  • A²E² is a discussion group showcasing speakers, case studies, and projects to highlight the social and professional value of ethical AI practices.

  • Biweekly field trips and events focused on humanities- and philosophy-adjacent aspects of AI safety.

Community Events and Partnerships

  • SAIA’s weekly group meetings are a general Schelling point for SAIA members and community members interested in AI safety to meet up. After announcements about upcoming SAIA opportunities and global AI safety news, a speaker will present some AI safety research or lead a discussion, followed by a social for attendees. Past speakers include Dan Hendrycks, Nicholas Carlini, and Evan Hubinger.

  • SAIA will host quarterly socials for club members.

  • IBAR is a weekend retreat connecting promising students with AI safety researchers and professionals working on critical challenges. SAIA co-hosts it with alignment student groups at UC Berkeley, USC, UCLA, and Caltech.

  • Lapis Labs is a student-led academic research group focused on advancing machine learning and artificial intelligence research. Operating independently, they have published over 14 papers in top conferences such as ICML, ICLR, NeurIPS, and CVPR, collaborating with organizations like IBM Research, Intel Labs, and the Center for AI Safety. Through our partnership with Lapis Labs, SAIA refers members eager for alignment research experience to their impactful projects.

  • SAIA members attend Constellation’s Academic Afternoons to present their research and network with the professional alignment community in Berkeley.

  • SPAR is a virtual, part-time program that allows students and professionals to work together to develop valuable experience in AI Safety through mentorship. SPAR is now its own organization outside of SAIA, and we highly recommend people participate. Learn more here.

Technical and Career Development

  • Weekly reading club where members explore and discuss blog posts, article excerpts, book snippets, and tweets on critical issues in AI safety and ethics.

  • SAIA members read and discuss two chosen papers on technical AI safety each week.

  • SAIA will run a hybrid upskilling bootcamp following the ARENA (Alignment Research Engineer Accelerator) curriculum.

  • This event is a space for SAIA members to think critically and share ideas about careers in AI safety. Attendees are encouraged to work on career-related activities, like applying for internships, drafting career-related emails, and scheduling meetings.

  • The curriculum and lectures for STS 10SI: Introduction to AI Safety, SAIA’s former course at Stanford, are available here. The course is entirely open-sourced and is an excellent resource for those interested in self-studying AI safety.

Not sure which program is right for you?

Program FAQs

  • Not a problem at all! Our weekly meeting and AI policy reading group are open to all and beginner-friendly. We also have other events, like A²E² and The Play of AI, that are very accessible. If you want to learn more, we recommend taking our AI governance fellowship, self-studying STS 10SI, or taking CS 120/MS&E 338.

  • Mitigating the risks of AI is an interdisciplinary problem that should not just involve computer scientists! Philosophy, neuroscience, cognitive science, law, policy, engineering, and anthropology are among the many disciplines that are critical to effectively regulating, improving, and adapting to AI.

  • Different events have different recommended backgrounds. Generally, we recommend that those attending our technical events have a firm grounding in neural networks and classic AI models. A solid foundation in large language models, especially transformer architectures, is a big bonus.

  • Join our newsletter and Slack — see our get involved page for more information.

  • Our events are hosted in person on the Stanford campus. Some events may be virtual or offer hybrid accommodations, but you should reach out to our team to check.

Our Members Have Worked With:

Past Programming

SAIA Spring 2024 Programs

Governance and Humanities in Alignment

  • This reading group is open to the public and meets to discuss recent AI policy and governance issues. The goal is to gain a better understanding of current problems in AI governance and of how different stakeholders are attempting to solve them.

  • SAIA leaders will run a weekly seminar covering the risks, proposals, and needs relevant to ensuring that advanced AI systems are built and used safely. It is based on the AI Safety Fundamentals Governance course designed by experts. Our fellowship is application-based and runs once per quarter.

  • A²E² is a new project aiming to bring real-world stories, philosophy, and a speaker series together to show that ethical AI practices are not just socially responsible but also crucial for successful careers and companies. SAIA hosted its inaugural mixer and plans to roll it out fully in the fall.

  • SAIA went on our first field trip to the Misalignment Museum in SF, where Museum Curator Audrey Kim generously gave us a guided tour.

  • SAIA and the Stanford Improvisors (SImps) co-hosted the workshop “The Play of AI: What Role Does Humanity Play?” Members participated in super fun improv games and exercises and learned from a discussion panel.

Weekly Meeting x Speaker and Socials

  • SAIA’s weekly group meetings are a general Schelling point for SAIA members and community members interested in AI safety to meet up. After announcements about upcoming SAIA opportunities and global AI safety news, a speaker will present some AI safety research or lead a discussion, followed by a boba social for attendees. Past speakers include Dan Hendrycks, Nicholas Carlini, and Evan Hubinger.

  • SAIA will host quarterly socials for club members.

  • IBAR is a weekend retreat connecting promising students with AI safety researchers and professionals working on critical challenges. SAIA co-hosts it with alignment student groups at UC Berkeley, USC, UCLA, and Caltech.

Technical and Career Development

  • SAIA members read and discuss two chosen papers on technical AI safety each week.

  • SAIA is piloting a workshop following the ARENA (Alignment Research Engineer Accelerator) curriculum to provide technical upskilling for SAIA members.

  • SPAR is a virtual, part-time program that allows students and professionals to work together to develop valuable experience in AI Safety through mentorship. SPAR is now its own organization outside of SAIA, and we highly recommend people participate. Learn more here.

  • This event is a space for SAIA members to think critically and share ideas about careers in AI safety. Attendees are encouraged to work on career-related activities, like applying for internships, drafting career-related emails, and scheduling meetings. This event is co-run with Stanford Effective Altruism.

  • Each quarter, SAIA leaders review submitted research from members and compile a research report for the website.

  • SAIA members attend Constellation’s Academic Afternoons to present their research and network with the professional alignment community in Berkeley.

  • The curriculum and lectures for STS 10SI: Introduction to AI Safety, SAIA’s former course at Stanford, are available here. The course is entirely open-sourced and is an excellent resource for those interested in self-studying AI safety.

  • CS 120 [Formerly STS 10SI]: Introduction to AI Safety and MS&E 338: Aligning Superintelligence are Stanford courses unaffiliated with SAIA that we recommend to those interested in AI safety.