Our Mission

Stanford AI Alignment (SAIA) is a student group and research community at Stanford University. SAIA accelerates students into highly impactful careers in AI alignment and governance, builds the AI safety community at Stanford, and conducts research to ensure that the development of advanced AI is safe and beneficial for everyone.

Leadership Team (2023-24)

  • Gabriel Mukobi

    FOUNDER & PRESIDENT
    Gabe is the founder and current president of SAIA. He is an M.S. student in Computer Science building a career in AI safety research and strategy with an emphasis on AI governance and coordination. His other interests include animal welfare, music, virtual reality, games, fantasy, film, 3D art, photography, and tea!

  • Max Lamparth

    POSTDOCTORAL RESEARCHER
Max Lamparth is a postdoc at SERI & CISAC. His research aims to ensure the safe and responsible use of AI, reducing risks and benefiting society. He studies the internal representations and emergent capabilities of large language models in collaboration with the Stanford Computer Science Department. Max holds a PhD from the Technical University of Munich.

  • Emma Beharry

    Emma is a sophomore in computer science who is passionate about developing ethical and aligned computer systems. She joined SAIA her freshman year through its student-initiated course STS 10SI and was then inspired to complete SAIA’s Supervised Program in Alignment Research (SPAR) in the spring. Afterwards, she joined the SAIA Executive Board and is now helping direct STS 10SI and outreach initiatives.

  • Patrick Ye

Patrick is a computer science major at Stanford excited about AI and interdisciplinary frontiers. He's passionate about complex systems, the human intellect, and world-building prospects. Nothing excites him more than freshly baked ideas!

  • Sarah Chen

    Sarah Chen is a junior at Stanford working on research in NLP and interpretability.

  • Scott Viteri

Scott Viteri is a fifth-year CS PhD student advised by Clark Barrett, researching how to produce honesty and empathy in language models. His main research directions involve positing precursors to human prosocial behavior and creating ML training setups that reflect those conditions.

  • Lyna Kim

Lyna is a senior studying Computer Science (B.S.) and Management Science and Engineering (M.S.) at Stanford. Her background is primarily in research at the intersection of AI and policy, previously in climate, and she has conducted projects with the Stanford AI Lab and Stanford Law School. In her free time, she enjoys hiking to waterfalls and performing at Stanford's Viennese Ball.

  • Lora Xie

    Lora is a junior studying math and CS. She is interested in understanding AI models (both in the behavioral psychology way and the cognitive science way), evaluating them for catastrophically risky capabilities and tendencies, and messy, inelegant approaches to AI safety. She would love to be alive and happy under AGI.

  • Ishan Gaur

Ishan Gaur is an EE coterm student starting his PhD at the Berkeley AI Research Lab next fall. He is interested in preference learning, model organisms of misalignment, and AI agents for scientific discovery. He is currently a member of the Lundberg Lab, where he uses machine learning to map the spatial organization of the human proteome.

  • Tom Shlomi

    Tom is a Harvard AI Safety Team member on a gap year working at an AI inference startup, running a forecasting tournament, and helping with SAIA events.

  • Arpit Gupta

Arpit Gupta is an LL.M. (Law, Science and Technology) student at Stanford Law School. He is a lawyer with about four years of experience in technology law and policy; his most recent role was with the Cyber Laws Division in the Ministry of Electronics and IT, Government of India.

  • Alana Xiang

Alana is a Stanford student conducting research in AI alignment and AI control. They are currently interested in developing detailed threat models for future AI systems. Alana has previously been a MATS research scholar and a Summer Research Fellow at the Center on Long-Term Risk, and has worked on model interactions at ARC Evals.