Get Involved
Join the MAIA community and contribute to AI safety research and education at MIT.
AI Safety Fundamentals
MAIA runs programs for people at all skill levels to explore deep learning and AI safety.
Machine learning track
The machine learning track of AI Safety Fundamentals is a seven-week, research-oriented reading group on technical AI safety. Topics include neural network interpretability, learning from human feedback, goal misgeneralization in reinforcement learning settings, and potential catastrophic risks from advanced AI systems. The program is open to both undergraduate and graduate students. Students with machine learning experience are especially encouraged to apply, though none is required.
Participants meet weekly in small sections facilitated by a TA who is a graduate student or an upperclassman with AI safety research experience. Dinner is provided, and no work is assigned outside of the weekly meetings. Our curriculum is based on a course developed by OpenAI researcher Richard Ngo, and we revise it every year to keep it up to date. For Fall 2024, see our 284-page AI safety textbook.
Policy track
We are not running the AISF governance track this year. Please see the Introductory AI Policy Fellowship run by the Harvard AI Safety Student Team (HAIST).
Every semester, HAIST runs an eight-week introductory reading group on the foundational policy and governance issues posed by advanced AI systems. The fellowship meets weekly in small groups; dinner is provided, and no work is required beyond the meetings.
Questions discussed include:
- How much progress in AI should we expect over the next few years?
- What are the risks associated with the misuse and misalignment of advanced AI systems?
- How can regulators audit frontier AI systems for potentially dangerous capabilities?
- How could novel hardware mechanisms prevent malicious or irresponsible actors from creating powerful AI models?
Workshops
Every semester, MAIA and the AI Safety Student Team at Harvard (AISST) jointly run weekend workshops that gather students, professors, and professionals to discuss and collaborate on the cutting edge of AI safety research and policy.
More information about the Spring 2025 workshops will be released soon.
Bootcamps
MAIA, in partnership with the Cambridge Boston Alignment Initiative (CBAI), hosts ML bootcamps during semester breaks, aimed at quickly getting students up to speed on deep learning and building practical skills for conducting AI safety research.
The bootcamps are in-person and staffed by teaching assistants experienced in ML and AI safety research. We follow the highly rated MLAB (Machine Learning for Alignment Bootcamp) curriculum designed by Redwood Research, which provides a thorough, hands-on introduction to state-of-the-art ML techniques (e.g., transformers, deep RL, mechanistic interpretability) and is meant to get you to the level of replicating ML papers in PyTorch. The program's only prerequisites are comfort with Python and introductory linear algebra.
Membership
Being a member of the MAIA community comes with both opportunities and responsibilities. Membership entails:
- 24/7 access to our office, where you can co-work on AI safety with other MAIA members
- Compute and research tools
- Weekly member meetings to read and discuss alignment research
- Small-group discussions with alignment researchers and professors (recent guests included Chris Olah from Anthropic and Daniel Kokotajlo from OpenAI)
- Connections with, and potential opportunities to collaborate with, top alignment organizations like Redwood Research, the U.S. AI Safety Institute, and METR
- Opportunities to participate in AI safety community workshops and connect with leaders in policy, academia, and technical research (e.g., Professors Hidenori Tanaka and David Bau)
- Participation in a community of talented undergraduate and graduate students interested in reducing risks from advanced AI (social events and talks, but also fantastic spontaneous discussions)
Members generally contribute to the community by running or participating in workshops, discussions, socials, hackathons, initiatives, and more. While we are an MIT-recognized student group, membership is not restricted to MIT students. Independent researchers and students from other universities are welcome!
If you aren't very familiar with AI safety, we recommend applying to AI Safety Fundamentals above. We typically offer participants in that program a streamlined membership application.
The application has both technical and non-technical portions and typically takes around an hour to complete. Admissions are rolling, but the board typically meets at the start and end of the semester to review applications, so if we are slow to respond, don't hesitate to email us at maia-exec@mit.edu.
Questions? Contact us at maia-exec@mit.edu