Applications are now open for AI Safety Fundamentals! Apply here by Wednesday, September 18th, 11:59pm EST.


MAIA runs programs for people at all skill levels to explore deep learning and AI safety.

AI Safety Fundamentals

Learn the basics of AI safety and how to prevent harm from AI systems. MAIA offers two intro tracks, one focused on technical topics and the other on policy. We recommend applying to the track you are most interested in; you can also participate in both tracks simultaneously.

Machine Learning track

The machine learning track of AI Safety Fundamentals is a seven-week research-oriented reading group on technical AI safety. Topics covered include neural network interpretability, learning from human feedback, goal misgeneralization in reinforcement learning settings, and potential catastrophic risks from advanced AI systems. The program is open to both undergraduate and graduate students. Students with machine learning experience are especially encouraged to apply, although no prior experience is required.

Participants meet weekly in small sections facilitated by a TA who is a graduate student or upperclassman with experience in AI safety research. Dinner is provided, and no work is assigned outside of weekly meetings. Our curriculum is based on a course developed by OpenAI researcher Richard Ngo.

Apply here by Wednesday, September 18th, 11:59pm EST.

Policy track

The policy track of AI Safety Fundamentals is a seven-week reading group on the foundational governance and policy challenges posed by advanced AI systems. Topics discussed include the proliferation of dangerous AI models, AI-induced explosive economic growth, and methods for predicting when transformative AI will be developed.

Participants meet weekly in small sections facilitated by a TA who is a graduate student or upperclassman with relevant experience. Dinner is provided, and no work is assigned outside of weekly meetings. Our curriculum is based on a course developed by experts on AI policy.

Apply here by Wednesday, September 18th, 11:59pm EST.

Workshops

Every semester, MAIA and the AI Student Safety Team at Harvard (AISST) collaborate to run weekend workshops on AI safety. We gather students, professors, and professionals working on AI safety to discuss and collaborate on cutting-edge AI safety research and policy.

More information about the Fall 2024 workshops will be released soon.

Bootcamps

MAIA, in partnership with the Cambridge Boston Alignment Initiative (CBAI), hosts ML bootcamps between semesters, aimed at quickly bringing students up to speed on deep learning and building skills useful for conducting AI safety research in the real world.

The bootcamps are in person, with teaching assistants experienced in ML and AI safety research. We follow the highly rated MLAB (Machine Learning for Alignment Bootcamp) curriculum designed by Redwood Research, which provides a thorough, hands-on introduction to state-of-the-art ML techniques (e.g. transformers, deep RL, mechanistic interpretability) and is meant to get you to the level of replicating ML papers in PyTorch. The program’s only prerequisites are comfort with Python and introductory linear algebra.

We may run a bootcamp in January 2025; if you would like to participate, please express interest here.