Applications for our virtual Summer AISF are now open! Apply by May 22nd!

Get Involved

Join the MAIA community and contribute to AI safety research and education at MIT.

AI Safety Fundamentals

An introductory fellowship in the field of AI safety.

Fundamentals Fellowship

The AI Safety Fundamentals Fellowship (AISF) is the flagship introduction to AI safety run by MAIA (MIT AI Alignment) and the main way people get involved with the group. It's an 8-week reading group designed to help you understand why the field matters and what researchers and policymakers are doing about risks from advanced AI.

Over the course of the fellowship, you'll explore:

  • The current trajectory of AI development
  • Empirical evidence for misalignment
  • Threat models for how misalignment could cause harm
  • Technical approaches to AI safety
  • The AI policy landscape
  • Opportunities and careers in AI safety

The fellowship is facilitated by MAIA members with experience in AI safety research. During the fall and spring, sections of about 10 fellows meet weekly in our office with two facilitators, and dinner is provided. The summer program runs virtually, with one facilitator and around 6 fellows per section. No work is assigned outside of the weekly meetings, so the fellowship is easy to fit alongside a full course load.

The program is open to anyone, with preference given to MIT undergraduate and graduate students. Applicants with machine learning experience are especially encouraged to apply, but no prior background is required: just curiosity and a willingness to engage with hard, open questions.

Applications for the Summer AISF are now live and are due May 22nd. If you cannot participate in the summer cohort, you can fill out the fall interest form to hear when fall applications open.

Policy Fellowship

Every semester, our sister organization at Harvard, the AI Safety Student Team (AISST), runs an 8-week introductory reading group on the foundational policy and governance issues posed by advanced AI systems. The fellowship meets weekly in small groups, with dinner provided and no work required outside of meetings.

Questions discussed include:

  • How much progress in AI should we expect over the next few years?
  • What are the risks associated with the misuse and misalignment of advanced AI systems?
  • How can regulators audit frontier AI systems for potentially dangerous capabilities?
  • How could novel hardware mechanisms prevent malicious or irresponsible actors from creating powerful AI models?

Membership

MAIA membership is structured to move you toward a full-time role in AI safety. Members get the resources, research experience, and professional connections needed to be competitive for positions at leading AI safety organizations. Specifically, membership includes:

  • Workspace and infrastructure: 24/7 office access for focused research, plus compute and research tools
  • Technical development: Weekly meetings to engage with current AI safety research alongside other members working toward the same goal
  • Upskilling programs: Structured technical programs like ARENA with hired TAs, plus additional upskilling programs in development to help members build the skills needed for safety research roles
  • Direct access to researchers: Small-group discussions with safety researchers shaping the field. Recent guests include Aryan Bhatt (Redwood Research), Josh Clymer (OpenAI), and Nate Soares (MIRI)
  • Pathways to top organizations: We have connections across nearly every major AI safety organization, and our members have gone on to work at OpenAI, Anthropic, METR, Redwood Research, the AI Futures Project, and the Center for AI Standards and Innovation (formerly the U.S. AI Safety Institute). While many members pursue technical research, alumni have also moved into policy, government, and other non-technical roles shaping the trajectory of AI.
  • A serious peer group: Undergraduate and graduate researchers oriented around the same career trajectory, who become collaborators, references, and future colleagues

Beyond participating, many of our top members are also organizers who help run these programs—leading workshops, discussions, hackathons, and the initiatives that bring the next cohort into AI safety work.

While MAIA is an MIT-recognized student group, membership is not restricted to MIT students; independent researchers and students from other universities are welcome to apply. If you're newer to AI safety, we recommend applying for AI Safety Fundamentals first, as AISF alumni typically receive priority in the membership process.

The application itself has technical and non-technical portions and takes about an hour. Admissions are rolling, with the board reviewing applications monthly; if we're slow to respond, feel free to email maia-exec@mit.edu.

Workshops

Every semester, MAIA and AISST run joint weekend retreats that bring together students, professors, and professionals to discuss and collaborate on cutting-edge AI safety research and policy.

Past workshops have covered building transformers from scratch, replicating papers in the field, and learning from industry leaders.

[Photo: 2025 workshop with all participants]

Bootcamps

MAIA, in partnership with the Cambridge Boston Alignment Initiative (CBAI), hosts ML bootcamps during semester breaks, aimed at quickly getting students up to speed in deep learning and building the skills needed to conduct AI safety research in the real world.

The bootcamp, CAMBRIA (Cambridge Bootcamp for Research in Interpretability and Alignment), is held in person with teaching assistants experienced in ML and AI safety research. We follow the highly rated ARENA curriculum, which provides a thorough, hands-on introduction to state-of-the-art ML techniques (e.g. transformers, deep RL, mechanistic interpretability) and is meant to get you to the level of replicating ML papers in PyTorch. The program's only prerequisites are comfort with Python and introductory linear algebra.
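For a sense of the level the bootcamp targets, here is a minimal sketch of the kind of component participants implement from scratch: a single-head causal self-attention layer, the core building block of a transformer. This example is illustrative only, written for this page rather than drawn from the ARENA materials, and its class name and hyperparameters are hypothetical.

```python
# Illustrative sketch only; not taken from the ARENA curriculum.
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    """Single-head causal self-attention, built from first principles."""

    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        # Learned projections mapping each token embedding to queries, keys, values
        self.q_proj = nn.Linear(d_model, d_head)
        self.k_proj = nn.Linear(d_model, d_head)
        self.v_proj = nn.Linear(d_model, d_head)
        self.scale = d_head ** -0.5  # 1/sqrt(d_head), keeps dot products well-scaled

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) * self.scale  # (batch, seq_len, seq_len)
        # Causal mask: each position may attend only to itself and earlier positions
        seq_len = x.shape[1]
        mask = torch.triu(torch.ones(seq_len, seq_len, device=x.device), diagonal=1).bool()
        scores = scores.masked_fill(mask, float("-inf"))
        return scores.softmax(dim=-1) @ v  # (batch, seq_len, d_head)

# Quick shape check on random data
x = torch.randn(2, 10, 64)                        # 2 sequences of 10 tokens, d_model=64
attn = CausalSelfAttention(d_model=64, d_head=16)
print(attn(x).shape)                              # torch.Size([2, 10, 16])
```

In a full curriculum, components like this get extended into multi-head attention and complete transformer models, which is where the paper-replication work begins.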