Week 1: Transformative AI and Current Trajectory
Overview
This week builds a grounded picture of where frontier AI capabilities are headed and how quickly they may improve. We will review evidence on the main scaling drivers (compute, data, algorithmic efficiency, and inference-time scaling) alongside forecasts based on the time horizon of tasks models can complete. The goal is to connect today's trends to plausible paths toward AGI-level systems, weighing both smooth, continuous progress and the possibility of rapid capability jumps.
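For a concrete sense of what "scaling laws" in the readings refer to, the sketch below evaluates the parametric loss curve fit by Hoffmann et al. (2022) in Training Compute-Optimal Large Language Models. The functional form and coefficients are the ones reported in that paper; the particular model size and token count plugged in at the end are illustrative choices, not course material.

```python
# Sketch of the Chinchilla parametric scaling law (Hoffmann et al., 2022):
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N is parameter count and D is training tokens. The constants below
# are the fit reported in the paper; treat this as an illustration of the
# functional form, not a capability forecast.

E, A, B = 1.69, 406.4, 410.7   # irreducible loss and fitted constants
ALPHA, BETA = 0.34, 0.28       # parameter- and data-scaling exponents

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss for N parameters trained on D tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Example (illustrative): a 70B-parameter model trained on 1.4T tokens,
# roughly Chinchilla's own budget, gives ~1.94 nats/token under this fit.
print(chinchilla_loss(70e9, 1.4e12))
```

Nothing here is required before the readings; it simply shows how a few fitted constants summarize an empirical trend relating compute, data, and loss.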
Learning Objectives
By the end of Week 1, fellows should be able to:
- Understand what AISF is, the program format, and what is expected of fellows
- Describe current trends in AI capabilities and likely impacts
- Understand the major arguments for expecting AI progress to accelerate versus slow down
Core Readings
Recommended Readings
Further Readings
- Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity (METR, 2025)
- Future ML Systems Will Be Qualitatively Different (Steinhardt, 2022)
- Scaling Laws for Neural Language Models (Kaplan et al., 2020) and GPT-3 (Brown et al., 2020)
- Training Compute-Optimal Large Language Models (Hoffmann et al., DeepMind, 2022)
- Biological Anchors: A Trick That Might Or Might Not Work (Alexander, 2022)
- Explaining Neural Scaling Laws (Bahri et al., 2024)
- The Hanson-Yudkowsky AI-Foom Debate (2008)