Week 7: AI Governance & Liability
Tort Law, Compute Governance, Export Controls, and the Regulator's Toolbox
Overview
The previous weeks examined technical approaches to making AI systems safe. This session shifts to the institutional level, asking: how can governments and legal systems shape AI development and deployment? We begin with liability, exploring how existing tort law applies — and struggles to apply — to AI-caused harms, and whether the emerging patchwork of state-level legislation can provide adequate accountability. From there, we turn to compute governance as a uniquely promising policy lever, made feasible by the extreme concentration of the AI chip supply chain, and examine how the US has used export controls to maintain its lead over China in AI-relevant hardware. We close with a survey of concrete regulatory tools available across the AI lifecycle.
Learning Objectives
By the end of Week 7, fellows should be able to:
- Explain why AI liability is difficult under existing tort law and evaluate the tradeoffs of different legal approaches (negligence, strict liability, and state-level statutory frameworks)
- Describe how compute governance works as a policy lever — including its use for visibility, allocation, and enforcement — and articulate both its strengths and its limitations
- Assess the effectiveness of US semiconductor export controls on China's AI ecosystem and explain the broader strategic dynamics they reflect
Core Readings
- A simple solution to regulate AI (The Hill, 2023)
- We're Not Ready for AI Liability (AI Frontiers, 2024)
- Computing power and the governance of AI (GovAI, 2024)
- How US export controls have (and haven't) curbed Chinese AI (AI Frontiers, 2024)
- The AI regulator's toolbox: A list of concrete AI governance practices (Jones, 2024)
Recommended Readings
- Securing Model Weights (RAND, 2024)
- The Finalized EU AI Act: Implications and Insights (Hoffmann, CSET, 2024)
- China's AI Policy at the Crossroads: Balancing Development and Control in the DeepSeek Era (Sheehan, Carnegie, 2025)
- SB 53 (California Legislature, 2025)
- Is China serious about AI safety? (Carnegie, 2024)
- The State of State AI Law (Carnegie, 2025)
- Frontier AI Regulation: Managing Emerging Risks to Public Safety (Anderljung et al., 2023)
- International Institutions for Advanced AI (Ho et al., 2023)
- If-Then Commitments for AI Risk Reduction (Carnegie, 2024)
- Verifying International Agreements on AI (RAND, 2024)
- The Case for AI Liability (AI Frontiers, 2024)
- Tort Law as a Tool for Mitigating Catastrophic Risk from AI (Weil, 2024)
Further Readings
- Trump's Plan for AI: Recapping the White House's AI Action Plan (Friedland, CSET, 2025)
- AI Safety under the EU AI Code of Practice — A New Global Standard? (Hoffmann, CSET, 2025)
- AISI Frontier AI Trends Report 2025 (UK AI Security Institute, 2025)
- Common Elements of Frontier AI Safety Policies (METR, 2025)
- The AI Governance Arms Race: From Summit Pageantry to Progress? (Renda, Carnegie, 2024)
- Superintelligence Strategy: Expert Version (Hendrycks, Schmidt, & Wang, 2025)
- Seeking Stability in the Competition for AI Advantage (Mazarr, RAND, 2025)
- Oversight for Frontier AI through a KYC Scheme for Compute Providers (Egan & Heim, GovAI, 2023)
- Secure, Governable Chips (Aarne, Fist, & Withers, CNAS/IAPS, 2024)
- Anthropic's RSP v3.0: How it Works, What's Changed, and Some Reflections (GovAI, 2025)
- Regulatory Markets (Hadfield & Clark, 2023)
- Situational Awareness: Lock Down the Labs (Aschenbrenner, 2024)
- A National Center for Advanced AI Reliability and Security (FAS, 2024)
- U.S. Tort Liability for Large-Scale AI Damages: A Primer (Ramakrishnan et al., RAND, 2024)
- Tort Law and Frontier AI Governance (van der Merwe, Ramakrishnan, & Anderljung, Lawfare, 2024)
- Tort Law Should Be the Centerpiece of AI Governance (Weil, Lawfare, 2024)