This is the main textbook I use for my course at ASU. It is based on the class notes I developed over the years 2019-2023. It is a standalone book, but can also be used in conjunction with my videolectures and slides, available at this site.

The print copy of the book is available from the publishing company Athena Scientific, or from Amazon. The book is also available as an EBOOK from Google Play. Moreover, the book can be downloaded and used freely for instructional purposes.

The textbook is about 440 pages long and includes end-of-chapter exercises. It places primary emphasis on intuitive reasoning, based on the mathematical framework of dynamic programming. While mathematical proofs are deemphasized, the textbook relies on the theoretical development and analysis given in my Dynamic Programming (DP) and Reinforcement Learning (RL) books listed at this site. All of these books share a consistent notation and terminology.

An important structural characteristic of the textbook is that it is organized in a modular way, with a view towards flexibility, so it can accommodate changes and variations in course content. In particular, it is divided into two parts:

(1) A **foundational platform**, which consists of Chapter 1. It provides a selective overview of the approximate DP/RL landscape, and serves as a starting point for a more detailed in-class development of other RL topics, whose choice is left to the instructor's discretion.

(2) An **in-depth coverage** of selected methodologies. In Chapter 2, we discuss
methods of approximation in value space with one-step or multi-step lookahead. Methods of deterministic and stochastic rollout, and lookahead tree search receive special attention. Other topics of interest include multiagent rollout, adaptive control by rollout reoptimization, Bayesian optimization, and minimax problems. In Chapter 3, we discuss off-line training of neural networks and other approximation architectures, in conjunction with policy iteration/self-learning, Q-learning, policy gradient, and aggregation methods.

In a different course, other choices for in-depth coverage may be made, using the same foundational platform. For example, an optimal control/MPC/adaptive control course can be built upon the platform of Chapter 1. Similarly, courses that are more or less mathematically oriented can be built upon this platform.

**Chapter 1, Exact and Approximate Dynamic Programming.** Contents: AlphaZero Off-Line Training and On-Line Play, Deterministic Dynamic Programming, Stochastic Exact and Approximate Dynamic Programming, Infinite Horizon Problems - An Overview, Infinite Horizon Linear Quadratic Problems, Examples, Reformulations, and Simplifications, Reinforcement Learning and Decision/Control.

**Chapter 2, Approximation in Value Space - Rollout Algorithms.** Contents: Deterministic Finite Horizon Problems, Approximation in Value Space - Deterministic Problems, Rollout Algorithms for Discrete Optimization, Rollout and Approximation in Value Space with Multistep Lookahead, Constrained Forms of Rollout Algorithms, Small Stage Costs and Long Horizon - Continuous-Time Rollout, Stochastic Rollout and Monte Carlo Tree Search, Rollout for Infinite-Spaces Problems - Optimization, Multiagent Rollout, Rollout for Bayesian Optimization and Sequential Estimation, Adaptive Control by Rollout with a POMDP Formulation, Rollout for Minimax Control.

**Chapter 3, Learning Values and Policies.** Contents: Parametric Approximation Architectures, Neural Networks, Training of Cost Functions in Approximate DP, Training of Policies in Approximate DP, Policy Gradient and Related Methods, Aggregation.

Syllabus of the 2024 Reinforcement Learning course at ASU

**Complete Set of Videolectures and Slides:**

Note that the 1st videolecture of 2024 has been replaced by the 1st videolecture of 2023 (the audio of the 2024 recording came out degraded). The contents of the two lectures are very similar.

Video-Lecture 1, Video-Lecture 2, Video-Lecture 3, Video-Lecture 4, Video-Lecture 5, Video-Lecture 6, Video-Lecture 7, Video-Lecture 8, Video-Lecture 9, Video-Lecture 10, Video-Lecture 11, Video-Lecture 12, Video-Lecture 13

Slides-Lecture 1, Slides-Lecture 2, Slides-Lecture 3, Slides-Lecture 4, Slides-Lecture 5, Slides-Lecture 6, Slides-Lecture 7, Slides-Lecture 8, Slides-Lecture 9, Slides-Lecture 10, Slides-Lecture 11, Slides-Lecture 12, Slides-Lecture 13

**Supplementary Textbooks:**

**Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control**. The book can be downloaded and used freely for instructional purposes. The print copy of the book is available from the publishing company, Athena Scientific. The book is also available as an EBOOK from Google Play.

"Reinforcement Learning and Optimal Control," also available as an EBOOK from Google Play.

"Rollout, Policy Iteration, and Distributed Reinforcement Learning," also available as an EBOOK from Google Play.

**Additional Videolectures:**

The failure of theoretical error bounds in Reinforcement Learning: A 15-minute discussion at MIT on the role of the Newton step interpretation of approximation in value space in understanding the puzzle of success and failure in RL, Oct. 2023. Slides.

Overview of the book Lessons from AlphaZero: A summary one-hour presentation at MIT, Oct. 2022. Slides.

Overview of the book Lessons from AlphaZero: A fairly detailed two-hour presentation at KTH, Nov. 2021. Slides.

Overview Lecture on Multiagent Reinforcement Learning at ASU, Oct. 2020. Slides.

Overview Lecture on Distributed Reinforcement Learning from IPAM workshop at UCLA, Feb. 2020. Slides.

Syllabus of the 2023 Reinforcement Learning course at ASU

Video-Lecture 1, Video-Lecture 2, Video-Lecture 3, Video-Lecture 4, Video-Lecture 5, Video-Lecture 6, Video-Lecture 7, Video-Lecture 8 (multiagent rollout demo), Video-Lecture 9, Video-Lecture 10, Video-Lecture 11

Slides-Lecture 1, Slides-Lecture 2, Slides-Lecture 3, Slides-Lecture 4, Slides-Lecture 5, Slides-Lecture 6, Slides-Lecture 7, Slides-Lecture 8, Slides-Lecture 9, Slides-Lecture 10, Slides-Lecture 11

Video-Lecture 1, Video-Lecture 2, Video-Lecture 3, Video-Lecture 4, Video-Lecture 5, Video-Lecture 6, Video-Lecture 7, Video-Lecture 8, Video-Lecture 9, Video-Lecture 10, Video-Lecture 11, Video-Lecture 12, Video-Lecture 13

Slides-Lecture 1, Slides-Lecture 2, Slides-Lecture 3, Slides-Lecture 4, Slides-Lecture 5, Slides-Lecture 6, Slides-Lecture 7, Slides-Lecture 8, Slides-Lecture 9, Slides-Lecture 10, Slides-Lecture 11, Slides-Lecture 12, Slides-Lecture 13

The final Lecture 13 is an overview of the entire course.

Videolectures, slides, and other material for the 2021 course on Reinforcement Learning and Optimal Control, at Arizona State University:

Video-Lecture 1, Video-Lecture 2, Video-Lecture 3, Video-Lecture 4, Video-Lecture 5, Video-Lecture 6, Video-Lecture 7, Video-Lecture 8, Video-Lecture 9, Video-Lecture 10, Video-Lecture 11, Video-Lecture 12, Video-Lecture 13

Slides-Lecture 1, Slides-Lecture 2, Slides-Lecture 3, Slides-Lecture 4, Slides-Lecture 5, Slides-Lecture 6, Slides-Lecture 7, Slides-Lecture 8, Slides-Lecture 9, Slides-Lecture 10, Slides-Lecture 11, Slides-Lecture 12, Slides-Lecture 13

The final Lecture 13 is an overview of the entire course.

Slides-Lecture 1, Slides-Lecture 2, Slides-Lecture 3, Slides-Lecture 4, Slides-Lecture 5, Slides-Lecture 6, Slides-Lecture 7, Slides-Lecture 8, Slides-Lecture 9, Slides-Lecture 10, Slides-Lecture 11, Slides-Lecture 12, Slides-Lecture 13.

Video-Lecture 1, Video-Lecture 2, Video-Lecture 3, Video-Lecture 4, Video-Lecture 5, Video-Lecture 6, Video-Lecture 7, Video-Lecture 8, Video-Lecture 9, Video-Lecture 10, Video-Lecture 11, Video-Lecture 12, Video-Lecture 13.

Lecture 13 is an overview of the entire course.

The print version of the book is available from the publishing company Athena Scientific, or from Amazon.com. The book is also available as an Ebook from Google Books.

The purpose of the book is to consider large and challenging multistage decision problems, which can be solved in principle by dynamic programming and optimal control, but whose exact solution is computationally intractable. We discuss solution methods that rely on approximations to produce suboptimal policies with adequate performance. These methods are collectively referred to as reinforcement learning, and also by alternative names such as approximate dynamic programming and neuro-dynamic programming.

Our subject has benefited enormously from the interplay of ideas from optimal control and from artificial intelligence. One of the aims of this monograph is to explore the common boundary between these two fields and to form a bridge that is accessible to workers with a background in either field.

The mathematical style of the book is somewhat different from the author's dynamic programming books, and the neuro-dynamic programming monograph, written jointly with John Tsitsiklis. We rely more on intuitive explanations and less on proof-based insights. Still we provide a rigorous short account of the theory of finite and infinite horizon dynamic programming, and some basic approximation methods, in an appendix. For this we require a modest mathematical background: calculus, elementary probability, and a minimal use of matrix-vector algebra.

The methods of this book have been successful in practice, often spectacularly so, as evidenced by the recent amazing accomplishments in the games of chess and Go. However, across a wide range of problems, their performance properties may be less than solid. This is a reflection of the state of the art in the field: there are no methods that are guaranteed to work for all or even most problems, but there are enough methods to try on a given challenging problem, with a reasonable chance that one or more of them will be successful in the end. Accordingly, we have aimed to present a broad range of methods that are based on sound principles, and to provide intuition into their properties, even when these properties do not include a solid performance guarantee. Hopefully, with enough experimentation with some of these methods and their variations, the reader will be able to address his/her own problem adequately.

The print version of the book is available from the publishing company Athena Scientific, and from Amazon.com. The book is also available as an Ebook from Google Books.

This is a research monograph at the forefront of research on reinforcement learning, also referred to by other names such as approximate dynamic programming and neuro-dynamic programming. The purpose of the monograph is to develop in greater depth some of the methods from the author's recently published textbook on Reinforcement Learning (Athena Scientific, 2019). In particular, we present new research, relating to systems involving multiple agents, partitioned architectures, and distributed asynchronous computation. We pay special attention to the contexts of dynamic programming/policy iteration and control theory/model predictive control. We also discuss in some detail the application of the methodology to challenging discrete/combinatorial optimization problems, such as routing, scheduling, assignment, and mixed integer programming, including the use of neural network approximations within these contexts.

The book focuses on the fundamental idea of policy iteration, i.e., start from some policy, and successively generate one or more improved policies. If just one improved policy is generated, this is called rollout, which, based on broad and consistent computational experience, appears to be one of the most versatile and reliable of all reinforcement learning methods. Among others, it can be applied on-line using easily implementable simulation, and it can be used for discrete deterministic combinatorial optimization, as well as for stochastic Markov decision problems. Moreover, rollout can make on-line use of the policy produced off-line by policy iteration or by any other method (including a policy gradient method), and improve on the performance of that policy.
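
To make the rollout idea concrete, here is a minimal Python sketch for a deterministic finite-horizon problem. It is not taken from the book; the toy problem, the function names, and the do-nothing base heuristic are illustrative assumptions. At each stage, the rollout policy tries every control, completes the remaining trajectory with the base heuristic, and commits to the control with least total cost.

```python
def rollout_control(state, controls, step, cost, base_policy, horizon):
    """One step of rollout: try each control, complete the trajectory
    with the base heuristic, and return the control with least total cost."""
    best_u, best_q = None, float("inf")
    for u in controls(state):
        s = step(state, u)
        q = cost(state, u)
        for _ in range(horizon - 1):        # finish with the base heuristic
            v = base_policy(s)
            q += cost(s, v)
            s = step(s, v)
        if q < best_q:
            best_u, best_q = u, q
    return best_u

# Toy problem (invented for illustration): drive an integer state to 0
# with controls {-1, 0, +1} and stage cost |state| + |control|.
# The base heuristic simply applies control 0 (do nothing).
step = lambda s, u: s + u
cost = lambda s, u: abs(s) + abs(u)
controls = lambda s: (-1, 0, 1)
base = lambda s: 0

s, total = 5, 0
for k in range(8):
    u = rollout_control(s, controls, step, cost, base, horizon=8 - k)
    total += cost(s, u)
    s = step(s, u)
# Rollout improves on the base heuristic: it steers the state to 0 at
# total cost 20, versus 40 for the do-nothing heuristic from state 5.
```

This exhibits the policy improvement property noted above: the rollout policy performs at least as well as the base heuristic it calls, typically strictly better.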

Much of the new research is inspired by the remarkable AlphaZero chess program, where policy iteration, value and policy networks, approximate lookahead minimization, and parallel computation all play an important role. In addition to the fundamental process of successive policy iteration/improvement, this program includes the use of deep neural networks for representation of both value functions and policies, the extensive use of large scale parallelization, and the simplification of lookahead minimization, through methods involving Monte Carlo tree search and pruning of the lookahead tree. In this book, we also focus on policy iteration, value and policy neural network representations, multiagent systems, parallel and distributed computation, and lookahead simplification. Thus while there are significant differences, the principal design ideas that form the core of this monograph are shared by the AlphaZero architecture, except that we develop these ideas within a broader and less application-specific framework.

Lessons from AlphaZero for Optimal, Model Predictive, and Adaptive Control (Athena Scientific, 2022); click here for a free .pdf copy of the book. Click here for the EBOOK version from Google Play.

The print copy of the book is available from the publishing company, Athena Scientific.

A journal paper: D. P. Bertsekas, "Newton's Method for Reinforcement Learning and Model Predictive Control," Results in Control and Optimization J., Vol. 7, 2022, pp. 100-121.

**Lessons from AlphaZero Videolecture**: A summary presentation at KTH, Nov. 2021

**A brief description:** Some of the most exciting success stories in artificial intelligence and reinforcement learning have been in the area of games. Primary examples are the recent AlphaZero program (which plays chess), and the similarly structured and earlier (1990s) TD-Gammon program (which plays backgammon).
These programs were trained off-line extensively using sophisticated self-play/approximate policy iteration algorithms and neural networks. Yet the AlphaZero player that has been obtained off-line is not used directly during on-line play (it is too inaccurate due to approximation errors that are inherent in off-line neural network training). Instead, a separate on-line player is used, which is based on multistep lookahead and a terminal position evaluator that was trained off-line. The on-line player performs a form of policy improvement, which, unlike the off-line player, is not degraded by neural network approximations. As a result, it has greatly improved performance.

Similarly, TD-Gammon performs on-line a policy improvement step using lookahead minimization that is not degraded by neural network approximations. To this end it uses an off-line neural-network trained terminal position evaluator, and importantly it also extends its on-line lookahead by rollout (simulation with the one-step lookahead player that is based on the position evaluator).

An important lesson from AlphaZero and TD-Gammon is that performance of an off-line trained controller can be greatly improved by on-line play, with long lookahead (involving minimization or rollout with an off-line obtained policy, or both), and terminal cost approximation that is obtained off-line. This performance enhancement is often dramatic and is due to a simple fact, which is the central point of our work: on-line play amounts to a step of Newton's method for solving Bellman's equation, while the starting point for the Newton step is based on the results of off-line training and may be enhanced by longer lookahead and on-line rollout. This process can be understood in terms of abstract models of dynamic programming and simple geometrical constructions. It manifests itself to some extent in model predictive control, but it seems that it has yet to be fully appreciated within the decision and control community.
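
In outline (a schematic sketch using the standard discounted-cost notation of the author's books, not the full development), the Newton step interpretation may be written as follows:

```latex
% Bellman operator of a discounted problem:
(TJ)(x) = \min_{u \in U(x)} E\bigl\{ g(x,u,w) + \alpha J\bigl(f(x,u,w)\bigr) \bigr\}.
% One-step lookahead with off-line trained terminal cost \tilde{J} selects
% the policy \tilde{\mu} that attains the minimum:
T_{\tilde{\mu}} \tilde{J} = T \tilde{J}.
% The operator T_{\tilde{\mu}} is linear in J and serves as the
% linearization of T at \tilde{J}; hence the cost function J_{\tilde{\mu}}
% of the lookahead policy, i.e., the solution of
J_{\tilde{\mu}} = T_{\tilde{\mu}} J_{\tilde{\mu}},
% is the result of a single Newton iteration for solving the Bellman
% equation J = TJ, starting from \tilde{J}.
```

This is the sense in which on-line play "amounts to a step of Newton's method": the quality of the resulting policy depends superlinearly on the quality of the off-line starting point.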

In this work we aim to provide insights (often based on visualization), which explain the beneficial effects of on-line decision making on top of off-line training. While we will deemphasize mathematical proofs, there is considerable related analysis, which supports our conclusions and can be found in the author's recent RL books (2019, 2020). One of our principal aims is to show through the unifying principles of abstract DP that the AlphaZero/TD-Gammon ideas of approximation in value space and rollout apply very broadly to deterministic and stochastic optimal control problems, involving both discrete and continuous search spaces. Moreover, these ideas can be effectively integrated with other important methodologies such as model predictive control, adaptive control, decentralized control, discrete and Bayesian optimization, neural network-based value and policy approximations, and heuristic algorithms for discrete optimization.

Video of an Overview Lecture on Distributed RL from IPAM workshop at UCLA, Feb. 2020 (Slides).

Video of an Overview Lecture on Multiagent RL from a lecture at ASU, Oct. 2020 (Slides).

Slides for an extended overview lecture on RL: Ten Key Ideas for Reinforcement Learning and Optimal Control.

**The following recent papers and reports have a strong connection to material in my reinforcement learning books, and amplify on their analysis and its range of applications.**

This is historically the first book that fully explained the neuro-dynamic programming/reinforcement learning methodology, a breakthrough in the practical application of neural networks and dynamic programming to complex problems of planning, optimal decision making, and intelligent control.

Neuro-dynamic programming uses neural network approximations to overcome the "curse of dimensionality" and the "curse of modeling" that have been the bottlenecks to the practical application of dynamic programming and stochastic control to complex problems. The methodology allows systems to learn about their behavior through simulation, and to improve their performance through iterative reinforcement.
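
As a minimal illustration of learning through simulation and iterative reinforcement, here is a tabular Q-learning sketch in Python. The two-state problem, stepsize, and iteration count are invented for illustration and are not from the book.

```python
import random

random.seed(0)

# Toy deterministic problem (invented for illustration): two states {0, 1},
# two controls {0, 1}; control a moves the system to state a at cost |s - a|.
ALPHA = 0.9   # discount factor
LR = 0.5      # stepsize

Q = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(2000):
    s, a = random.randint(0, 1), random.randint(0, 1)  # explore uniformly
    s_next, g = a, abs(s - a)                          # simulate one transition
    target = g + ALPHA * min(Q[s_next])                # Q-learning target
    Q[s][a] += LR * (target - Q[s][a])                 # iterative reinforcement

# The learned Q-factors approach the optimal ones: Q*(s, s) = 0, Q*(s, 1-s) = 1.
```

The point of the sketch is that the update uses only simulated transitions and costs, with no model of the problem required beyond the simulator itself; the neural network versions discussed in the book replace the table `Q` with a trained approximation architecture.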

This book provides the first systematic presentation of the science and the art behind this exciting and far-reaching methodology. It develops a comprehensive analysis of reinforcement learning algorithms, and guides the reader to their successful application through case studies from complex problem areas. It contains material that is not available elsewhere in book form, such as a comprehensive and rigorous analysis of temporal difference methods, Q-learning, and error bounds associated with various methods.

The fourth edition (February 2017) contains a substantial amount of new material, particularly on approximate DP in Chapter 6. This chapter was thoroughly reorganized and rewritten, to bring it in line, both with the contents of Vol. II, whose latest edition appeared in 2012, and with recent developments, which have propelled approximate DP to the forefront of attention.

Some of the highlights of the revision of Chapter 6 are an increased emphasis on one-step and multistep lookahead methods, parametric approximation architectures, neural networks, rollout, and Monte Carlo tree search. Among other applications, these methods have been instrumental in the recent spectacular success of computer Go programs. The material on approximate DP also provides an introduction and some perspective for the more analytically oriented treatment of Vol. II.

Click here for direct ordering from the publisher, as well as the preface, table of contents, supplementary educational material, lecture slides, videos, etc.

The fourth edition of Vol. II of the two-volume DP textbook was published in June 2012. This is a major revision of Vol. II and contains a substantial amount of new material, as well as a reorganization of old material. The length has increased by more than 60% from the third edition, and most of the old material has been restructured and/or revised. Volume II now numbers more than 700 pages and is larger in size than Vol. I. It can arguably be viewed as a new book!

Approximate DP has become the central focal point of this volume, and occupies more than half of the book (the last two chapters, and large parts of Chapters 1-3). Thus one may also view this new edition as a followup of the author's 1996 book "Neuro-Dynamic Programming" (coauthored with John Tsitsiklis). A lot of new material, the outgrowth of research conducted in the six years since the previous edition, has been included.

A new printing of the fourth edition (January 2018) contains some updated material, particularly on undiscounted problems in Chapter 4, and approximate DP in Chapter 6. References were also made to the contents of the 2017 edition of Vol. I, and to high profile developments in deep reinforcement learning, which have brought approximate DP to the forefront of attention.

Click here for an updated version of Chapter 4, which incorporates recent research on a variety of undiscounted problem topics.

Click here for preface and detailed information.

Click here to order at Amazon.com

Lectures on Exact and Approximate Finite Horizon DP: Videos from a 4-lecture, 4-hour short course at the University of Cyprus on finite horizon DP, Nicosia, 2017. Videos from Youtube. (Lecture Slides: Lecture 1, Lecture 2, Lecture 3, Lecture 4.)

Videos from a 6-lecture, 12-hour short course at Tsinghua Univ., Beijing, China, 2014. From the Tsinghua course site, and from Youtube. Click here to download Approximate Dynamic Programming Lecture slides, for this 12-hour video course.

Click here to download lecture slides for a 7-lecture short course on Approximate Dynamic Programming, Cadarache, France, 2012.

Click here to download lecture slides for the MIT course "Dynamic Programming and Stochastic Control" (6.231), Dec. 2015. The last six lectures cover a lot of the approximate dynamic programming material.

Click here to download research papers and other material on Dynamic Programming and Approximate Dynamic Programming.

The 3rd edition of the book is available as an Ebook from Google Books. The print version of the 3rd edition of the book is available from the publishing company, Athena Scientific.

This research monograph provides a synthesis of old research on the foundations of dynamic programming (DP), with the modern theory of approximate DP and new research on semicontractive models. It aims at a unified and economical development of the core theory and algorithms of total cost sequential decision problems, based on the strong connections of the subject with fixed point theory. The analysis focuses on the abstract mapping that underlies DP and defines the mathematical character of the associated problem. The discussion centers on two fundamental properties that this mapping may have: monotonicity and (weighted sup-norm) contraction. It turns out that the nature of the analytical and algorithmic DP theory is determined primarily by the presence or absence of these two properties, and the rest of the problem's structure is largely inconsequential. New research is focused on two areas: 1) The ramifications of these properties in the context of algorithms for approximate DP, and 2) The new class of semicontractive models, exemplified by stochastic shortest path problems, where some but not all policies are contractive.
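
Schematically (using the book's abstract notation, with H the abstract mapping and T_mu the policy mappings; this is only an outline of the definitions, not the full development), the two fundamental properties are:

```latex
% Monotonicity of the abstract mapping H:
J \le J' \;\Longrightarrow\; H(x,u,J) \le H(x,u,J')
\quad \text{for all } x \text{ and } u \in U(x).
% Weighted sup-norm contraction: for some \rho \in (0,1) and a positive
% weight function v, every policy mapping T_\mu satisfies
\| T_\mu J - T_\mu J' \| \le \rho \, \| J - J' \|,
\qquad \text{where } \| J \| = \sup_x \frac{|J(x)|}{v(x)}.
```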

The 3rd edition is very similar to the 2nd edition, except for the addition of a new 40-page Chapter 5, which introduces a contractive abstract DP framework and related policy iteration algorithms, specifically designed for sequential zero-sum games and minimax problems with a general structure, and based on a recent paper by the author. Aside from greater generality, the advantage of our algorithms over alternatives is that they resolve some long-standing convergence difficulties of the natural PI algorithm, which have been known since the Pollatschek and Avi-Itzhak method for finite-state Markov games. Mathematically, this natural algorithm is a form of Newton's method for solving Bellman's equation, but Newton's method, contrary to the case of single-player DP problems, is not globally convergent in the case of a minimax problem, because of an additional difficulty: the Bellman operator may have components that are neither convex nor concave. The algorithms address this difficulty by introducing alternating player choices, and by using a policy-dependent mapping with a uniform sup-norm contraction property, similar to earlier works by Bertsekas and Yu, which is described in part in Chapter 2. Moreover, our algorithms allow a convergent and highly parallelizable implementation, which is based on state space partitioning, and distributed asynchronous policy evaluation and policy improvement operations within each set of the partition. They are also suitable for approximations based on an aggregation approach.

The book can be downloaded and used freely for noncommercial purposes: Abstract Dynamic Programming, 3RD EDITION, Complete

The 3rd and 2nd editions do not contain some material of the 1st edition that deals with restricted policies and Borel space models (Chapter 5 and Appendix C). These models are motivated in part by the complex measurability questions that arise in mathematically rigorous theories of stochastic optimal control involving continuous probability spaces. The restricted policies framework aims primarily to extend abstract DP ideas to Borel space models. Since this material is fully covered in Chapter 6 of the 1978 monograph by Bertsekas and Shreve, and followup research on the subject has been limited, Chapter 5 and Appendix C of the 1st edition were omitted from the 2nd and 3rd editions and are posted below.

**The following papers and reports have a strong connection to the book, and amplify on the analysis and the range of applications of the semicontractive models of Chapters 3 and 4:**

A series of 5 videolectures on Abstract Dynamic Programming, with corresponding slides, posted on Youtube.
