Abstract Dynamic Programming, 1st Edition, 2013

by Dimitri P. Bertsekas



The research monograph "Abstract Dynamic Programming," published in 2013, may be ordered in hardcover from the publishing company, Athena Scientific, or from Amazon.com.

The monograph provides a synthesis of research on the foundations of dynamic programming, begun nearly 50 years ago, with the modern theory of approximate dynamic programming and the new class of semicontractive models.

It aims at a unified and economical development of the core theory and algorithms of total cost sequential decision problems, based on the strong connections of the subject with fixed point theory. The analysis focuses on the abstract mapping that underlies dynamic programming and defines the mathematical character of the associated problem. The discussion centers on two fundamental properties that this mapping may have: monotonicity and (weighted sup-norm) contraction. It turns out that the nature of the analytical and algorithmic DP theory is determined primarily by the presence or absence of these two properties, and the rest of the problem's structure is largely inconsequential. New research is focused on two areas:

  • The ramifications of these properties in the context of algorithms for approximate dynamic programming.
  • The new class of semicontractive models, exemplified by stochastic shortest path problems, where some but not all policies are contractive.
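
For orientation, here is a minimal sketch of the two properties in generic abstract DP notation (a mapping H, the associated operator T, and a weight function v); the symbols are illustrative and are not quoted verbatim from the book:

\[
% illustrative abstract DP operator; notation assumed, not quoted from the book
(TJ)(x) \;=\; \inf_{u \in U(x)} H(x,u,J), \qquad x \in X .
\]

Monotonicity means that \( J \le J' \) implies \( H(x,u,J) \le H(x,u,J') \) for all \( x \) and \( u \), so that \( TJ \le TJ' \). Weighted sup-norm contraction means that, for a positive weight function \( v \) and a modulus \( \alpha \in (0,1) \),

\[
% contraction of T in the v-weighted sup-norm
\| TJ - TJ' \| \;\le\; \alpha \, \| J - J' \|, \qquad \text{where } \| J \| = \sup_{x \in X} \frac{|J(x)|}{v(x)} .
\]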

    Click here to visit the book's web site at Athena Scientific for contents, preface, Chapter 1, slides, videos, and other instructional material, or to order the book directly from the publisher for faster service.


    The following documents have a strong connection to the book and expand on the analysis and the range of applications of the semicontractive models of Chapters 3 and 4:

  • D. P. Bertsekas, "Regular Policies in Abstract Dynamic Programming", Lab. for Information and Decision Systems Report LIDS-P-3173, MIT, May 2015 (revised August, 2016); to appear in SIAM J. on Control and Optimization (Related Lecture Slides); (Related Video Lectures).

  • D. P. Bertsekas, "Value and Policy Iteration in Deterministic Optimal Control and Adaptive Dynamic Programming", Lab. for Information and Decision Systems Report LIDS-P-3174, MIT, May 2015 (revised Sept. 2015); to appear in IEEE Transactions on Neural Networks and Learning Systems.

  • D. P. Bertsekas and H. Yu, "Stochastic Shortest Path Problems Under Weak Conditions", Lab. for Information and Decision Systems Report LIDS-P-2909, MIT, January 2016; to appear in Math. of OR.

  • D. P. Bertsekas, "Robust Shortest Path Planning and Semicontractive Dynamic Programming", Lab. for Information and Decision Systems Report LIDS-P-2915, MIT, Jan. 2016; revised in June 2016; to appear in Naval Research Logistics.

  • D. P. Bertsekas, "Affine Monotonic and Risk-Sensitive Models in Dynamic Programming", Lab. for Information and Decision Systems Report LIDS-3204, MIT, June 2016.

  • D. P. Bertsekas, "Stable Optimal Control and Semicontractive Dynamic Programming", Lab. for Information and Decision Systems Report LIDS-P-3506, MIT, May 2017; (Related Lecture Slides).

  • D. P. Bertsekas, "Proper Policies in Infinite-State Stochastic Shortest Path Problems", Lab. for Information and Decision Systems Report LIDS-3507, MIT, May 2017; (Related Lecture Slides).

  • An updated version of Chapter 4 of the author's Dynamic Programming book, Vol. II, which incorporates recent research on a variety of undiscounted problems and relates to abstract DP topics; (Related Lecture Slides).

    Abstract Dynamic Programming, 2nd Edition, 2017 (Electronic Preprint Version)

    by Dimitri P. Bertsekas



    The 2nd edition of the research monograph "Abstract Dynamic Programming" may be downloaded from here. The print version will become available from the publishing company, Athena Scientific, in early 2018.

    The 2nd edition aims primarily to amplify the presentation of the semicontractive models of Chapters 3 and 4 (see above), and to supplement it with a broad spectrum of research results that I obtained and published in journals and reports since the first edition was written (see above). As a result, the size of this material has more than doubled, and the size of the book has increased by nearly 40%.


    Contents and Preface, 2ND EDITION

    Chapter 1, 2ND EDITION, Introduction

    Chapter 2, 2ND EDITION, Contractive Models

    Chapter 3, 2ND EDITION, Semicontractive Models

    Chapter 4, 2ND EDITION, Noncontractive Models

    Appendixes and References, 2ND EDITION

    In addition to the changes in Chapters 3 and 4, I have also eliminated from the second edition the material of the first edition that deals with restricted policies and Borel space models (Chapter 5 and Appendix C). These models are motivated in part by the complex measurability questions that arise in mathematically rigorous theories of stochastic optimal control involving continuous probability spaces. The restricted policies framework aims primarily to extend abstract DP ideas to Borel space models. Since this material is fully covered in Chapter 6 of the 1978 monograph by Bertsekas and Shreve, and since follow-up research on the subject has been limited, I decided to omit Chapter 5 and Appendix C of the first edition from the second edition and just post them below.

    Chapter 5 of 1st Edition

    Appendix C of 1st Edition