Parallel and Distributed Computation: Numerical Methods
A searchable electronic version of the book is available from Google Play, and the hardcover version can be purchased from Athena Scientific. The book (including the solutions manual) can be freely downloaded from MIT or from Athena Scientific.
A survey containing follow-up research and a related subsequent paper are also available.
For further discussions of asynchronous algorithms in specialized contexts based on material from this book, see the books Nonlinear Programming, 3rd edition, Athena Scientific, 2016;
Convex Optimization Algorithms, Athena Scientific, 2015; and Abstract Dynamic Programming, 2nd edition, Athena Scientific, 2018.
The book is a comprehensive and theoretically sound treatment of parallel and
distributed numerical methods. It focuses on algorithms that
are naturally suited for massive parallelization, and it explores
the fundamental convergence, rate of convergence, communication,
and synchronization issues associated with such algorithms.
Reviews:
"This major contribution to the literature belongs on the bookshelf of
every scientist with an interest in computational science, directly beside
Knuth's three volumes and Numerical Recipes..."
Anna Nagurney, University of Massachusetts, in the International
Journal of Supercomputer Applications
"This major work of exceptional scholarship summarizes more than
three decades of research into general-purpose
algorithms for solving systems of equations
and optimization problems."
W. F. Smyth, in Computing Reviews
"This book marks an important landmark in the theory of distributed
systems and I highly recommend it to students and practicing engineers
in the fields of operations research and computer science, as well
as to mathematicians interested in numerical methods."
Lorne G. Mason, in IEEE Communications Magazine
Table of Contents:
Introduction
- Parallel and distributed architectures
- The need for parallel and distributed computation
- Parallel computing systems and their classification
- Models, complexity measures, and some simple algorithms
- Models
- Complexity measures
- Examples: Vector and matrix computations
- Parallelization of iterative methods
- Communication aspects of parallel and distributed systems
- Communication links
- Data link control
- Routing
- Network topologies
- Concurrency and communication tradeoffs
- Examples of matrix-vector calculations
- Synchronization issues in parallel and distributed algorithms
- Synchronous algorithms
- Asynchronous algorithms and the reduction of the synchronization penalty
PART I: SYNCHRONOUS ALGORITHMS
Algorithms for Systems of Linear Equations and Matrix Inversion
- Parallel algorithms for linear systems with special structure
- Triangular matrices and back substitution
- Tridiagonal systems and odd-even reduction
- Parallel direct methods for general linear equations
- Gaussian elimination
- Triangularization using Givens rotations
- A fast direct matrix inversion algorithm
- Classical iterative methods for systems of linear equations
- Parallel implementation of classical iterative methods
- An example: Poisson's equation
- Multigrid methods
- Convergence analysis of classical iterative methods
- Background on maximum norms and nonnegative matrices
- Convergence analysis using maximum norms
- Convergence analysis using a quadratic cost function
- The Poisson equation revisited
- The conjugate gradient method
- Description of the algorithm
- Speed of convergence
- Preconditioned conjugate gradient method
- Parallel implementation
- Computation of the invariant distribution of a Markov chain
- Fast iterative matrix inversion
Iterative Methods for Nonlinear Problems
- Contraction mappings
- General results
- Contractions over Cartesian product sets
- Some useful contraction mappings
- Unconstrained optimization
- The main algorithms
- Convergence analysis using the descent approach
- The case of a convex cost function
- Nonlinear algorithms
- Constrained convex optimization
- Optimality conditions and the projection theorem
- The gradient projection algorithm
- Scaled gradient projection algorithms
- The case of a product constraint set: parallel implementations
- Nonlinear algorithms
- Parallelization and decomposition of optimization problems
- Quadratic programming
- Separable strictly convex programming
- The proximal minimization algorithm
- Augmented Lagrangian methods
- Algorithms for variational inequalities
- Examples of variational inequality problems
- Preliminaries
- The projection algorithm
- Linearized algorithms
- The Cartesian product case: Parallel implementations
- Nonlinear algorithms
- Decomposition methods for variational inequalities
Shortest Paths and Dynamic Programming
- The shortest path problem
- The Bellman-Ford algorithm
- Other parallel shortest path methods
- Markov chains with transition costs
- Markovian decision problems
- Discounted problems
- Undiscounted problems - Stochastic shortest paths
- Parallel implementation of the Dynamic Programming iteration
Network Flow Problems
- The linear network flow problem and its dual
- The relaxation method
- Application to the shortest path problem
- Multiple node relaxation method
- The epsilon-relaxation method
- The auction algorithm for the assignment problem
- Parallel versions of the epsilon-relaxation and the auction algorithms
- Complexity analysis of the epsilon-relaxation method and its scaled version
- The scaled version of the algorithm
- Application to the assignment problem
- Network flow problems with strictly convex cost
- The relaxation method
- Convergence analysis
- The problem without arc flow bounds
- An example: Constrained matrix problems
- Parallel implementations of the relaxation method
- Nonlinear multicommodity flow problems - Routing applications
PART II: ASYNCHRONOUS ALGORITHMS
Totally Asynchronous Iterative Methods
- The totally asynchronous algorithmic model
- A general convergence theorem
- Applications to problems involving maximum norm contraction mappings
- Solution of linear systems of equations
- Unconstrained optimization
- Constrained optimization and variational inequalities
- Dynamic programming
- Convergence rate comparison of synchronous and asynchronous algorithms
- Applications to monotone mappings and the shortest path problem
- Linear network flow problems
- Nonlinear network flow problems
- Asynchronous relaxation for ordinary differential equations and two-point boundary value problems
- The asynchronous relaxation algorithm
- Two-point boundary value problems
- The discrete time case
Partially Asynchronous Iterative Methods
- The partially asynchronous algorithmic model
- Algorithms for fixed points of nonexpansive mappings
- A convergence result
- Weakly diagonally dominant systems of linear equations
- Strictly convex network flow problems
- Algorithms for agreement and for Markov chain problems
- The agreement algorithm
- An asynchronous algorithm for the invariant distribution of a Markov chain
- Load balancing in a computer network
- Gradient-like optimization algorithms
- The algorithm and its convergence
- The role of the various parameters
- Block-iterative algorithms
- Gradient projection algorithms
- Distributed asynchronous routing in data networks
- Problem definition
- The algorithm and its convergence
- Discussion
- A model in which several processors may update the same variables
- Stochastic gradient algorithms
- Description of the algorithm and assumptions
- A convergence result
- Discussion and extensions
Organizing an Asynchronous Network of Processors for Distributed Computation
- Detecting termination of a distributed algorithm
- Snapshots
- Resource scheduling
- Synchronization using rollback: asynchronous simulation
- Maintaining communication with a center
APPENDIXES
- Appendix A: Linear algebra and analysis
- Appendix B: Graph theory
- Appendix C: Optimization and duality theory
- Appendix D: Probability theory and Markov chains