Numerical Tools for Large-Scale Kinetic Models
Paul I. Barton and William H. Green, Massachusetts Institute of Technology



Objective

To develop a software package for the numerical analysis of large-scale chemical kinetic models, incorporating recent advances in methods for solving large, sparse, stiff systems of ordinary differential and algebraic equations and computing parametric sensitivities for such models. These back-end numerical tools will be flexible, easy to use, and compatible with popular existing front-end formats for reaction schemes and property databases (e.g., CHEMKIN). To develop physically sensible and robust numerical methods for automatic model reduction appropriate to any specified range of reaction conditions, with desired precision in specified observables.



Background

The formation and transformation of airborne organics invariably involves rather complex multi-component kinetics. We are now reaching the point where these processes can be modeled in some detail. For the larger systems (N_species > 10^3) required for chemically complex situations such as fuel-rich combustion, users are often disappointed with the performance of existing numerical tools. As we understand the formation of airborne organics in more detail, and as we develop the experimental results, parameter databases, and computer tools for constructing many-component kinetic models, the need for a new generation of numerical tools for simulation, sensitivity analysis, and model reduction is becoming pressing.


Method of Approach

The project is divided into two major tasks. Task A is the development of a software package (DAEPACK) for the numerical analysis of large-scale kinetic models. Task B is the development of physically sensible and robust numerical methods for automatic model reduction appropriate to any specified range of reaction conditions with desired precision in specified observables.

As requested by the SAC reviewers, Task A is paying particular attention to the issues of compatibility with existing CHEMKIN models and the eventual need for a standard to replace CHEMKIN. In fact, the tools developed will be completely compatible with any reaction scheme generator and property database similar to CHEMKIN. DAEPACK employs automatic differentiation technology to automatically analyze legacy FORTRAN codes, such as the CHEMKIN subroutine library. This analysis extracts all the structural, symbolic, and numerical information required by modern numerical algorithms for solving large, sparse, stiff systems of ordinary differential and algebraic equations and computing parametric sensitivities. Another eventual benefit of DAEPACK will be an increase in the flexibility of problem formulation. This can be understood via a simple illustration. Rather than viewing a chemical kinetics model as a fully determined (square) system of equations, the DAEPACK analysis can extract all the variables incident in the equations, typically yielding an underdetermined (rectangular) system of equations. DAEPACK will then be able to interrogate the user interactively about which variables he or she wishes to specify in the problem formulation, leaving the remaining variables to be determined by the calculation in question.
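To make the rectangular-system idea concrete, here is a minimal sketch — not the DAEPACK interface — of detecting variable incidence in black-box residual functions and counting degrees of freedom. The two-equation system and the `Tracker` helper are invented for illustration:

```python
def incidence(equations, var_names):
    """Detect which variables each equation references by probing it with a
    dictionary that records look-ups (a crude stand-in for DAEPACK's
    structural analysis of FORTRAN source)."""
    class Tracker(dict):
        def __init__(self, names):
            super().__init__({n: 1.0 for n in names})
            self.touched = set()
        def __getitem__(self, key):
            self.touched.add(key)
            return dict.__getitem__(self, key)
    incid = []
    for eq in equations:
        probe = Tracker(var_names)
        eq(probe)                       # evaluate once just to record accesses
        incid.append(sorted(probe.touched))
    return incid

# Two equations in three unknowns: an underdetermined (rectangular) system.
eqs = [
    lambda v: v["x"] + v["y"] - v["z"],   # x + y - z = 0
    lambda v: v["x"] * v["y"] - 2.0,      # x * y = 2
]
names = ["x", "y", "z"]
print(incidence(eqs, names))    # [['x', 'y', 'z'], ['x', 'y']]
print(len(names) - len(eqs))    # 1 degree of freedom
```

With the incidence information in hand, a tool can ask the user which of the surplus variables to fix, and then solve the remaining square system.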

The motivation for Task B is that it is often desirable to reduce the size of a large-scale kinetic model by removing negligible reactions and species. Removing unimportant reactions not only speeds the computation, it can also make the problem sparser and improve the conditioning of the corrector matrix, allowing larger time steps. Reducing the number of species even more dramatically reduces CPU time and memory requirements, and is particularly important for dynamic reacting flow simulations, where the kinetic equations must be solved for every species at each point on the spatial grid. Reducing the number of species also improves the chances that a user will be able to understand and correctly interpret the model predictions. We are focusing on model reductions achieved by simply deleting reactions and species that are negligible under specified reaction conditions. This type of reduction leaves all of the model parameters at physical values, which can be checked independently. The model reduction problem is formulated as a mixed-integer dynamic optimization (MIDO) problem in which binary (0-1) decision variables are introduced to represent the deletion of a reaction, the deletion of a species, the selection of a species to be at pseudo steady-state, and/or the selection of a reaction to be at equilibrium. The objective may be to minimize some norm measuring the error introduced by model reduction, and/or to minimize the number of species or reactions in the reduced model. We are developing deterministic global optimization algorithms for such problems.
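The structure of the point formulation can be illustrated with a deliberately tiny, invented mechanism (A -> B, B -> C, and a negligible A -> C channel), replacing the MIDO solver with brute-force enumeration of the binary reaction-inclusion vector. All numbers below (rate constants, tolerance, step size) are made up for illustration:

```python
from itertools import product

# Toy mechanism: A -> B (k1), B -> C (k2), A -> C (k3, deliberately negligible)
K = [1.0, 0.5, 1e-4]

def simulate(include, y0=(1.0, 0.0, 0.0), dt=0.01, steps=500):
    """Explicit-Euler integration with binary switches multiplying each rate."""
    a, b, c = y0
    for _ in range(steps):
        r1 = include[0] * K[0] * a
        r2 = include[1] * K[1] * b
        r3 = include[2] * K[2] * a
        a += dt * (-r1 - r3)
        b += dt * (r1 - r2)
        c += dt * (r2 + r3)
    return a, b, c

full = simulate((1, 1, 1))
best = None
for inc in product((0, 1), repeat=3):
    err = max(abs(u - v) for u, v in zip(simulate(inc), full))
    if err <= 1e-3:                      # hard accuracy constraint
        if best is None or sum(inc) < sum(best):
            best = inc                   # prioritize model size over accuracy
print(best)  # (1, 1, 0): the negligible A -> C step is deleted
```

In the real formulation the enumeration is replaced by branch-and-bound over the binaries, but the hard accuracy constraint and the model-size objective play exactly the roles shown here.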


Summary of Progress and Accomplishments

Task A: Software Package for the Numerical Analysis of Large-Scale Kinetic Models

During the past six months we have completed the first version of DAEPACK (version 1.0), and the software is now available for academic and commercial licensing. This first version of DAEPACK has the following features:

We urge the reader to visit the DAEPACK Web site, which contains a description of DAEPACK, numerous examples, and the opportunity to download user manuals and papers related to DAEPACK.

We have also applied DAEPACK to numerous examples, including process simulation models (e.g., the Tennessee Eastman process control challenge problem, a membrane bioreactor, etc.), a number of models from the MINPACK library, and several CHEMKIN examples. The CHEMKIN examples are of particular interest because the CHEMKIN subroutine library remains unchanged from problem to problem; all that changes are the input parameters for the subroutines, which are set by running the CHEMKIN interpreter. DAEPACK is designed so that it only needs to analyze a given code once; the new subroutines generated will work for all values of the input parameters to the original code. So, in essence, the analysis of CHEMKIN by DAEPACK is now complete.

However, the real power of DAEPACK comes from enabling a legacy system such as CHEMKIN to be interfaced to new numerical algorithms. As a test example, we have taken the CONP example (a zero-dimensional flame simulation) supplied with CHEMKIN and modified it so that the numerical solver uses analytical partial derivatives generated by the (one-time) DAEPACK analysis of CHEMKIN. We then simulated three different mechanisms (a methane mechanism supplied with CHEMKIN, GRI-Mech 3.0, and a mechanism by Wang and Frenklach for acetylene and ethylene flames). These three simulations were performed using analytical partial derivatives merely by running the CHEMKIN interpreter to generate the required input file.
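The analytical derivatives mentioned above come from automatic differentiation. A minimal forward-mode sketch using dual numbers — not the DAEPACK implementation, and with an invented two-species rate law — shows the principle:

```python
class Dual:
    """Dual number: carries a value and one directional derivative."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def _coerce(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._coerce(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._coerce(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def rates(y, k=(2.0, 1.0)):
    # Invented two-species mass-action system, purely for illustration.
    a, b = y
    r = k[0] * a * b
    return [-1 * r, -1 * r + k[1] * a]

def jacobian(f, y):
    """Column j of J is obtained by seeding dy_j = 1 and reading derivatives."""
    n = len(y)
    cols = []
    for j in range(n):
        seeded = [Dual(v, 1.0 if i == j else 0.0) for i, v in enumerate(y)]
        cols.append([fi.der for fi in f(seeded)])
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # transpose

print(jacobian(rates, [1.0, 3.0]))  # [[-6.0, -2.0], [-5.0, -2.0]]
```

Derivatives computed this way are exact to machine precision, in contrast to finite differences, which is what makes them attractive inside a stiff corrector iteration.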

We have also developed a new numerical subroutine (DSL48E) that will perform simulation and event location for discontinuous models (based on the algorithm of Park and Barton (1996)), using the discontinuity-locked code automatically generated by DAEPACK. DSL48E calls our existing sparse, stiff DAE and parametric sensitivity solver (DSL48S) for the numerical integration.
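The idea behind discontinuity locking and event location can be sketched as follows: integrate with the current model branch "locked", detect a sign change of the event function across a step, bisect to locate the crossing, and only then switch branches. The switched model, step size, and tolerance below are invented; the actual algorithm of Park and Barton (1996) does this within an implicit DAE integrator with interpolation-based root finding.

```python
def f(y, mode):
    # Hypothetical switched model: mode 0 = source on, mode 1 = source off.
    return 1.0 - 0.1 * y if mode == 0 else -0.1 * y

def step(y, mode, dt):
    return y + dt * f(y, mode)   # locked: mode never changes mid-step

def locate_event(y0, t0, mode, dt, g, tol=1e-10):
    """Bisect within a step whose endpoints bracket g = 0."""
    lo, hi = 0.0, dt
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(step(y0, mode, mid)) < 0.0:
            lo = mid
        else:
            hi = mid
    tau = 0.5 * (lo + hi)
    return t0 + tau, step(y0, mode, tau)

t, y, mode, dt = 0.0, 0.0, 0, 0.01
g = lambda y: y - 1.0            # event: y crosses 1, source switches off
while t < 3.0:
    y_new = step(y, mode, dt)
    if mode == 0 and g(y) < 0.0 <= g(y_new):
        t, y = locate_event(y, t, mode, dt, g)
        mode = 1                 # switch branch exactly at the event
    else:
        t, y = t + dt, y_new
print(mode, y)                   # source has switched off; y is decaying
```

Locking the branch during each step keeps the integrator's error estimates valid; the event is then located to tolerance before the model is switched.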

Task B: Model Reduction

During the past six months we have explored a number of mixed-integer optimization formulations for the model reduction problem. The most useful point formulation (i.e., one in which model reduction is performed for one specific initial condition) appears to be one originally proposed in 1998 by Edwards, Edgar and Manousiouthakis. This formulation has the feature of being able to prioritize the objective of model size reduction over model accuracy, whilst still enforcing hard constraints on the accuracy of the reduced model. These authors explored the application of genetic algorithms to the solution of the resulting nonconvex integer nonlinear program (INLP) with a dynamic system embedded. We have also explored the application of genetic algorithms to a variety of literature problems. All of these studies indicate that genetic algorithms can solve smaller problems quite effectively, but the global solution is most often not found, the algorithms have difficulty with constraints, and a lot of heuristic tinkering with the genetic algorithm is required for each individual problem solved.

As stated in the original proposal, our ultimate goal is to solve a min-max formulation that will generate the optimal reduced model for a region of state space, rather than the point formulations discussed above. However, in order to address the min-max problem it is first necessary to be able to solve the point formulation to guaranteed global optimality. This calls for a deterministic global optimization algorithm backed by rigorous proofs that the global optimum is found. Stochastic search approaches such as genetic algorithms cannot provide such guarantees. We have therefore been attempting to develop deterministic global optimization algorithms for nonconvex INLPs with dynamic systems embedded. It is important to note the background for this field. Deterministic global optimization algorithms for INLPs have only appeared in the literature beginning in 1997, and all of these approaches require explicit knowledge of the functions that make up the objective and constraints of the optimization problem. This information is not known for the model reduction problem because the functions in question are determined by the solution of a (in general) nonlinear dynamic system.

However, in the past six months we have discovered that it is possible to proceed via analogy with the existing INLP algorithms. This is a major breakthrough because, until our recent results, the research community has been at a loss as to how to address deterministic global optimization of problems involving dynamic systems in a rigorous fashion. Existing INLP algorithms for algebraic problems rely on the notion of constructing convex underestimators for the nonconvex functions participating in the formulation. These underestimators are constructed via algebraic manipulation of the participating functions; hence it is necessary to know their functional form explicitly. We have applied some elementary results from convexity analysis in the calculus of variations to demonstrate for the first time how convex underestimators may also be constructed for optimization problems with dynamic systems embedded. These results rely only on knowledge of the functional form of the integrand of the objective functional and the differential equations. Although limited at the moment, these results indicate that we may exploit them in an analogous manner to develop deterministic global optimization algorithms for INLPs with dynamic systems embedded. At this point we have numerically solved some small problems that involve a quite general nonconvex, nonlinear objective function subject to a linear dynamic system, and have found the global solution. We have also demonstrated that standard methods for convex INLPs, when applied to these problems, generate arbitrarily suboptimal solutions (a fact that was known theoretically, but now that we have the global solution we can quantify the suboptimality of other approaches).
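For orientation (this is standard background, not a result of this project): for algebraic problems, the well-known alphaBB scheme of Androulakis, Maranas and Floudas builds a convex underestimator of a twice-differentiable nonconvex f over a box [x^L, x^U] by adding a separable quadratic perturbation:

```latex
\mathcal{L}(x) = f(x) + \alpha \sum_{i} \bigl(x_i^L - x_i\bigr)\bigl(x_i^U - x_i\bigr),
\qquad
\alpha \ge \max\Bigl\{0,\; -\tfrac{1}{2} \min_{x \in [x^L, x^U]} \lambda_{\min}\bigl(\nabla^2 f(x)\bigr)\Bigr\}
```

The added term is nonpositive on the box and convexifies f for sufficiently large alpha; but computing a valid alpha requires the functional form of f, which is exactly what is unavailable when f is defined implicitly through an embedded dynamic system.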


Interactions with other CAO Projects

During the past six months we have had quite intense interactions with Prof. Howard's project on PAH and soot formation kinetics. The lessons learnt from the application of our numerical tools to Prof. Howard's PAH and soot formation problems have prompted a recently funded CAO proposal to conduct a numerical study of PAH and soot formation in premixed flames with axial diffusion. Prof. Green's group is also beginning to use the sparsity pattern generation and sensitivity analysis capabilities of DAEPACK for his other CAO-funded project. We have also had a research contract on related topics funded by ABB Alstom Power, so progress on this CAO project is benefiting from the leverage of these additional resources.


Status of Original Plan

We have substantially exceeded all our original goals for the first six months. In particular, DAEPACK is now a working system ready for deployment with all the promised features (except for uncertainty analysis). However, we have also learnt that completion of the model reduction task as originally envisioned will require a sustained long-term effort. On the other hand, it now appears that the deterministic global optimization approach is feasible, and that significant progress can be made within the time frame of this CAO project.


Future Plans

In the very near future we will demonstrate the interfacing of CHEMKIN to DSL48E and DSL48S via DAEPACK. This will enable us to perform efficient simulation and sensitivity analysis of arbitrary, large-scale kinetic models described by CHEMKIN, whilst exploiting sparsity and analytical derivatives in the numerical calculations, and with proper discontinuity handling. Obviously, many improvements and refinements of this first implementation are necessary and we will be actively making these in the coming months. In particular, it is necessary to add a variety of different options for derivative computation, because the optimal choice is highly dependent on the features of the problem in question. We are also actively seeking as many new codes as possible to test DAEPACK on.

One advantage of the formulation of Edwards, Edgar and Manousiouthakis (1998) is that it confines all nonlinearity in the formulation to the integrand of the objective functional and a set of disjunctive Lagrangian inequalities (inequality constraints that must hold over the entire time horizon of interest). If the model reduction problem is isothermal and the mechanism only involves first order and pseudo first order reactions, these become linear Lagrangian inequalities. Thus, our immediate goal over the next six months will be to figure out how to deal with linear and then nonlinear Lagrangian inequalities within our global optimization algorithm. Linear Lagrangian inequalities yield a convex variational problem, but we need to resolve the complications in solving the resulting Euler-Lagrange equations to determine a lower bound on the global solution. We hope to be able to solve isothermal first order model reduction problems to guaranteed global optimality within this period.
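The lower-bounding step referred to above rests on a standard calculus-of-variations fact (background, not a project result): for a variational problem of the form below whose integrand is jointly convex in the state and its derivative, any trajectory satisfying the Euler-Lagrange equation is a global minimizer, so solving that equation for a convex underestimating problem yields a valid lower bound on the nonconvex original:

```latex
\min_{x(\cdot)} \int_{t_0}^{t_f} \ell\bigl(t, x(t), \dot{x}(t)\bigr)\,dt
\quad\Longrightarrow\quad
\frac{d}{dt}\,\frac{\partial \ell}{\partial \dot{x}} - \frac{\partial \ell}{\partial x} = 0 .
```

The complication noted in the text is that the Lagrangian inequality constraints must be incorporated into this boundary-value problem before the lower bound can be computed.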


Handling of QA/QC

All software has been designed according to modern software engineering principles, particularly software encapsulation, software reusability, component-based design, and the use of automatically generated code wherever possible (e.g., compiler generators (yacc, lex), GUI development tools (Visual Basic), etc.). These principles improve the quality of software by minimizing the potential for unforeseen interactions between program units and eliminating the need for human-written code for many routine tasks. The code is extensively documented. Our ongoing program of extensive testing and many case studies aims to eliminate bugs and deficiencies in the software. A major advantage of the deterministic global optimization approach to model reduction is that the proofs required to establish the properties of the algorithms guarantee that the solution found is the global optimum.


Key Personnel

Graduate Student:

Binita Bhattacharjee

Post-doctoral Fellow:

John E. Tolsma
