

Prioritizing Projects in the Department of Facilities'
Infrastructure Renewal Program

Joe Gifun and Caleb Cochran

Angie: What do you wanna do tonight?

Marty Pilletti: I dunno, Angie. What do you wanna do?

This exchange is repeated many times throughout the 1955 film, Marty. Marty, a shy butcher from the Bronx, finally takes responsibility for his own happiness and sets out to woo a woman he met at a dance. Deciding with whom to spend the rest of your life, as Marty was, is certainly an important and complicated decision; yet look how difficult it was for Ernest Borgnine (Marty) and Joe Mantell (Angie) to make even a simple one. When a decision is very important, when the opinions of many people must be considered, and when the stakes are very high, the level of difficulty grows dramatically. Hence the need to simplify the decision-making process.

In the Department of Facilities' Infrastructure Renewal Program, we are beginning to use a methodology based on multi-attribute utility theory, developed by Professor George Apostolakis and MIT graduate student Rick Weil, to prioritize projects intended to renew the existing campus buildings. According to the report entitled Infrastructure Renewal at MIT: Planning, Persistence and Improved Communication, published in February 2001, infrastructure renewal is defined as "a process of systematically evaluating and investing in maintenance of facility systems and basic structure." That is, a process to make our existing campus buildings and their systems whole and to keep them from deteriorating again.

The first question one may ask is: "Why would the Infrastructure Renewal Program need a methodology to make decisions about the selection and prioritization of projects?" The answer is that there is much work to do on the existing infrastructure, and it is neither practical nor affordable to do it all now. Since 1957, the floor area of MIT's campus buildings has tripled, and these buildings have all reached maturity. Their systems are no longer new and in many cases need to be replaced or substantially updated. From an independent audit of building structures and systems completed several years ago, we learned that our backlog of deficiencies is projected to reach $690 million over the next 7 to 10 years. Facilities therefore needs to make decisions that get the best use out of every dollar invested in our infrastructure. Because mistakes will be expensive, we need to focus on the most important projects first: we must determine which projects get funded now and which are set aside for consideration at a later date. Even if we had all the money, people, and resources necessary, we would still need to determine the order in which projects are done, as they cannot all be done at the same time.

We learned from Professor Apostolakis, Mr. Weil, and Dr. Dimitrios Karydas that the best way to determine the importance of a project is to look at it through the filter of risk: what could happen if you do not do the project? Risk-informed decisions help you focus on a project's possible impact, good or bad, by setting aside cost, emotion, and politics. Each project is ranked according to its individually derived measure of impact or importance. Cost, emotion, and politics are certainly considered, and they may prompt subsequent adjustments to the order, but only after a project's initial ranking is determined. This way, you know the risk you accept when you bypass a project. To get to this point, you, or the team, must establish the criteria by which decisions will be made. In Facilities, the Infrastructure Renewal Core Team is responsible for prioritizing projects. The team is made up of Facilities' decision makers with backgrounds in engineering, architecture, utilities, building operations, space planning, and finance.

Mr. Weil and Professor Apostolakis's methodology was adapted to fit Facilities' needs by Dr. Karydas of FM Global (MIT's insurance provider) and members of Facilities' leadership. Dr. Karydas, whose expertise FM Global graciously provides, has extensive experience in risk-informed decision making. Facilities proceeded as follows:

1. Identify and define objectives

2. Identify and define performance measures

3. Weigh objectives and performance measures

4. Create and assess utility functions of performance measures

5. Perform consistency checks

6. Validate results through benchmarking

Although it is not specifically enumerated above, deliberation among decision makers is an important aspect of the methodology and must take place throughout the process. Decision makers must create agreed-upon, easy-to-understand objectives, performance measures, and utility functions, and make many pairwise comparisons. A pairwise comparison is nothing more than a decision maker's preference for one criterion over another, and by how much. For example, Facilities' Core Team slightly favored, by 6:4, minimizing the impact on people over minimizing the impact on the environment. To see how we deal with differences of opinion, suppose that all but one decision maker moderately favor one criterion over the other, while the lone dissenter is diametrically opposed to the extreme degree. One might say that the geometric mean of the decision makers' individual votes would provide a good answer. If all votes are close, then yes, the geometric mean is appropriate: it is very hard to resolve the difference between adjacent rankings, and confusing the issue with false accuracy is not helpful. But the real value of deliberation is in discovering the reason for the extremes. In our experience with Facilities decision makers, an extreme vote was not cast because someone misunderstood the question; it was cast because that person had information or a perspective the others did not. This aligns directly with the research of Mr. Weil and Professor Apostolakis. For example, one team member may be more sensitive to regulatory issues because he or she frequently interacts with regulators. The key is to learn about that team member's position and come to an agreement.
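The geometric-mean aggregation described above can be sketched in a few lines. The votes below are hypothetical, chosen only to illustrate the scenario of four moderate judgments and one extreme dissent; the Core Team's actual votes were not published.

```python
# Sketch: aggregating decision makers' pairwise judgments with a
# geometric mean, as is common practice in AHP-style weighting.

def geometric_mean(votes):
    """Geometric mean of a list of positive pairwise-comparison votes."""
    product = 1.0
    for v in votes:
        product *= v
    return product ** (1.0 / len(votes))

# Hypothetical votes: four members favor "impact on people" by 6:4
# (ratio 1.5); one dissenter extremely favors the other side (ratio 1/9).
votes = [1.5, 1.5, 1.5, 1.5, 1 / 9]
print(round(geometric_mean(votes), 3))
```

Note how a single extreme vote pulls the aggregate below 1.0, i.e., it flips the group's apparent preference. That is exactly why the article recommends deliberating about the extremes rather than averaging them away.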


Our objectives and performance measures look like this:

I. Impact on Health, Safety, and the Environment (Weight: 0.491)

A. Minimize the Impact on People (Weight: 0.600)

B. Minimize the Impact on the Environment (Weight: 0.400)

II. Economic Impact of the Project (Weight: 0.231)

C. Impact on Property and Academic and Institute Operations (Weight: 0.600)

a. Physical Property Damage (Weight: 0.210)

b. Intellectual Property Damage (Weight: 0.550)

c. Interruption of Academic Activities and Institute Operations (Weight: 0.240)

(a). Duration of Interruption (Weight: 0.333)

(b). Cost of Contingencies (Weight: 0.333)

(c). Complexity of Contingency Arrangements (Weight: 0.333)

D. Loss of Cost Savings (Weight: 0.400)

III. Coordination with Policies, Programs, and Operations (Weight: 0.276)

E. Impact on Institute Image (Weight: 0.500)

a. Internal Public Image (Weight: 0.400)

b. External Public Image (Weight: 0.600)

F. Programs Affected by the Project (Weight: 0.500)
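In a hierarchy like the one above, the global weight of each bottom-level performance measure is the product of the local weights along its path, as is standard in hierarchical weighting schemes. The sketch below computes those products from the weights in the outline; the dictionary structure and rounding are ours.

```python
# Sketch: global weight of each leaf performance measure = product of
# the local weights on its path down the objectives hierarchy.
hierarchy = {
    "Minimize Impact on People":              [0.491, 0.600],
    "Minimize Impact on Environment":         [0.491, 0.400],
    "Physical Property Damage":               [0.231, 0.600, 0.210],
    "Intellectual Property Damage":           [0.231, 0.600, 0.550],
    "Duration of Interruption":               [0.231, 0.600, 0.240, 0.333],
    "Cost of Contingencies":                  [0.231, 0.600, 0.240, 0.333],
    "Complexity of Contingency Arrangements": [0.231, 0.600, 0.240, 0.333],
    "Loss of Cost Savings":                   [0.231, 0.400],
    "Internal Public Image":                  [0.276, 0.500, 0.400],
    "External Public Image":                  [0.276, 0.500, 0.600],
    "Programs Affected by the Project":       [0.276, 0.500],
}

def global_weight(path):
    """Multiply the local weights along one path through the hierarchy."""
    w = 1.0
    for factor in path:
        w *= factor
    return w

for name, path in hierarchy.items():
    print(f"{name}: {global_weight(path):.4f}")
```

For example, Physical Property Damage carries a global weight of 0.231 × 0.600 × 0.210 ≈ 0.029; the global weights of all leaves sum to about 1 (up to the rounding in the published local weights).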


The Infrastructure Renewal Core Team agreed that the three highest-level criteria would be Impact on Health, Safety, and the Environment; Economic Impact of the Project; and Coordination with Policies, Programs, and Operations. Weighting was agreed upon through a series of pairwise comparisons and deliberations among team members. The Analytic Hierarchy Process (AHP), developed by Professor Thomas L. Saaty, formerly of the Wharton School at the University of Pennsylvania, is the mathematical tool that resolves the pairwise comparisons. Using a scale of 1 to 9, where 1 represents equal preference between two criteria and 9 represents extreme favoring of one over the other, Core Team members compared Impact on Health, Safety, and the Environment against Economic Impact of the Project; Impact on Health, Safety, and the Environment against Coordination with Policies, Programs, and Operations; and Economic Impact of the Project against Coordination with Policies, Programs, and Operations. These judgments are assembled into a pairwise-comparison matrix.

Through AHP, the matrix is resolved to its principal eigenvector, which yields the relative weights of the objectives: Impact on Health, Safety, and the Environment, 0.491; Economic Impact of the Project, 0.231; and Coordination with Policies, Programs, and Operations, 0.276. AHP also helps decision makers evaluate the consistency of their judgments, i.e., to be certain that if A>B and B>C, then A>C; a minor disruption to a program must not outweigh a potential expense of millions of dollars. Core Team members deliberated once again to be certain that the relative position of each objective made sense. If the judgments were found to be inconsistent, or if a decision maker was uneasy with an objective's relative position, the pairwise comparisons and deliberation were repeated. This continues until the relative weights of all performance measures have been determined.
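The eigenvector step can be sketched with simple power iteration. The Core Team's actual comparison matrix was not published, so for illustration we build a perfectly consistent matrix from the reported weights (0.491, 0.231, 0.276) and show that the eigenvector recovers them.

```python
# Sketch: resolving an AHP pairwise-comparison matrix to its principal
# eigenvector by power iteration (pure Python, no libraries).

def principal_eigenvector(matrix, iterations=100):
    """Normalized principal eigenvector of a positive square matrix."""
    n = len(matrix)
    v = [1.0 / n] * n
    for _ in range(iterations):
        # Multiply the matrix by the current vector...
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        # ...then renormalize so the entries sum to 1.
        total = sum(w)
        v = [x / total for x in w]
    return v

# Illustrative matrix: entry (i, j) is the judged ratio w_i / w_j.
# Built from the published weights, so it is perfectly consistent.
weights = [0.491, 0.231, 0.276]
matrix = [[wi / wj for wj in weights] for wi in weights]
print([round(x, 3) for x in principal_eigenvector(matrix)])
```

With real human judgments the matrix is rarely perfectly consistent, which is why AHP also computes a consistency measure (based on how far the largest eigenvalue exceeds the matrix dimension) to flag A>B, B>C, C>A situations.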

Creating and assessing utility functions from constructed scales for each performance measure is our next step. The more specific and measurable each level is, the easier it will be to use. Once done, these scales become the entry points to the entire prioritization process. For example, the constructed scale for property damage might be:


Catastrophic property damage (more than $10M)

Major property damage ($5M to $10M)

Moderate property damage ($1M to $5M)

Minor property damage (less than $1M)

No property damage


Just as with the objectives and performance measures, pairwise comparisons are made among the elements of the constructed scale, and relative weights are then calculated.
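The entry point for a decision maker is simply placing a project's expected consequence on the constructed scale. A minimal sketch, using the dollar thresholds from the example scale above (treating the boundaries as inclusive of the lower bound, which is our assumption):

```python
# Sketch: mapping an estimated dollar loss onto the constructed scale
# for property damage given in the article.

def damage_level(dollars):
    """Return the constructed-scale level for an estimated loss."""
    if dollars > 10_000_000:
        return "Catastrophic property damage"
    if dollars >= 5_000_000:
        return "Major property damage"
    if dollars >= 1_000_000:
        return "Moderate property damage"
    if dollars > 0:
        return "Minor property damage"
    return "No property damage"

print(damage_level(7_500_000))  # Major property damage
```

Each level then carries the relative weight (utility) obtained from the pairwise comparisons among the levels, so a project whose non-performance risks a $7.5M loss enters the model with the "Major" level's utility.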

When all of the above is completed for all performance measures, we will validate our work through benchmarking. Following Rick Weil and Professor Apostolakis's research, we pass several known projects through the process and make sure the results obtained through the methodology match our experience. This year the Infrastructure Renewal Program created its FY2002 project list using an earlier iteration of this process and found that the results matched the list we produced from internal deliberations alone.

To use this process, Infrastructure Renewal Core Team members ask, for each performance measure: "What would happen if we do not do the project under consideration?" Their answers are entered into the application, which produces a numerical representation of the project's importance. Once this is done for every project under consideration, a list of projects in order of importance is created. Unless other compelling reasons are present, the projects with the highest numbers are done first.
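Under multi-attribute utility theory, the importance number described above is a weighted sum: each performance measure's global weight times the utility of the project's answer for that measure. The project names, weights, and utilities below are entirely hypothetical, chosen only to show the mechanics of scoring and ranking.

```python
# Sketch: composite importance score and ranking of candidate projects.

def importance(global_weights, utilities):
    """Weighted-sum score over the leaf performance measures."""
    return sum(w * u for w, u in zip(global_weights, utilities))

# Illustrative subset of global weights (people, environment,
# physical property, intellectual property, cost savings).
weights = [0.2946, 0.1964, 0.0291, 0.0762, 0.0924]

# Hypothetical projects with a utility in [0, 1] per measure,
# answering "what happens if we do NOT do this project?"
projects = {
    "Roof replacement, Building A":   [0.9, 0.2, 0.6, 0.1, 0.7],
    "Electrical upgrade, Building B": [0.4, 0.1, 0.8, 0.9, 0.5],
}

ranked = sorted(projects,
                key=lambda p: importance(weights, projects[p]),
                reverse=True)
print(ranked)
```

Because the people measure dominates the weights, a project with a high impact on people tends to outrank one whose impact is mostly economic, which is consistent with the Core Team's stated priorities.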