

How We Helped Hewlett Packard Make Millions

The following is an excerpt from Mitchell Burman's Ph.D. thesis:

1.1 Motivation

Several years ago, the process engineering department at the Vancouver Division of Hewlett Packard was planning to build a large new automated manufacturing system. They had already committed to a flow line design with the added complication of multiple subassembly input flow lines (SA) feeding it at various points through buffers (B), as shown in Figure 1-2. The group built financial models to assess the viability of the investment, with an estimate of the system throughput rate as a key input. Given the aggressive project schedule, third-party simulation would have delivered a throughput estimate too late to influence the design, so its benefits for this phase of the program were greatly diminished. To keep the project moving forward, HP used analytic modeling to analyze and modify the line architecture and predict the resulting effects on throughput.

Figure 1-2: Flow Line with Subassemblies

Months later, it was decided to develop a real-time scheduling procedure for the system that would meet the projected throughput requirements without an excessive amount of work-in-process (WIP). This would limit system flow time (the time from entering M1 to leaving Mk; see Figure 1-2) and the costs associated with WIP. The small buffers made it easy to estimate system throughput using the zero-buffer model of Buzacott (1968). However, the estimated output of the unmodified line was significantly lower than HP's original estimate. It was at this point that HP's attention shifted toward reengineering the production process.
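The zero-buffer model mentioned above has a simple closed form: for a synchronous transfer line with no internal storage, where machine i fails with probability p_i per cycle while operating and is repaired with probability r_i per cycle while down, the line produces a part in a fraction 1/(1 + sum of p_i/r_i) of its cycles. The sketch below illustrates the formula; the machine parameters are invented for illustration and are not HP's data.

```python
def zero_buffer_efficiency(failure_probs, repair_probs):
    """Buzacott (1968) zero-buffer estimate for a synchronous line.

    failure_probs[i]: probability machine i fails during a cycle (p_i)
    repair_probs[i]:  probability a failed machine i is repaired in a cycle (r_i)
    Returns the fraction of cycles in which the line produces a part.
    """
    return 1.0 / (1.0 + sum(p / r for p, r in zip(failure_probs, repair_probs)))

# Illustrative parameters (not HP's actual data): five machines, each with
# mean time to failure 1/p_i cycles and mean time to repair 1/r_i cycles.
p = [0.01, 0.02, 0.01, 0.015, 0.01]
r = [0.10, 0.10, 0.20, 0.10, 0.10]

eff = zero_buffer_efficiency(p, r)
print(f"line efficiency: {eff:.3f}")
# If the line runs at a nominal 60 parts/hour when every machine is up,
# the zero-buffer throughput estimate is simply eff * 60.
```

Note that every machine's failures count against the whole line here, since with no storage a single down machine stops everything; this is why the unmodified line's estimate came in so low.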

Since the system was already in fabrication, individual machines in the production system could not be changed without significantly disrupting the product development cycle, so there was no easy way to change the isolated efficiencies of the system's components. The best way to improve throughput was to selectively install limited buffers at strategic points, dampening the effects of machine failures by limiting their propagation through the rest of the production line. These buffers were themselves machines (conveyors, accumulators, and other material handling equipment) and were therefore also subject to failures. No modeling technique in the literature at the time accounted for failing buffers. A second, much more costly option for improving expected throughput was to increase the capacity of the slowest stages by installing additional parallel machines: when a machine fails, its processing responsibility is immediately covered by another machine, preventing a disruption to the whole line. In either case, the problem remained to estimate the impact on throughput of installing new in-line pallet accumulators or other extra machinery.
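The appeal of the parallel-machine option can be seen with a back-of-the-envelope availability calculation. This deliberately ignores blocking and starvation interactions between stages (which is precisely what decomposition methods exist to capture), so it is only a sketch with invented numbers, not the thesis's analysis:

```python
def stage_uptime(machine_availability, parallel_count):
    """Probability that at least one of `parallel_count` identical,
    independent machines at a stage is up at a given moment."""
    return 1.0 - (1.0 - machine_availability) ** parallel_count

# Hypothetical stage whose machines are each up 90% of the time.
single  = stage_uptime(0.90, 1)  # one machine: stage up 90% of the time
doubled = stage_uptime(0.90, 2)  # two in parallel: up 1 - 0.1**2 = 99%
```

Doubling a machine turns a 10% stage outage into a 1% outage, which is why adding parallel capacity at the slowest stages is so effective, and also why it is so much more expensive than adding buffers.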

Due to the magnitude and sensitivity of the business investments, HP required a more rigorous approach to system design. Simulation's lead time, complexity, sensitivity to inputs, and difficulty of iteration greatly diminished its usefulness. Therefore, we decided to use some of the quicker analytic models found in the flow line literature (Dallery and Gershwin, 1992). We chose the decomposition method introduced in Gershwin (1987a), the decomposition equations developed in Gershwin (1989), and the DDX decomposition algorithm (Dallery, David and Xie, 1988). The assembly aspect of the system was handled less formally, by speeding up the subassembly cells to a level where they virtually never starved the main line of material. The failing buffers were accounted for by multiplying the decomposition estimates by a scaling factor. This overall approach provided a means of comparing the relative improvements associated with different buffer configurations on the production line. Multiple experiments were conducted, and within one week recommendations were in place that promised to approximately double the expected system throughput, with minimal impact on flow time and WIP and a relatively low price tag. The benefit of this work is on the order of millions of dollars per month.
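The excerpt does not spell out how the scaling factor for failing buffers was computed. One plausible reading, offered here purely as an illustration and not as the thesis's actual method, is to treat each failing material-handling element as an independent unreliable component with steady-state availability r/(r + p) and scale the decomposition estimate by the product of those availabilities. All numbers below are hypothetical:

```python
def availability(p, r):
    """Steady-state availability of an unreliable component with
    failure rate p and repair rate r: the fraction of time it is up."""
    return r / (r + p)

def scaled_throughput(decomposition_estimate, buffer_failures):
    """Scale a decomposition throughput estimate for failing buffers.

    buffer_failures: list of (p, r) pairs, one per material-handling
    element treated as an unreliable component. Assumes failures are
    independent, so the availabilities simply multiply.
    """
    scale = 1.0
    for p, r in buffer_failures:
        scale *= availability(p, r)
    return decomposition_estimate * scale

# Hypothetical numbers: the decomposition predicts 50 parts/hour, and
# two in-line accumulators are each up 95% of the time.
est = scaled_throughput(50.0, [(0.005, 0.095), (0.005, 0.095)])
```

Whatever the exact factor used, the point is that a cheap multiplicative correction let the team reuse the fast decomposition machinery instead of waiting for a simulation study.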

The HP story highlights typical problems faced by process engineers. Our goal is to make this entire process easier and better in the future. Product introductions and changes are occurring so rapidly that process engineers face increasing pressure to reduce development cycles. Judgments based on simplified capacity models are not adequate to manage investment risks of this magnitude on aggressive schedules, and typical simulation exercises take far too long to model even a few design options. Although the analytical models used at HP had limitations (specifically, the inability to account for parallel machines, deterministic asynchronous processing times, and failures in the material handling system and subassembly cells), there was the opportunity to employ some creative adjustments in using them. It was these creative approaches that motivated this thesis.

References:

Burman, M. H. (1995), "New Results in Flow Line Analysis," Ph.D. Thesis, MIT Operations Research Center, June 1995.

Buzacott, J. A. (1968), "Prediction of the Efficiency of Production Systems without Internal Storage," International Journal of Production Research, Vol. 6, No. 3, pp. 173-188.

Dallery, Y., David, R., and Xie, X.-L. (1988), "An Efficient Algorithm for Analysis of Transfer Lines with Unreliable Machines and Finite Buffers," IIE Transactions, Vol. 20, pp. 280-283.

Dallery, Y., and Gershwin, S. B. (1992), "Manufacturing Flow Line Systems: A Review of Models and Analytical Results," Queueing Systems Theory and Applications, Special Issue on Queueing Models of Manufacturing Systems, Vol. 12, December 1992, pp. 3-94.

Gershwin, S. B. (1987a), "An Efficient Decomposition Method for the Approximate Evaluation of Tandem Queues with Finite Storage Space and Blocking," Operations Research, March-April 1987, pp. 291-305.

Gershwin, S. B. (1989), "An Efficient Decomposition Algorithm for Unreliable Tandem Queuing Systems with Finite Buffers," in Queuing Networks with Blocking: Proceedings of the First International Workshop, Raleigh, North Carolina, May 20-21, 1988, edited by H. G. Perros and T. Altiok, Elsevier, pp. 127-146.

Copyright © Massachusetts Institute of Technology 1996. All rights reserved.