8.3.6 Training: Conquering the Learning Curve
The analyst usually has the important responsibility of training the user. With the analyst's assistance and guidance, the user must invest time and effort to become comfortable with and knowledgeable about the model and its use. Psychologists model such a process with a learning curve, which starts low at a point of relative ignorance, often climbs steeply during the first phases of learning, and then tends to a gradually decreasing positive slope as the learner asymptotically approaches his final level of understanding (Figure 8.6). The urban analyst should be aware that the model he or she developed and is attempting to implement represents a complex set of processes that even other analysts might have difficulty fully grasping. Thus, an urban decision maker, usually trained by case study (often on the job) and utilizing ad hoc techniques that have been proven historically at least not to fail, can be expected to have difficulty with models and their uses. The urban analyst should be committed to helping the user climb his or her own learning curve, beyond naive acceptance of the model, to full understanding of the uses and potential abuses of the model.
One technique that we have found useful in the learning (training) process is to develop simple examples, augmented by "rules of thumb," to aid insight and intuition. The example of Chapter 1, comparing deterministic and probabilistic reasoning for a single police patrol beat, illustrates several concepts that reappear in more complex settings. Rules of thumb include "square-root laws" for travel distances in an urban setting, which have the additional advantage of diminishing the importance of thinking purely in terms of linear relationships. Simple queueing formulas also demonstrate the highly nonlinear performance of probabilistic systems. From Chapter 5 we know that the fraction of dispatch assignments that are interresponse area assignments is at least as great as the average system utilization factor. This can be argued intuitively in a way which (1) gives the user another rule of thumb and (2) helps him or her develop more insight into the probabilistic nature of operations. We have found such approaches invaluable in explaining the hypercube model to users [LARS 72b]. The New York City Rand Fire Project utilized this approach with simulation models [IGNA 75], and now mathematical programming enthusiasts are arguing for a similar approach [GEOF 76].
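The two rules of thumb above can be sketched numerically. The following is a minimal illustration, not drawn from the text itself: it assumes a square-root law of the form E[D] ≈ c·sqrt(A/n) for travel distance with n units in an area A (c an empirical constant), and uses the standard M/M/1 expected-wait formula W_q = ρ/(μ(1 − ρ)) to show how sharply congestion grows with utilization.

```python
import math

# Rule of thumb 1 -- square-root law (illustrative form, c is an
# empirical constant; the text gives no specific values):
# expected travel distance to the nearest of n available units
# patrolling an area of A square miles scales as c * sqrt(A / n).
def expected_travel_distance(area, n_units, c=0.5):
    return c * math.sqrt(area / n_units)

# Doubling the patrol force does NOT halve travel distance;
# it cuts it only by a factor of 1/sqrt(2), about 29 percent.
d_8 = expected_travel_distance(10.0, 8)
d_16 = expected_travel_distance(10.0, 16)
# d_16 / d_8 == 1 / sqrt(2) ~= 0.707

# Rule of thumb 2 -- nonlinearity of congestion: in an M/M/1 queue
# with arrival rate lam and service rate mu, the expected wait is
#   W_q = rho / (mu * (1 - rho)),  where rho = lam / mu.
def mm1_expected_wait(lam, mu):
    rho = lam / mu
    assert rho < 1.0, "system must be stable (rho < 1)"
    return rho / (mu * (1.0 - rho))

# Raising utilization from 0.5 to 0.9 multiplies the expected wait
# ninefold (0.1 hour -> 0.9 hour), not by the "linear" factor 1.8.
w_half = mm1_expected_wait(5.0, 10.0)   # rho = 0.5 -> 0.1
w_high = mm1_expected_wait(9.0, 10.0)   # rho = 0.9 -> 0.9
```

Worked through with a user, a pair of computations like these makes the point that both travel distance and queueing delay respond to resource changes in markedly nonlinear ways.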
When learning to operate the model, the user must eventually confront
the problem of specifying performance standards that he or she wishes
to achieve by utilizing the results of the model. A user's ability to
articulate performance standards seems to vary with the extent to which
the model's performance measures capture an agency's true objectives and
with the user's understanding of these measures. Some complicated weighted
formula, even if derived from the latest treatise on decision theory, may
be opaque to a user. In general, simple performance measures that arise
naturally in ongoing operations are preferable to fabricated ones.
The analyst, when acting as trainer, may discover that the user is
reluctant to articulate performance standards. Most operating urban
systems reflect an evolution of decisions made in response to feedback
from citizens, workers, and managers. The current picture, the status
quo, is the net result of all these historical forces. Recall the police
commissioner who at first refused to specify the T and P values for his
911 emergency system and who finally settled on T = P = 0. To articulate
clearly the performance objectives of the system would almost certainly
reveal the inadequacy of the status quo to some groups. To be sure, the
present state of affairs has its own implicit performance standards. But
discomfort with those standards, and hence possible disruption of the
status quo, are likely to arise only when they are made explicit.
(Somewhat ironically, it is the stark political neutrality of a model,
with its ever-visible performance measures, that makes it, in a sense,
political.) If the user is unwilling to specify performance measures,
the analyst/trainer can suggest exploring policies that maintain current
manpower levels (as was done for the 911 study) or otherwise satisfy
constraints with which the user is comfortable; however, his or her
selection of a final policy will still require some consideration of
achievable performance standards.