2.13 RANDOM INCIDENCE

The Poisson process is one of many stochastic processes that one encounters in urban service systems. The Poisson process is one example of a "point process," in which discrete events (arrivals) occur at particular points in time. For a general point process having its zeroth arrival at time T_{0} and the remaining arrivals at times T_{1}, T_{2}, T_{3}, . . ., the interarrival times are

Y_{k} = T_{k} - T_{k-1}    k = 1, 2, 3, . . .

Such a stochastic process is fully characterized by the family of joint pdf's

f_{Y_{n1}, Y_{n2}, . . ., Y_{np}}(y_{n1}, y_{n2}, . . ., y_{np})

for all integer values of p and all possible combinations of different n_{1}, n_{2}, . . ., n_{p}, where each n_{i} is a positive integer denoting a particular interarrival time. Maintaining the depiction of a stochastic process at such a general level, although fine in theory, yields an intractable model and one for which the data (to estimate all the joint pdf's) are virtually impossible to obtain. So, in the study of stochastic processes, one is motivated to make assumptions about this family of pdf's that (1) are realistic for an important class of problems and (2) yield a tractable model. We wish to consider here the class of point stochastic processes
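The relationship between arrival times and interarrival times can be made concrete with a short simulation sketch (Python is assumed here; the exponential gap distribution is an illustrative choice and corresponds to a Poisson process):

```python
import random

# Sketch: generate arrival times T_1, T_2, ... of a point process from
# its gaps, then recover the interarrival times Y_k = T_k - T_(k-1).
# Exponential gaps (a Poisson process) are just one possible choice.
random.seed(1)

T0 = 0.0
gaps = [random.expovariate(1.0) for _ in range(10)]  # rate lambda = 1

# Arrival times are cumulative sums of the interarrival gaps.
T = [T0]
for y in gaps:
    T.append(T[-1] + y)

# Differencing the T_k's returns exactly the gaps we started from.
Y = [T[k] - T[k - 1] for k in range(1, len(T))]
assert all(abs(a - b) < 1e-12 for a, b in zip(Y, gaps))
```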
for which the marginal pdf's for all of the interarrival times
(Y_{k}) are identical. That is, we assume that

f_{Y_{k}}(y) = f_{Y}(y)    for all k = 1, 2, 3, . . .

Thus, for Y_{k}, if we selected any one of the family of joint pdf's f_{Y_{n1}, Y_{n2}, . . ., Y_{np}}(y_{n1}, y_{n2}, . . ., y_{k}, . . ., y_{np}) and "integrated out" all variables except y_{k}, we would obtain f_{Y}(.). Note that we have said nothing about independence of the Y_{k}'s. They need not be mutually independent, pairwise independent, or conditionally independent in any way. For the special case in which the Y_{k}'s are mutually independent, the point process is called a renewal process. The Poisson process is a special case of a renewal process, being the only continuous-time renewal process having "no memory." However, the kind of process we are considering can exhibit both memory and dependence among the interevent times. In fact, the dependence could be so strong that once we know the value of one of the Y_{k}'s, we might know a great deal (perhaps even the exact values) about any number of the remaining Y_{k}'s.
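The "no memory" property that singles out the Poisson process among continuous-time renewal processes can be checked empirically. The sketch below (parameter values are illustrative) verifies that for exponential interarrival times, P{Y > s + t | Y > s} = P{Y > t}:

```python
import random

# Sketch: empirical check that exponential interarrival times are
# memoryless, i.e. P(Y > s + t | Y > s) = P(Y > t).  This is the
# property that makes the Poisson process the unique continuous-time
# renewal process with "no memory".  Rate 1.0 and s = t = 0.7 are
# arbitrary illustrative choices.
random.seed(2)
samples = [random.expovariate(1.0) for _ in range(200_000)]
s, t = 0.7, 0.7

survived_s = [y for y in samples if y > s]
p_cond = sum(y > s + t for y in survived_s) / len(survived_s)
p_uncond = sum(y > t for y in samples) / len(samples)

# The two probabilities agree to within Monte Carlo noise.
assert abs(p_cond - p_uncond) < 0.01
```

Repeating the experiment with any non-exponential gap distribution (e.g., uniform) makes the two probabilities differ, illustrating that a general renewal process does retain memory of how long the current gap has lasted.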
In situations such as these, for which we know f_{Y}(.) or at least the mean and variance of Y, we are often interested in the following problem. An individual, say a potential bus passenger or a homeowner looking for a police patrol car, starts observing the process at a random time, and he or she wishes to obtain the probability law (or at least the mean) of the time he or she must wait until the next arrival occurs. In various applications this time could be the waiting time for a bus, subway, or elevator, or the time until arrival of a patrol car. This is said to be a problem of random incidence, since the individual observer is incident to the process at a random time. The random-time assumption is important: the time of random incidence of the observer can in no way depend on the past history of actual arrival times in the process. We now derive the probability law for V, the time from the moment of random incidence until the next arrival occurs. We do this for continuous random variables; the same reasoning applies in the discrete case. The derivation proceeds in stages, first conditioning on W, the length of the interarrival gap entered by random incidence. For instance, the gap in which a potential bus passenger arrives has length equal to the sum of two time intervals: (1) the time between the arrival of the most recent bus and the arrival of the potential passenger, and (2) the time between the passenger's arrival and the arrival of the next bus. We now argue that the probability that the gap entered by random incidence assumes a value between w and w + dw is proportional to both the relative frequency of occurrence of such gaps, f_{Y}(w) dw, and the duration of the gap, w. That is,

f_{W}(w) dw = C w f_{Y}(w) dw

where C is a normalization constant. Since f_{W}(.) must integrate to 1, C = 1/E[Y], and thus

f_{W}(w) = w f_{Y}(w)/E[Y]    w ≥ 0

This result says that random incidence favors gaps of longer duration in direct proportion to their duration. The argument for this result is most simply given by example.
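The length-biasing effect of random incidence can be seen in simulation. The sketch below (gap distribution and sample sizes are illustrative choices) uses gaps Y ~ Uniform(0, 2), for which E[Y] = 1 and E[Y^2] = 4/3; length-biasing predicts that the gap W entered by a random observer has mean E[Y^2]/E[Y] = 4/3 rather than 1:

```python
import bisect
import random

# Sketch: random incidence favors longer gaps in proportion to their
# length.  With gaps Y ~ Uniform(0, 2): E[Y] = 1, E[Y^2] = 4/3, so the
# entered gap W should satisfy E[W] = E[Y^2]/E[Y] = 4/3.  Horizon and
# sample counts are illustrative.
random.seed(3)

# Build arrival times over a long horizon.
T = [0.0]
while T[-1] < 1_000_000.0:
    T.append(T[-1] + random.uniform(0.0, 2.0))

entered = []
for _ in range(100_000):
    u = random.uniform(0.0, T[-2])      # moment of random incidence
    k = bisect.bisect_right(T, u)       # index of the next arrival
    entered.append(T[k] - T[k - 1])     # length of the gap entered

mean_W = sum(entered) / len(entered)
assert abs(mean_W - 4.0 / 3.0) < 0.01   # length-biased mean, not 1.0
```

Note that the observer's incidence time u is drawn independently of the arrival history, as the random-time assumption requires.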
Given two gap lengths w_{1} and w_{2} = 2w_{1} for which the relative frequencies are identical [f_{Y}(w_{1}) dw = f_{Y}(w_{2}) dw], one is twice as likely to enter the gap of length 2w_{1} as the gap of length w_{1}. Or, given the same two gap lengths, w_{1} and w_{2} = 2w_{1}, for which the relative frequency of the larger gap length is only half that of the smaller [f_{Y}(w_{2}) dw = (1/2) f_{Y}(w_{1}) dw], we are equally likely to enter either of the two types of gaps; here the doubling of relative frequency for w_{1} "makes up for" the doubling of duration of w_{2}. Now, given that we have entered a gap of length w by random incidence, we are equally likely to be anywhere within the gap. More precisely, there is a constant probability of being in any interval [t, t + h] for any fixed h > 0, assuming that [t, t + h] is fully contained within the gap. Thus, given w, the time until gap completion (i.e., the time until the next event) has a uniform pdf:

f_{V|W}(v | w) = 1/w    0 ≤ v ≤ w

The marginal pdf for V, which is what we want, is formed simply by "integrating out" W:

f_{V}(v) = ∫_{v}^{∞} f_{V|W}(v | w) f_{W}(w) dw = ∫_{v}^{∞} (1/w)[w f_{Y}(w)/E[Y]] dw = [1 - F_{Y}(v)]/E[Y]    v ≥ 0

Question: Assuming that F_{Y}(0) = 0, does this result make intuitive sense for values of v near zero?
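The final result can also be checked by simulation. Continuing the illustrative Uniform(0, 2) gap distribution: F_{Y}(v) = v/2 on [0, 2] and E[Y] = 1, so f_{V}(v) = 1 - v/2, giving E[V] = E[Y^2]/(2 E[Y]) = 2/3 and P{V ≤ 0.5} = 0.4375. The sketch below (all parameter values are illustrative) compares these predictions with the simulated waiting time of a randomly incident observer:

```python
import bisect
import random

# Sketch: empirical check of f_V(v) = [1 - F_Y(v)] / E[Y].
# With gaps Y ~ Uniform(0, 2): E[V] = E[Y^2]/(2 E[Y]) = 2/3 and
# P(V <= 0.5) = integral_0^0.5 (1 - v/2) dv = 0.4375.
random.seed(4)

# Build arrival times over a long horizon.
T = [0.0]
while T[-1] < 1_000_000.0:
    T.append(T[-1] + random.uniform(0.0, 2.0))

V = []
for _ in range(100_000):
    u = random.uniform(0.0, T[-2])      # moment of random incidence
    k = bisect.bisect_right(T, u)       # next arrival after u
    V.append(T[k] - u)                  # wait until the next arrival

mean_V = sum(V) / len(V)
p_half = sum(v <= 0.5 for v in V) / len(V)
assert abs(mean_V - 2.0 / 3.0) < 0.01
assert abs(p_half - 0.4375) < 0.01
```

Observe that E[V] = 2/3 here exceeds E[Y]/2 = 1/2: because E[Y^2] ≥ (E[Y])^2, the mean wait E[Y^2]/(2 E[Y]) is at least half a mean gap, with equality only when the gaps are deterministic.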
