6.2.5 Complexity of Algorithms
A natural question to ask at this point is the following: Which one of the two shortest-path algorithms that we have described is better for computing the shortest paths between all possible pairs⁴ of nodes on a network? To answer this question, a brief discussion of the subject of algorithmic complexity is first in order. The "goodness" of an algorithm is described in good part by its complexity.
The complexity of an algorithm is most often measured through the number of elementary operations that the algorithm requires to arrive at an answer under "worst-case conditions." An elementary operation is any one of the arithmetic operations (addition, subtraction, multiplication, division) or a comparison between two numbers or the execution of a branching instruction. We shall discuss the meaning of "worst-case conditions" in a short while.
Consider, for instance, the application of Algorithm 6.2 to the case of a network with n nodes. Each pass through Step 2 of the algorithm requires the checking and possible updating of (n - 1)² matrix elements. (Remember that there is no need to check and update the elements of the kth row and of the kth column--see also Example 2--since we know that these elements will remain unchanged.) Thus, each pass through Step 2 requires (n - 1)² additions [to compute the quantities dₖ₋₁(i, k) + dₖ₋₁(k, j)] as well as (n - 1)² comparisons between two numbers. Similarly, Step 3 of the algorithm requires (n - 1)² comparisons on each pass, while Step 4 requires a single comparison (k with n), an addition (incrementing k by 1), and a branching⁵ (return to Step 2). Thus, a total of 3(n - 1)² + 3 elementary operations are needed for each pass of the algorithm. Since there are a total of n passes, we conclude that the number of elementary operations, T, required for Algorithm 6.2 to provide the desired answers is
T = 3n(n - 1)² + 3n = 3n³ - 6n² + 6n
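The operation count above can be checked empirically. The sketch below instruments a Floyd-type all-pairs update of the kind Algorithm 6.2 describes; the function name and matrix are illustrative, not from the text, and Step 4's own comparison, addition, and branch (3 per pass) are left out, so the tally is 3n(n - 1)² rather than 3n(n - 1)² + 3n.

```python
INF = float("inf")

def floyd_with_counts(d):
    """All-pairs shortest paths with a tally of elementary operations.

    d: n x n distance matrix (list of lists). Returns (shortest, ops).
    """
    n = len(d)
    d = [row[:] for row in d]          # work on a copy
    additions = comparisons = 0
    for k in range(n):                 # one pass per intermediate node k
        for i in range(n):
            if i == k:
                continue               # row k is unchanged (see Example 2)
            for j in range(n):
                if j == k:
                    continue           # column k is unchanged
                trial = d[i][k] + d[k][j]   # Step 2: one addition
                additions += 1
                comparisons += 1            # Step 2: compare trial vs d(i, j)
                if trial < d[i][j]:
                    d[i][j] = trial
                comparisons += 1            # Step 3: modeled as one comparison
                                            # per element, as in the text
    return d, additions + comparisons

d0 = [[0, 3, INF],
      [3, 0, 1],
      [INF, 1, 0]]
shortest, ops = floyd_with_counts(d0)
n = len(d0)
print(ops == 3 * n * (n - 1) ** 2)     # 3(n-1)^2 operations per pass, n passes
```

Running this on any matrix confirms the per-pass count: each of the (n - 1)² checked elements contributes one addition and two comparisons.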
Now, as n increases, the value of T is largely determined by the 3n³ term. Furthermore, what really counts when we compare the performance of this algorithm with the performance of other algorithms for large values of n is the term n³--after all, the factor of 3 is insignificant in view of the tenfold advances in computer speeds that occur every few years and in view of the differences in speeds among different types of computers. Thus, we say that "the complexity of Algorithm 6.2 is proportional to n³" or, more simply, that "Algorithm 6.2 is⁶ O(n³)." Note that the complexity of the algorithm is measured with reference to the size of the input to the algorithm (i.e., the quantity n). Finally, under the assumption that each elementary operation takes approximately one unit of time (be that 10⁻⁹ second or 10⁻³ second, depending on the speed of the computer), we also say that "Algorithm 6.2 requires O(n³) time."
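A quick numerical check (not from the text) makes the dominance of the leading term concrete: the ratio T/3n³ approaches 1 as n grows.

```python
def T(n):
    """Elementary-operation count for Algorithm 6.2, as derived above."""
    return 3 * n**3 - 6 * n**2 + 6 * n

# The lower-order terms -6n^2 + 6n matter less and less as n grows.
for n in (10, 100, 1000):
    print(n, T(n) / (3 * n**3))
```

Already at n = 1000 the ratio is within 0.2 percent of 1, which is why only the n³ term survives in the O(·) notation.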
Consider now a case in which a network with n nodes is completely connected (i.e., it is possible to go from any node to any other node directly, without passing through any intermediate nodes).
Exercise 6.1 Show that if Algorithm 6.1 is used with such a network to find the shortest path from any given source node to all other nodes (as in Example 1), the algorithm requires O(n²) time.
It follows from this exercise that, since Algorithm 6.1 must be repeated n times (once for each possible source node) in order to find the shortest paths between all possible pairs of nodes, time O(n³) is required to find all shortest paths through repeated uses of Algorithm 6.1.
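The "repeat from every source" idea can be sketched as follows. Algorithm 6.1 in the text is a label-setting method; here a simple array-based Dijkstra-type routine stands in for it (an assumption on our part), which takes O(n²) per source on a completely connected network and hence O(n³) over all n sources.

```python
INF = float("inf")

def single_source(d, s):
    """Array-based label-setting shortest paths: O(n^2) per source node s."""
    n = len(d)
    dist = [INF] * n
    dist[s] = 0
    done = [False] * n
    for _ in range(n):
        # Select the closest unlabeled node: an O(n) scan, done n times.
        u = min((v for v in range(n) if not done[v]),
                key=lambda v: dist[v], default=None)
        if u is None or dist[u] == INF:
            break
        done[u] = True
        for v in range(n):              # update labels of all other nodes
            if dist[u] + d[u][v] < dist[v]:
                dist[v] = dist[u] + d[u][v]
    return dist

def all_pairs(d):
    # n repetitions of an O(n^2) routine: O(n^3) in all.
    return [single_source(d, s) for s in range(len(d))]

d0 = [[0, 3, 10],
      [3, 0, 1],
      [10, 1, 0]]
print(all_pairs(d0))   # row 0 is [0, 3, 4]: node 2 is reached via node 1
```

Each row of the result is the vector of shortest distances from one source node, so the full output is the same all-pairs matrix that Algorithm 6.2 produces.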
Note now that the case of a completely connected network is actually a worst case as far as Algorithm 6.1 is concerned. For it is possible to find other instances of networks for which it may take less than O(n³) time to find all shortest paths through the algorithm. For example, if the network is, say, an undirected tree--so that only a single path exists between each pair of nodes and the distance matrix [d(i, j)] is symmetric--Algorithm 6.1 could be adjusted so that it requires far less than O(n³) time.⁷ This example should clarify what we mean by worst-case conditions.
We conclude that the complexity of Algorithms 6.1 and 6.2 is the same, O(n³), under worst-case conditions. Thus, according to our definition, Algorithms 6.1 and 6.2 are equally "good," although it is true that Algorithm 6.1 is more flexible and can thus better take advantage of special network structures.⁸ (On the other hand, Algorithm 6.2 is easier to program on a computer.)
Algorithms are generally classified by computer scientists into two broad categories. Polynomial algorithms are those whose complexity is proportional to or bounded by a polynomial function of the size of the input. Exponential (or nonpolynomial) algorithms are those that, for sufficiently large sizes of the input, violate all polynomial bounds. For instance, algorithms whose complexity is O(n³) or O(n⁵) are obviously polynomial ones; so is an algorithm with complexity, say, O(n·log₂ n), since the quantity n·log₂ n is bounded from above by n². On the other hand, algorithms that are O(2ⁿ) or O(kⁿ)--where k is a constant--or O(n!)--remember Stirling's approximation for factorials--or O(n^(log₂ n)) are exponential.
Polynomial algorithms are considered "good" algorithms, while exponential algorithms are "poor" ones. The terms "efficient" and "inefficient" are most often used for "good" and "poor," respectively. While an algorithm that is O(2ⁿ) may terminate faster than an algorithm that is O(n⁵) for small values of n, it is possible to find an input size n₀ such that if n ≥ n₀, the former algorithm will terminate after the latter one.
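The crossover point n₀ is easy to locate numerically. Taking the two operation counts to be exactly 2ⁿ and n⁵ (i.e., with all constant factors set to 1, an assumption made purely for illustration), the sketch below searches for the first n beyond the initial region at which the exponential count overtakes the polynomial one.

```python
# For very small n the exponential is briefly ahead (2^1 > 1^5), so start
# the search at n = 2, where n^5 dominates, and advance until 2^n overtakes.
n = 2
while 2**n <= n**5:
    n += 1
print(n)   # → 23: the first n (past the start of the search) with 2^n > n^5
```

So with these unit constants, n₀ = 23: for every n ≥ 23 the O(2ⁿ) algorithm is slower, no matter how favorable it looked at small input sizes.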
Many computer scientists have argued that it would perhaps be better to measure complexity in a probabilistic sense (i.e., in terms of the number of operations required by the algorithm in a "typical" application). The problem with this is defining what "typical" means. While we shall continue to use worst-case performance as a measure of algorithmic complexity, it is important to appreciate at this point that this is only one of several choices (and still the most common one) that could be made.
Finally, algorithms are also described as exact or heuristic. The former are guaranteed to terminate, given sufficient (computer) time, with an optimal solution to the problem which they are designed to solve. Heuristic algorithms do not offer such a guarantee; they usually trade off "guaranteed optimality" for speed in providing an answer.
Later in this chapter we shall have occasion to use all the terms introduced in this section.
4 For Algorithm 6.1, as we have already noted, this implies its repeated application, making each node of the graph, in succession, the source node.
5 On the last pass, only a comparison and a branching to STOP are needed.
6 Do not confuse this with the notation o(·), which was used in Chapter 2 in connection with infinitesimal terms in the definition of the Poisson process.
7 Note that the same is not true for Algorithm 6.2 as described here. It always requires O(n³) time, irrespective of the structure of the network.
8 When two algorithms are both O(nᵏ) for some k, then other factors, such as the respective coefficients of nᵏ, ease of programming, computer memory requirements, and average performance, should be considered.