

7.3.1 Classification of stationary points

Let us first recall the definitions of local extrema at stationary points:

Definition 7.3.1. Suppose that $F$ is a scalar field on a domain $D \subset \mathbb{R}^n$. Let $\mathbf{x}_0$ be a stationary point of $F$, that is, $\nabla F(\mathbf{x}_0) = \mathbf{0}$. Then

  1. $\mathbf{x}_0$ is a local maximum if there exists a neighborhood $N$ of $\mathbf{x}_0$ such that for all $\mathbf{x} \in N$, $F(\mathbf{x}) \le F(\mathbf{x}_0)$.
  2. $\mathbf{x}_0$ is a local minimum if there exists a neighborhood $N$ of $\mathbf{x}_0$ such that for all $\mathbf{x} \in N$, $F(\mathbf{x}) \ge F(\mathbf{x}_0)$.

To illustrate the application of these definitions, let us begin with a simple one-parameter function $f(t)$ (which may arise, for example, in the point-curve distance problem). Suppose $t_0$ is a stationary point of $f$, i.e. $f'(t_0) = 0$. Then a Taylor series expansion tells us that

    $f(t_0 + h) - f(t_0) = \frac{h^2}{2} f''(t_0 + \theta h)$     (7.18)

where $\theta$, $0 < \theta < 1$, is some specific value depending on $h$. From Definition 7.3.1, we see that $t_0$ is a local maximum if the right hand side of (7.18) is never positive for $h$ close to 0, and a local minimum if the right hand side is nonnegative for $h$ close to 0. Now, if $f''(t_0) \neq 0$, then by continuity, there is some $\delta > 0$ such that for $|h| < \delta$, $f''(t_0 + \theta h)$ has the same sign as $f''(t_0)$. Therefore, if $f''(t_0) < 0$, then $f(t_0 + h) \le f(t_0)$ for $|h| < \delta$, and hence $t_0$ is a local maximum. Similarly, if $f''(t_0) > 0$, then $t_0$ is a local minimum. These results are familiar from calculus [166].

Unfortunately, the problem becomes more complicated if $f''(t_0) = 0$. In this case we cannot say anything about the sign of $f''(t_0 + \theta h)$ and must therefore expand the Taylor series to higher derivatives. For example, if the second derivative is zero but the third derivative is nonzero, then we will have neither a maximum nor a minimum but a point of inflection. Consider the function $f(t) = t^3$; in any neighborhood of the stationary point $t = 0$, the function takes on both positive and negative values, and thus $t = 0$ is neither a maximum nor a minimum. If the third derivative is also zero, we have to look at the fourth derivative, and so on. Fortunately, these exceptions are rare, but a robust point classification algorithm should be able to handle such cases. In general, if $f^{(n)}$ is the first derivative that does not vanish at $t_0$, then the function has [152]

  1. a local maximum at $t_0$ if $n$ is even and $f^{(n)}(t_0) < 0$;
  2. a local minimum at $t_0$ if $n$ is even and $f^{(n)}(t_0) > 0$;
  3. a point of inflection at $t_0$ if $n$ is odd.
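The one-variable casework can be collected into a small routine. The sketch below is my own illustration, not part of the text: it assumes the derivatives of $f$ at $t_0$ have already been computed and are supplied as a list, and the function name and tolerance are assumptions.

```python
def classify_1d(derivs, tol=1e-12):
    """Classify a stationary point of a one-parameter function f.

    derivs[i] holds the (i + 2)-th derivative of f at the stationary
    point t0 (the first derivative is zero by assumption).  Returns
    'minimum', 'maximum', 'inflection', or 'undetermined' when every
    supplied derivative vanishes.
    """
    for i, d in enumerate(derivs):
        n = i + 2                      # order of this derivative
        if abs(d) > tol:               # first non-vanishing derivative
            if n % 2 == 1:             # odd order: point of inflection
                return 'inflection'
            return 'minimum' if d > 0 else 'maximum'
    return 'undetermined'

# f(t) = t^2 at t = 0: f'' = 2 > 0, a minimum.
print(classify_1d([2.0]))
# f(t) = t^3 at t = 0: f'' = 0, f''' = 6, odd order, an inflection.
print(classify_1d([0.0, 6.0]))
# f(t) = -t^4 at t = 0: f'' = f''' = 0, f'''' = -24 < 0, a maximum.
print(classify_1d([0.0, 0.0, -24.0]))
```

Note that a real classifier would also need a strategy for deciding when a numerically computed derivative is "zero," which is exactly the robustness issue mentioned above.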

Similar analysis can be performed with functions of more than one variable. Consider a function $F(u, v)$ of two variables, which might arise, for example, in a curve-curve distance problem. A Taylor expansion around the stationary point $(u_0, v_0)$ to second order leads to

    $F(u_0 + h, v_0 + k) - F(u_0, v_0) \approx \frac{1}{2}\left( F_{uu} h^2 + 2 F_{uv} h k + F_{vv} k^2 \right)$     (7.19)

where the second partial derivatives are evaluated at $(u_0, v_0)$. Instead of one second order term, this time we have three to deal with, which of course makes classification more difficult. However, by simply completing the square with the second order terms, we can easily formulate conditions for minima, maxima, and saddle points. To see this, suppose for the moment that $F_{uu} \neq 0$. Then we have
    $F(u_0 + h, v_0 + k) - F(u_0, v_0) \approx \frac{F_{uu}}{2}\left( h + \frac{F_{uv}}{F_{uu}} k \right)^2 + \frac{1}{2}\left( F_{vv} - \frac{F_{uv}^2}{F_{uu}} \right) k^2$     (7.20)

Now $h$ and $k$ appear only within squared expressions. This new form enables us to ignore the signs of $h$ and $k$ and instead concentrate on the signs of the coefficients $\frac{F_{uu}}{2}$ and $\frac{1}{2}\left( F_{vv} - \frac{F_{uv}^2}{F_{uu}} \right)$. To see this, notice that within any arbitrarily small neighborhood of $(u_0, v_0)$, the ratio $k/h$ takes on all possible values between $-\infty$ and $+\infty$, and therefore the first term of the right hand side of (7.20) can become extremely small or extremely large compared to the second term. However, the signs of each term are not so easily changed. These are determined solely by the signs of $F_{uu}$ and $F_{vv} - F_{uv}^2 / F_{uu}$. If both coefficients are nonzero at $(u_0, v_0)$, a continuity argument shows that the expressions $F_{uu}$ and $F_{vv} - F_{uv}^2 / F_{uu}$ do not change sign as long as $(u, v)$ is sufficiently close to $(u_0, v_0)$. Therefore, in order to determine the signs of these coefficients within a small neighborhood of $(u_0, v_0)$, we can simply evaluate them at $(u_0, v_0)$. If the coefficients are nonzero there, then their signs will enable us to classify the stationary point. For example, if both coefficients are positive at the stationary point, then clearly the right hand side of (7.20) is positive within some neighborhood of the stationary point, and hence $(u_0, v_0)$ can be classified as a minimum. If the first coefficient is negative and the second is positive, then because either term may be made zero at different parts of any neighborhood, the right hand side of (7.20) may be positive or negative arbitrarily close to the stationary point, and thus $(u_0, v_0)$ is neither a maximum nor a minimum. The other possibilities lead to the following theorem, which is proven more fully in [166]:

Theorem 7.3.1.

  1. If $F_{uu} < 0$ and $F_{uu} F_{vv} - F_{uv}^2 > 0$ at the stationary point $(u_0, v_0)$, then $(u_0, v_0)$ is a local maximum.

  2. If $F_{uu} > 0$ and $F_{uu} F_{vv} - F_{uv}^2 > 0$, then $(u_0, v_0)$ is a local minimum.

  3. If $F_{uu} F_{vv} - F_{uv}^2 < 0$, then $(u_0, v_0)$ is a saddle point (neither a maximum nor a minimum).

  4. If none of the above conditions apply, then it is necessary to examine higher-order derivatives.

Notice that the third condition above applies even if $F_{uu} = 0$, since then $F_{uu} F_{vv} - F_{uv}^2 = -F_{uv}^2$. In this case, the second order term reduces to $\frac{1}{2}\left( 2 F_{uv} h k + F_{vv} k^2 \right) = \frac{k}{2}\left( 2 F_{uv} h + F_{vv} k \right)$. As long as $F_{uv} \neq 0$, specific choices of $h$ and $k$ within any arbitrarily small neighborhood of the stationary point can make this term either positive or negative, and hence $(u_0, v_0)$ is neither a maximum nor a minimum.
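Theorem 7.3.1 translates directly into a short test on the three second partials. The following Python sketch is my own illustration (the function name and tolerance are assumptions, not from the text):

```python
def classify_2d(Fuu, Fuv, Fvv, tol=1e-12):
    """Classify a stationary point (u0, v0) of F(u, v) from the second
    partial derivatives evaluated there, per Theorem 7.3.1."""
    disc = Fuu * Fvv - Fuv * Fuv       # F_uu F_vv - F_uv^2
    if disc > tol:
        # disc > 0 forces F_uu != 0, so its sign decides max vs. min.
        return 'maximum' if Fuu < 0 else 'minimum'
    if disc < -tol:
        return 'saddle'
    return 'undetermined'              # higher-order derivatives needed

# F(u, v) = u^2 + v^2 at (0, 0): F_uu = F_vv = 2, F_uv = 0.
print(classify_2d(2.0, 0.0, 2.0))
# F(u, v) = u*v at (0, 0): F_uu = F_vv = 0, F_uv = 1.
print(classify_2d(0.0, 1.0, 0.0))
```

The `'undetermined'` branch corresponds to case 4 of the theorem, where the quadratic terms alone cannot decide the classification.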

It is possible to complete the square for functions of more than two variables; however, in order to develop general conditions for minima and maxima of such functions, we will employ the powerful theory of quadratic forms [410].

Definition 7.3.2. A quadratic form is an expression of the form $\mathbf{x}^T A \mathbf{x}$, where $\mathbf{x}$ is an $n$-dimensional column vector of unknowns and $A$ is an $n \times n$ matrix. A quadratic form is said to be

  1. positive definite if $\mathbf{x}^T A \mathbf{x} > 0$ for all $\mathbf{x} \neq \mathbf{0}$.

  2. positive semidefinite if $\mathbf{x}^T A \mathbf{x} \ge 0$ for all $\mathbf{x}$.

  3. negative definite (semidefinite) if $-\mathbf{x}^T A \mathbf{x}$ is positive definite (semidefinite).

  4. indefinite otherwise (i.e. the form takes on both positive and negative values).
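These definitions can be probed directly by evaluating the form at sample vectors. As a small illustration of my own (the function name is an assumption), the snippet below evaluates $\mathbf{x}^T A \mathbf{x}$ for a matrix stored as nested lists and exhibits an indefinite form:

```python
def quad_form(A, x):
    """Evaluate the quadratic form x^T A x, with A given as nested lists."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

# x^T A x = x1^2 - x2^2: positive along one axis, negative along the other,
# so this form is indefinite.
A = [[1.0, 0.0], [0.0, -1.0]]
print(quad_form(A, [1.0, 0.0]))        # positive value
print(quad_form(A, [0.0, 1.0]))        # negative value
```

Of course, sampling vectors can only demonstrate indefiniteness, never prove definiteness; the determinant criteria developed next give an exact test.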

To see how these quadratic forms are used, let us take a Taylor expansion of a function $F$ of $n$ variables around the stationary point $\mathbf{x}_0$:

    $F(\mathbf{x}_0 + \mathbf{h}) - F(\mathbf{x}_0) \approx \frac{1}{2}\, \mathbf{h}^T H \mathbf{h}$     (7.21)

Here $H$ is the Hessian matrix of $F$ at $\mathbf{x}_0$; the element of the $i$th row, $j$th column, is given by
    $H_{ij} = \frac{\partial^2 F}{\partial x_i \, \partial x_j}(\mathbf{x}_0)$     (7.22)

Now clearly, if the quadratic form $\mathbf{h}^T H \mathbf{h}$ is positive definite, then within some neighborhood of the stationary point $\mathbf{x}_0$, the right hand side of (7.21) is nonnegative, and therefore $\mathbf{x}_0$ is a local minimum. Similarly, if the quadratic form is negative definite, then $\mathbf{x}_0$ is a local maximum.

At this point, we can use a familiar theorem of linear algebra whose proof is given in [410]:

Theorem 7.3.2. Let $\mathbf{x}^T A \mathbf{x}$ be a quadratic form, let $A$ be a real symmetric matrix, and for $1 \le i \le n$, let $d_i$ be the determinant of the upper left $i \times i$ submatrix of $A$ (which we will denote $A_i$). Then the quadratic form is positive definite if and only if $d_i > 0$ for all $1 \le i \le n$.

A corresponding theorem applies to positive semidefinite forms if the strict inequality is relaxed to $\ge 0$ and extended to all principal submatrices, not only the upper left ones (that is, a form is positive semidefinite if and only if every principal minor of $A$ is nonnegative; checking only the $d_i$ is not sufficient in the semidefinite case).

Theorem 7.3.2 immediately gives us a necessary and sufficient condition for negative definiteness as well. For, if we let $B = -A$, the quadratic form $\mathbf{x}^T A \mathbf{x}$ is negative definite if and only if $\mathbf{x}^T B \mathbf{x}$ is positive definite, which holds if and only if $\det B_i > 0$ for all $1 \le i \le n$. But $\det B_i = (-1)^i \det A_i = (-1)^i d_i$, and so we have the following corollary:

Corollary 7.3.1. Let $A$ and $d_i$ be as before. Then the quadratic form $\mathbf{x}^T A \mathbf{x}$ is negative definite if and only if $(-1)^i d_i > 0$ for all $1 \le i \le n$.
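Theorem 7.3.2 and Corollary 7.3.1 can be checked numerically by computing the determinants of the upper left submatrices. The sketch below is my own (function names are assumptions); cofactor expansion is used for the determinants, which is adequate for the small matrices that arise here:

```python
def det(M):
    """Determinant by cofactor expansion along the first row (fine for small n)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def leading_minors(A):
    """d_i = det of the upper left i x i submatrix of A, for i = 1..n."""
    n = len(A)
    return [det([row[:i] for row in A[:i]]) for i in range(1, n + 1)]

def is_positive_definite(A):
    # Theorem 7.3.2: all leading minors strictly positive.
    return all(d > 0 for d in leading_minors(A))

def is_negative_definite(A):
    # Corollary 7.3.1: signs alternate starting negative, (-1)^i d_i > 0.
    return all((-1) ** i * d > 0 for i, d in enumerate(leading_minors(A), start=1))

print(leading_minors([[2.0, 1.0], [1.0, 2.0]]))        # [2.0, 3.0]
print(is_positive_definite([[2.0, 1.0], [1.0, 2.0]]))  # True
print(is_negative_definite([[-2.0, 1.0], [1.0, -2.0]]))  # True
```

In floating-point arithmetic a tolerance would be needed on the sign tests; exact comparisons are kept here only to mirror the theorem statements.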

Simply applying these conditions gives us the following theorem:

Theorem 7.3.3. Let $F$ be as before, and let $H$ be its Hessian matrix, evaluated at the stationary point $\mathbf{x}_0$. Let $d_i$ be the determinant of the upper left $i \times i$ submatrix of $H$.

  1. If $d_i > 0$ for all $1 \le i \le n$, then $\mathbf{x}_0$ is a local minimum.

  2. If $(-1)^i d_i > 0$ for all $1 \le i \le n$, then $\mathbf{x}_0$ is a local maximum.

If, on the other hand, the form $\mathbf{h}^T H \mathbf{h}$ is indefinite, we can conclude that $\mathbf{x}_0$ is neither a minimum nor a maximum (in two dimensions, such points are usually called saddle points), as the following theorem shows:

Theorem 7.3.4. Let $H$ and $d_i$ be as before. Suppose that the $d_i$ do not satisfy the conditions for definiteness or semidefiniteness, so that the form $\mathbf{h}^T H \mathbf{h}$ is indefinite. Then $\mathbf{x}_0$ is neither a local minimum nor a local maximum.

Proof: Because the form is indefinite, there exist vectors $\mathbf{y}$ and $\mathbf{z}$ such that $\mathbf{y}^T H \mathbf{y} < 0$ and $\mathbf{z}^T H \mathbf{z} > 0$. Pick an arbitrary neighborhood $N$ of $\mathbf{x}_0$. Then for any $t$ smaller than some positive number $\epsilon$, $\mathbf{x}_0 + t \mathbf{y} \in N$ and $\mathbf{x}_0 + t \mathbf{z} \in N$.

Using (7.21), we obtain

    $F(\mathbf{x}_0 + t \mathbf{y}) - F(\mathbf{x}_0) \approx \frac{t^2}{2}\, \mathbf{y}^T H \mathbf{y}$     (7.23)
    $F(\mathbf{x}_0 + t \mathbf{z}) - F(\mathbf{x}_0) \approx \frac{t^2}{2}\, \mathbf{z}^T H \mathbf{z}$     (7.24)

Clearly, for sufficiently small $t$, the right hand side of (7.23) will remain negative, while the right hand side of (7.24) will remain positive. Thus we have, for sufficiently small $t > 0$,
    $F(\mathbf{x}_0 + t \mathbf{y}) < F(\mathbf{x}_0) < F(\mathbf{x}_0 + t \mathbf{z})$     (7.25)

Since $N$ was an arbitrary neighborhood of $\mathbf{x}_0$, there is no neighborhood of $\mathbf{x}_0$ where $F(\mathbf{x}) \ge F(\mathbf{x}_0)$ for all $\mathbf{x}$, nor one where $F(\mathbf{x}) \le F(\mathbf{x}_0)$ for all $\mathbf{x}$. Therefore $\mathbf{x}_0$ is neither a local minimum nor a local maximum.

Unfortunately, trouble can occur if the Hessian is only semidefinite and not definite. In this case we need to look at higher-order derivatives or use some other method to classify the point precisely. Although this is not too difficult for one-parameter functions, it becomes progressively more involved as the number of variables increases (see, for example, [323,129]).
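Theorems 7.3.3 and 7.3.4 together yield a simple classification procedure based on the leading minors of the Hessian. The sketch below is my own (function name and tolerance are assumptions); when all $d_i$ are nonzero, the form is necessarily positive definite, negative definite, or indefinite, and borderline cases (some $d_i$ near zero) are reported as undetermined, reflecting the semidefinite difficulty noted above.

```python
def classify_stationary_point(H, tol=1e-12):
    """Classify a stationary point from its Hessian H (nested lists),
    by inspecting the leading principal minors d_1, ..., d_n."""
    def det(M):
        # Cofactor expansion along the first row; fine for small n.
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j]
                   * det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))
    d = [det([row[:i] for row in H[:i]]) for i in range(1, len(H) + 1)]
    if any(abs(di) <= tol for di in d):
        return 'undetermined'          # semidefinite border: higher order needed
    if all(di > 0 for di in d):
        return 'minimum'               # H positive definite
    if all((-1) ** i * di > 0 for i, di in enumerate(d, start=1)):
        return 'maximum'               # H negative definite
    return 'neither'                   # indefinite: saddle-type point

print(classify_stationary_point([[2.0, 0.0], [0.0, 2.0]]))    # minimum
print(classify_stationary_point([[2.0, 0.0], [0.0, -2.0]]))   # neither
print(classify_stationary_point([[-1.0, 0.0], [0.0, -3.0]]))  # maximum
```

The `'undetermined'` branch is deliberately conservative: a vanishing minor means the quadratic terms alone cannot classify the point, just as in the one-parameter case with $f''(t_0) = 0$.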



December 2009