18.03 Class 20, Mar 22


Numerical Methods

Vocabulary: Euler's method, Heun = Improved Euler = RK2, Runge-Kutta;
Step size.


The study of differential equations rests on three legs:

. Analytic, exact, symbolic methods

. Qualitative methods (critical points, for example)

. Numerical methods.

As an example, take the first order linear ODE  x' + tx = 1.  (*)
We can "solve" this: The homogeneous equation  x' = -tx  is separable,
and has as solution  xh = exp(-t^2/2).  Thus the general solution of
(*)  is

x = exp(-t^2/2) integral exp(t^2/2) dt

We are "done" in the sense that we have "reduced" the ODE to an integral.
The trouble is that I can't express this integral as an elementary function
-- one made up of the familiar functions from calculus -- and neither can you;
it can't be done. It CAN be expressed in terms of the "error function,"
but that's another story.

To be more precise, suppose we impose the initial condition  x(0) = 0.
This specifies a constant of integration, or, what is the same, limits
of integration on the integral:

x = exp(-t^2/2) integral_0 ^t exp(tau^2/2) d tau.

Here I have had to introduce a new symbol for the variable inside the
integral, since  t  is being used as a limit of integration and so as far
as the integral is concerned it is constant.

But this doesn't really help us to compute. Suppose for concreteness that
we were interested in the value of  x(2),  perhaps to high accuracy.

I showed a Matlab plot of the direction field and this solution.
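Although the antiderivative isn't elementary, the definite integral is
perfectly computable by ordinary numerical quadrature. Here is a sketch in
Python (composite Simpson's rule; the function names are mine, and nothing
here is specific to this integrand):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule with n subintervals (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

# x(2) = exp(-2) * integral_0^2 exp(tau^2/2) d tau
x2 = math.exp(-2) * simpson(lambda tau: math.exp(tau**2 / 2), 0.0, 2.0, 200)
print(x2)  # about 0.639988
```

This gives the value of x(2) to many digits, for comparison with what the
step-by-step ODE methods below produce.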

Here's an approach: use the tangent line approximation.

At  (0,0),  the direction field  F(t,x) = 1 - tx has value  1,  so the
first approximation to  x(2)  is  2.

Better: go to  t = 1  along the tangent line and look at the value of
the direction field there. You are at  (1,1),  and  F(1,1) = 0:  so now
you should head off with slope zero, and arrive at  (2,1):  so the next
approximation is  1.

We can do this with more breaks, too. Let's set up the general picture with
some notation.  We have the ODE  x' = F(t,x),  with initial condition
x(a) = x0.  Suppose we want to compute  x  on the interval  [a,b],  or
even just find  x(b),  with good accuracy.  Divide the interval into some
number,  N,  of equal pieces. Each will have width  h = (b-a)/N.  This is
called the "stepsize."  The endpoints of these pieces are

t0 = a, t1 = a + h, t2 = a + 2h, ... , tn = a + nh, ... tN = b.


Make a table with the following entries:

n		tn		xn		F(tn,xn)

0		t0=a		x0		F(t0,x0)

1		t1=a+h		x1		F(t1,x1)

2		t2=a+2h		x2		F(t2,x2)

...


In the line  n = 1 ,  x1 = x0 + h F(t0,x0)

In the line  n = 2 ,  x2 = x1 + h F(t1,x1)

and in general    x(n+1) = xn + h F(tn,xn).

This is "Euler's method."

I showed a table of computed values for our ODE  x' + tx = 1  using
h = .02, N = 100;  we find the estimate

x(2) = 0.643303...
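The recursion in the table translates directly into code; a minimal sketch
in Python (the function and variable names are my own):

```python
def euler(F, a, b, x0, N):
    """Tabulate x(n+1) = xn + h F(tn, xn) on [a, b] and return the last x."""
    h = (b - a) / N
    x = x0
    for n in range(N):
        x = x + h * F(a + n * h, x)
    return x

# The running example x' = 1 - t*x with x(0) = 0, stepped to t = 2:
F = lambda t, x: 1 - t * x
print(euler(F, 0.0, 2.0, 0.0, 100))  # about 0.6433
```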

Before we see how accurate this is let's identify some sources for error.
Much of numerical analysis is understanding and bounding potential error.

(1) Round off and stepsize.  Each computation involves a small error,
from round off, and these errors accumulate, so you want to use as few
steps as possible. On the other hand making the stepsize small
keeps the polygonal approximation close to the actual solution. There is
a tension here.

(2) Dependence on initial condition.  I showed a slide of the Riccati
equation  x' = x^2 - t , using initial condition (0,0.725).  Matlab
indicates that the solution is asymptotic to  - sqrt(t),  while Euler's
method with  h = 0.1  traces a curve blowing up to infinity.  The initial
condition was in a region where the direction field is spreading apart,
and this produces sensitive dependence upon initial conditions;  so slight
errors get amplified.

(3) Solutions may not continue. I showed the example  x' = - t/x.
The solution curves are concentric circles about the origin. Starting at
(-1,-1), for example, with  h = 0.1,  you trace nicely around the circle
till you get near (1,0). Then the slope becomes very large, and when
you take that step of 0.1 the Euler polygon jumps way up, very far
from the true solution. It then follows a new circle back down to the
vicinity of the  t-axis, and then gets kicked again, either up or down.
This now has nothing at all to do with the actual solution of the ODE,
which as a function simply ends at  t = sqrt(2).  This kind of behavior
is inevitable if you use a constant stepsize. Sophisticated numerical
methods therefore test the size of the direction field and use a smaller
stepsize if it gets large.
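A sketch of this run in Python (h = 0.1, forty steps from (-1,-1); the true
solution is the circle t^2 + x^2 = 2, which ends at t = sqrt(2), but the
polygon marches right on past):

```python
import math

h = 0.1
t, x = -1.0, -1.0
points = [(t, x)]
for n in range(40):
    x = x + h * (-t / x)        # Euler step for x' = -t/x
    t = -1.0 + (n + 1) * h
    points.append((t, x))

t0, x0 = points[10]             # the step landing at t = 0
print(t0**2 + x0**2)            # still close to 2: tracking the circle so far
print(points[-1][0] > math.sqrt(2))  # True: the polygon continues regardless
```

Near (sqrt(2), 0) the slope -t/x blows up, the polygon takes its big jump,
and everything after that is numerical fiction.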

(4) Instability. I showed a larger portion of the Euler approximation
in our original case, with  h = 0.2.  At around  t = 20  the curve seems
to go wild.  This happens also with  h = 0.3,  but earlier, at around
t = 15.  This is a real phenomenon: The direction field changes very
rapidly from very negative (just above the nullcline) to very positive
(just below the nullcline). The effect on actual solutions is to drive
them rapidly towards the nullcline. However, since the Euler polygon is
not an exact solution, it will be either slightly above or slightly below.
When this error gets big enough, the curve gets into the large slope
area, and starts zooming around.  Sophisticated methods detect rapidly
changing direction fields and take precautions.
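For this particular ODE the mechanism can be seen explicitly: the Euler step
is x(n+1) = xn + h(1 - tn xn) = (1 - h tn) xn + h, so any error present is
multiplied by (1 - h tn) at each step, and once  h tn > 2  this factor
exceeds 1 in absolute value. With h = 0.2 that threshold is t = 10, and the
accumulated amplification becomes visible by around t = 20. A sketch in
Python:

```python
h, N = 0.2, 125                  # integrate x' = 1 - t*x out to t = 25
t, x = 0.0, 0.0
worst = 0.0
for n in range(N):
    x = (1 - h * t) * x + h      # Euler step for x' = 1 - t*x
    t = (n + 1) * h
    if t >= 20:
        worst = max(worst, abs(x))
print(worst)  # large: the polygon has gone wild, though the true x(t) ~ 1/t
```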


The issue of getting the most accuracy from the minimal number of steps
is addressed by "Runge-Kutta" methods. These involve polling the value
of the direction field several times for each step, in the area through
which the solution is going to travel. There is an RK method for each
"order"; the order is the number of times the direction field is polled
at each step. RK1 is Euler. RK2 is also called "improved Euler" or "Heun's
method." RK4 is sometimes called simply Runge-Kutta; it is the most commonly
used method.

RK2 works as follows: We are at (tn,xn),  and want to come up with  x(n+1).
First use the Euler approach, to get

u(n+1) = xn + h A,  where  A = F(tn,xn).

Then compute the value of the direction field there:

B = F(t(n+1),u(n+1)).

The solution curve will run through the region between these two,
and so it is reasonable to approximate it by a straight line segment
having slope given by the average of these two extreme values. Thus

x(n+1) = xn + h (A+B)/2.
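This translates into code just as easily; a sketch in Python, tried on the
running example x' = 1 - tx with x(0) = 0, stepped to t = 2:

```python
def heun(F, a, b, x0, N):
    """RK2 / improved Euler: average the slopes at the two ends of the step."""
    h = (b - a) / N
    x = x0
    for n in range(N):
        t = a + n * h
        A = F(t, x)              # slope where we are
        B = F(t + h, x + h * A)  # slope at the Euler-predicted endpoint
        x = x + h * (A + B) / 2
    return x

F = lambda t, x: 1 - t * x
print(heun(F, 0.0, 2.0, 0.0, 100))  # much closer to 0.639988 than Euler's 0.6433
```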

RK4 is similar but more complicated; you poll the direction field at the
two endpoints, and also at a couple of points with  t = tn + (h/2).


It is interesting to compare these methods. Here's a comparison
using the ODE  x' = x,  x(0) = 1;  the solution is  x = e^t,
and we study  x(1) = e.


Method 		Steps 		Evaluations 	 Error

Euler  		1000  		1000 		 1.35 x 10^{-3}

Heun  		 500  		1000		 1.81 x 10^{-6}

RK4   		 250  		1000		 7.99 x 10^{-15}


Each evaluation of the direction field costs money - it takes time.
Heun polls twice per step, and RK4 polls 4 times, so the cost of
Euler with 1000 steps is about the same as the cost of Heun with 500
steps or RK4 with 250 steps.
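These figures can be checked; a sketch in Python (the RK4 step uses the
standard fourth-order tableau; the very smallest errors are sensitive to
round-off and to the exact variant used, so the last row may come out
somewhat differently):

```python
import math

def step_euler(F, t, x, h):
    return x + h * F(t, x)

def step_heun(F, t, x, h):                     # RK2 / improved Euler
    A = F(t, x)
    B = F(t + h, x + h * A)
    return x + h * (A + B) / 2

def step_rk4(F, t, x, h):                      # standard RK4 tableau
    k1 = F(t, x)
    k2 = F(t + h / 2, x + h / 2 * k1)
    k3 = F(t + h / 2, x + h / 2 * k2)
    k4 = F(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(step, F, a, b, x0, N):
    h = (b - a) / N
    x = x0
    for n in range(N):
        x = step(F, a + n * h, x, h)
    return x

F = lambda t, x: x                             # x' = x, x(0) = 1, so x(1) = e
errs = {}
for step, N in [(step_euler, 1000), (step_heun, 500), (step_rk4, 250)]:
    errs[step.__name__] = abs(solve(step, F, 0.0, 1.0, 1.0, N) - math.e)
    print(step.__name__, N, errs[step.__name__])
```

Each method gets the same budget of 1000 evaluations, so the comparison
isolates the effect of the order of the method.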

The error for the Euler method is around 1/1000,  even using 1000
steps of stepsize  1/1000.  This reflects a general theorem: the
expected error of Euler's method is proportional to the stepsize  h.

For Heun the theory predicts error proportional to  h^2,  and this
is borne out here.  For RK4 it predicts error proportional to  h^4,
which is  10^{-12} here; in the event RK4 is even more accurate.

The moral is that for good accuracy Euler is essentially useless,
and RK4 will always win. There are still higher order methods,
but they involve more overhead as well and experience has shown
that RK4 is a good compromise.

In lecture I didn't have the chance to display the table generated
by Matlab's standard ODE solver,  ode45,  for the solution to
x' + tx = 1 with  x(0) = 0;  it gives us  x(2) = 0.63998776963661,
probably accurate except for the last digit. The Euler method with
h = .02  gave  0.6433... , around 0.0033 off.