
Euler’s Method, Taylor Series Method, Runge-Kutta Methods, Multi-Step Methods and Stability.

REVIEW: We start with the differential equation

dy(t)/dt = f(t, y(t)),   y(0) = y0.   (1.1)


This equation can be nonlinear, or even a system of nonlinear equations (in which case y is a vector and f is a vector of n different functions).
Numerical Solution of an ODE: The idea behind numerical solutions of a Differential Equation is to replace differentiation by differencing. A computer cannot differentiate but it can easily do a difference. (Differentiation is a continuous process. Differencing is a discrete process.) Now we introduce the most important tool that will be used in this section. By the time you’ve mastered this section, you’ll be able to do Taylor Expansions in your sleep. (I am already doing Taylor expansions in your sleep, right?!)
Taylor Series Expansion: You’ll recall (?) from your calculus class that if a function y(t) behaves nicely enough, then its Taylor series expansion converges:

y(t + ∆t) = y(t) + ∆ty′(t) + (1/2!)∆t²y′′(t) + (1/3!)∆t³y′′′(t) + ...



The Taylor series with remainder term is

y(t + ∆t) = y(t) + ∆ty′(t) + (1/2!)∆t²y′′(t) + (1/3!)∆t³y′′′(t) + ... + (1/n!)∆tⁿy⁽ⁿ⁾(τ),




where τ is some value between t and t + ∆t. You can truncate this for any value of n.
Euler’s Method: If we truncate the Taylor series after the first derivative term,

y(t + ∆t) = y(t) + ∆ty′(t) + (1/2)∆t²y′′(τ),

we can rearrange this and solve for y′(t):

y′(t) = (y(t + ∆t) − y(t))/∆t + O(∆t).


Now we can attempt to solve (1.1) by replacing the derivative with a difference:

y((n + 1)∆t) ≈ y(n∆t) + ∆tf (n∆t, y(n∆t))

Start with y(0) and step forward to solve for any time.
What’s good about this? If the O term is something nice looking, this quantity decays with ∆t, so if we take ∆t smaller and smaller, this gets closer and closer to the real value. What can go wrong? The O term may be ugly. The errors can accumulate as I step forward in time. Also, even though this may be a good approximation for y′(t), it may not converge to the right solution. To answer these questions, we look at this scheme in depth.
Terminology: From now on, we’ll call yn the numerical approximation to the solution y(n∆t), and tn = n∆t. Euler’s method can then be written

yn+1 = yn + ∆tf(tn, yn),   n = 0, ..., N − 1.


This method assumes that you can move from one location to the next using the slope given by the equation (1.1). We saw last time that when we do this, our errors will decay linearly with ∆t. We will show this again today, but in two steps, so that we can generalize it. The proof should look very familiar!
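As a sanity check, here is a minimal Python sketch of the scheme yn+1 = yn + ∆tf(tn, yn). This snippet is our own illustration, not part of the original notes; the function name euler and the test problem y′ = −y are our choices:

```python
import math

def euler(f, y0, t0, T, N):
    """Advance y' = f(t, y) from t0 to T in N steps of Euler's method."""
    dt = (T - t0) / N
    t, y = t0, y0
    for _ in range(N):
        y = y + dt * f(t, y)  # y_{n+1} = y_n + dt * f(t_n, y_n)
        t = t + dt
    return y

# Hypothetical test problem: y' = -y, y(0) = 1, exact solution y(t) = e^{-t}.
approx = euler(lambda t, y: -y, 1.0, 0.0, 1.0, 1000)
print(abs(approx - math.exp(-1.0)))  # small, and shrinks linearly with dt
```

Halving ∆t should roughly halve the error, which is exactly the linear decay described above.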
Local Truncation Error: To be able to evaluate what we expect the order of a method to look like, we look at the quantity

LTE(t) = (y(t + ∆t) − y(t))/∆t − f(t, y(t)),

i.e. it is the residual when the exact solution of the ODE (1.1) is plugged into the numerical scheme. If yn is close to y(tn) then the LTE will be close to zero.

The local truncation error represents the terms neglected by truncating the Taylor series. This is not the error that we get from the method, (i.e. the difference between the real solution and the numerical solution) but will be connected.

If I don’t know y(t), what is the use of this definition? (and if I do know y(t), what do I need the method for?!). It turns out that even without explicit knowledge of the solution we can still calculate the LTE and use it as an estimate and control of the error, by placing certain smoothness assumptions on y(t) and using the Taylor Expansions.

Clearly, at time tn, Euler’s method has Local Truncation Error:

LTE = (y(tn + ∆t) − y(tn))/∆t − f(tn, y(tn)) = O(∆t),

in other words, we can write this

y(tn+1) = y(tn) + ∆tf (tn, y(tn)) + ∆tLT E.

Of course, the method itself is

yn+1 = yn + ∆tf(tn, yn).

Subtracting these two,

|y(tn+1) − yn+1| = |y(tn) − yn + ∆t(f(tn, y(tn)) − f(tn, yn)) + ∆tLTE|
                ≤ |y(tn) − yn| + ∆t|f(tn, y(tn)) − f(tn, yn)| + ∆t|LTE|
                ≤ |y(tn) − yn| + ∆tL|y(tn) − yn| + ∆t|LTE|.

Because f is Lipschitz continuous,

|f(tn, y(tn)) − f(tn, yn)| ≤ L|y(tn) − yn|.


And so, if we let the Global Error be en = |y(tn) − yn|, then we can bound the growth of this error:
en+1 ≤ (1 + ∆tL)en + ∆t|LTE|.

How does this help us bound the method?
Lemma: If zi+1 ≤ zi(1 + a∆t) + b, then zi ≤ e^(ai∆t)(z0 + b/(a∆t)).
Proof:

zi+1 ≤ zi(1 + a∆t) + b
     ≤ (zi−1(1 + a∆t) + b)(1 + a∆t) + b
     ...
     ≤ z0(1 + a∆t)^(i+1) + b(1 + (1 + a∆t) + ... + (1 + a∆t)^i)
     = z0(1 + a∆t)^(i+1) + b((1 + a∆t)^(i+1) − 1)/((1 + a∆t) − 1)
     ≤ z0(1 + a∆t)^(i+1) + (b/(a∆t))(1 + a∆t)^(i+1)
     = (1 + a∆t)^(i+1)(z0 + b/(a∆t))
     ≤ e^(a(i+1)∆t)(z0 + b/(a∆t)),

and shifting the index down by one gives zi ≤ e^(ai∆t)(z0 + b/(a∆t)).
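The lemma can also be checked numerically. The snippet below is our own sketch, with arbitrarily chosen constants a, b, z0: it iterates the recursion with equality (the worst case) and verifies the stated bound at every step:

```python
import math

# Check the lemma bound: iterate z_{i+1} = (1 + a*dt)*z_i + b with equality
# and compare against e^(a*i*dt) * (z_0 + b/(a*dt)) at each step.
a, dt, b, z0, n = 2.0, 0.01, 1e-4, 0.5, 200
z = z0
ok = True
for i in range(1, n + 1):
    z = (1.0 + a * dt) * z + b
    bound = math.exp(a * i * dt) * (z0 + b / (a * dt))
    ok = ok and (z <= bound)
print(ok)  # True: the bound holds at every step
```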


Applying this lemma to the global error, with a = L and b = ∆t|LTE| ≤ (M/2)∆t² (here M is a bound on |y′′|), we have |en| ≤ e^(Ln∆t)(|e0| + M∆t/(2L)). Now, if n∆t ≤ T then |en| ≤ e^(LT)(|e0| + M∆t/(2L)), and since |e0| = 0 we have:

|en| ≤ e^(LT) M∆t/(2L).

Comparing this with the local error, LTE ≤ (1/2)M∆t, we see that the global error has the same order as the local error, with a different coefficient in the estimate. The two are related by the Lipschitz constant L and the final time T.
The order r of a scheme is defined by |en| = O(∆t^r). The higher the order of the scheme, the faster the error decays.
Comment: The important thing to understand is that the Local Truncation Error is not always an indicator of what the global error will do. Schemes for which the global error has the same order as the LTE are good schemes. We need to identify the property of a method that guarantees the global error will be of the same order as the LTE.
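The order r can also be measured empirically: run a scheme at several step sizes and take log2 of successive error ratios. This sketch is our own illustration; solve_euler, observed_order, and the test problem y′ = −y are our choices. It recovers r ≈ 1 for Euler's method:

```python
import math

def solve_euler(dt):
    # Integrate y' = -y, y(0) = 1 up to t = 1 with Euler's method.
    y = 1.0
    for _ in range(round(1.0 / dt)):
        y += dt * (-y)
    return y

def observed_order(solver, exact, dts):
    """Estimate r in |e_n| = O(dt^r): halving dt should divide the error by 2^r."""
    errs = [abs(solver(dt) - exact) for dt in dts]
    return [math.log2(errs[i] / errs[i + 1]) for i in range(len(errs) - 1)]

print(observed_order(solve_euler, math.exp(-1.0), [0.01, 0.005, 0.0025]))
# values close to 1.0, as expected for a first-order scheme
```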


Taylor Series Methods: To derive these methods we start with a Taylor Expansion:

y(t + ∆t) ≈ y(t) + ∆ty′(t) + (1/2)∆t²y′′(t) + ... + (1/r!)∆t^r y⁽r⁾(t).



Let’s say we want to truncate this at the second derivative and base a method on that.

The scheme is, then:

yn+1 = yn + ∆tfn + (1/2)∆t²f′n,

where f′ = df/dt = ft + f fy along the solution. The Taylor series method can be written as

yn+1 = yn + ∆tF(tn, yn, ∆t),

where F = f + (1/2)∆tf′. If we take the LTE for this scheme, we get (as expected)

LTE(t) = (y(tn + ∆t) − y(tn))/∆t − f(tn, y(tn)) − (1/2)∆tf′(tn, y(tn)) = O(∆t²).
Of course, we designed this method to give us this order, so it shouldn’t be a surprise! So the LTE is reasonable, but what about the global error? Just as in the Euler Forward case, we can show that the global error is of the same order as the LTE. How do we do this? We have two facts,

y(tn+1) = y(tn) + ∆tF(tn, y(tn), ∆t) + ∆tLTE,

and

yn+1 = yn + ∆tF(tn, yn, ∆t),

where F = f + (1/2)∆tf′. Now we subtract these two:

|y(tn+1) − yn+1| = |y(tn) − yn + ∆t(F(tn, y(tn)) − F(tn, yn)) + ∆tLTE|
                ≤ |y(tn) − yn| + ∆t|F(tn, y(tn)) − F(tn, yn)| + ∆t|LTE|.

Now, if F is Lipschitz continuous, we can say

en+1 ≤ (1 + ∆tL)en + ∆t|LT E|.

Of course, this is the same proof as for Euler’s method, except that now we are looking at F, not f, and the LTE is of higher order. We can do this no matter which Taylor series method we use, i.e. no matter how many terms we keep before truncating.

Advantages and Disadvantages of the Taylor Series Method:

Advantages: (a) one step, explicit; (b) can be high order; (c) easy to show that the global error is the same order as the LTE.
Disadvantages: needs the explicit form of the derivatives of f.
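To make the derivative requirement concrete, here is a sketch (ours, not from the notes) of the second-order Taylor method. Note that the caller must supply f′ = ft + f fy by hand, which is exactly the disadvantage listed above:

```python
import math

def taylor2(f, fprime, y0, t0, T, N):
    """Second-order Taylor method: y_{n+1} = y_n + dt*f + (dt^2/2)*f',
    where f' = f_t + f*f_y must be supplied explicitly by the caller."""
    dt = (T - t0) / N
    t, y = t0, y0
    for _ in range(N):
        y = y + dt * f(t, y) + 0.5 * dt**2 * fprime(t, y)
        t += dt
    return y

# Hypothetical test problem: y' = -y, so f' = f_t + f*f_y = 0 + (-y)*(-1) = y.
approx = taylor2(lambda t, y: -y, lambda t, y: y, 1.0, 0.0, 1.0, 100)
print(abs(approx - math.exp(-1.0)))  # much smaller than Euler at the same dt
```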


Runge-Kutta Methods: To avoid the disadvantage of the Taylor series method, we can use Runge-Kutta methods. These are still one-step methods, but they depend on estimates of the solution at different points. They are written out so that they don’t look messy:
Second Order Runge-Kutta Methods:

k1 = ∆tf(ti, yi)
k2 = ∆tf(ti + α∆t, yi + βk1)
yi+1 = yi + ak1 + bk2

Let’s see how we can choose the parameters a, b, α, β so that this method has the highest order LTE possible. Take the Taylor expansions to express the LTE:

k1(t) = ∆tf(t, y(t))
k2(t) = ∆tf(t + α∆t, y(t) + βk1(t))
      = ∆t[f(t, y(t)) + α∆tft(t, y(t)) + βk1(t)fy(t, y(t)) + O(∆t²)]

LTE(t) = (y(t + ∆t) − y(t))/∆t − (ak1(t) + bk2(t))/∆t
       = (y(t + ∆t) − y(t))/∆t − af(t, y(t)) − b[f(t, y(t)) + α∆tft(t, y(t)) + β∆tf(t, y(t))fy(t, y(t))] + O(∆t²)
       = y′(t) + (1/2)∆ty′′(t) − (a + b)f(t, y(t)) − ∆t(bαft(t, y(t)) + bβf(t, y(t))fy(t, y(t))) + O(∆t²).

Now use y′ = f and y′′ = ft + f fy to get

LTE(t) = (1 − a − b)f(t, y(t)) + ∆t((1/2) − bα)ft(t, y(t)) + ∆t((1/2) − bβ)f(t, y(t))fy(t, y(t)) + O(∆t²).

To make the LTE as high order as possible we need

a + b = 1,   bα = 1/2,   bβ = 1/2.

So we want a = 1 − b, α = β = 1/(2b).

Fourth Order Runge-Kutta Methods:
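For instance, the choice b = 1 gives a = 0 and α = β = 1/2, the classical midpoint rule. Here is a minimal sketch (our own; the name rk2_midpoint and the test problem y′ = −y are illustrative choices):

```python
import math

def rk2_midpoint(f, y0, t0, T, N):
    """Midpoint rule: the choice b = 1, a = 0, alpha = beta = 1/2 above."""
    dt = (T - t0) / N
    t, y = t0, y0
    for _ in range(N):
        k1 = dt * f(t, y)
        k2 = dt * f(t + 0.5 * dt, y + 0.5 * k1)
        y = y + k2            # a*k1 + b*k2 with a = 0, b = 1
        t += dt
    return y

# Hypothetical test problem: y' = -y, exact solution e^{-t}.
err = abs(rk2_midpoint(lambda t, y: -y, 1.0, 0.0, 1.0, 100) - math.exp(-1.0))
print(err)  # halving dt should cut this by roughly 4 (second order)
```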

k1 = ∆tf(ti, yi)                          (1.3)
k2 = ∆tf(ti + (1/2)∆t, yi + (1/2)k1)      (1.4)
k3 = ∆tf(ti + (1/2)∆t, yi + (1/2)k2)      (1.5)
k4 = ∆tf(ti + ∆t, yi + k3)                (1.6)
yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)    (1.7)
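In code, one step of the classical fourth-order scheme looks like this (our own sketch; the name rk4 and the test setup are our choices, and the final combination uses the standard weights 1, 2, 2, 1):

```python
import math

def rk4(f, y0, t0, T, N):
    """Classical fourth-order Runge-Kutta with weights 1, 2, 2, 1."""
    dt = (T - t0) / N
    t, y = t0, y0
    for _ in range(N):
        k1 = dt * f(t, y)
        k2 = dt * f(t + 0.5 * dt, y + 0.5 * k1)
        k3 = dt * f(t + 0.5 * dt, y + 0.5 * k2)
        k4 = dt * f(t + dt, y + k3)
        y = y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        t += dt
    return y

# Hypothetical test problem: y' = -y, exact solution e^{-t}.
print(abs(rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 50) - math.exp(-1.0)))
```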

The second order method requires 2 evaluations of f at every timestep, and the fourth order method requires 4 evaluations of f at every timestep. In general: for an rth order Runge-Kutta method we need S(r) evaluations of f for each timestep, where

        { r       for r ≤ 4
S(r) =  { r + 1   for r = 5 and r = 6
        { r + 2   for r ≥ 7


Practically speaking, people stop at r = 5.
Advantages of Runge-Kutta Methods:
1. One step method – the global error is of the same order as the local error.
2. Don’t need to know derivatives of f.
3. Easy for ”Automatic Error Control”.
Automatic Error Control: Uniform grid spacing – in this case, uniform time steps – is good for some cases but not always. Sometimes we deal with problems where varying the gridsize makes sense. How do you know when to change the stepsize? If we have an rth order scheme and an (r+1)th order scheme, we can take the difference between the two as an estimate of the error in the scheme, and make the stepsize smaller if we want a smaller error, or larger if we can tolerate a larger error. For automatic error control you are computing a ”useless” (r+1)th order scheme . . . what a waste! But with Runge-Kutta we can take a fifth order method and a fourth order method that use the same k’s, so it costs only a little extra work at each step.
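A genuine embedded pair (fifth/fourth order sharing the same k's) is beyond a short sketch, so the snippet below illustrates the same control idea more crudely by step doubling: compare one RK4 step of size ∆t against two steps of size ∆t/2, and adapt ∆t to the difference. All names and tolerances here are our own choices:

```python
import math

def rk4_step(f, t, y, dt):
    # One classical RK4 step from (t, y) with step size dt.
    k1 = dt * f(t, y)
    k2 = dt * f(t + 0.5 * dt, y + 0.5 * k1)
    k3 = dt * f(t + 0.5 * dt, y + 0.5 * k2)
    k4 = dt * f(t + dt, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

def adaptive_solve(f, y0, t0, T, dt=0.1, tol=1e-8):
    """Crude error control by step doubling: compare one dt step against two
    dt/2 steps, shrink dt if the difference exceeds tol, grow it if far below."""
    t, y = t0, y0
    while t < T:
        dt = min(dt, T - t)           # don't overshoot the final time
        big = rk4_step(f, t, y, dt)
        half = rk4_step(f, t + 0.5 * dt, rk4_step(f, t, y, 0.5 * dt), 0.5 * dt)
        err = abs(big - half)         # estimate of the error in the big step
        if err > tol:
            dt *= 0.5                 # too large: retry with a smaller step
            continue
        t, y = t + dt, half           # accept the more accurate value
        if err < tol / 32:
            dt *= 2.0                 # error well under budget: grow the step
    return y

# Hypothetical test problem: y' = -y, exact solution e^{-t}.
result = adaptive_solve(lambda t, y: -y, 1.0, 0.0, 1.0)
print(abs(result - math.exp(-1.0)))  # accumulated error stays small
```

An embedded pair does the same comparison but reuses the k's between the two orders, which is why it is cheaper than this step-doubling approach.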