
## Order out of Chaos: The Lorenz equations and prediction

The peculiar result of the Lorenz equations is that they produce deterministic chaos. The problem is deterministic because we know exactly how the system will change at every instant (there is nothing random in these equations). For high enough Rayleigh numbers, however, it is chaotic: even tiny changes in the initial conditions grow through non-linear feedback with time and eventually lead to completely different behavior. This is known as the Butterfly effect, because Lorenz suggested that a butterfly flapping its wings in one part of the world might make it impossible to predict the weather a week in advance.
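
This sensitivity is easy to demonstrate numerically. The sketch below (a plain-Python fourth-order Runge-Kutta integration; the initial condition, step size, and perturbation size are my own illustrative choices, with r = 32 to match the run used later in this lab) follows two trajectories whose starting points differ by one part in 10^8 and watches the separation grow:

```python
# Sketch: sensitive dependence in the Lorenz equations (pure Python, RK4).
# Parameters sigma = 10, b = 8/3 are the classic choices; r = 32 matches
# the run described later in this lab.  Initial conditions are arbitrary.

def lorenz(s, sigma=10.0, r=32.0, b=8.0 / 3.0):
    """Right-hand side of the Lorenz equations."""
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(s, dt):
    """One fourth-order Runge-Kutta step."""
    def nudge(state, deriv, c):
        return tuple(si + c * di for si, di in zip(state, deriv))
    k1 = lorenz(s)
    k2 = lorenz(nudge(s, k1, dt / 2))
    k3 = lorenz(nudge(s, k2, dt / 2))
    k4 = lorenz(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * u + v)
                 for si, p, q, u, v in zip(s, k1, k2, k3, k4))

# Two trajectories whose initial conditions differ by one part in 10^8:
dt, nsteps = 0.01, 3000            # integrate to t = 30
a = (1.0, 1.0, 20.0)
b = (1.0 + 1e-8, 1.0, 20.0)
max_sep = 0.0
for _ in range(nsteps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    sep = sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    max_sep = max(max_sep, sep)

# The 1e-8 initial difference is amplified to order the attractor's size.
print("maximum separation:", max_sep)
```

The separation grows roughly exponentially until it saturates at the size of the attractor itself, which is exactly why long-range prediction fails.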

Nevertheless, chaos does not mean random unpredictability. After watching these equations flip and flop for a while, you can see that in some respects they are fairly well behaved. The overall behavior is confined to the strange attractor (the system can't suddenly stop, or start spinning a thousand times faster) and the overall patterns repeat in a quasi-periodic fashion. The problem is that you can't follow any particular trajectory for an arbitrarily long period of time.

Still, watching these equations perform, you might get the feeling that you should be able to follow them for some period of time. If you consider each wobble in the problem a cycle, then Lorenz actually showed that you can predict the behavior of the equations at least one cycle ahead. Here we will reproduce that result (and more) and introduce some of the basics of non-linear forecasting.

Here is the basic problem. Suppose you were handed a time-series of one of these variables (or of anything else, maybe rainfall in Arizona). You might want to ask yourself ``how much information is in this time-series? If I know a part of it, how far into the future can I predict?'' This is the basic question of non-linear forecasting (e.g. [8, 9]). If your data is produced by a low-dimensional chaotic attractor, the answer may be that you can predict more than you thought.

What Lorenz asked was ``if I know the peak value of a variable at some time, can I predict the value of the next peak?'' To do this he first formed a series of all the successive maximum values of that variable (i.e. he had a list peak1, peak2, ..., peakN), then for each peak i he plotted peak i+1 as a function of peak i (easier to show than to say). And he found the following remarkable picture, known as the Lorenz Map.
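
The construction can be sketched in a few lines. Since we don't have the lab's data file here, this example generates its own peak series by integrating the Lorenz equations and detecting local maxima of z (my choice of variable and parameters for illustration; the lab's file was produced at much higher precision):

```python
# Sketch: build Lorenz-map pairs (peak_i, peak_{i+1}) from scratch.
# The variable choice (z), parameters, and step size are illustrative
# assumptions, not a reproduction of the lab's high-precision run.

def lorenz(s, sigma=10.0, r=32.0, b=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(s, dt):
    def nudge(state, deriv, c):
        return tuple(si + c * di for si, di in zip(state, deriv))
    k1 = lorenz(s)
    k2 = lorenz(nudge(s, k1, dt / 2))
    k3 = lorenz(nudge(s, k2, dt / 2))
    k4 = lorenz(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * u + v)
                 for si, p, q, u, v in zip(s, k1, k2, k3, k4))

# Discard a short transient, then integrate for a time of 100,
# recording z at every step.
dt, nsteps = 0.01, 10000
state = (1.0, 1.0, 20.0)
for _ in range(500):
    state = rk4_step(state, dt)
zs = []
for _ in range(nsteps):
    state = rk4_step(state, dt)
    zs.append(state[2])

# A peak is simply a sample larger than both of its neighbours.
peaks = [zs[i] for i in range(1, len(zs) - 1)
         if zs[i - 1] < zs[i] > zs[i + 1]]

# Pair each peak with its successor: these (x, y) points ARE the map.
pairs = list(zip(peaks, peaks[1:]))
print(len(peaks), "peaks ->", len(pairs), "map points")
```

Plotting the `pairs` list (peak i on the horizontal axis, peak i+1 on the vertical) reproduces the figure below.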

Figure 8: The Lorenz Map for predicting the height of the next peak given knowledge of the current height. Except near the sharp cusp at the top of the graph, the narrowness of this graph suggests that you can predict the height of the next peak extremely well.

What is remarkable about this picture is how narrow the graph is. While it is not a line, it is so narrow that, except perhaps very near the cusp, you have an excellent chance of predicting the size of the next peak if you know the size of the current one. Order out of chaos indeed. What I want to do now, though, is find out whether it is possible to repeat this feat two cycles into the future, three cycles, etc. I.e. the question is: how far can you predict into the future?

To answer that, do the following problems:

1. Reproduce the Lorenz Map. I've already made a file with a large number of peak values in it (the run was made at very high precision for r=32 for a time of 100). Get a copy of the file T2max.txt and load it into Excel. Then form a second column that is a repeat of the first, except that it starts with the second value (this is easier to do than it sounds; I'll show you). Plot column A against column B and voilà: the Lorenz Map.
2. Now repeat the exercise for larger and larger offsets, i.e. make similar plots but start 2 peaks ahead, 3 peaks ahead, etc. If we define ``reasonably predictable'' as having a small range of possible values (a thin graph), answer the following questions.
   1. At what offset do you think prediction becomes meaningless? Explain your answer.
   2. Is there any range (or ranges) of peak values where you can almost always make a good prediction, even though you can't predict over the whole range? If so, explain what this means in terms of the behavior of the attractor.
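
The offset experiment above can also be sketched numerically. This example regenerates a peak series the same way as before, then for each offset k forms the pairs (peak i, peak i+k) and measures the graph's ``thickness'': the average vertical distance between points whose horizontal positions nearly coincide. The bin half-width eps = 0.5 (and everything else here) is my own illustrative choice:

```python
# Sketch: how fast does predictability decay with the offset k?
# Peak generation as in the earlier sketch; the thickness measure and
# eps = 0.5 bin half-width are illustrative assumptions.

def lorenz(s, sigma=10.0, r=32.0, b=8.0 / 3.0):
    x, y, z = s
    return (sigma * (y - x), x * (r - z) - y, x * y - b * z)

def rk4_step(s, dt):
    def nudge(state, deriv, c):
        return tuple(si + c * di for si, di in zip(state, deriv))
    k1 = lorenz(s)
    k2 = lorenz(nudge(s, k1, dt / 2))
    k3 = lorenz(nudge(s, k2, dt / 2))
    k4 = lorenz(nudge(s, k3, dt))
    return tuple(si + dt / 6 * (p + 2 * q + 2 * u + v)
                 for si, p, q, u, v in zip(s, k1, k2, k3, k4))

# Generate a peak series (transient discarded, then a time of 100).
dt = 0.01
state = (1.0, 1.0, 20.0)
for _ in range(500):
    state = rk4_step(state, dt)
zs = []
for _ in range(10000):
    state = rk4_step(state, dt)
    zs.append(state[2])
peaks = [zs[i] for i in range(1, len(zs) - 1)
         if zs[i - 1] < zs[i] > zs[i + 1]]

def thickness(pairs, eps=0.5):
    """Average |y1 - y2| over all point pairs with |x1 - x2| < eps."""
    diffs = []
    for i, (x1, y1) in enumerate(pairs):
        for x2, y2 in pairs[i + 1:]:
            if abs(x1 - x2) < eps:
                diffs.append(abs(y1 - y2))
    return sum(diffs) / len(diffs) if diffs else 0.0

spreads = {}
for k in (1, 2, 3, 4, 5):
    lag_pairs = list(zip(peaks, peaks[k:]))   # (peak_i, peak_{i+k})
    spreads[k] = thickness(lag_pairs)
    print("offset", k, "-> thickness", round(spreads[k], 3))
```

The thickness grows with k: the k = 1 graph is nearly a function, while by larger offsets the points fill out a broad cloud and the prediction is no better than guessing from the overall range of peak values.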


marc spiegelman
Mon Sep 22 21:30:22 EDT 1997