1993
Nonlinear system identification and prediction
Manu K. Mathew
NONLINEAR SYSTEM IDENTIFICATION
AND
PREDICTION
by
Manu K. Mathew
A Thesis Submitted
in
Partial Fulfillment
of the
Requirements for the Degree of
MASTER OF SCIENCE
in
Mechanical Engineering
Approved by:
Prof. J. S. Torok - Thesis Advisor
Prof. C. W. Haines (Department Head)
Prof. R. Orr (Mathematics Department)
Department of Mechanical Engineering
College of Engineering
Rochester Institute of Technology
Rochester, New York
I, Manu K. Mathew, hereby grant permission to the Wallace Memorial Library of RIT to reproduce my thesis in whole or in part. Any reproduction will not be for commercial use or profit.
"The forecast," said Mr. Oliver, turning the pages till he found it, "says: Variable winds; fair average temperature; rain at times." There was a fecklessness, a lack of symmetry and order in the clouds, as they thinned and thickened. Was it their own law, or no law, they obeyed?
-- Virginia Woolf
...town may have falling rain. But weather is the phenomenon we share. With its variability, general dependability, and moment to moment unpredictability, weather infiltrates our schedules, sets or undermines our plans, affects our moods, and unites us with the environment and each other. Weather is only one...

ACKNOWLEDGMENTS
For helping me write and complete this thesis, I would like to take this opportunity to graciously thank the following people:

- Dr. Joseph Torok, for all the help he has given me in this endeavor, for pushing me to find and understand more about the subject, and for the support and guidance he gave me during my school years to cope with the madness and chaos of each quarter.

- Edward Zeigler, for all the times he has gone out of his way to help me out in my time of need, and for the numerous occasions he has helped me to solve many problems during the past year.

- Soni, my fiancee, for all her support in my work, for always being there, and especially for her patience during the endless hours I spent either writing or working in the lab.

- My Dad and Mom, without whom I would not have this cherished chance for a great future, and for all the sacrifices they have made to put all my dreams within grasp.

Thank you all.

1. Introduction 1
2. Creating the Delayed Vector Space
   (i) Finding the optimal decorrelation value 12
   (ii) Determining the embedding dimension 27
3. Constructing a Nonlinear Model 35
4. Prediction using developed Nonlinear Model 50
5. Conclusion 60
6. Appendices 63
INTRODUCTION

The great power of science lies in its ability to relate cause and effect. On the basis of the laws of gravitation, for example, eclipses can be predicted thousands of years in advance. There are other natural phenomena that are not as predictable. The weather, the flow of a mountain stream, the roll of the dice all have unpredictable aspects. Since there is no clear relation between cause and effect, such phenomena are said to have random elements. Yet until recently, there was little reason to doubt that precise predictability could in principle be achieved. It was assumed that it was only necessary to gather and process a sufficient amount of information.

Such a viewpoint has been altered by a striking discovery: simple deterministic systems with only a few elements can generate apparently random behavior. The randomness is fundamental; gathering more information does not make it go away. Randomness generated in this way has come to be called chaos. A seeming paradox is that chaos is deterministic, generated by fixed rules that do not themselves involve any elements of chance. In principle, the future is completely determined by the past, but in practice small uncertainties are amplified, so that even though the behavior is predictable in the short term, it is unpredictable in the long term.
There is an order in chaos: underlying rules generate chaotic behavior that creates randomness in the same way as a card dealer shuffles a deck of cards or a blender mixes cake batter. Chaos is the irregular behavior of simple deterministic equations. If irregular fluctuations can be described with relatively simple deterministic equations, then it becomes possible to predict future fluctuations, at least in the short term, and perhaps even to control them.
Traditionally, the modeling of a dynamical system proceeds along one of two lines. In one approach, deterministic equations of motion are derived from first principles, initial conditions are measured, and the equations of motion are integrated forward in time. Alternatively, when a model is unavailable or intractable, the dynamics can be modeled as a random process, using nondeterministic and typically linear laws of motion. Until recently, the notions of determinism and randomness were seen as opposites and were studied as separate subjects with little or no overlap. Complicated phenomena were assumed to result from complicated physics among many degrees of freedom, and thus were analyzed as random processes. Simple dynamical systems were assumed to produce simple phenomena, so only simple phenomena were modeled deterministically.
Chaos provides a link between deterministic systems and random processes, with both good and bad implications for the prediction problem. In a deterministic system, chaotic dynamics can amplify small differences, which in the long run produces effectively unpredictable behavior. On the other hand, chaos implies that not all random-looking behavior is the product of complicated physics. Under the right circumstances, only a few degrees of freedom are necessary to generate chaotic motion. In this case, it is possible to model the behavior deterministically and to make short-term predictions that are far better than those that would be obtained from a linear stochastic model. Chaos is thus a double-edged sword: it implies that even though accurate long-term predictions may be impossible, very accurate short-term predictions may be possible. Apparent randomness in a time series may be due to chaotic behavior of a nonlinear but deterministic system. In such cases it is possible to exploit the determinism to make short-term forecasts that are much more accurate than those one could make from a linear stochastic model. This is done by first reconstructing a state space and then using nonlinear regression methods to create a dynamical model. Nonlinear models are valuable not only as short-term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior.
Building a dynamical model directly from the data involves two steps:

1. State space reconstruction. In general, we can define the phase space, or state space, as the space whose axes are the coordinates of position and velocity, and the phase trajectory as a curve in this space representing the evolution of the system. A state x(t) is an information set, typically a real vector, which fully describes the system at a fixed time t. If it is known with complete accuracy, and if the system is strictly deterministic, then the state contains sufficient information to determine the future of the system. The goal of the state space reconstruction is to use the immediate past behavior of the time series to reconstruct the current state of the system, at least to a level of accuracy permitted by the presence of noise.

2. Nonlinear function approximation. The dynamics can be represented by a function that maps the current state x(t) to a future state x(t + T).
An
approximationThe
systemsbeing
presentedarethe
following
nonlinearsystems,
which arebest
described
using
the
equationsthat
follow
[16]
:ROSSLER
: X' =-(Y +
Z)
Y
=X
+0.2
Y
Z'
=
0.4
-5.8Z
+XZ
LORENZ
: X* =-10
X
+10 Y
Y
=28 X
-Y
-X Z
Z* =
-2.67
Z
+XZ
The
systemsdescribed
abovewillbe
usedto test the
robustness ofthe
methodsbeing
usedin
this
study.(These
systems wereimplemented
using
Matlab,
through the
programs 'rossler.m'and
'lorenz.m';
Appendix
5
&
6
respectively).Some
examples of"state
spaces"also
known
as"phase
portraits"
are
described
onthe
following
pages(Figures
1-5).
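The Matlab programs 'rossler.m' and 'lorenz.m' are not reproduced in this excerpt. As a hedged stand-in, a minimal Python sketch of the same two systems under a fourth-order Runge-Kutta step might look like the following (function and variable names are illustrative):

```python
import numpy as np

def rossler(s):
    # X' = -(Y + Z), Y' = X + 0.2 Y, Z' = 0.4 - 5.8 Z + X Z
    x, y, z = s
    return np.array([-(y + z), x + 0.2 * y, 0.4 - 5.8 * z + x * z])

def lorenz(s):
    # X' = -10 X + 10 Y, Y' = 28 X - Y - X Z, Z' = -2.67 Z + X Y
    x, y, z = s
    return np.array([-10.0 * x + 10.0 * y, 28.0 * x - y - x * z, -2.67 * z + x * y])

def rk4(f, s0, dt, n):
    """Integrate s' = f(s) for n fixed steps; returns an (n+1, 3) trajectory."""
    traj = np.empty((n + 1, len(s0)))
    traj[0] = s = np.asarray(s0, dtype=float)
    for i in range(n):
        k1 = f(s)
        k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2)
        k4 = f(s + dt * k3)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        traj[i + 1] = s
    return traj
```

A call such as `rk4(lorenz, [1.0, 1.0, 1.0], 0.002, 15000)` mirrors the 15,000-point, 0.002-second-step data sets used later in this study.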
The purpose of this investigation was to study modeling techniques used to describe a nonlinear system, and the implementation of chaos theory to determine the level of predictability that these models provide with regard to the actual behavior of the system. To summarize briefly, the following steps are involved:
(1) Developing a delayed vector space using an obtained time series. (The concept of delayed vector spaces will be elaborated on later in more detail.)
(i) This requires finding the delay time τ.
(ii) It also means that the dimensionality of the system would have to be determined, i.e. the minimum number of degrees of freedom required to describe the system.
(2) The next step is to derive a set of dynamical equations to best approximate the system.
(3) Finally, we need to study the predictability of the system based on the developed model.
FIGURE 2
The Duffing equation; a mechanical system with a nonlinear spring is governed by:
x'' + k x' + a x + b x^3 = A cos ωt

FIGURE 3
The Van der Pol oscillator; an analysis of a nonlinear oscillator which contains a nonlinear damping term that causes self-excited oscillation. The differential equation is expressed below:
x'' - k(a - 3 b x^2) x' + x = A sin ωt

[Figures 1, 4, 5: phase portraits of the example systems]
I. DETERMINING THE DELAY TIME

System identification and prediction entails the intelligent and efficient deduction of system parameters in order to effectively construct a dynamic model of a process. In simple terms, it is much like knowing what goes in and out of a "black box", and then trying to reconstruct the type of mechanism that is "working" in the box.
The correlation between the observed and the modeled (forecasted and hypothetical) process is proposed to be used as the degree of determinability. In some cases apparent randomness in a time series may be due to chaotic behavior of a nonlinear deterministic system. In such cases it is possible to exploit the determinism to make short-term forecasts that are much more accurate than one could make from a linear stochastic model. This is done by first reconstructing a state space and then using nonlinear function approximation methods to create a dynamical model. Nonlinear models are valuable not only as short-term forecasters, but also as diagnostic tools for identifying and quantifying low-dimensional chaotic behavior.
To find how a system evolves from a given initial state, one can apply the dynamics (equations of motion) incrementally along an orbit. This method of deducing the system's behavior requires computational effort proportional to the desired length of time to follow the orbit. For some systems, such as a frictionless pendulum, the equations of motion may be solved to give any future state in terms of the initial state. A closed form solution provides a short cut: a simpler algorithm that needs only the initial state and final time to predict the future without stepping through intermediate steps. The unpredictable behavior of chaotic dynamical systems cannot be expressed in a closed form solution; therefore there are no short cuts to analyze their behavior. Frictional losses, for example, cause the orbits to be attracted to a smaller region of the state space with a lower dimension. Any such region is called an attractor. An attractor is what the behavior of a system settles down to, or is attracted to. A system may have several attractors; if that is the case, different initial conditions may evolve to different attractors. In mathematical models of physical dynamical systems, the dynamical evolution is visualized in the state space, whose dimension is given by the number of dependent variables.
In experiments, the state space is usually not known beforehand, and often only one variable of the system can be measured, e.g. velocity. Thus only a projection of a trajectory of the system, with its usually high-dimensional state space, onto a single coordinate axis is given. It can be seen that the time series already contains most of the information about the total system and not just a minor part. The single variable considered develops in time not due to its own isolated dynamical laws, but is usually coupled to all other dependent variables of the system. Its dynamics, therefore, reflects the influence of all variables considered. This mutual interaction lets a single variable contain the dynamics of all the other variables. Consider, for example, a mass on a spring.
The state space is the usual phase space, given by the coordinates representing the elongation x and the velocity, respectively. From our knowledge of the dynamics of the system, we know that with the knowledge of x(t) (one coordinate only) we also have information about the other state variables, since the state equations are coupled. Thus, when only one coordinate is measured, say the elongation x, then from x(t) the velocity (which is the other coordinate) can subsequently be determined [5].
The solution may be formulated as the problem of reconstruction of an attractor of a dynamical system from a time series of one measured variable only. In other words, if an attractor was, shall we say, three dimensional, being governed by three variables x, y and z, and if x was the only measurable quantity, then we could say that impregnated within this quantity are the other two variables y and z. Reconstruction of the state space of an attractor or nonlinear system from a time series of one (measured) variable only can be solved using the concept of delayed vectors [5].
Theory states that for almost every state variable x(t) and for almost every time interval τ, a trajectory in an m-dimensional state space can be constructed from the measured values of the time series, [x(t0 + k·ts), k = 0, 1, 2, ..., N], by grouping m values to form m-dimensional vectors. Therefore, in essence, the past behavior of the time series contains information about the present state. If, for convenience, the sampling time ts is assumed to be uniform, this information can be grouped as

x(t0 + k·ts) = ( x(k·ts), x(k·ts + τ), ..., x(k·ts + (m-1)τ) )^T,  for k ≥ 0,

where ts is the sampling interval at which samples of the variable x are taken and T denotes the transpose (by convention, all states are taken to be column vectors). The delayed vector space X_T is used in obtaining information about the future, using past information.
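The grouping above is straightforward to implement. The following Python sketch (Python standing in for the thesis's Matlab; the function name is illustrative) builds the m-dimensional delay vectors from a uniformly sampled scalar series, with the delay expressed in samples:

```python
import numpy as np

def delay_vectors(series, m, tau):
    """Build m-dimensional delay vectors
    x_k = (x(k), x(k + tau), ..., x(k + (m-1)*tau))
    from a scalar time series sampled at a uniform interval (tau in samples)."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (m - 1) * tau       # number of complete vectors
    if n <= 0:
        raise ValueError("series too short for this (m, tau)")
    return np.column_stack([series[i * tau : i * tau + n] for i in range(m)])
```

For example, `delay_vectors(np.sin(0.1 * np.arange(500)), m=3, tau=15)` yields rows that trace a closed loop in three-dimensional pseudo-space.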
Delay vectors are currently the most widely used choice for state space reconstruction. Unfortunately, to use them it is necessary to choose the delay parameter τ. Takens' theorem suggests that this choice is not important; however, because of noise contamination and estimation problems, it is crucial to choose a good value for τ. If τ is too small, each coordinate is almost the same, and the trajectories of the reconstructed space are squeezed along the identity line; if τ is too large, in the presence of chaos and noise the dynamics at one time become effectively causally disconnected from the dynamics at a later time, so that even simple geometric objects look extremely complicated. Examples of these behaviors are discussed later in this section.

Another method of state space reconstruction in common use is the principal components technique [16]. The simplest way to implement this procedure is to compute a covariance matrix

Cij = < x(t - iτ) x(t - jτ) >t ,  with |i - j| < m,

and then compute its eigenvalues, where < >t denotes a time average. The eigenvectors of Cij define a new coordinate system, which is a rotation of the original system. The eigenvalues of this matrix are the average root mean square projections of the m-dimensional delay coordinate time series onto the eigenvectors. The first eigenvector carries the maximum possible projection of the state vector, the second has the largest possible projection for any fixed state vector orthogonal to the first, and so on. This procedure can be useful in the presence of noise; discarding eigenvalues whose size is below the noise level can reduce noise. Being the more popular of the two methods, the method of using delay coordinates was used here.
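A sketch of this principal components route in Python, under the assumption (not spelled out in the text) that the covariance is estimated from mean-removed delay vectors:

```python
import numpy as np

def principal_components(series, m, tau, keep=None):
    """Rotate m-dimensional delay vectors onto the eigenvectors of their
    covariance matrix C_ij = <x(t - i*tau) x(t - j*tau)>_t (time average).
    Passing keep < m discards the small-eigenvalue (noise) directions."""
    series = np.asarray(series, dtype=float)
    n = len(series) - (m - 1) * tau
    X = np.column_stack([series[i * tau : i * tau + n] for i in range(m)])
    X = X - X.mean(axis=0)                  # remove the mean (an assumption)
    C = (X.T @ X) / n                       # covariance matrix estimate
    evals, evecs = np.linalg.eigh(C)        # symmetric: ascending eigenvalues
    order = np.argsort(evals)[::-1]         # largest projection first
    evals, evecs = evals[order], evecs[:, order]
    k = m if keep is None else keep
    return X @ evecs[:, :k], evals
```

For a pure sine wave the delay vectors lie in a plane, so only the first two eigenvalues are appreciably nonzero; discarding the rest is exactly the noise-reduction step described above.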
In the sections to follow are the techniques used to determine τ, the delay time, and the effect of τ on the reconstruction of the state space. As mentioned earlier, the question of the best way to choose τ is still open. If τ is too small, then the coordinates x(n + τ) and x(n + 2τ) represent almost the same information. Similarly, if τ is too large, then x(n + τ) and x(n + 2τ) represent distinct, uncorrelated descriptions of the embedding space. Examples of this can be seen on pages 17-20 (Figures 6-9). Notice that as τ gets larger, the phase portrait becomes more evident; in other words, a larger amount of information is displayed. This will be true of any system; shown as examples are the Lorenz and Rossler strange attractors. The delayed phase portraits were obtained by plotting a pseudo-space portrait next to the original phase portrait. This was done using the Matlab program 'autocorr.m' (Appendix 9). The values of τ were arbitrarily chosen to demonstrate the effect of choosing the correct value of the delay time. The Rossler and Lorenz differential equations (Appendix 5 & 6) were evaluated using the Runge-Kutta method, specifying the initial conditions, the start time, the end time, and the time step.
[Figures 6-9: delayed-value phase portraits of the Rossler and Lorenz attractors for several delay times]
This value would correspond to the delay time τ. The actual delay time used was usually a tenth of the value provided by this function [12], i.e. a tenth of the time associated with the first local minimum of the autocorrelation function. The value of this minimum point is usually too large to provide accurate information with regard to the system; therefore, taking a tenth of this value establishes a time delay that provides us with better results. However, through the work that was done, it has been concluded that this is not always the best approach to follow.
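The rule of thumb just described — locate the first local minimum of the autocorrelation function and take a tenth of that lag — can be sketched as follows (Python in place of the thesis's 'autocorr.m'; names are illustrative):

```python
import numpy as np

def autocorr(series, max_lag):
    """Normalized autocorrelation of a scalar series for lags 0..max_lag."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / len(x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                     for k in range(max_lag + 1)])

def delay_from_first_minimum(series, max_lag, fraction=0.1):
    """Rule of thumb used in the text: find the first local minimum of the
    autocorrelation function and take a fraction (here a tenth) of that lag."""
    r = autocorr(series, max_lag)
    for k in range(1, max_lag):
        if r[k] < r[k - 1] and r[k] < r[k + 1]:
            return max(1, int(fraction * k))
    return max(1, int(fraction * max_lag))  # no minimum found within range
```

For a sine wave with a 100-sample period the first minimum falls near lag 50, so this heuristic returns a delay of about 5 samples.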
It was found, by evaluating the decorrelation function in a higher dimension (usually the dimension of the system), that the optimum value of the delay usually fell in the region where the first maximum slope (or first inflection) of the autocorrelation function occurred. The delay time provided by the first method was never sufficient for good curve fitting and usually resulted in bad curve fits. A better delay was always found to be much higher than was provided by this method. This, however, does not say that the second method provided a value that corresponded to the optimal shift necessary to create the delay vectors. Using the second approach, we could confidently assume that the optimal delay time was in the near vicinity of this value. Examples to demonstrate these different delays (shifts) are shown on pages 23-26 (Figures 10-13), as are the effects each had on the reconstruction. As the figures show, an improper choice of the delay could lead to disastrous effects on the curve fitting. The section that follows discusses the method used to determine, or rather estimate, the embedding dimension.
[Figures 10-13: autocorrelation (correlation value) curves and the corresponding reconstructions for different delay choices]
II. DETERMINING THE EMBEDDING DIMENSION

The development of algorithms for estimating the dimension of an attractor directly from a time series has been an active field of research over the last decade. The objective of these algorithms is to estimate the fractal dimension of a hypothesized strange attractor in the reconstructed state space. The dimension of a system can be defined to count either:
- each pair of position-velocity coordinates associated with the displacement, or
- only one of the two elements, position or velocity.
Thus, a set of N bodies free to move in three spatial directions has 6N degrees of freedom. Sometimes it is easier to think of the phase space as containing all the degrees of freedom of a particular system. It is known that complex aperiodic behavior can result from deterministic physical systems with few degrees of freedom. Dissipative dynamical systems which may have many degrees of freedom (such as fluids) can, after an initial transient time, settle down to a state in which only a few degrees of freedom are relevant to the dynamics.
In this post-transient state, the system's trajectory through phase space is confined to a low-dimensional subset of the available phase space. When the subset is a strange attractor, the motion is complex, aperiodic, and typically chaotic. On the other hand, the motion of a dynamical system itself is complicated; a full description would require many degrees of freedom. Thus an experimentalist, observing a system that displays apparently erratic motion, must distinguish between deterministic "chaos" and stochastic "noise". Does the system have a (low dimensional) strange attractor in its phase space, and if it does, what is its dimension?
If the time series is deterministic and of finite dimension, the estimated dimension of the reconstructed attractor should converge to the dimension of the strange attractor as the embedding dimension is increased. If the time series is random, the estimated dimension should be equal to the embedding dimension. Unfortunately, these statements hold only in the limit of arbitrarily long, noise-free, stationary time series; in some cases the convergence can be excruciatingly slow. Historically, the first numerical algorithms were based on a "box-counting" principle, although this was found to be impractical for a number of reasons. The correlation dimension, developed by Grassberger and Proccacia [9] and independently by Takens, considers the statistics of distances between pairs of points, and this remains the most popular way to compute dimension. This method was used in our study to determine the dimension of a given system and will be discussed in greater depth as we proceed further into this section. Other approaches for estimating dimension include using the statistics of the k-th nearest neighbors, using Lyapunov exponents to estimate a "Lyapunov dimension" (which is related to the actual geometric dimension), and using m-dimensional hyperplanes to confine the data in local regions [16]. It has also been suggested that prediction itself can provide a robust test for low-dimensionality; basically, the estimate made is a reliable upper bound on the number of degrees of freedom in the time series.
Going back to the Grassberger and Proccacia approach mentioned earlier, this method defines a correlation integral C(r, N, d) that is an average of the pointwise mass functions B(x; r, N, d) at each point x in the reconstructed state space. Here, d is the dimension of the embedding space, N is the number of points, and B(x; r, N, d) is the fraction of points (not including x itself) within a distance r of the point x. The asymptotic scaling C(r, N, d) ~ r^d, for small r, defines the correlation dimension d. This point will be elaborated on further in the section. The pointwise mass functions B(x; r, N, d) are kernel-density estimates for the natural measure of the strange attractor at points xi on the attractor, and as such more accurately characterize the attractor than the crude histograms that box counting provides. A further advantage is that the correlation integral scales as r^d down to distances on the order of the smallest distance between any pair of points; this range is significantly greater than box counting permits.

We wish to estimate the dimension of an attractor which is embedded in an m-dimensional Euclidean space from a sample of N points on the attractor, that is, from the set {x1, x2, ..., xN} with each xi belonging to the embedding space R^m. Grassberger and Proccacia suggest that we measure the distance between every pair of points and form the correlation integral

C(r) = lim(N→∞) [ 2 / (N(N-1)) ] Σ(i=1..N) Σ(j=i+1..N) Θ( r - |xi - xj| )
In this expression, N refers to the number of points in the R^m embedding space of the attractor, and Θ represents the Heaviside step function, given by:

Θ(x) = 0 for x < 0,
Θ(x) = 1 for x > 0,
Θ(0) = 0.
The points of the attractor are represented by xi and xj, and the vertical bars | | represent a suitable norm, in this case the magnitude of the "distance difference" between the i-th and j-th points in the delayed vector space, i.e.

|x(i) - x(j)| = sqrt( (x(i) - x(j))^2 )
The dimension is established by counting the number of distances which are less than a fixed radius, i.e.

N(R) = (number of pairs with |xi - xj| < R) ~ R^d

where d represents the correlation dimension of the attractor. The process is repeated as the radius is incremented from a minimum value to a maximum value. Thus r in the correlation integral represents the quantities mentioned above. At this point,

C(r) ~ r^d.

Even though it is not quite obvious at first sight, the statement referred to above shows us that d can be determined from the slope of the curve obtained when C(r) is plotted against r, each on a logarithmic scale:

log N(R) = log R^d
log N(R) = d log R
d = log N(R) / log R

The quantity d represents the minimum dimensionality of the artificial phase space necessary to include the attractor. If the time series consists only of the sampling of a single dynamical variable from a multivariable dynamical system, the correlation dimension still may be extracted from the series by embedding it consecutively in a sequence of higher dimensional spaces. Thus theory states that as we increase the embedding dimension, the slopes of these curves will eventually converge on the dimension of the attractor. Finally, if the attractor has a finite dimension, one can nevertheless construct the complete state vector x(t) by the following process: x(t + τ) is used as the first coordinate, x(t + 2τ) as the second, and x(t + nτ) as the last; then, if n is large, this will be a faithful representation of the original state. Different choices of n and of the delay time τ will lead to differently shaped attractors, but all of them will have the same dimension.
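The counting procedure defined above can be sketched directly in a few lines of Python. This is a brute-force O(N²) illustration of the Grassberger-Proccacia pair count and log-log slope fit, not the thesis's 'dimembed.m':

```python
import numpy as np

def correlation_sum(X, r):
    """C(r): fraction of distinct pairs (i < j) of reconstructed points X
    (rows = state vectors) whose Euclidean separation is below r."""
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    i, j = np.triu_indices(len(X), k=1)     # every pair counted once
    pair = dist[i, j]
    return np.count_nonzero(pair < r) / pair.size

def correlation_dimension(X, radii):
    """Estimate d as the slope of log C(r) versus log r over the given radii."""
    radii = np.asarray(radii, dtype=float)
    C = np.array([correlation_sum(X, r) for r in radii])
    keep = C > 0                            # log is undefined where C(r) = 0
    slope, _intercept = np.polyfit(np.log(radii[keep]), np.log(C[keep]), 1)
    return slope
```

As a sanity check, points scattered along a straight line in three-dimensional space should yield a slope near 1, since for a one-dimensional set the pair count grows linearly with r.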
The Matlab program "dimembed.m" (Appendix 11) was written to evaluate the embedding dimension; however, each time the program embedded in a higher dimension, the optimal delay time was recomputed for that particular embedding dimension. This was done so that each time the program ran, the most information was extracted from the time series, for a better approximation of the dimensionality of the system in question.
The embedding program provided us with the following results: using a data set of 15,000 points (time step 0.002 s), the dimensionality of the Rossler attractor turned out to be approximately 1.98; the expected value is slightly greater than 2.0. The Lorenz attractor was also run with 15,000 points (time step 0.002 s), and in this case the dimensionality was found to be 2.03, compared with the accepted value of 2.05. The Grassberger and Proccacia method of determining dimensionality is only good for systems with low dimensionality. Furthermore, the process is painstakingly slow.
To get good, comparable results, it is required that approximately 20,000 points or more be used in the process, as the greater the number of points, the more information there is about the system. This does not help at all in the run time of the program and, as would be expected, increases it tremendously.
Once the dimensionality has been determined, we are ready to develop the model equations that will essentially capture the essence of the system in question. The calculated dimensionalities are rounded off to the next highest integer (i.e. a system with a dimensionality of 2.05 would be best described using a three-dimensional state space).
[Figures: log C(r) versus log r curves whose slopes give the correlation dimension estimates]
EXTRACTION OF DYNAMICAL EQUATIONS FROM DATA
It has generally been assumed that quantities that fluctuate with time or space with no discernible pattern could be described by a large number of deterministic equations or by stochastic ones. Ideally, one would like to be able to extract equations from a fluctuating time series, but due to the lack of additional information this becomes quite unrealistic. The variable observed may not be simply related to the fundamental dynamical variables of the system, and the measurement will usually be contaminated by noise and round-off errors and limited by sample rate and duration.
However, it may be possible to find a system of equations which mimics the general features, such as the topology, in a suitable phase space. These equations might shed insight into the behavior of the system. The goal of forecasting is to predict the future behavior of a time series from a set of known past values. Once we have chosen a method for state space reconstruction, the data set can be mapped into state space. We can construct a model for the dynamics by using an approximating function F; the approximation F also can be used to predict future states. The use of a state space representation is most effective when the dynamics is stationary or can be mapped onto something that is stationary. The phrase "stationary" refers to an autonomous system, that is, one with no time-varying coefficients. Forecasting involves extrapolation in time, in the sense that one uses data from one domain (the past) to extrapolate to the behavior of the data in a disjoint domain (the future). As we have seen through earlier discussions, state space reconstruction makes it possible to convert extrapolation of the time series into a problem of interpolation in the state space.
The most commonly used approach to time series forecasting assumes that the dynamics F is linear. This has several advantages. Linear functions have a unique representation, and they are relatively easy to approximate. However, as is known, linear systems are exceptional; virtually any real system contains nonlinearities. Furthermore, linear dynamical systems cannot produce limit cycles or chaotic dynamics. Thus, to model chaotic dynamics, we are forced to consider nonlinear functions.
The importance of nonlinear functions cannot be overemphasized. More recently, it has been appreciated that ordinary, but nonlinear, differential equations with as few as three degrees of freedom, or difference equations with a single degree of freedom, can have pseudo-random (chaotic) solutions. This has led us to the hope that such simple systems can model the real world. We will now take this opportunity to discuss a few aspects that we have found useful, first discussing several nonlinear representations and then discussing the methods we used to characterize the chaotic systems that were studied. Any topologically complete set of special functions can be used to represent nonlinear functions. Discussed below are some of the various techniques that can be used.
o Wavelets are a localized generalization of the Fourier series [16]. Their most immediate use is as a means of signal decomposition and, hence, state space reconstruction. They also present an interesting possibility as a function representation within the state space.
o Neural nets are currently very popular [16]. There are many different varieties of neural nets, and it is beyond the scope of this investigation to discuss them. Neural nets have the disadvantage, when compared to some other methods of function approximation, that for most algorithms parameter fitting is slow.
o Radial basis functions are of the form:

    f*(s) = Σi ai φ(|| s - si ||)

where φ is an arbitrary function, si is the value of s associated with the ith data point, and || s - si || is the distance from s to the ith data point [16]. This functional form has the advantage that the least squares solution for the ai is a linear problem.
o Local function approximation is carried out by fitting parameters based on points within a given region [14]. This approach allows for considerable flexibility in building a globally nonlinear model while fitting only a few parameters in each local patch. Local fitting has the advantage of being quick and accurate, but has the disadvantage that the approximations are discontinuous.
o Polynomials are of the form:

    f*(s) = Σd ad s^d

(where d represents the dimension of the variables). They have the advantage that the parameters 'a' can be determined by a least squares algorithm, which is fast and gives a unique solution [16]. They have the disadvantage that the solution usually approaches infinity as s → ∞, and consequently they do not extrapolate well outside the domain of the data set.
For our purposes, we decided to use polynomials for fitting the coefficients to the data set. The choice was made due to the ease of its implementation. This provided us with a unique dynamical equation that mimicked the original data set to approximately 1000 points. We shall now outline the method used for determining a set of differential dynamical equations of motion which approximate an underlying generating process. Our method can recover a system of equations with good predictive ability using a few data points. Given a (scalar) time series x[t] derived from some physical system as a signal, we assume that at least one component of the signal has been generated by a physical process containing a few nonlinear degrees of freedom and is chaotic, although this is not an absolute requirement.
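The first operation applied to such a series is the delay embedding developed in the earlier sections; the construction itself is only a matter of slicing the scalar record. A sketch (Python/NumPy stand-in, with our own function name, for what the thesis performs in Matlab):

```python
import numpy as np

def delay_embed(x, d, tau):
    """Build the delay vector space XT from a scalar series x.

    Row t is ( x[t], x[t + tau], ..., x[t + (d-1)*tau] ),
    with d the embedding dimension and tau the delay time.
    """
    x = np.asarray(x)
    n = len(x) - (d - 1) * tau          # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(d)])

x = np.sin(0.1 * np.arange(100))
XT = delay_embed(x, d=3, tau=5)
print(XT.shape)   # (90, 3)
```

Each row of XT is one reconstructed state; the fitting described next operates on these rows.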
Using the time-delay phase space reconstruction methods described in the first section, we construct a Euclidean space that is topologically equivalent to the original system phase space by forming the delay vector space (as discussed earlier). To do this, we need to know the embedding dimension 'd' and the delay time τ. The methods used to determine these parameters have already been discussed in the previous sections. We assume that motion in the original phase space is governed by a set of coupled ordinary differential equations (ODE). We attempted to find an approximation to this set of ODE's in the reconstruction phase space (let us call this delayed vector space XT), capturing the underlying dynamics in a simple set of global equations, which are valid globally on the data attractor and which have the form:

    dXT[t]/dt = F(XT[t])

or, in discrete time, XT[t+1] = F(XT[t]). Calculating the derivatives of XT was performed using a sixth order accuracy central differencing technique. It was implemented using the Matlab code 'cent6.m' (Appendix 1 & 12).
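'cent6.m' itself is reproduced in the appendix; the standard seven-point, sixth-order central stencil such a routine would use can be sketched as follows (a Python/NumPy version handling interior points only; boundary treatment is omitted):

```python
import numpy as np

def cent6(x, dt):
    """Sixth-order-accurate central-difference estimate of dx/dt.

    Standard 7-point interior stencil; the three points at each
    boundary are simply dropped in this sketch.
    """
    x = np.asarray(x, dtype=float)
    return (-x[:-6] + 9*x[1:-5] - 45*x[2:-4]
            + 45*x[4:-2] - 9*x[5:-1] + x[6:]) / (60.0 * dt)

t = np.linspace(0, 2*np.pi, 1001)
dxdt = cent6(np.sin(t), t[1] - t[0])
# Compare against the exact derivative cos(t) on the interior points
print(np.max(np.abs(dxdt - np.cos(t[3:-3]))))
```

For a smooth sampled signal the truncation error scales as dt^6, which is why a high-order stencil is attractive for estimating the right-hand side of the ODE from data.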
The next step was to determine the ODE. We assume a basis set of the components of a vector argument, the simplest case being polynomials, with:

    F(XT[t]) = ai1 p1(XT[t]) + ai2 p2(XT[t]) + ... + aiM pM(XT[t])

Here pj represents the terms of an mth-order, d-dimensional polynomial constructed from the elements of XT[t], and the aij are the coefficients of these terms. The terms generated using the programs were of the fifth order. In all, there were approximately 56 terms.
Using the Matlab program 'nlfit.m' (Appendix 13), the required polynomial is generated. The basic core of the program generates a matrix which represents all possible combinations of the XT[t] values required to provide an adequate nonlinear fit to ẋ(t), the approximated derivative values of x(t). The program works such that all power combinations may be generated using a simple nested algorithm. As the program cycles through each loop, new unique polynomial terms are generated.
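The nested generation of power combinations can be sketched as follows (Python; the iteration via itertools is our own stand-in for the nested loops of 'nlfit.m'). For three delay coordinates and fifth-order terms this yields exactly the 56 terms quoted above:

```python
import numpy as np
from itertools import combinations_with_replacement

def monomial_exponents(d, order):
    """All exponent tuples (e1, ..., ed) with total degree <= order."""
    exps = []
    for deg in range(order + 1):
        for combo in combinations_with_replacement(range(d), deg):
            e = [0] * d
            for var in combo:
                e[var] += 1
            exps.append(tuple(e))
    return exps

def design_matrix(XT, order=5):
    """Each row holds every monomial pj(XT[t]) up to the given order."""
    exps = monomial_exponents(XT.shape[1], order)
    return np.column_stack([np.prod(XT ** np.array(e), axis=1) for e in exps])

exps = monomial_exponents(3, 5)
print(len(exps))   # 56 terms, matching C(5+3, 3)
```

Each row of the resulting design matrix corresponds to one reconstructed state, each column to one polynomial term pj.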
For basis sets such as simple polynomials that are linear in the coefficients, the system of equations can be written as a set of matrix equations:

    XT Ai = ẋi
In the above representation, the kth row of XT consists of the pj terms generated by the vector XT[t] as described above, Ai is the matrix of the unknown coefficients for the function F, and ẋi is a column matrix of approximations to the local derivatives in the ith direction of the phase space trajectory at each data point XT[t]. Typically, to obtain Ai (and therefore F), we invert this matrix equation using a least squares minimization method such as the singular value decomposition (SVD). The program has been written to generate 56 different terms.
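The least squares inversion is a one-line operation in most numerical environments. As an illustration (Python/NumPy; np.linalg.lstsq is SVD-based, standing in for the SVD solve described above, and the synthetic data here are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix of polynomial terms (n points x M terms)
# and a derivative column produced by known coefficients.
P = rng.normal(size=(200, 10))
a_true = rng.normal(size=10)
xdot = P @ a_true

# Least squares inversion via the singular value decomposition
a_fit, residuals, rank, sv = np.linalg.lstsq(P, xdot, rcond=None)
print(np.max(np.abs(a_fit - a_true)))  # recovers the coefficients
```

With noise-free data the coefficients are recovered essentially exactly; with measurement noise the SVD route also allows small singular values to be truncated for stability.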
As mentioned earlier, using the original Rossler or Lorenz equations we were able to generate original values of x, y, and z based on specified initial conditions and a Runge-Kutta routine. Thus, in the same manner, to check how good the generated ODE is, we subject the equation, or rather the coefficients, to a Runge-Kutta routine to iterate and generate new predicted values ( z[t] ) for each system. Required for the iteration are the values for the initial conditions, the time step, and the number of points required to be generated. The Matlab program 'nlpoly.m' (Appendix 14) was written to perform this procedure within a Runge-Kutta routine (Appendix 15).
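The iteration applied to the fitted right-hand side is an ordinary fourth-order Runge-Kutta loop. A sketch (Python; the exponential-decay test function stands in for the fitted polynomial F, and all names are ours):

```python
import numpy as np

def rk4(F, x0, dt, n):
    """Integrate dx/dt = F(x) from x0 for n steps of size dt."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n):
        x = xs[-1]
        k1 = F(x)
        k2 = F(x + 0.5 * dt * k1)
        k3 = F(x + 0.5 * dt * k2)
        k4 = F(x + dt * k3)
        xs.append(x + dt * (k1 + 2*k2 + 2*k3 + k4) / 6.0)
    return np.array(xs)

# Exponential decay as a stand-in for the fitted polynomial F
xs = rk4(lambda x: -x, x0=[1.0], dt=0.01, n=100)
print(xs[-1])   # close to exp(-1) ≈ 0.3679
```

In the reconstruction, F would instead evaluate the fitted polynomial terms with the coefficients Ai obtained from the least squares step.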
Since phase portraits are plots of the velocity values (ẋ[t]) versus the x[t] values, if it is required to compare attractors, new velocity values ż[t] corresponding to the new predicted values, z[t], need to be generated. Using the program 'nlpvel.m' (Appendix 16), these new derivative values are determined. Different polynomial combinations of z[t] are constructed, as was carried out in the program 'nlfit.m'. Thus, using the coefficients determined earlier for the least squares fit to the derivative values ('cent6.m'), we are able to calculate the new velocity values.
Using the program 'recon.m' (Appendix 17), we encapsulated all the steps required to reconstruct the Rossler and Lorenz systems. Briefly summarizing: we begin by constructing the delayed vector space. Using the central difference approach, we obtain a nonlinear function that describes the derivative values of the delayed space. Then, using Runge-Kutta, we work backwards to obtain the new reconstructed x values. If we wish to compare phase portraits, we then use the unique coefficient matrix and these x values to determine the new reconstructed velocities. The next few figures, 16-22, demonstrate the results obtained from using this method of reconstruction. The systems shown are the Lorenz and Rossler systems and their reconstructed models. For both systems we were able to obtain models that followed the original system for approximately 1500 points before the error begins to visibly accumulate. However, as Fig. 16 would indicate, the reconstruction can sometimes be quite disastrous. Once the ODE has been obtained, it is now necessary to study exactly how long the reconstructed model remains faithful to the original system.