## Solving Autonomous LTI Systems in Matrix Form

### (& all their possible behaviors)

- *Laplace transform of n 1st-order linear ODEs in matrix form*
- *The inverse transform and the matrix exponential*
- Possible system behaviors
- Discrete-time LTI systems (sampling at constant intervals)

### Solving n 1st-order linear ODEs in matrix form

- *An autonomous (undriven, no u) LTI system with an n-dimensional state:*

  ẋ = Ax

- *Its Laplace transform is:*

  sX(s) − x(0) = AX(s)

- *Rearranging:*

  (sI − A)X(s) = x(0),  so  X(s) = (sI − A)^{−1} x(0)

- *(sI − A)^{−1} is known as the resolvent.*

- To solve, simply find the inverse transform of the resolvent:

  x(t) = L^{−1}[(sI − A)^{−1}] x(0) = Φ(t)x(0)

Φ(t) is called the state transition matrix; it carries the state at time 0 to the state at time t (including t < 0).

Running example, the harmonic oscillator:

  ẋ = Ax = [ 0  1; −1  0 ] x,  x ∈ R^{2}
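The derivation can be reproduced symbolically. A minimal sketch, not part of the original notes, assuming SymPy is available; the matrix A below is the harmonic-oscillator example used in these notes:

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)   # t > 0 so Heaviside factors evaluate to 1

A = sp.Matrix([[0, 1], [-1, 0]])     # harmonic oscillator
resolvent = sp.simplify((s * sp.eye(2) - A).inv())   # (sI - A)^{-1}

# invert the Laplace transform entry by entry to get Phi(t)
Phi = sp.simplify(resolvent.applyfunc(
    lambda e: sp.inverse_laplace_transform(e, s, t)))
print(Phi)
```

For this A, Phi comes out as the rotation matrix [ cos t  sin t; −sin t  cos t ], in agreement with the worked harmonic-oscillator example.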


- *The inverse transform of the resolvent is known as the state transition matrix, because it linearly transforms the system from its state at time 0 to its state at time t (either forwards or backwards in time).*

- **Note the dependence on A's characteristic polynomial |sI − A|**: the i, j entry of the resolvent can be expressed via Cramer's rule as (−1)^{i+j} det Δ_{ij} / det(sI − A), where Δ_{ij} is sI − A with the jth row and ith column deleted.

- *So each entry of the resolvent matrix is a polynomial fraction in s* **whose denominator is A's characteristic polynomial.**

- **Since |sI − A| has real coefficients, its roots (the eigenvalues of A) are either real or come in complex conjugate pairs...**
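This can be checked symbolically. A sketch, assuming SymPy: after cancellation every denominator in the resolvent divides the characteristic polynomial, and for a diagonal matrix each entry exposes only one eigenvalue as a pole:

```python
import sympy as sp

s = sp.symbols('s')
A = sp.Matrix([[0, 1], [-1, 0]])

charpoly = sp.expand((s * sp.eye(2) - A).det())   # |sI - A| = s**2 + 1
R = (s * sp.eye(2) - A).inv()

for entry in R:
    num, den = sp.fraction(sp.cancel(entry))
    assert sp.rem(charpoly, den, s) == 0          # den divides |sI - A|

# cancellation example: for a diagonal A2, each diagonal entry of the
# resolvent shows only one eigenvalue; the other factor cancels
A2 = sp.diag(1, 2)
R2 = (s * sp.eye(2) - A2).inv()
assert sp.simplify(R2[0, 0] - 1 / (s - 1)) == 0
assert sp.simplify(R2[1, 1] - 1 / (s - 2)) == 0
```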


### Eigenvalues of A and poles of resolvent

i, j entry of resolvent can be expressed via Cramer's rule as

  (−1)^{i+j} det Δ_{ij} / det(sI − A)

where Δ_{ij} is sI − A with jth row and ith column deleted

• det Δ_{ij} is a polynomial of degree less than n, so the i, j entry of the resolvent has the form f_{ij}(s)/X(s), where f_{ij} is a polynomial of degree less than n and X(s) = det(sI − A)

• poles of entries of resolvent must be eigenvalues of A

• but not all eigenvalues of A show up as poles of each entry (when there are cancellations between det Δ_{ij} and X(s))

Solution via Laplace transform and matrix exponential 10–11

### Matrix exponential

(I − C)^{−1} = I + C + C^{2} + C^{3} + · · · (if the series converges)

• series expansion of resolvent:

  (sI − A)^{−1} = (1/s)(I − A/s)^{−1} = I/s + A/s^{2} + A^{2}/s^{3} + · · ·

(valid for |s| large enough) so

  Φ(t) = L^{−1}[(sI − A)^{−1}] = I + tA + (tA)^{2}/2! + · · ·

Solution via Laplace transform and matrix exponential 10–12


**Example: Harmonic Oscillator**

*The phase plane looks like:* (figure: circular trajectories centered at the origin)

**|sI − A| = s^{2} + 1, so the eigenvalues of A are i, −i**

We could now use the Laplace transform tables to show what Φ(t) is (but we show a general solution method instead).


(phase-plane plot: circular orbits, both axes from −2 to 2)

sI − A = [ s  −1; 1  s ]

resolvent:

  (sI − A)^{−1} = [ s/(s^{2}+1)   1/(s^{2}+1); −1/(s^{2}+1)   s/(s^{2}+1) ]

with poles at i, −i

  Φ(t) = L^{−1}[(sI − A)^{−1}] = [ cos t   sin t; −sin t   cos t ]

a rotation by angle t

For |c| < 1:

  (1 − c)^{−1} = 1 + c + c^{2} + c^{3} + . . .

Similarly, for C ∈ R^{n×n} whose eigenvalues λ all satisfy |λ| < 1:

  (I − C)^{−1} = I + C + C^{2} + C^{3} + · · ·
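The claim that Φ(t) is a rotation can be confirmed numerically. A sketch, assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
t = 0.7

Phi = expm(t * A)                          # state transition matrix e^{tA}
R = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
assert np.allclose(Phi, R)                 # e^{tA} is a rotation by angle t

# rotations preserve length, so phase-plane orbits are circles
x0 = np.array([2.0, 0.0])
for tk in np.linspace(0.0, 10.0, 50):
    assert np.isclose(np.linalg.norm(expm(tk * A) @ x0),
                      np.linalg.norm(x0))
```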


The resolvent inverts to a rotation matrix: Φ(t) rotates the state by angle t, matching the circular phase-plane trajectories.


- *For |c| < 1:* (1 − c)^{−1} = 1 + c + c^{2} + · · ·

- Similarly, for a matrix C (with all eigenvalue magnitudes small enough): (I − C)^{−1} = I + C + C^{2} + · · ·

- *Plug in C = A/s (for large enough |s|):* (sI − A)^{−1} = (1/s)(I − A/s)^{−1} = I/s + A/s^{2} + A^{2}/s^{3} + · · ·

- We've seen that L^{−1}[1/s^{n}] = t^{n−1}/(n − 1)!

- So (due to the linearity of the Laplace transform) we can apply the inverse transform term by term and get: Φ(t) = I + tA + (tA)^{2}/2! + · · ·
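The series for (I − C)^{−1} can be checked numerically. A sketch, assuming NumPy; the size 4 and spectral radius 0.4 are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((4, 4))
C *= 0.4 / np.max(np.abs(np.linalg.eigvals(C)))  # scale: spectral radius 0.4 < 1

partial = np.zeros((4, 4))
term = np.eye(4)
for _ in range(80):              # partial sums of I + C + C^2 + ...
    partial += term
    term = term @ C

assert np.allclose(partial, np.linalg.inv(np.eye(4) - C))
```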

## Matrix Exponential


Plug in C = A/s (valid for |s| large enough):

  (sI − A)^{−1} = (1/s)(I − A/s)^{−1} = I/s + A/s^{2} + A^{2}/s^{3} + · · ·

Using L^{−1}[1/s^{n}] = t^{n−1}/(n − 1)! for n ∈ Z, n ≥ 1:

  L^{−1}[(sI − A)^{−1}] = I + tA + (tA)^{2}/2! + · · ·

Compare with the scalar series e^{at} = 1 + ta + (ta)^{2}/2! + · · · and define the matrix exponent

  e^{M} ≡ I + M + M^{2}/2! + · · · = Σ_{i=0}^{∞} M^{i}/i!

so the LTI solution is

  x(t) = Φ(t)x(0) = L^{−1}[(sI − A)^{−1}] x(0) = e^{tA} x(0)

### LTI system diagonalization

Given x(0) and A ∈ R^{n×n} with ẋ = Ax, suppose P ∈ C^{n×n} diagonalizes A:

  D = diag(λ_{1}, …, λ_{n}) ∈ C^{n×n},  P^{−1}AP = D,  A = PDP^{−1}

Then A^{n} = PDP^{−1} PDP^{−1} · · · PDP^{−1} = PD^{n}P^{−1}, and the state transition matrix is

  Φ(t) = e^{At} = I + tA + (tA)^{2}/2! + (tA)^{3}/3! + · · ·
       = PIP^{−1} + PtDP^{−1} + P(tD)^{2}P^{−1}/2! + P(tD)^{3}P^{−1}/3! + · · ·
       = Pe^{Dt}P^{−1}
       = P diag(e^{λ_{1}t}, …, e^{λ_{n}t}) P^{−1}
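The diagonalization route can be verified numerically. A sketch, assuming NumPy and SciPy: compute Φ(t) = P e^{Dt} P^{−1} from an eigendecomposition and compare with scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # harmonic oscillator
t = 1.3

lam, P = np.linalg.eig(A)                  # A = P D P^{-1}, D = diag(lam)
Phi = P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)

# the eigendecomposition is complex, but for real A the imaginary
# parts of Phi are only rounding noise
assert np.allclose(Phi.imag, 0.0)
assert np.allclose(Phi.real, expm(t * A))
```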


- **This (infinite series), multiplied by x(0), is a general solution to LTI systems!**

- The above series looks a lot like the Taylor expansion of an exponential: e^{at} = 1 + ta + (ta)^{2}/2! + · · ·

- *So we will borrow the exponent symbol and define a matrix exponent as* e^{M} ≡ I + M + M^{2}/2! + · · · = Σ_{i=0}^{∞} M^{i}/i!

- Using this new symbol we can write the solution x(t) = e^{tA}x(0)

- For a scalar system, the above solution is what we expect: x(t) = e^{ta}x(0)


• looks like ordinary power series

  e^{at} = 1 + ta + (ta)^{2}/2! + · · ·

with square matrices instead of scalars . . .

• define matrix exponential as

  e^{M} = I + M + M^{2}/2! + · · · for M ∈ R^{n×n}

(which in fact converges for all M)

• with this definition, state-transition matrix is

  Φ(t) = L^{−1}[(sI − A)^{−1}] = e^{tA}

Solution via Laplace transform and matrix exponential 10–13

### Matrix exponential solution of autonomous LDS

solution of ẋ = Ax, with A ∈ R^{n×n} and constant, is

  x(t) = e^{tA}x(0)

generalizes scalar case: solution of ẋ = ax, with a ∈ R and constant, is x(t) = e^{ta}x(0)

Solution via Laplace transform and matrix exponential 10–14
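This solution can be cross-checked against a numerical ODE integrator. A sketch, assuming NumPy and SciPy; the particular A, x(0), and horizon T are arbitrary illustrative choices:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[-0.5, 2.0], [-2.0, -0.5]])
x0 = np.array([1.0, -1.0])
T = 3.0

# integrate xdot = A x numerically and compare with e^{TA} x(0)
sol = solve_ivp(lambda t, x: A @ x, (0.0, T), x0, rtol=1e-10, atol=1e-12)
assert np.allclose(sol.y[:, -1], expm(T * A) @ x0, atol=1e-6)
```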


- *So for scalars t, s: e^{(t+s)A} = e^{(tA+sA)} = e^{tA}e^{sA},* **since (tA)(sA) = (sA)(tA).** This is useful in understanding the LTI system solution...

• matrix exponential is meant to look like scalar exponential

• some things you'd guess hold for the matrix exponential (by analogy with the scalar exponential) do in fact hold

• but many things you'd guess are wrong

example: you might guess that e^{A+B} = e^{A}e^{B}, but it's false (in general)

  A = [ 0  1; −1  0 ],  B = [ 0  1; 0  0 ]

  e^{A} = [ 0.54  0.84; −0.84  0.54 ],  e^{B} = [ 1  1; 0  1 ]

  e^{A+B} = [ 0.16  1.40; −0.70  0.16 ]  ≠  e^{A}e^{B} = [ 0.54  1.38; −0.84  −0.30 ]

Solution via Laplace transform and matrix exponential 10–15

however, we do have e^{A+B} = e^{A}e^{B} if AB = BA, i.e., A and B commute

thus for t, s ∈ R, e^{(tA+sA)} = e^{tA}e^{sA}

with s = −t we get

  e^{tA}e^{−tA} = e^{tA−tA} = e^{0} = I

so e^{tA} is nonsingular, with inverse (e^{tA})^{−1} = e^{−tA}

Solution via Laplace transform and matrix exponential 10–16
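Both the failure and the commuting special case are easy to confirm numerically. A sketch, assuming NumPy and SciPy, reusing the A and B from the example above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0, 1.0], [0.0, 0.0]])

# A and B do not commute, and e^{A+B} != e^A e^B
assert not np.allclose(A @ B, B @ A)
assert not np.allclose(expm(A + B), expm(A) @ expm(B))

# tA and sA always commute, so the scalar-style identities hold
t, s = 0.8, -0.3
assert np.allclose(expm((t + s) * A), expm(t * A) @ expm(s * A))
assert np.allclose(expm(t * A) @ expm(-t * A), np.eye(2))
```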



example: let's find e^{A}, where A = [ 0  1; 0  0 ]

we already found

  e^{tA} = L^{−1}[(sI − A)^{−1}] = [ 1  t; 0  1 ]

so, plugging in t = 1, we get e^{A} = [ 1  1; 0  1 ]

let's check the power series:

  e^{A} = I + A + A^{2}/2! + · · · = I + A

since A^{2} = A^{3} = · · · = 0

Solution via Laplace transform and matrix exponential 10–17
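A numerical confirmation of this example, as a sketch assuming NumPy and SciPy: since A is nilpotent, the exponential series truncates after the linear term:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(A @ A, np.zeros((2, 2)))      # A is nilpotent: A^2 = 0

# so the series truncates: e^{tA} = I + tA, for any t
for t in (0.5, 1.0, -2.0):
    assert np.allclose(expm(t * A), np.eye(2) + t * A)
```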

### Time transfer property

for ẋ = Ax we know

  x(t) = Φ(t)x(0) = e^{tA}x(0)

interpretation: the matrix e^{tA} propagates initial condition into state at time t

more generally we have, for any t and τ,

  x(τ + t) = e^{tA}x(τ)

(to see this, apply result above to z(t) = x(t + τ))

interpretation: the matrix e^{tA} propagates state t seconds forward in time (backward if t < 0)

Solution via Laplace transform and matrix exponential 10–18
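The time transfer property can be checked numerically. A sketch, assuming NumPy and SciPy, using the harmonic-oscillator A as an arbitrary example:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
x0 = np.array([1.0, 2.0])
tau, t = 0.5, 1.1

x_tau = expm(tau * A) @ x0                 # state at time tau
# propagating t further equals propagating tau + t from the start
assert np.allclose(expm(t * A) @ x_tau, expm((tau + t) * A) @ x0)
# and t < 0 propagates backward in time
assert np.allclose(expm(-tau * A) @ x_tau, x0)
```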