(1)

A Hybrid Kalman-Nonlinear Ensemble Transform Filter

Lars Nerger

Alfred Wegener Institute, Bremerhaven, Germany

With inputs from Paul Kirchgessner, Julian Tödter, and Bodo Ahrens on the NETF

(2)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Overview

Ø Linear and nonlinear filters

Ø Nonlinear Ensemble Transform Filter – NETF (Tödter & Ahrens, MWR, 2015)

Ø Hybrid LETKF-NETF method

(3)
(4)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Represent the state and its error by an ensemble of states.

Forecast: integrate the ensemble with the numerical model

Analysis:
• update ensemble mean
• update ensemble perturbations
(both can be combined in a single step)

Ensemble Kalman filters & NETF: different definitions of the weight vector $\tilde{w}$ and the transform matrix $W$

Ensemble filters – ensemble Kalman filters & NETF

$x^a = \overline{x}^f + X'^f\,\tilde{w}$

$X'^a = X'^f\,W$
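These two update equations are shared by all filters discussed here; only $\tilde{w}$ and $W$ differ. A minimal numpy sketch of this generic transform update (illustrative only; array shapes and function names are my own, not from the talk):

```python
import numpy as np

def ensemble_transform_update(X_f, w, W):
    """Generic ensemble transform analysis step.

    X_f : (n, m) forecast ensemble (n: state dimension, m: ensemble size)
    w   : (m,)   weight vector for the mean update
    W   : (m, m) transform matrix for the perturbations
    """
    x_mean = X_f.mean(axis=1)           # forecast ensemble mean x^f
    Xp_f = X_f - x_mean[:, None]        # forecast perturbations X'^f
    x_a = x_mean + Xp_f @ w             # analysis mean:    x^a  = x^f + X'^f w
    Xp_a = Xp_f @ W                     # analysis perturb.: X'^a = X'^f W
    return x_a[:, None] + Xp_a          # analysis ensemble
```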

(5)

Ensemble Transform Kalman filter:

ETKF (Bishop et al., 2001)

Transform matrix:
$A^{-1} = (m-1)\,I + (HX'^f)^T R^{-1} HX'^f$

Mean update weight vector (depends on R and y):
$\tilde{w} = A\,(HX'^f)^T R^{-1}\,(y - H\overline{x}^f)$

Transformation of ensemble perturbations (depends only on R, not on y):
$W = \sqrt{m-1}\;A^{1/2}$
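A numpy sketch of how these three quantities could be computed (illustrative; the symmetric square root via eigendecomposition is one common choice and not necessarily the one used in the talk):

```python
import numpy as np

def etkf_weights(HXp_f, Hx_mean, y, R):
    """ETKF weight vector and transform matrix (Bishop et al., 2001).

    HXp_f  : (p, m) observed forecast perturbations H X'^f
    Hx_mean: (p,)   observed forecast mean H x^f
    y      : (p,)   observation vector
    R      : (p, p) observation error covariance
    """
    m = HXp_f.shape[1]
    # A^-1 = (m-1) I + (H X'^f)^T R^-1 H X'^f
    Ainv = (m - 1) * np.eye(m) + HXp_f.T @ np.linalg.solve(R, HXp_f)
    A = np.linalg.inv(Ainv)
    # w = A (H X'^f)^T R^-1 (y - H x^f)
    w = A @ HXp_f.T @ np.linalg.solve(R, y - Hx_mean)
    # W = sqrt(m-1) A^(1/2), with the symmetric square root of A
    vals, vecs = np.linalg.eigh(A)
    W = np.sqrt(m - 1) * (vecs @ np.diag(np.sqrt(vals)) @ vecs.T)
    return w, W
```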

(6)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

• Avoid changing ensemble members (‘particles’)

• Instead: give particles a weight and change it at the analysis step

• Initial weight: 1/N for all particles

• Weights are given by statistical likelihood of an observation

Example: With Gaussian observation errors (for each particle i):

• Ensemble mean state computed with weights

• This update does not assume any distribution of the state errors

(and is not limited to Gaussian distributions)

Particle filters – fully nonlinear ensemble filters

$\tilde{w}_i \propto \exp\!\left(-\tfrac{1}{2}\,(y - Hx_i^f)^T R^{-1}\,(y - Hx_i^f)\right)$
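A small numpy sketch of this weight computation and of the weighted ensemble mean (illustrative; the log-weight shift for numerical stability is my own addition):

```python
import numpy as np

def particle_weights_gaussian(HX_f, y, R):
    """Normalized likelihood weights for Gaussian observation errors.

    HX_f : (p, m) observed forecast ensemble (one column per particle)
    y    : (p,)   observation vector
    R    : (p, p) observation error covariance
    """
    innov = y[:, None] - HX_f                       # y - H x_i^f for every particle i
    # log w_i = -0.5 (y - H x_i^f)^T R^-1 (y - H x_i^f)
    log_w = -0.5 * np.einsum('pi,pi->i', innov, np.linalg.solve(R, innov))
    log_w -= log_w.max()                            # shift to avoid underflow
    w = np.exp(log_w)
    return w / w.sum()                              # normalize to sum to 1

# Weighted ensemble mean; no Gaussian assumption on the state errors:
# x_a = X_f @ particle_weights_gaussian(H @ X_f, y, R)
```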

(7)

• Ensemble Kalman:

• Transformation according to KF equations

• NETF (Tödter & Ahrens, MWR, 2015)

Ø Mean update from Particle Filter weights: for all particles i

Nonlinear Ensemble Transform Filter - NETF

Ø Ensemble update

• Transform ensemble to fulfill analysis covariance

(like KF, but not assuming Gaussianity)

• Derivation gives

$W = \sqrt{m}\,\left[\mathrm{diag}(\tilde{w}) - \tilde{w}\,\tilde{w}^T\right]^{1/2}\Lambda$

($\Lambda$: mean-preserving random matrix; useful for stability)

with the particle weights $\tilde{w}_i \propto \exp\!\left(-\tfrac{1}{2}\,(y - Hx_i^f)^T R^{-1}\,(y - Hx_i^f)\right)$ for all particles $i$
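A numpy sketch of the NETF transform (illustrative; the eigendecomposition-based matrix square root and the optional random matrix defaulting to the identity are my own choices):

```python
import numpy as np

def netf_transform(w, rand_rot=None):
    """NETF perturbation transform W = sqrt(m) [diag(w) - w w^T]^(1/2) Lambda.

    w        : (m,) normalized particle-filter weights (summing to 1)
    rand_rot : optional (m, m) mean-preserving random orthogonal matrix Lambda
    """
    m = w.size
    T = np.diag(w) - np.outer(w, w)                 # diag(w~) - w~ w~^T
    vals, vecs = np.linalg.eigh(T)                  # symmetric matrix square root
    vals = np.clip(vals, 0.0, None)                 # guard against tiny negative eigenvalues
    W = np.sqrt(m) * (vecs @ np.diag(np.sqrt(vals)) @ vecs.T)
    if rand_rot is not None:
        W = W @ rand_rot                            # apply the mean-preserving rotation
    return W
```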

(8)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Derivation of NETF

Mean state update:
$x^a = \overline{x}^f + X'^f\,\tilde{w} = X^f\,\tilde{w}$

Analysis covariance matrix:
$\tilde{P}^a = \sum_{i=1}^{m} \tilde{w}_i\,(x_i^f - x^a)(x_i^f - x^a)^T$

with
$W = \sqrt{m}\,\left[\mathrm{diag}(\tilde{w}) - \tilde{w}\,\tilde{w}^T\right]^{1/2}$
the transformed perturbations reproduce this covariance:
$P^a = \frac{1}{m}\,X^f\,W^2\,(X^f)^T$
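The step from the weighted covariance to this transform can be spelled out in one line (filling in the algebra that the slide leaves implicit): with $x^a = X^f\tilde{w}$, $\sum_i \tilde{w}_i = 1$, and $e_i$ the $i$-th unit vector, each deviation is $x_i^f - x^a = X^f(e_i - \tilde{w})$, so

```latex
\tilde{P}^a
  = \sum_{i=1}^{m} \tilde{w}_i\, X^f (e_i - \tilde{w})(e_i - \tilde{w})^T (X^f)^T
  = X^f \left[\operatorname{diag}(\tilde{w}) - \tilde{w}\tilde{w}^T\right] (X^f)^T
  = \frac{1}{m}\, X^f W W^T (X^f)^T .
```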

(9)

ETKF parameterizes the ensemble distribution by a Gaussian distribution.

NETF uses particle filter weights to ensure the correct update of the ensemble mean and covariance.

Filter update:

in ETKF is linear in observations

in NETF is nonlinear in observations

Difference of ETKF and NETF

$\tilde{w} = A\,(HX'^f)^T R^{-1}\,(y - H\overline{x}^f)$

(10)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Smoother: Update past ensemble with future observations

Rewrite the ensemble update as a single transform of the full ensemble matrix.

Ensemble Smoothers – ETKS & NETS

Filter (at analysis time $k$; observations used up to time $k$):
$X^a_{k|k} = X^f_{k|k-1}\,\hat{W}_k$

Smoother at time $i < k$:
$X^a_{i|k} = X^f_{i|k-1}\,\hat{W}_k$

Ø works likewise for ETKS and NETS

Ø also possible for localized filters
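A numpy sketch of this reuse of the filter transform (illustrative; the particular construction of the full transform $\hat{W}$ from $\tilde{w}$ and $W$ is one possibility and is not spelled out in the talk):

```python
import numpy as np

def full_transform(w, W):
    """Combine the mean-update weights w and the perturbation transform W into a
    single matrix W_hat such that X^a = X^f @ W_hat (one possible construction)."""
    m = w.size
    averaging = np.full((m, m), 1.0 / m)        # X^f @ averaging = ensemble mean in every column
    centering = np.eye(m) - averaging           # X^f @ centering = perturbations X'^f
    return averaging + centering @ (np.outer(w, np.ones(m)) + W)

def smoother_update(X_past, W_hat_k):
    """Smoother step: apply the filter transform of time k to an ensemble at time i < k."""
    return X_past @ W_hat_k
```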

(11)
(12)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

NETF

(13)

Configuration of Lorenz-96 model experiments

Lorenz-96:

• 1-dimensional periodic wave

• Chaotic dynamics

Configuration for assimilation experiments

• State dimension: 80

• Observed: 40 grid points

• Time steps between analysis steps: 8

• Double-exponential observation errors (stronger nonlinearity)

• Experiment length: 5000 time steps

• Observation error standard deviation: 1.0

this is a difficult case for the assimilation

(and more realistic than typical 1-step forecast configuration)
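A sketch of this setup in numpy (illustrative; the forcing F = 8, the RK4 step with dt = 0.05, and observing every second grid point are my assumptions where the slide gives no details):

```python
import numpy as np

def lorenz96_tendency(x, F=8.0):
    """Lorenz-96 tendency: dx_j/dt = (x_{j+1} - x_{j-2}) x_{j-1} - x_j + F."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F

def rk4_step(x, dt=0.05, F=8.0):
    """One fourth-order Runge-Kutta time step."""
    k1 = lorenz96_tendency(x, F)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, F)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, F)
    k4 = lorenz96_tendency(x + dt * k3, F)
    return x + dt / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

rng = np.random.default_rng(0)
n = 80                                       # state dimension
obs_idx = np.arange(0, n, 2)                 # 40 observed grid points (assumed: every second one)
x_true = rng.standard_normal(n)
for _ in range(8):                           # 8 model time steps between analysis steps
    x_true = rk4_step(x_true)
# double-exponential (Laplace) observation errors with standard deviation 1.0
y = x_true[obs_idx] + rng.laplace(scale=1.0 / np.sqrt(2.0), size=obs_idx.size)
```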

(14)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Double-exponential observation errors

Run all experiments 10x with different initial ensemble

NETF beats ETKF for ensemble size > 30

Performance of NETF – Lorenz-96

[Figure: mean RMSE (MRMSE) vs. ensemble size (20–70) for the ETKF and NETF filters; MRMSE range 1.1–1.8.]

Kirchgessner, Tödter, Ahrens, Nerger. (2017) The smoother extension of the nonlinear ensemble transform filter. Tellus A, 69, 1327766

(15)

Performance for small model (Lorenz-96)

Blue: Smoother

NETS beats ETKS for ensemble size 40 and larger

Smoother slightly stronger for ETKS

NETF filter better than the ETKS smoother for N=70

Performance of NETF – Lorenz-96

[Figure: MRMSE vs. ensemble size (20–70) for the ETKF filter, ETKS smoother, NETF filter, and NETS smoother; MRMSE range 0.9–1.8.]

(16)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Parameter stability of NETF

RMS error varying
• inflation (forgetting factor)
• localization radius

For N=50 and Laplace observation errors:
• Smaller error for NETF
• Smaller parameter region for low errors

[Figures: RMSE over forgetting factor and support radius for Laplace obs. errors, Ne=50: LNETF min=1.237, LETKF min=1.357.]

(17)

NETF with Gaussian observation errors

For Gaussian observation errors

• Need N=90 for comparable RMS errors

• NETF needs much smaller localization radius

[Figures: RMSE over forgetting factor and support radius for Gaussian obs. errors, Ne=90: LETKF min=1.305, LNETF min=1.336.]

(18)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

NETF

with

(19)

Assimilation into NEMO

European ocean circulation model

Model configuration

• box configuration "SEABASS"
• ¼° resolution
• 121x81 grid points, 11 layers (state vector ~300,000)
• wind-driven double gyre (a nonlinear jet and eddies)
• medium-size SANGOMA benchmark

[Figures: true sea surface height at the first and the last analysis time.]

www.data-assimilation.net

(20)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

PDAF: A tool for data assimilation

PDAF - Parallel Data Assimilation Framework

§ a program library for ensemble data assimilation

§ provide support for parallel ensemble forecasts

§ provide fully-implemented & parallelized filters and smoothers

(EnKF, LETKF, NETF, EWPF … easy to add more)

§ easily usable with (probably) any numerical model

(applied with NEMO, MITgcm, FESOM, HBM, TerrSysMP, …)

§ run from laptops to supercomputers (Fortran, MPI & OpenMP)

§ first public release in 2004; continued development

§ ~280 registered users; community contributions

Open source:

Code, documentation & tutorials at http://pdaf.awi.de

(21)

Extending a Model for Data Assimilation

Extension for data assimilation

A revised parallelization enables the ensemble forecast, plus possible model-specific adaptions (for NEMO: handling of the leapfrog time stepping).

[Diagrams: the original model program flow (start; initialize parallelization; initialize model, coupler, grid & fields; time-stepping loop with in-compartment step and coupling; post-processing; stop) is extended by the calls Init_parallel_PDAF during the initialization, Init_PDAF before the time-stepping loop, and Assimilate_PDAF inside the loop. The model may consist of single or multiple executables, and the coupler might be a separate program.]

(22)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Features of online-coupled DA program

• minimal changes to the model code when combining the model with the filter algorithm
• model not required to be a subroutine
• no change to the model numerics!
• model-sided control of the assimilation program (user-supplied routines in model context)
• observation handling in model context
• filter method encapsulated in a subroutine
• complete parallelism in model, filter, and ensemble integrations

[Diagram: model program flow (initialization, mesh generation, field initialization, time stepping with boundary conditions and forcing, post-processing) extended by the calls init_parallel_pdaf, init_pdaf, and assimilate_pdaf.]

(23)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Observations and Assimilation Configuration

Observations

• Simulated satellite sea surface height (SSH; Envisat & Jason-1 tracks), 5 cm error
• Temperature profiles on a 3°x3° grid, surface to 2000 m, 0.3°C error

Data Assimilation

• Ensemble size: 120
• LETKF, LNETF
• Localization: weights on the matrix $R^{-1}$ (Gaspari/Cohn '99 function, 2.5° radius; a sketch of this weighting follows below)
• Assimilate every 48 h over 360 days
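The sketch below illustrates the localization idea with the Gaspari & Cohn (1999) function (illustrative only; the mapping from the 2.5° radius to the function's length scale and the diagonal-R assumption are my own reading, not specified on the slide):

```python
import numpy as np

def gaspari_cohn(d, c):
    """Gaspari & Cohn (1999) compactly supported correlation function.

    d : array of distances; c : length scale (the weight is zero for d >= 2c).
    """
    z = np.atleast_1d(np.abs(np.asarray(d, dtype=float))) / c
    w = np.zeros_like(z)
    m1 = z <= 1.0
    m2 = (z > 1.0) & (z < 2.0)
    w[m1] = (-0.25 * z[m1]**5 + 0.5 * z[m1]**4 + 0.625 * z[m1]**3
             - 5.0 / 3.0 * z[m1]**2 + 1.0)
    w[m2] = (z[m2]**5 / 12.0 - 0.5 * z[m2]**4 + 0.625 * z[m2]**3
             + 5.0 / 3.0 * z[m2]**2 - 5.0 * z[m2] + 4.0 - 2.0 / (3.0 * z[m2]))
    return w

# Domain localization sketch: for one local analysis domain, distant observations
# are down-weighted by scaling the (assumed diagonal) observation-error precision R^-1.
# dist = np.array([...])                        # obs distances to the local point (degrees)
# weights = gaspari_cohn(dist, 2.5 / 2.0)       # assumed: 2.5 deg support -> c = 1.25 deg
# Rinv_local = np.diag(weights / obs_err_var)   # weighted R^-1 for the local update
```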

[Figure: observation characteristics on day 8. (a) The horizontal domain with the Argo profiler locations (crosses) and the synthetic SSH observations (colored) on the Envisat tracks (thin lines). (b) The vertical grid of 11 layers with the artificial Argo temperature profiles (46 values each) along the −50° longitude line; at 44° latitude the true temperature field is zero due to the lateral boundary conditions.]

(24)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Dimensions of the problem

State vector dimension ~300,000

Dimension of dynamics (error space):

From eigenvalue decompositions (EOFs)

~180 modes for 90% of variability
~400 modes for 99.9% of variability


[Figure: percentage of variability explained vs. number of eigenvalues (48-h sample); the 0.9 level is marked.]

(25)

Application of LETKF

[Figures: true sea surface height at the first analysis time, the initial SSH guess, and the estimated SSH at the first analysis time.]

(26)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Application of LETKF (2)

[Figures: estimated SSH at the last analysis time, the true sea surface height at the last analysis time, and the final SSH without assimilation.]

(27)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

• RMS errors reduced to 10% (velocities to 20%) of initial error

• Slower convergence for NETF, but to same error level as LETKF

• CRPS (Continuous Ranked Probability Score) shows similar behavior

Filter performances in NEMO

[Figure 9: comparison of NETF and LETKF in terms of RMSE (black/gray) and CRPS (red/orange); the lines show the field-averaged relative RMSE and CRPS over 360 days for all prognostic variables, i.e., (a) SSH, (b) T, (c) U, and (d) V.]

Tödter, Kirchgessner, Nerger & Ahrens, MWR 144 (2016) 409 – 427

(28)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

(29)

Motivation

NETF

• can perform better than LETKF with a nonlinear model
• needs a rather large ensemble

LETKF

• larger parameter region with convergence
• very stable

Hybrid filter

• Can we combine the strengths of LETKF and NETF?

(30)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Hybrid variants

1-step update (HSync)

• $\Delta X$ is the assimilation increment of a filter
• $\gamma$ is the hybrid weight (between 0 and 1; 1 for fully LETKF)

$X^a_{\mathrm{HSync}} = X^f + (1-\gamma)\,\Delta X_{\mathrm{NETF}} + \gamma\,\Delta X_{\mathrm{ETKF}}$

2-step updates

Variant 1 (HNK): NETF followed by LETKF
Variant 2 (HKN): LETKF followed by NETF
• Both steps are computed with R increased according to $\gamma$; e.g., the NETF step of HNK uses the scaled precision $(1-\gamma)R^{-1}$:

$\tilde{X}^a_{\mathrm{HNK}} = X^a_{\mathrm{NETF}}\!\left[X^f,\,(1-\gamma)R^{-1}\right]$
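A numpy sketch of how the single-step combination could look (illustrative; the function name is mine, and the two-step variants are only outlined in a comment following the slide's $(1-\gamma)R^{-1}$ scaling notation):

```python
import numpy as np

def hybrid_hsync(X_f, X_a_netf, X_a_etkf, gamma):
    """1-step hybrid (HSync): combine the increments of the two filters.

    X_f      : (n, m) forecast ensemble
    X_a_netf : (n, m) NETF analysis ensemble computed from the same forecast
    X_a_etkf : (n, m) (L)ETKF analysis ensemble computed from the same forecast
    gamma    : hybrid weight in [0, 1]; gamma = 1 gives the pure (L)ETKF update
    """
    dX_netf = X_a_netf - X_f            # NETF assimilation increment
    dX_etkf = X_a_etkf - X_f            # ETKF assimilation increment
    return X_f + (1.0 - gamma) * dX_netf + gamma * dX_etkf

# 2-step variant HNK (sketch): first a NETF update of X_f computed with the
# observation-error precision scaled by (1 - gamma), then an LETKF update of the
# result with the precision scaled by gamma; HKN swaps the order of the two steps.
```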

(31)

Choosing the hybrid weight γ

• Hybrid weight shifts the filter behavior
• How to choose it?

Some possibilities:
• Fixed value
• Adaptive
  • According to which condition?
  • For the hybrid particle-EnKF, Frei & Künsch (2013) suggested using the effective sample size $N_{\mathrm{eff}} = 1\,/\sum_i (\tilde{w}_i)^2$
  • Choose γ so that $N_{\mathrm{eff}}$ is as small as possible but stays above a minimum limit
• Alternative used here: $\gamma_{\mathrm{adap}} = 1 - N_{\mathrm{eff}}/N_e$ (close to 1 if $N_{\mathrm{eff}}$ is small)
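A minimal sketch of this adaptive choice (illustrative; assumes normalized particle-filter weights):

```python
import numpy as np

def adaptive_gamma(w):
    """Adaptive hybrid weight gamma_adap = 1 - N_eff / N_e.

    w : (m,) normalized particle-filter weights (summing to 1).
    Returns a value close to 1 (i.e., mostly LETKF) when N_eff is small.
    """
    n_eff = 1.0 / np.sum(w**2)          # effective sample size
    return 1.0 - n_eff / w.size
```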

(32)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Test with Lorenz-96 model (n=80 as before)

Ensemble size N=50

[Figures: RMSE over forgetting factor and support radius; LETKF Ne=50, Gaussian obs. error, min=1.357; LNETF Ne=50, Gaussian obs. error, min=1.497.]

• All hybrid variants improve estimates compared to LETKF & NETF

• Similar stability as LETKF

• Dependence on forgetting factor & localization radius like LETKF

• Similar optimal localization radius
• Largest improvement for variant HNK (NETF before LETKF)

[Figures: RMSE for the hybrid variants with adaptive γ, Ne=50: HKN min=1.342 (1% improvement over the LETKF), HSync min=1.302 (4%), HNK min=1.141 (16%).]

(33)

N=50 – adaptive and fixed hybrid weight γ

[Figures: RMSE for hybrid HNK, Ne=50: γ=0.9 min=1.062, γ=0.85 min=1.108, γ=0.95 min=1.134, adaptive γ min=1.141.]

Consider only version HNK:
• Fixed γ is also successful, with smaller errors than the adaptive hybrid weight
• γ has to be close to 1.0 (small NETF fraction)
• Smaller γ reduces the stability

[Figure: RMSE for hybrid HNK, Ne=50, γ=0.5, min=1.284; improvement labels on the panels: 22%, 16%, 5%.]

(34)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

• More interesting case; in practice we always have small ensembles
• Larger estimation errors than for N=50

• NETF increasingly worse and very small stability region

• (N=10 would also work, but higher risk of model crashes)

Small Ensemble N=15

[Figures: RMSE over forgetting factor and support radius for Ne=15, Gaussian obs. errors: LNETF min=1.853, LETKF min=1.663; hybrid variants with adaptive γ: HNK min=1.534 (8% improvement over the LETKF), HSync min=1.569 (6%), HKN min=1.647 (1%).]

• Hybrid still has a positive influence
• Smaller improvement than for N=50
• Optimal parameters for HSync & HNK differ from those for HKN

(35)

Small Ensemble N=15

[Figures: RMSE for hybrid HNK, Ne=15: γ=0.95 min=1.481 (11% improvement), γ=0.9 min=1.509 (9%), adaptive γ min=1.534 (8%).]

Fixed γ:
• reduces the error compared to the adaptive γ
• can increase the stability region
• needs to be even closer to 1 than for N=50

(36)

Hybrid Kalman-Nonlinear Ensemble Transform Filter

Summary

Ø Nonlinear ensemble transform filter (NETF)
  § Update of the state estimate as in a particle filter
  § Transform the ensemble using the analysis covariance matrix

Ø Hybrid LETKF-NETF
  § Combines the analysis updates, controlled by a hybrid weight
  § Smaller errors than LETKF and NETF
  § The variant NETF-before-LETKF (HNK) yields the best results
  § A fixed hybrid weight showed lower errors than the simple adaptive weight

Ø Next steps
  § reconsider the adaptive weight
  § assess with a more realistic model
