Kalman Filter


  • Kalman Filter
    • 1 Linear Optimal Filtering
    • 2 Orthogonality Principle
    • 3 State Model
    • 4 Objective and Hypothesis
      • Objective
      • Hypothesis
        • Remark
    • 5 Notations
      • Remark
    • Proposition
      • Conclusion of Space Relationship
    • The Innovation Covariance
      • Proof of the prediction of observation

Kalman Filter

1.1 Linear Optimal Filtering

LOF (linear optimal filtering) is the essence of the Kalman filter. It can be described as:

  • inputs: x[0], x[1], …, x[n]
  • outputs: y[0], y[1], …, y[n]
  • filter: w[0], w[1], …, w[n]
  • objective function: $e_n = y_n - \hat{y}_n$, with $\hat{y}_n$ lying in the space spanned by $\{x_n\}$

1.2 Orthogonality Principle

The $\hat{y}_n$ obtained by LOF is orthogonal to $y_n - \hat{y}_n$:

$$\text{LOF} \;\Rightarrow\; \hat{y}_n \perp (y_n - \hat{y}_n) \;\Leftrightarrow\; E\big[\hat{y}_n (y_n - \hat{y}_n)^T\big] = 0$$
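
A numerical sanity check of the orthogonality principle, using a toy least-squares setup of my own (the data and variable names are illustrative, not from the text): the optimal linear estimate built from the inputs leaves an error that is orthogonal to the estimate and to every input direction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the output y is a noisy linear function of the inputs in X,
# and the "filter" is the weight vector w minimizing sum of e_n^2.
X = rng.normal(size=(500, 3))                                        # inputs
y = X @ np.array([0.5, -1.0, 2.0]) + 0.1 * rng.normal(size=500)      # outputs

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # linear optimal filter (LS sense)
y_hat = X @ w                               # estimate in the span of the inputs
e = y - y_hat                               # error e_n = y_n - y_hat_n

# Orthogonality principle: the error is orthogonal to the estimate and to
# every input direction (up to floating-point precision).
print(np.dot(y_hat, e))    # ~ 0
print(X.T @ e)             # each entry ~ 0
```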

1.3 State Model

$$X_{k+1} = F_k X_k + G_k U_k \qquad \text{(evolution of states)} \tag{1}$$

$$Y_k = H_k X_k + B_k \qquad \text{(observations)} \tag{2}$$

  • $H_k, F_k, G_k$ are deterministic and known matrices
  • $U_k, B_k$: white noises of the state and of the observation

Here the bold $\mathbf{B}$ indicates that $B$ is a vector.
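
A small simulation sketch of model (1)-(2); the specific matrices F, G, H, Q, R and the dimensions are illustrative choices of mine, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative time-invariant model matrices (my choice, not from the text).
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])        # state transition F_k
G = np.eye(2)                      # noise input matrix G_k
H = np.array([[1.0, 0.0]])         # observation matrix H_k
Q = 0.01 * np.eye(2)               # covariance of the state noise U_k
R = np.array([[0.1]])              # covariance of the observation noise B_k

T = 100
x = np.zeros(2)                    # X_0 (here taken as x_bar_0 = 0)
xs, ys = [], []
for k in range(T):
    y = H @ x + rng.multivariate_normal(np.zeros(1), R)   # Y_k = H_k X_k + B_k
    u = rng.multivariate_normal(np.zeros(2), Q)           # U_k
    xs.append(x)
    ys.append(y)
    x = F @ x + G @ u              # X_{k+1} = F_k X_k + G_k U_k
```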

1.4 Objective and Hypothesis

Objective:

Obtain, recursively, linear estimators of $X_k$ based on the observations up to instant $k$ ($\{Y_n\}_{n=0,\dots,k}$) and on the statistical properties of the signal.

Hypothesis:

$$E[X_0] = \bar{x}_0, \qquad E[B_k] = 0, \qquad E[U_k] = 0$$

$$E\left[\begin{pmatrix} B_k \\ X_0 \\ U_k \end{pmatrix}\begin{pmatrix} B_l^T & X_0^T & U_l^T \end{pmatrix}\right] = \begin{pmatrix} R_k\,\delta_{kl} & 0 & 0 \\ 0 & P_0 & 0 \\ 0 & 0 & Q_k\,\delta_{kl} \end{pmatrix}$$

where $R_k = E[B_k B_k^T]$, $Q_k = E[U_k U_k^T]$, and $\delta_{kl} = \begin{cases} 1, & \text{if } k = l; \\ 0, & \text{otherwise.} \end{cases}$

Here is why the assumptions $E[B_k] = 0$ and $E[U_k] = 0$ are made "without loss of generality". Suppose $B_k = B'_k + B_0$ and $U_k = U'_k + U_0$, where $B'_k$, $U'_k$ are zero-mean and $B_0$, $U_0$ are the constant means. The means can then be absorbed by shifting the observation and the state:

$$Y_k = H_k X_k + B'_k + B_0 \;\Rightarrow\; Y_k^{\text{new}} := Y_k - B_0 = H_k X_k + B'_k$$

$$X_{k+1} + C = F_k (X_k + C) + G_k U'_k, \qquad C = (F_k - I)^{-1} G_k U_0$$

Remark

Stationarity is a property of a stochastic process (not of a random variable): its joint probability density does not change when the process is shifted in time.

Stationary Process: is a stochastic process whose joint probability distribution does not change when shifted in time. Consequently, parameters such as mean and variance, if they are present, also do not change over time. – From Wiki

It must satisfy

$$E[X_k] = \bar{x}, \qquad E[X_k X_l^T] = \Gamma_{k-l}$$

Claim: $X_k$, $Y_k$ are non-stationary unless

  1. the matrices $H_k, F_k, G_k, R_k, Q_k$ are time invariant (denoted H, F, G, R and Q);
  2. F is a stable "filter", i.e., all its eigenvalues lie within the unit circle;
  3. $\bar{x}_0 = 0$ and the initial covariance $P_0$ is a solution of the Lyapunov equation:
    $$P = F P F^T + G Q G^T$$
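
Condition 3 can be checked numerically: if $P_0$ solves the discrete Lyapunov equation above and $\bar{x}_0 = 0$, the state covariance stays equal to $P_0$ for all $k$. A sketch using SciPy (the matrices are illustrative choices of mine):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Illustrative stable, time-invariant model (eigenvalues of F inside the unit circle).
F = np.array([[0.9, 0.1],
              [0.0, 0.8]])
G = np.eye(2)
Q = 0.01 * np.eye(2)

# P0 solving the Lyapunov equation  P = F P F^T + G Q G^T.
P0 = solve_discrete_lyapunov(F, G @ Q @ G.T)

# Propagate the state covariance: P_{k+1} = F P_k F^T + G Q G^T.
P = P0.copy()
for _ in range(50):
    P = F @ P @ F.T + G @ Q @ G.T

print(np.allclose(P, P0))   # True: the state covariance is stationary
```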

1.5 Notations

  • $\mathcal{Y}_n$: space spanned by $\{Y_i\}_{i=1,\dots,n}$;
  • $\hat{X}(n+1|\mathcal{Y}_n)$: optimal linear estimate of $X_{n+1}$ knowing $\mathcal{Y}_n$;
  • $\hat{Y}(n|\mathcal{Y}_{n-1})$: optimal linear predictor of $Y_n$ knowing $\mathcal{Y}_{n-1}$;
  • $\alpha[n] = Y_n - \hat{Y}(n|\mathcal{Y}_{n-1})$: innovation/error, the information we have gained.

Remark

  1. $i = n$: $\hat{X}(i|\mathcal{Y}_{n-1})$ is the Kalman filter (on line)
  2. $i < n$: $\hat{X}(i|\mathcal{Y}_{n-1})$ is the Kalman smoother (off line)

$$\alpha[n] = Y_n - \hat{Y}(n|\mathcal{Y}_{n-1})$$

$$\hat{X}(n+1|\mathcal{Y}_n) \in \mathcal{Y}_n = \text{span}\{Y_{1:n}\}, \qquad \hat{Y}(n|\mathcal{Y}_{n-1}) \in \mathcal{Y}_{n-1} = \text{span}\{Y_{1:n-1}\}$$

Proposition

  1. $\alpha[n] \perp \mathcal{Y}_{n-1} \;\Leftrightarrow\; E[\alpha[n]\, Y_k^T] = 0,\; 1 \le k < n$
  2. $\alpha[n] \perp \alpha[k] \;\Leftrightarrow\; E[\alpha[n]\, \alpha[k]^T] = 0,\; k \ne n$
  3. Linear transformation:
    $$\mathcal{Y}_n = \text{span}\{Y_1, \dots, Y_n\} = \text{span}\{\alpha[1], \dots, \alpha[n]\}$$

Proof:
1.

Since $\hat{Y}(n|\mathcal{Y}_{n-1})$ is the optimal linear predictor and $\alpha[n] = Y_n - \hat{Y}(n|\mathcal{Y}_{n-1})$, the orthogonality principle gives $\alpha[n] \perp \mathcal{Y}_{n-1}$.

Here is a useful lemma about spaces. Let $A, B, C, D$ be spaces (or vectors) with $A \perp B$:

  • if $C \subseteq B$, then $A \perp C$;
  • if $B$ is spanned by $C$ and $D$, then $A \perp C$ and $A \perp D$.

More explicitly, combining the lemma with the state model:

$$Y_{n-1} = H_{n-1} X_{n-1} + B_{n-1}, \qquad X_n = F_{n-1} X_{n-1} + G_{n-1} U_{n-1}$$

$$\Rightarrow\; \{X_n\}\ \text{space} \subseteq \{X_0, U_{0:n-1}\}\ \text{space}, \qquad \{Y_n\}\ \text{space} \subseteq \{X_0, U_{0:n-1}, B_{0:n}\}\ \text{space}$$

so $\mathcal{Y}_{n-1} \subseteq \{X_0, U_{0:n-2}, B_{0:n-1}\}$ space, and these underlying spaces are nested: $\{X_0, U_{0:n-3}, B_{0:n-2}\}$ space $\subseteq \{X_0, U_{0:n-2}, B_{0:n-1}\}$ space. Since $\mathcal{Y}_{n-2} \subseteq \mathcal{Y}_{n-1}$ and $\alpha[n] \perp \mathcal{Y}_{n-1}$, the lemma gives $\alpha[n] \perp \mathcal{Y}_{n-2}$; repeating the argument, $\alpha[n] \perp \mathcal{Y}_i$ for $0 < i < n$.


2.
Since $\alpha[k] = Y_k - \hat{Y}(k|\mathcal{Y}_{k-1})$ lies in the space $\mathcal{Y}_k$, and by 1 $\alpha[n] \perp \mathcal{Y}_k$ for every $k < n$, the lemma gives $\alpha[n] \perp \alpha[k]$ for $1 \le k \le n-1$.

3.
$$\alpha[n] = Y_n - \hat{Y}(n|\mathcal{Y}_{n-1}) \in \text{span}\{Y_{1:n}\}$$

$$Y_n = \alpha[n] + \hat{Y}(n|\mathcal{Y}_{n-1}) \in \text{span}\{\alpha[n], Y_{1:n-1}\}$$

$$Y_{n-1} \in \text{span}\{\alpha[n-1], Y_{1:n-2}\}, \;\dots$$

Applying this recursively, $\mathcal{Y}_n = \text{span}\{Y_1, \dots, Y_n\} = \text{span}\{\alpha[1], \dots, \alpha[n]\}$.
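
Propositions 2 and 3 say the innovations are an orthogonal, information-preserving re-coding of the observations. A hedged numerical sketch (the toy AR(1) observation process and the ensemble-regression way of estimating $\hat{Y}(n|\mathcal{Y}_{n-1})$ are my choices, not the text's): each predictor is the least-squares projection of $Y_n$ onto the past observations across many independent realizations, and the resulting innovations have a covariance matrix that is diagonal up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ensemble of M independent realizations of a short, correlated observation
# sequence (toy AR(1) process, chosen only to have correlated Y's).
M, T = 5000, 6
Y = np.zeros((M, T))
Y[:, 0] = rng.normal(size=M)
for k in range(1, T):
    Y[:, k] = 0.8 * Y[:, k - 1] + rng.normal(scale=0.5, size=M)

# Innovations: alpha[:, n] = Y_n - Yhat(n | y_{n-1}), where Yhat is the
# least-squares projection of Y_n onto span{Y_0, ..., Y_{n-1}},
# estimated across the ensemble.
alpha = np.zeros_like(Y)
alpha[:, 0] = Y[:, 0]
for n in range(1, T):
    past = Y[:, :n]
    w, *_ = np.linalg.lstsq(past, Y[:, n], rcond=None)
    alpha[:, n] = Y[:, n] - past @ w

# Proposition 2: innovations are mutually orthogonal, so their empirical
# covariance matrix is (essentially) diagonal.
C = alpha.T @ alpha / M
print(np.round(C, 3))
```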

Conclusion of Space Relationship

$$\{X_n\}\ \text{space} \subseteq \{X_0, U_{0:n-1}\}\ \text{space}, \qquad \{Y_n\}\ \text{space} \subseteq \{X_0, U_{0:n-1}, B_{0:n}\}\ \text{space}$$

$$B_k \perp U_l \ (\text{all } k, l), \qquad B_k \perp X_l \ (0 \le l \le k), \qquad U_k \perp X_l \ (0 \le l \le k)$$

$$U_k \perp Y_l \ (0 \le l \le k), \qquad B_k \perp Y_l \ (0 \le l \le k-1)$$

The Innovation Covariance

$$\epsilon(n, n-1) = X_n - \hat{X}(n|\mathcal{Y}_{n-1}) \qquad \text{(the prediction error)}$$

$$K(n, n-1) = E\big[\epsilon(n, n-1)\, \epsilon(n, n-1)^T\big] \qquad \text{(its covariance)}$$

$$\hat{Y}(n|\mathcal{Y}_{n-1}) = H_n \hat{X}(n|\mathcal{Y}_{n-1}) \qquad \text{(the prediction of the observation)}$$

$$\alpha[n] = Y_n - \hat{Y}(n|\mathcal{Y}_{n-1}) \qquad \text{(the innovation)}$$

Proof of the prediction of observation:

$$\hat{Y}(n|\mathcal{Y}_{n-1}) = H_n \hat{X}(n|\mathcal{Y}_{n-1})$$

Proof: since $\hat{Y}(n|\mathcal{Y}_{n-1})$ is the optimal linear predictor of $Y_n$ knowing $\mathcal{Y}_{n-1}$, its difference from $Y_n$ must be orthogonal to the space $\mathcal{Y}_{n-1}$.

$$Y_n - \hat{Y}(n|\mathcal{Y}_{n-1}) \perp \mathcal{Y}_{n-1} \;\Leftrightarrow\; E\big[(Y_n - \hat{Y}(n|\mathcal{Y}_{n-1}))\,\mathcal{Y}_{n-1}^T\big] = 0$$

Since $Y_n = H_n X_n + B_n$ and $H_n \hat{X}(n|\mathcal{Y}_{n-1})$ lies in the space $\mathcal{Y}_{n-1}$, the candidate predictor $H_n \hat{X}(n|\mathcal{Y}_{n-1})$ satisfies this condition:

$$E\big[\big(H_n (X_n - \hat{X}(n|\mathcal{Y}_{n-1})) + B_n\big)\,\mathcal{Y}_{n-1}^T\big] = 0,$$

because $X_n - \hat{X}(n|\mathcal{Y}_{n-1}) \perp \mathcal{Y}_{n-1}$ (optimal filter property) and $B_n \perp \mathcal{Y}_{n-1}$. Hence $\hat{Y}(n|\mathcal{Y}_{n-1}) = H_n \hat{X}(n|\mathcal{Y}_{n-1})$.
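
These quantities are exactly the ones that drive one step of the Kalman recursion. Below is a sketch of a standard predict/update step written with the names used in the text; the update equations and the innovation covariance $E[\alpha[n]\alpha[n]^T] = H_n K(n, n-1) H_n^T + R_n$ are the textbook forms, not something derived in this excerpt.

```python
import numpy as np

def kalman_step(x_hat, P, y, F, G, H, Q, R):
    """One predict/update step of the standard Kalman recursion (textbook
    equations, stated here for illustration, not derived in this note)."""
    # Prediction: X_hat(n | y_{n-1}) and its error covariance K(n, n-1).
    x_pred = F @ x_hat
    K_pred = F @ P @ F.T + G @ Q @ G.T

    # Prediction of the observation and the innovation:
    # Y_hat(n | y_{n-1}) = H_n X_hat(n | y_{n-1}),  alpha[n] = Y_n - Y_hat(n | y_{n-1}).
    y_pred = H @ x_pred
    alpha = y - y_pred
    S = H @ K_pred @ H.T + R          # innovation covariance E[alpha alpha^T]

    # Measurement update via the Kalman gain.
    gain = K_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + gain @ alpha
    P_new = (np.eye(len(x_hat)) - gain @ H) @ K_pred
    return x_new, P_new, alpha, S
```

Fed with the simulated observations from the state-model sketch above, repeated calls to this step produce the innovation sequence $\alpha[n]$ and its covariance $S$.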
