IEKF and ISPKF

The extended Kalman filter linearizes the observation function $h$ through its Jacobian evaluated at the a priori state $\hat{ \mathbf{x} }^{-}$ and, given the observation, estimates the a posteriori state.

In fact, this procedure is precisely a single iteration of the Gauss-Newton method.
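
Concretely, the a posteriori estimate can be read as the minimizer of the quadratic cost below, written here with $\mathbf{z}$ denoting the observation, $\mathbf{P}$ the a priori covariance and $\mathbf{R}$ the observation noise covariance, matching the symbols used in the equations that follow; a single Gauss-Newton step starting from $\hat{ \mathbf{x} }^{-}$ reproduces the EKF observation update.

\begin{displaymath}
J(\mathbf{x}) = \frac{1}{2} \left( \mathbf{x} - \hat{ \mathbf{x} }^{-} \right)^\top \mathbf{P}^{-1} \left( \mathbf{x} - \hat{ \mathbf{x} }^{-} \right) + \frac{1}{2} \left( \mathbf{z} - h(\mathbf{x}) \right)^\top \mathbf{R}^{-1} \left( \mathbf{z} - h(\mathbf{x}) \right)
\end{displaymath}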

Performing additional iterations yields the class of iterated Kalman filters, which typically perform significantly better than their non-iterative counterparts.

The only difference with respect to the corresponding non-iterative filters lies in the observation update (see equation (2.114)), which is replaced by iterations of the form

\begin{displaymath}
\mathbf{x}_{i+1} = \hat{ \mathbf{x} } + \mathbf{K} \left( \mathbf{z} - h(\mathbf{x}_i) - \mathbf{H}_i ( \hat{ \mathbf{x} } - \mathbf{x}_i) \right)
\end{displaymath} (2.120)

with the gain $\mathbf{K}$ recomputed at each iteration as
\begin{displaymath}
\mathbf{K} = \mathbf{P} \mathbf{H}_i^\top (\mathbf{H}_i \mathbf{P} \mathbf{H}_i^\top + \mathbf{R})^{-1}
\end{displaymath} (2.121)

and with $\mathbf{x}_0 = \hat{ \mathbf{x} }^{-}$ as the starting point of the minimization.

The gain $\mathbf{K}$ obtained at the last iteration is finally used to update the covariance matrix $\mathbf{P}$.
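
To make the procedure concrete, the following is a minimal sketch of the iterated measurement update in Python/NumPy. The function name iekf_update, the iteration cap, the convergence test, and the covariance update in the form $(\mathbf{I} - \mathbf{K}\mathbf{H})\mathbf{P}$ are illustrative assumptions rather than details stated in the text.

\begin{verbatim}
import numpy as np

def iekf_update(x_prior, P_prior, z, h, H_jac, R, n_iter=10, tol=1e-9):
    """Iterated EKF measurement update, following (2.120)-(2.121).

    x_prior : a priori state estimate, shape (n,)
    P_prior : a priori covariance, shape (n, n)
    z       : observation, shape (m,)
    h       : observation function, h(x) -> (m,)
    H_jac   : Jacobian of h, H_jac(x) -> (m, n)
    R       : observation noise covariance, shape (m, m)
    """
    x_i = x_prior.copy()                      # x_0 = a priori state
    for _ in range(n_iter):
        H_i = H_jac(x_i)                      # re-linearize at the current iterate
        S = H_i @ P_prior @ H_i.T + R
        K = P_prior @ H_i.T @ np.linalg.inv(S)            # gain, eq. (2.121)
        x_next = x_prior + K @ (z - h(x_i) - H_i @ (x_prior - x_i))  # eq. (2.120)
        if np.linalg.norm(x_next - x_i) < tol:
            x_i = x_next
            break
        x_i = x_next
    # covariance update with the gain of the last iteration
    # (standard (I - K H) P form, assumed here)
    P_post = (np.eye(len(x_prior)) - K @ H_i) @ P_prior
    return x_i, P_post
\end{verbatim}

Note that a single pass through the loop reduces to the ordinary EKF observation update.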

The same procedure can be applied to the SPKF to obtain the Iterated Sigma Point Kalman Filter (ISPKF) (SSM06), where the iteration that computes the state takes the form

\begin{displaymath}
\mathbf{x}_{i+1} = \hat{ \mathbf{x} } + \mathbf{K} \left( \mathbf{z} - h(\mathbf{x}_i) - \mathbf{P}_{xz}^{\top} \mathbf{P}^{-1} ( \hat{ \mathbf{x} } - \mathbf{x}_i) \right)
\end{displaymath} (2.122)
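
A corresponding sketch of the iterated sigma point update is given below, again in Python/NumPy. The unscented-transform parameters, the choice of redrawing the sigma points around the current iterate $\mathbf{x}_i$, and the final covariance update $\mathbf{P} - \mathbf{K}\mathbf{P}_{zz}\mathbf{K}^\top$ are assumptions made for illustration and are not prescribed by the text.

\begin{verbatim}
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Standard unscented-transform sigma points and weights."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    L = np.linalg.cholesky((n + lam) * P)     # columns are the scaled square root
    pts = np.vstack([x, x + L.T, x - L.T])    # (2n+1, n)
    wm = np.full(2 * n + 1, 0.5 / (n + lam))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return pts, wm, wc

def ispkf_update(x_prior, P_prior, z, h, R, n_iter=5):
    """Iterated sigma point measurement update in the spirit of (2.122)."""
    x_i = x_prior.copy()
    for _ in range(n_iter):
        pts, wm, wc = sigma_points(x_i, P_prior)
        Z = np.array([h(p) for p in pts])     # propagated sigma points
        z_hat = wm @ Z                        # predicted observation
        dZ = Z - z_hat
        dX = pts - x_i
        P_zz = dZ.T @ (wc[:, None] * dZ) + R  # innovation covariance
        P_xz = dX.T @ (wc[:, None] * dZ)      # state/observation cross covariance
        K = P_xz @ np.linalg.inv(P_zz)        # sigma point Kalman gain
        # iteration (2.122)
        x_i = x_prior + K @ (z - h(x_i)
                             - P_xz.T @ np.linalg.inv(P_prior) @ (x_prior - x_i))
    # covariance update with the gain of the last iteration (assumed SPKF form)
    P_post = P_prior - K @ P_zz @ K.T
    return x_i, P_post
\end{verbatim}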
