第五章 处理非理想情况

Chapter 5   Handling Nonidealities in Estimation


5.1 估计器性能评估 / Estimator Performance

5.1.1 无偏与一致性 / Unbiased and Consistent

中文

前几章建立了理论上优美的估计框架,但真实世界的传感器和系统永远存在各种“非理想”:未知偏差、数据关联错误、异常值……本章系统地讨论如何检测和处理这些问题。

在讨论偏差之前,先要建立评价估计器好坏的量化标准。假设估计器在时刻 $k$ 产生了高斯估计 $\mathcal{N}(\hat{\mathbf{x}}_k, \hat{\mathbf{P}}_k)$,同时有真值 $\mathbf{x}_{\text{true},k}$,定义估计误差

\hat{\mathbf{e}}_k = \hat{\mathbf{x}}_k - \mathbf{x}_{\text{true},k}. \tag{5.1}

一个好的估计器应该同时满足:

性质 | 直觉 | 数学条件
无偏(Unbiased) | 误差平均值为零,不偏向任何方向 | $E[\hat{\mathbf{e}}_k] = \mathbf{0}$
一致(Consistent) | 误差大小与估计器给出的不确定性相匹配 | $E[\hat{e}_k^2] = \hat{\sigma}_k^2$(一维情形)

直觉:想象一名射手打靶。

  • 无偏:子弹落点围绕靶心,没有系统性偏移。
  • 一致:散布范围与射手自己估计的“准头”吻合——不过度自信(散布比自称的大),也不过度保守(散布比自称的小)。

一个估计器只有同时无偏且一致,才称得上是健康的。

在有限数据下,可用统计检验来验证:

  • 无偏性:检验样本均值 $\hat{e}_{\text{mean}} = \frac{1}{K}\sum_{k=1}^K \hat{e}_k$ 是否在零附近的置信区间内
  • 一致性:检验 $\sum_{k=1}^K \hat{e}_k^2/\hat{\sigma}_k^2$ 是否符合自由度为 $K$ 的卡方分布
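上面两条检验可以用一个最小的 Python(numpy)示意来演示;其中噪声分布、自报方差 `sigma_hat` 与样本量均为本示例自拟:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 10_000
sigma_hat = 2.0                           # 估计器自报的标准差

# 模拟一个"健康"(无偏且一致)估计器的误差序列
errors = rng.normal(0.0, sigma_hat, K)

# 无偏性:样本均值应落在零附近(约 ±3*sigma/sqrt(K) 的置信区间)
e_mean = errors.mean()
print(abs(e_mean) < 3 * sigma_hat / np.sqrt(K))   # True

# 一致性(一维):归一化误差平方的均值应接近 1
eps = (errors / sigma_hat) ** 2
print(eps.mean())                          # 接近 1

# 反例:过于自信的估计器——真实误差比自报的大一倍
overconfident = rng.normal(0.0, 2 * sigma_hat, K)
print(((overconfident / sigma_hat) ** 2).mean())   # 接近 4,一致性被破坏
```

归一化误差平方的均值显著偏离 1,就说明自报协方差与实际误差不匹配。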

English

A good estimator should be both unbiased and consistent.

  • Unbiased: $E[\hat{\mathbf{e}}_k] = \mathbf{0}$ — errors do not drift systematically.
  • Consistent: $E[\hat{e}_k^2] = \hat{\sigma}_k^2$ — reported uncertainty correctly predicts actual error magnitude.

To test unbiasedness, the sample mean is distributed as \hat{e}_\text{mean} \sim \mathcal{N}\!\left(\mu, \frac{\sigma^2}{K}\right), \tag{5.4} with \mu = 0 under the unbiased hypothesis,

and for consistency, the statistic $\sum_{k=1}^K \hat{e}_k^2/\hat{\sigma}_k^2$ should lie within quantile bounds of a $\chi^2(K)$ distribution. Neither test holds exactly with finite data, so statistical confidence intervals are used. The ergodic hypothesis — that averaging over time is equivalent to averaging over many independent trials — is often invoked to make single-trajectory evaluation meaningful.


5.1.2 NEES 与 NIS / NEES and NIS

中文

对于 $N$ 维状态,一维的一致性条件需要推广。

**归一化估计误差平方(Normalized Estimation Error Squared, NEES)**定义为:

\epsilon_{\text{nees},k} = \hat{\mathbf{e}}_k^T \hat{\mathbf{P}}_k^{-1} \hat{\mathbf{e}}_k. \tag{5.11}

这正是马哈拉诺比斯距离的平方。一致性条件变为:

E[\epsilon_{\text{nees},k}] = N, \tag{5.12}

统计检验:

Q_{\chi^2(NK)}(\ell) \leq \sum_{k=1}^K \epsilon_{\text{nees},k} \leq Q_{\chi^2(NK)}(u). \tag{5.13}

直觉:NEES 衡量误差椭球与估计协方差椭球的比值。若 NEES 均值显著大于 $N$,说明估计器过于自信(协方差偏小);若显著小于 $N$,说明过于保守(协方差偏大)。

问题:NEES 需要真值 $\mathbf{x}_{\text{true},k}$,实际中往往没有。

**归一化新息平方(Normalized Innovation Squared, NIS)**只需要测量数据,定义为:

\epsilon_{\text{nis},y,k} = \mathbf{e}_{y,k}^T \mathbf{S}_{y,k}^{-1} \mathbf{e}_{y,k}, \tag{5.14}

其中新息(innovation)是测量预测误差:

\mathbf{e}_{y,k} = \mathbf{y}_k - \mathbf{g}(\check{\mathbf{x}}_k), \quad \mathbf{S}_{y,k} = \mathbf{G}_k \check{\mathbf{P}}_k \mathbf{G}_k^T + \mathbf{R}_k. \tag{5.19}

一致性条件:$E[\epsilon_{\text{nis},y,k}] = M$($M$ 为测量维数)。

类似地,可基于运动模型定义运动 NIS $\epsilon_{\text{nis},v,k}$,一致性条件为 $E[\epsilon_{\text{nis},v,k}] = N$。

注意:对于批量估计器,NIS 应使用未参与估计的测量(留出集)来计算,以避免相关性。对于 EKF,使用预测状态 $\check{\mathbf{x}}_k$ 而非后验 $\hat{\mathbf{x}}_k$ 来计算 NIS,则可以安全地使用所有测量。
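式 (5.11) 的 NEES 可以按定义直接计算;下面的 Python 示意模拟一段“一致”的误差序列(协方差矩阵数值为自拟),验证其 NEES 均值接近 $N$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 2, 5000                        # 状态维数、时间步数

# 估计器自报的协方差(此处设为常值)
P_hat = np.array([[0.5, 0.1],
                  [0.1, 0.3]])
L = np.linalg.cholesky(P_hat)

# 模拟"一致"的估计误差:e_k ~ N(0, P_hat)
errors = (L @ rng.standard_normal((N, K))).T

# NEES (5.11):e^T P^{-1} e,对每个时刻计算
P_inv = np.linalg.inv(P_hat)
nees = np.einsum('ki,ij,kj->k', errors, P_inv, errors)

print(nees.mean())                    # 应接近 N = 2
```

若把 `errors` 放大而 `P_hat` 不变,NEES 均值将显著超过 $N$,对应“过于自信”的估计器。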


English

For $N$-dimensional states, the NEES (5.11) generalizes the 1D consistency test: its expected value equals $N$ for a consistent estimator.

NEES requires groundtruth $\mathbf{x}_{\text{true},k}$. When groundtruth is unavailable, the NIS provides an alternative, using the EKF innovation $\mathbf{e}_{y,k}$ and its predicted covariance $\mathbf{S}_{y,k}$. The EKF’s predicted state $\check{\mathbf{x}}_k$ has not yet incorporated $\mathbf{y}_k$, so NIS can be evaluated online using all measurements.


5.2 偏差估计 / Bias Estimation

5.2.1 偏差对卡尔曼滤波器的影响 / Bias Effects on the Kalman Filter

中文

现实中,传感器输入或测量往往含有未知偏差。假设系统为:

\mathbf{x}_k = \mathbf{A}\mathbf{x}_{k-1} + \mathbf{B}(\mathbf{u}_k + \bar{\mathbf{u}}) + \mathbf{w}_k, \tag{5.25a} \mathbf{y}_k = \mathbf{C}\mathbf{x}_k + \bar{\mathbf{y}} + \mathbf{n}_k, \tag{5.25b}

其中 $\bar{\mathbf{u}}$ 是输入偏差,$\bar{\mathbf{y}}$ 是测量偏差。

分析偏差对误差动力学的影响:定义预测误差 $\check{\mathbf{e}}_k = \check{\mathbf{x}}_k - \mathbf{x}_k$ 和修正误差 $\hat{\mathbf{e}}_k = \hat{\mathbf{x}}_k - \mathbf{x}_k$,误差动力学变为:

\check{\mathbf{e}}_k = \mathbf{A}\hat{\mathbf{e}}_{k-1} - (\mathbf{B}\bar{\mathbf{u}} + \mathbf{w}_k), \tag{5.29a} \hat{\mathbf{e}}_k = (\mathbf{1} - \mathbf{K}_k\mathbf{C})\check{\mathbf{e}}_k + \mathbf{K}_k(\bar{\mathbf{y}} + \mathbf{n}_k). \tag{5.29b}

对误差动力学取期望(噪声零均值)时:

E[\check{\mathbf{e}}_k] = \mathbf{A}E[\hat{\mathbf{e}}_{k-1}] - \mathbf{B}\bar{\mathbf{u}}, \quad E[\hat{\mathbf{e}}_k] = (\mathbf{1} - \mathbf{K}_k\mathbf{C})E[\check{\mathbf{e}}_k] + \mathbf{K}_k\bar{\mathbf{y}}.

关键结论

  1. 输入偏差 $\bar{\mathbf{u}}$ 通过运动方程影响预测步,导致预测误差有非零均值;
  2. 测量偏差 $\bar{\mathbf{y}}$ 在修正步引入额外误差;
  3. KF 的协方差估计变得过于自信(underestimated),因为它忽略了偏差贡献的那部分误差方差;
  4. 随时间 $k$ 增大,偏差效应无界累积。

解决方案:若偏差已知,直接在预测和修正方程中补偿(5.37a-e)。但若偏差未知,则需要估计它——下面两节讨论如何把偏差估计融入状态估计框架。


English

An unknown bias $\bar{\mathbf{u}}$ on the input or $\bar{\mathbf{y}}$ on the measurement causes the KF to be both biased ($E[\hat{\mathbf{e}}_k] \neq \mathbf{0}$) and inconsistent (the reported covariance underestimates the true error covariance). The bias effect accumulates without bound over time. If the bias is known exactly, it can be subtracted in the prediction and correction steps. Otherwise, it must be estimated.


5.2.2 未知输入偏差 / Unknown Input Bias

中文

核心思路:将偏差 $\bar{\mathbf{u}}$ 纳入状态,与原状态一起估计。

构造增广状态

\mathbf{x}_k' = \begin{bmatrix} \mathbf{x}_k \\ \bar{\mathbf{u}}_k \end{bmatrix}, \tag{5.38}

偏差的运动模型采用随机游走(Brownian motion):

\bar{\mathbf{u}}_k = \bar{\mathbf{u}}_{k-1} + \mathbf{s}_k, \quad \mathbf{s}_k \sim \mathcal{N}(\mathbf{0}, \mathbf{W}), \tag{5.39}

增广后的系统方程为:

\mathbf{x}_k' = \underbrace{\begin{bmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{0} & \mathbf{1} \end{bmatrix}}_{\mathbf{A}'} \mathbf{x}_{k-1}' + \underbrace{\begin{bmatrix} \mathbf{B} \\ \mathbf{0} \end{bmatrix}}_{\mathbf{B}'} \mathbf{u}_k + \begin{bmatrix} \mathbf{w}_k \\ \mathbf{s}_k \end{bmatrix}, \tag{5.40}

观测方程:

\mathbf{y}_k = \underbrace{\begin{bmatrix} \mathbf{C} & \mathbf{0} \end{bmatrix}}_{\mathbf{C}'} \mathbf{x}_k' + \mathbf{n}_k. \tag{5.42}

系统回到零均值噪声形式,可直接应用标准 KF。

关键问题:增广后的系统是否可观?

这取决于增广可观测矩阵 $\mathcal{O}'$ 的秩是否等于增广状态维数。

例 5.1(可观):一维小车,位置+速度为状态,加速度偏差为待估量。$\text{rank}\,\mathcal{O}' = 3$ 等于增广状态维数,可成功估计偏差。

例 5.2(不可观):速度偏差和加速度偏差同时存在,观测矩阵列线性相关,$\text{rank}\,\mathcal{O}'$ 小于增广状态维数,无法唯一确定两个偏差。

直觉:能否估计偏差,取决于偏差是否能通过不同路径“映射到”测量上。若两个偏差对所有测量的影响完全相同(线性相关),就无法区分它们。
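可观测性可以数值检验:构造 $\mathcal{O}' = [\mathbf{C}'; \mathbf{C}'\mathbf{A}'; \dots]$ 并查秩。下面的 Python 示意采用与例 5.1/5.2 同类的一维小车(具体矩阵为本示例自拟,采样周期取 1):

```python
import numpy as np

def obs_matrix(A, C):
    """堆叠 [C; CA; CA^2; ...],共 n 块(n 为状态维数)。"""
    n = A.shape[0]
    blocks = [C]
    for _ in range(n - 1):
        blocks.append(blocks[-1] @ A)
    return np.vstack(blocks)

# 类似例 5.1:增广状态 [位置, 速度, 加速度偏差],只测位置
A1 = np.array([[1., 1., 0.5],
               [0., 1., 1. ],
               [0., 0., 1. ]])
C1 = np.array([[1., 0., 0.]])
print(np.linalg.matrix_rank(obs_matrix(A1, C1)))   # 3 = 维数:可观

# 类似例 5.2:两个偏差以完全相同的方式进入动力学 -> 不可区分
A2 = np.array([[1., 1., 0.5, 0.5],
               [0., 1., 1. , 1. ],
               [0., 0., 1. , 0. ],
               [0., 0., 0. , 1. ]])
C2 = np.array([[1., 0., 0., 0.]])
print(np.linalg.matrix_rank(obs_matrix(A2, C2)))   # 3 < 4:不可观
```

第二个系统中,$\mathcal{O}'$ 对应两个偏差的列完全相同,秩亏恰好反映“两个偏差不可区分”。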


English

Augmented-state trick: append the bias to the state vector and model it as a random walk. The augmented system is standard (zero-mean noise) and the KF can be applied directly.

Observability condition: the augmented system is observable iff the augmented observability matrix $\mathcal{O}'$ has rank equal to the augmented state dimension. This is not guaranteed in general.

  • Example 5.1 (cart position+velocity, acceleration bias): $\mathcal{O}'$ has full rank — observable, bias can be estimated.
  • Example 5.2 (both speed and acceleration biases): $\mathcal{O}'$ is rank-deficient — not observable, the two biases are indistinguishable from the measurements.

5.2.3 未知测量偏差 / Unknown Measurement Bias

中文

同样地,测量偏差 $\bar{\mathbf{y}}$ 也可以纳入增广状态:

\mathbf{x}_k' = \begin{bmatrix} \mathbf{x}_k \\ \bar{\mathbf{y}}_k \end{bmatrix}, \tag{5.51}

增广观测矩阵为 $\mathbf{C}' = \begin{bmatrix} \mathbf{C} & \mathbf{1} \end{bmatrix}$。

例 5.3(经典 SLAM 模型):小车测量到地标的距离,但不知道地标位置(地标位置 = 负的测量偏差)。可观测矩阵秩不足,因为同时平移小车和地标不改变测量值——这个系统存在一个不可观子空间(nullspace of $\mathcal{O}'$)。

\text{null}\,\mathcal{O}' = \text{span}\left\{\begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}\right\}, \tag{5.59}

意味着小车位置和地标位置可以整体平移而测量不变。

有路可走

  • 批量估计中,方程组有无穷多解,每一个解对应一种“全局位移”——系统能估计相对位置;
  • KF 中,偏差初值将保持不变——KF 仍可正常运行,只是绝对位置需要初始条件固定。

这正是 SLAM(同步定位与地图构建) 问题的数学本质。
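例 5.3 的不可观方向可以数值验证:把小车位置与地标位置一起平移,所有测量不变。下面的 Python 示意按文中模型自拟矩阵(状态取 [小车位置 p, 速度 v, 地标位置 l],测量 y = l − p;符号约定与 (5.59) 可能差一个正负号):

```python
import numpy as np

# 状态 [p, v, l]:小车匀速运动,地标静止,测量相对距离 y = l - p
A = np.array([[1., 1., 0.],
              [0., 1., 0.],
              [0., 0., 1.]])
C = np.array([[-1., 0., 1.]])

# 可观测矩阵 O = [C; CA; CA^2]
O = np.vstack([C, C @ A, C @ A @ A])
print(np.linalg.matrix_rank(O))        # 2 < 3:秩亏 1

# 不可观方向:小车和地标整体平移相同的量
d = np.array([1., 0., 1.])
print(np.allclose(O @ d, 0))           # True:测量对该方向完全不敏感
```

秩亏为 1 正对应 (5.59) 的一维零空间:绝对位置不可观,但相对几何可估计。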


English

The measurement bias is similarly augmented into the state. Example 5.3 — a cart measuring distance to an unknown landmark — is exactly a minimal SLAM model. The augmented observability matrix is rank-deficient by 1: sliding the cart and landmark together leaves all measurements unchanged. This is the unobservable direction. Both batch estimators and the KF can still yield useful (relative) solutions, but the absolute position is not uniquely determined without additional information (e.g., a prior on initial position).


5.3 数据关联 / Data Association

中文

数据关联问题:哪个测量对应哪个模型/地标?

这是实际估计问题中最容易导致失败的环节。经典例子:

  • GPS 系统知道每颗卫星的位置,卫星信号中嵌入了唯一编码——数据关联简单;
  • 星敏感器从一片星空中辨认每个亮点对应哪颗星——数据关联困难,需要星图。

数据关联分两大类:

类型 | 定义 | 优缺点
外部数据关联(External) | 利用模型/传感器的专属特征(颜色、条码、唯一编码)进行关联;与估计问题解耦 | 可靠,但需要对环境进行特殊设计
内部数据关联(Internal) | 仅利用测量与模型的几何关系(最大似然关联);多假设方法可保留多个关联候选 | 通用,但容易出错

关键认知:实际中估计器失败,最常见的根源是数据关联错误,而非算法本身有问题。因此,估计框架必须对数据关联错误有鲁棒性——下一节的离群值处理正是为此设计的。


English

Data association is the problem of determining which measurement corresponds to which part of a model. Two approaches:

  • External: Exploit sensor-specific features (unique codes, colors, barcodes) that are separate from the estimation problem. Reliable when the environment can be designed, but unsuitable for unstructured environments.
  • Internal: Use only measurement-model geometric likelihood to assign correspondences. More general but prone to misassociation.

In practice, data association failures are the most common cause of estimator divergence, which motivates robust outlier handling.


5.4 处理离群值 / Handling Outliers

中文

离群值(outlier):按噪声模型极不可能出现的测量值(如偏离预测好几个标准差)。常见来源:

  • 多路径 GPS 信号(信号绕高楼反射,路径变长);
  • 数据关联错误;
  • 传感器临时故障。

对一个二次代价函数,单个大误差会主导整体优化,导致估计器“崩溃”。本节介绍两种主要对策。


English

Outliers are measurements that are highly improbable under the assumed noise model. Without protection, a single outlier can dominate a quadratic cost function and drive the estimator to a poor solution.


5.4.1 随机采样一致性(RANSAC)/ Random Sample Consensus

中文

RANSAC 是一种通用的鲁棒拟合算法,基本思路:在一批含有异常值的数据中,找到最多内点(inlier)支持的模型参数。

算法流程(每次迭代):

  1. 从数据中随机选取最小子集(如拟合直线取 2 点);
  2. 用该子集拟合模型;
  3. 用完整数据集检验模型,标记内点(误差在阈值内)和外点;
  4. 若内点数不足则丢弃,否则用全部内点重新拟合;
  5. 记录当前最优模型(内点数最多且残差最小)。

重复多次迭代后,选最佳结果。

需要多少次迭代? 设每个点是内点的概率为 $w$,子集大小为 $n$,期望以概率 $p$ 至少有一次成功选到全内点子集:

k = \frac{\ln(1-p)}{\ln(1-w^n)}. \tag{5.61}

若 $w = 0.5$(一半点是内点)、$n = 2$、$p = 0.99$,则 $k \approx 17$ 次迭代即可。若 $w = 0.1$(内点仅占 10%)、$n = 2$,则需要 $k \approx 459$ 次。

直觉:RANSAC 是“最佳多数表决”——在外点众多的情况下,通过随机采样+投票找到真正的主流模式。代价是计算量随外点比例和子集大小指数增长。
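上述流程可以用一个拟合直线的最小 Python 草图演示(数据、阈值与迭代次数均为本示例自拟),并顺带验证式 (5.61) 的迭代数:

```python
import numpy as np

def ransac_line(x, y, n_iters, thresh, rng):
    """用 RANSAC 拟合 y = a*x + b,返回内点最多的模型。"""
    best_inliers = np.zeros(len(x), dtype=bool)
    for _ in range(n_iters):
        # 1) 随机取最小子集(拟合直线取 2 点);2) 拟合候选模型
        i, j = rng.choice(len(x), size=2, replace=False)
        if x[i] == x[j]:
            continue
        a = (y[j] - y[i]) / (x[j] - x[i])
        b = y[i] - a * x[i]
        # 3) 用完整数据集检验,标记内点
        inliers = np.abs(y - (a * x + b)) < thresh
        # 5) 记录内点最多的模型
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # 4) 用全部内点重新最小二乘拟合
    a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
    return a, b

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 100)   # 真实直线 y = 2x + 1
y[:40] = rng.uniform(-20, 20, 40)             # 40% 外点

a, b = ransac_line(x, y, n_iters=50, thresh=0.5, rng=rng)
print(a, b)                                   # 约 2.0 与 1.0

# 式 (5.61):w=0.5, n=2, p=0.99 时所需迭代数
k = np.log(1 - 0.99) / np.log(1 - 0.5**2)
print(int(np.ceil(k)))                        # 17
```

注意硬阈值 `thresh` 是 RANSAC 的关键超参数:太小会丢内点,太大会混入外点。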


English

RANSAC iteratively selects random minimal subsets, fits a model, counts inliers (points consistent with the model within a threshold), and keeps the best model found over many iterations. The required number of iterations to succeed with probability $p$ is given by (5.61), where $w$ is the inlier fraction and $n$ is the minimal subset size. RANSAC is a hard classifier — each datum is either accepted or rejected. M-estimation provides a softer alternative.


5.4.2 M-估计 / M-Estimation

中文

核心思想:不是硬性拒绝外点,而是使用增长速度慢于二次函数的代价函数,使大误差项自动获得较低权重。

标准 MAP 代价函数是二次型:

J(\mathbf{x}) = \frac{1}{2}\sum_{i=1}^N \mathbf{e}_i(\mathbf{x})^T \mathbf{W}_i^{-1} \mathbf{e}_i(\mathbf{x}). \tag{5.62}

M-估计将其推广为:

J'(\mathbf{x}) = \sum_{i=1}^N \alpha_i \rho(u_i(\mathbf{x})), \quad u_i(\mathbf{x}) = \sqrt{\mathbf{e}_i^T \mathbf{W}_i^{-1} \mathbf{e}_i}. \tag{5.64}

常见鲁棒代价函数(图 5.8):

\underbrace{\rho(u) = \tfrac{1}{2}u^2}_{\text{二次(非鲁棒)}}, \quad \underbrace{\rho(u) = \tfrac{1}{2}\ln(1+u^2)}_{\text{Cauchy}}, \quad \underbrace{\rho(u) = \tfrac{1}{2}\frac{u^2}{1+u^2}}_{\text{Geman-McClure}}. \tag{5.66}

Cauchy 和 Geman-McClure 对大 $u$(异常值)的增长远慢于二次,大残差项的梯度贡献大幅降低。

如何求解:梯度设零后可以改写为:

\frac{\partial J'(\mathbf{x})}{\partial \mathbf{x}} = \sum_i \mathbf{e}_i^T \mathbf{Y}_i(\mathbf{x})^{-1} \frac{\partial \mathbf{e}_i}{\partial \mathbf{x}}, \tag{5.69}

其中 $\mathbf{Y}_i(\mathbf{x})$ 是依赖于当前残差大小的自适应协方差矩阵。这导出迭代重加权最小二乘(IRLS):在每次外迭代中,用当前状态 $\mathbf{x}_{\text{op}}$ 计算 $\mathbf{Y}_i(\mathbf{x}_{\text{op}})$,然后解标准加权最小二乘问题:

J''(\mathbf{x}) = \frac{1}{2}\sum_i \mathbf{e}_i(\mathbf{x})^T \mathbf{Y}_i(\mathbf{x}_{\text{op}})^{-1} \mathbf{e}_i(\mathbf{x}). \tag{5.71}

Cauchy 的 IRLS 权重(将 $\rho(u) = \tfrac{1}{2}\ln(1+u^2)$ 代入得):

\mathbf{Y}_i(\mathbf{x}_{\text{op}}) = \frac{1}{\alpha_i}\left(1 + \mathbf{e}_i^T \mathbf{W}_i^{-1} \mathbf{e}_i\right)\mathbf{W}_i. \tag{5.77}

即:残差越大,自动膨胀协方差,降低该测量的权重。

直觉:IRLS 就像一位经验丰富的工程师——他不直接把可疑读数扔掉(那太武断),而是对它“打折”,误差越大折扣越多,让正常数据主导最终估计。
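下面用一维定位(估计一个标量 $x$,若干含外点的直接测量,数值自拟)演示带 Cauchy 权重的 IRLS;按 (5.77) 的标量形式,每次外迭代把方差按 $(1 + e^2/W)$ 膨胀后解加权最小二乘:

```python
import numpy as np

rng = np.random.default_rng(3)
x_true = 5.0
W = 0.1**2                              # 测量噪声方差
z = x_true + rng.normal(0, 0.1, 50)     # 50 个直接测量
z[:5] = 100.0                           # 其中 5 个严重外点

x_op = np.mean(z)                       # 初值:被外点拉偏的普通均值
for _ in range(20):                     # IRLS 外迭代
    e = x_op - z
    Y = W * (1 + e**2 / W)              # (5.77) 标量形式:自适应膨胀协方差
    w = 1.0 / Y                         # 权重:残差越大权重越小
    x_op = np.sum(w * z) / np.sum(w)    # 解标准加权最小二乘

print(x_op)                             # 接近 5.0(普通均值约 14.5)
```

外点的残差约 95,其权重被压低约四个数量级,正常测量因此主导结果。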

补充:Barron (2019) 提出了统一多种鲁棒代价函数的参数化族,可在估计中自适应选择最优鲁棒参数;Yang et al. (2020) 的**渐进非凸(GNC)**技术可防止 IRLS 陷入局部最优。


English

M-estimation replaces the quadratic cost with a robust function $\rho(u)$ that grows sub-quadratically for large residuals $u$. This automatically downweights outliers. Common choices are the quadratic (non-robust), Cauchy, and Geman-McClure costs in (5.66).

Setting the gradient to zero shows the robust problem is equivalent to a weighted least-squares problem with state-dependent covariance $\mathbf{Y}_i(\mathbf{x})$. IRLS solves this iteratively: at each outer iteration, evaluate $\mathbf{Y}_i(\mathbf{x}_{\text{op}})$ from the current estimate and solve the weighted least-squares problem. For the Cauchy cost, $\mathbf{Y}_i$ is simply the original covariance $\mathbf{W}_i$ inflated by the factor $(1 + \mathbf{e}_i^T \mathbf{W}_i^{-1} \mathbf{e}_i)$ — the more outlier-like the measurement, the less it is trusted.


5.5 协方差估计 / Covariance Estimation

中文

到目前为止,我们都假设噪声协方差 $\mathbf{Q}$(过程噪声)和 $\mathbf{R}$(测量噪声)已知。实际中往往需要从数据中估计它们。

困境:传感器数据手册给出的噪声参数往往不够准确;手动调参耗时且结果不稳定;若协方差设置错误,估计器会偏离一致性。


English

In practice, the noise covariances $\mathbf{Q}$ and $\mathbf{R}$ are rarely known exactly. Three methods for estimating them from data are discussed below.


5.5.1 有监督协方差估计 / Supervised Covariance Estimation

中文

前提:有一条带高质量真值的训练轨迹。

利用真值轨迹 $\mathbf{x}_{\text{true}}$ 计算过程误差和测量误差的样本均值(偏差)和样本协方差:

\bar{\mathbf{e}}_v = \frac{1}{K}\sum_{k=1}^K \mathbf{e}_{v,k}(\mathbf{x}_{\text{true}}), \quad \bar{\mathbf{e}}_y = \frac{1}{K+1}\sum_{k=0}^K \mathbf{e}_{y,k}(\mathbf{x}_{\text{true}}), \tag{5.80}

\mathbf{Q} = \frac{1}{K-1}\sum_{k=1}^K (\mathbf{e}_{v,k} - \bar{\mathbf{e}}_v)(\mathbf{e}_{v,k} - \bar{\mathbf{e}}_v)^T, \tag{5.81a}

\mathbf{R} = \frac{1}{K}\sum_{k=0}^K (\mathbf{e}_{y,k} - \bar{\mathbf{e}}_y)(\mathbf{e}_{y,k} - \bar{\mathbf{e}}_y)^T, \tag{5.81b}

(使用 Bessel 修正的样本协方差)。估计得到的 $\mathbf{Q}$、$\mathbf{R}$ 和偏差可用于后续无真值场景。可用独立验证集的一致性检验(NEES/NIS)来评估质量。
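(5.80)–(5.81) 可以按定义直接实现;下面以标量测量残差为例示意(真实偏差、方差数值为本示例自拟),注意共 $K{+}1$ 个测量样本时分母取 $K$:

```python
import numpy as np

rng = np.random.default_rng(4)
R_true, bias_true = 0.2**2, 0.5
K = 20_000

# 有真值时,测量残差 e_y = y - C x_true 可以直接算出:
# 这里直接按 "偏差 + 零均值噪声" 模拟 K+1 个残差
e_y = bias_true + rng.normal(0, 0.2, K + 1)

e_bar = e_y.mean()                        # 样本偏差,对应 (5.80)
R_hat = np.sum((e_y - e_bar)**2) / K      # 样本协方差,分母 K(Bessel 修正)

print(e_bar)                              # 约 0.5
print(R_hat)                              # 约 0.04
```

估计出的偏差可在运行时直接补偿,样本协方差则作为滤波器的 $\mathbf{R}$。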


English

Given a training trajectory with groundtruth $\mathbf{x}_{\text{true}}$, compute sample means (biases) and sample covariances of the process and measurement residuals using (5.80)–(5.81). The Bessel correction ($K-1$ or $K$ in the denominators) is applied as appropriate for unbiased sample covariance estimation.


5.5.2 自适应协方差估计 / Adaptive Covariance Estimation

中文

前提:无需真值,只需在线数据。

核心思路:利用滤波器的新息(预测误差)来反推协方差。回顾 EKF 的新息协方差:

E[\mathbf{e}_{y,k}\mathbf{e}_{y,k}^T] \approx \mathbf{G}_k \check{\mathbf{P}}_k \mathbf{G}_k^T + \mathbf{R}_k. \tag{5.84}

用长度为 $L$ 的滑动窗口计算新息的样本协方差 $\mathbf{S}_{y,k}$,从中提取测量噪声协方差的估计:

\mathbf{R}_k = \mathbf{S}_{y,k} - \frac{1}{L}\sum_{\ell=k-L+1}^{k} \mathbf{G}_\ell \check{\mathbf{P}}_\ell \mathbf{G}_\ell^T. \tag{5.86}

同理,对过程噪声:

\mathbf{Q}_k = \mathbf{S}_{v,k} - \frac{1}{L}\sum_{\ell=k-L+1}^{k} \mathbf{F}_{\ell-1} \hat{\mathbf{P}}_{\ell-1} \mathbf{F}_{\ell-1}^T. \tag{5.91}

直觉:新息序列的总方差 = 状态估计不确定性 + 测量噪声。从样本协方差中减去状态不确定性部分,剩下的就是测量噪声协方差的估计。

本方法与 NIS 测试密切相关——它本质上是在持续调整协方差,使滤波器保持一致性。

适用条件:协方差变化不太快;噪声为可加性;滤波器本身已运行健康(无偏一致)。
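(5.86) 的“样本协方差减去状态不确定性”可以用标量系统示意($G=1$,窗口内先验方差设为常值;所有数值为本示例自拟):

```python
import numpy as np

rng = np.random.default_rng(5)
R_true = 0.3**2
L = 20_000                              # 滑动窗口长度

P_check = 0.05                          # 各时刻的先验方差(此处设为常值)
G = 1.0

# 新息 e_y ~ N(0, G P_check G + R):直接按该分布采样来模拟窗口内的新息
innovations = rng.normal(0, np.sqrt(G * P_check * G + R_true), L)

S_y = np.mean(innovations**2)           # 窗口内新息样本协方差
R_hat = S_y - G * P_check * G           # (5.86):减去状态不确定性部分

print(R_hat)                            # 接近 R_true = 0.09
```

若窗口太短,$\mathbf{S}_{y,k}$ 的采样噪声会很大,甚至可能使 $\mathbf{R}_k$ 失去正定性,实现中通常要对结果做投影或平滑。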


English

Adaptive (noise-adaptive) estimation uses a trailing window of EKF innovations to estimate $\mathbf{R}_k$ online. The innovation covariance equals $\mathbf{G}_k \check{\mathbf{P}}_k \mathbf{G}_k^T + \mathbf{R}_k$; subtracting the state-estimate contribution leaves an estimate of $\mathbf{R}_k$. Similarly, $\mathbf{Q}_k$ can be estimated from the process residuals. The method is unsupervised (no groundtruth needed) and essentially keeps the filter consistent with respect to the NIS test.


5.5.3 MAP 协方差估计 / MAP Covariance Estimation

中文

最优思路:直接将协方差矩阵 $\mathbf{M}_i$ 作为未知量,与状态 $\mathbf{x}$ 一起在 MAP 框架内联合估计:

\{\hat{\mathbf{x}}, \hat{\mathbf{M}}\} = \arg\min_{\{\mathbf{x},\mathbf{M}\}} J'(\mathbf{x}, \mathbf{M}). \tag{5.92}

为防止过拟合,对协方差施加逆 Wishart 先验(协方差矩阵的共轭先验):

\mathbf{M}_i \sim \mathcal{W}^{-1}(\boldsymbol{\Psi}_i, \nu_i), \tag{5.93}

代入后代价函数变为:

J'(\mathbf{x}, \mathbf{M}) = \frac{1}{2}\sum_{i=1}^N \left(\mathbf{e}_i^T \mathbf{M}_i^{-1} \mathbf{e}_i - \alpha_i \ln\det(\mathbf{M}_i^{-1}) + \text{tr}(\boldsymbol{\Psi}_i \mathbf{M}_i^{-1})\right). \tag{5.97}

求导并设零,得最优协方差的闭合表达式:

\mathbf{M}_i(\mathbf{x}) = \underbrace{\frac{1}{\alpha_i}\boldsymbol{\Psi}_i}_{\text{常数基底}} + \underbrace{\frac{1}{\alpha_i}\mathbf{e}_i(\mathbf{x})\mathbf{e}_i(\mathbf{x})^T}_{\text{残差膨胀项}}. \tag{5.99}

重要发现:将最优 代回消去,得到关于 的等价代价函数:

J'(\mathbf{x}) = \frac{1}{2}\sum_i \alpha_i \ln\left(1 + \mathbf{e}_i^T \boldsymbol{\Psi}_i^{-1} \mathbf{e}_i\right). \tag{5.100}

当取 $\boldsymbol{\Psi}_i = \mathbf{W}_i$、$\alpha_i = 1$ 时,这正好是 Cauchy 鲁棒代价函数!

深刻结论:MAP 协方差估计(用逆 Wishart 先验)与 Cauchy M-估计数学等价。鲁棒代价函数不是临时补丁,而是有坚实的贝叶斯概率解释:它等价于在协方差上放置了一个逆 Wishart 先验,允许协方差根据残差大小自适应膨胀。
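这一等价关系可以数值验证:对标量情形,把 (5.99) 的最优 $M$ 代回 (5.97),结果与 Cauchy 代价 (5.100) 只差一个与 $x$ 无关的常数(下面取 $\Psi$ 为自拟数值、$\alpha = 1$):

```python
import numpy as np

Psi, alpha = 0.5, 1.0                 # 标量情形:取 Psi = W, alpha = 1

def J_joint(e, M):
    # (5.97) 的标量形式:(e^2/M - alpha*ln(M^{-1}) + Psi/M) / 2
    return 0.5 * (e**2 / M - alpha * np.log(1 / M) + Psi / M)

def J_cauchy(e):
    # (5.100) 的标量形式
    return 0.5 * alpha * np.log(1 + e**2 / Psi)

const = 0.5 * (1 + np.log(Psi))       # 与 x 无关的常数差
for e in [0.1, 1.0, 10.0]:
    M_star = (Psi + e**2) / alpha     # (5.99) 的标量形式:最优协方差
    print(np.isclose(J_joint(e, M_star) - J_cauchy(e), const))   # True
```

即无论残差大小,消去最优 $M$ 后的代价与 Cauchy 代价逐点只差同一常数——两个优化问题关于 $x$ 等价。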


English

MAP covariance estimation jointly optimizes over $\mathbf{x}$ and the covariances $\mathbf{M}_i$, using an inverse-Wishart prior to prevent overfitting. Analytically eliminating the optimal $\mathbf{M}_i$ from the cost function yields exactly a Cauchy robust cost function in $\mathbf{x}$. This establishes a deep connection: M-estimation is not merely an ad-hoc robustness patch — it has a rigorous MAP interpretation as Bayesian covariance adaptation. The inflated covariance grows with the residual, automatically downweighting outliers.


5.6 本章小结 / Summary

中文

本章的核心要点:

主题 | 核心结论
性能评估 | 无偏性 + 一致性是健康估计器的两大标准;NEES(需真值)和 NIS(无需真值)是定量检验工具
偏差 | 未知偏差可通过增广状态估计,但能否成功取决于系统的可观测性;随机游走先验是偏差时变的常用模型
数据关联 | 关联错误是实际估计失败的最主要原因,需要鲁棒性设计来对抗
离群值 | RANSAC(硬分类)和 M-估计/IRLS(软加权)是最常用的两类方法;可联合使用
协方差估计 | 有监督法(需真值)、自适应法(滑动窗口新息)、MAP 法(等价 Cauchy 鲁棒代价)

核心结论

  1. 可观测性决定一切:偏差能否被估计,取决于增广后系统的可观测性,而非算法的好坏。
  2. 鲁棒代价函数有深刻概率根源:Cauchy M-估计等价于在噪声协方差上放置逆 Wishart 先验进行 MAP 估计。
  3. 实际估计失败往往来自非理想因素:算法正确但参数(协方差、偏差、关联)错误,是最常见的失败模式。

English

Key takeaways:

  1. Observability governs bias estimation. Augmenting the state with an unknown bias only works when the augmented system is observable. This is not guaranteed and must be checked case by case.
  2. Robust cost functions have rigorous probabilistic foundations. M-estimation with the Cauchy cost is equivalent to MAP estimation with an inverse-Wishart prior on the noise covariance — a theoretical justification for robustness.
  3. Consistency tests drive adaptive covariance estimation. The NIS test and adaptive covariance estimation are two sides of the same coin: adaptive covariance estimation is continuously adjusting noise parameters to keep the filter consistent.
  4. In practice, nonidealities (not algorithm choices) often dominate performance. Bad data association, unmodeled biases, and poor covariance tuning cause most real-world failures.

5.7 习题 / Exercises

5.1 考虑一个含未知输入偏差 $\bar{\mathbf{u}}$ 的线性系统。构造增广状态系统并判断是否可观。

5.2 考虑系统:

其中 $\bar{\mathbf{y}}$ 为未知测量偏差(仅作用于第二个测量方程)。构造增广状态系统并判断是否可观。

5.3 若每个点是内点的概率为 $w$,最小子集大小为 $n$,要求至少以概率 $p$ 成功,RANSAC 需要多少次迭代?

5.4 若 $w$、$n$、$p$ 改取其他数值,RANSAC 需要多少次迭代?

5.5 分段鲁棒代价函数(对小残差二次,对大残差使用有界增长函数)相比 Geman-McClure 代价函数有什么优势?


下一章将从“自顶向下”的视角重新审视估计问题,介绍变分推断(Variational Inference)——一个统一的框架,涵盖非线性状态估计、参数辨识(系统矩阵、偏差、协方差)等多种任务。

The next chapter takes a top-down view and introduces variational inference — a unified framework that encompasses nonlinear state estimation and parameter identification (system matrices, biases, covariances) within a single principled objective.