Propagation of Uncertainty

To understand how uncertainty propagates through a system, it is therefore necessary to carry out a process, which may be more or less involved, of both inverting and differentiating the system itself.

In many applications it is therefore difficult, if not impossible, to obtain analytically the probability distribution at the output of a transformation of a generic input distribution. Fortunately, in practical applications a lower precision often suffices when addressing a problem of uncertainty propagation: typically only first- and second-order statistics are of interest, and the analysis is restricted to Gaussian-type probability distributions.

Sum or Difference of Quantities

The random variable $Z = X \pm Y$, the sum/difference of independent random variables, has variance equal to

\begin{displaymath}
\text{var}(Z) = \text{var}(X) + \text{var}(Y)
\end{displaymath}

The variance of the resulting variable is therefore the sum of the individual variances.
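As a quick numerical check, the additivity of variances for both the sum and the difference can be verified by sampling (a sketch using NumPy; the seed, sample size, and standard deviations are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 2.0, 100_000)   # var(X) = 4
y = rng.normal(0.0, 3.0, 100_000)   # var(Y) = 9, independent of X

# Both Z = X + Y and Z = X - Y have variance var(X) + var(Y) = 13.
var_sum = np.var(x + y)
var_diff = np.var(x - y)
```

Note that the sign of the operation does not matter: the cross term $\pm 2\,\text{cov}(X,Y)$ vanishes because the variables are independent.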

Linear Transformations

Let $\mathbf{y} = \mathbf{A}\mathbf{x}$ be a linear system where the random vector $\mathbf{x}$ is associated with the covariance matrix $\text{var}(X)$. The covariance matrix of the resulting random variable $\mathbf{y}$, the output of the system, is

\begin{displaymath}
\text{var}(Y) = \text{var}(\mathbf{A}X) = \mathbf{A}\,\text{var}(X)\,\mathbf{A}^{\top}
\end{displaymath}

This relationship also holds in the case of projections $y = \mathbf{b} \cdot \mathbf{x}$ and, as in the linear case, the variance of the variable $Y$ becomes

\begin{displaymath}
\text{var}(Y) = \text{var}(\mathbf{b}^{\top}X) = \mathbf{b}^{\top}\text{var}(X) \mathbf{b}
\end{displaymath} (2.25)

Generalizing the previous cases, the cross-covariance between $\mathbf{A}\mathbf{x}$ and $\mathbf{B}\mathbf{y}$ can be expressed as:

\begin{displaymath}
\text{cov}(\mathbf{A}X, \mathbf{B}Y) = \mathbf{A} \text{cov}(X, Y) \mathbf{B}^{\top}
\end{displaymath} (2.26)

As a special case, the cross-covariance between $\mathbf{x}$ and $\mathbf{A}\mathbf{x}$ is
\begin{displaymath}
\text{cov}(X, \mathbf{A}X) = \text{var}(X) \mathbf{A}^{\top}
\end{displaymath} (2.27)

It is worth noting that $\text{cov}(Y,X) = \text{cov}(X,Y)^\top = \mathbf{A} \text{var}(X)$.
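The identity $\text{cov}(X, \mathbf{A}X) = \text{var}(X)\,\mathbf{A}^{\top}$ from (2.27) can also be verified numerically (a sketch using NumPy; the matrices below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])            # var(X)
A = np.array([[2.0, -1.0],
              [0.5,  1.0]])

pred = Sigma @ A.T                        # var(X) A^T, eq. (2.27)

x = rng.multivariate_normal([0.0, 0.0], Sigma, 200_000)
Ax = x @ A.T
# empirical cross-covariance between x and Ax
n = len(x)
emp = (x - x.mean(0)).T @ (Ax - Ax.mean(0)) / (n - 1)
```

Transposing `emp` likewise recovers $\mathbf{A}\,\text{var}(X)$, consistent with the remark above.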

The examples of uncertainty propagation discussed so far can be further generalized, anticipating important results for the nonlinear case, by considering the affine transformation $f(\mathbf{x})$ defined as

\begin{displaymath}
f(\mathbf{x}) = f_{\bar{\mathbf{x}}} + \mathbf{A} (\mathbf{x} - \bar{\mathbf{x}})
\end{displaymath} (2.28)

that is, a transformation of random variables $Y=f(X)$ which has mean value $\bar{y}=f_{\bar{\mathbf{x}}}$ and covariance matrix $\boldsymbol\Sigma_Y = \mathbf{A} \boldsymbol\Sigma_X \mathbf{A}^{\top}$.
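Both properties of the affine transformation (2.28), mean $\bar{y}=f_{\bar{\mathbf{x}}}$ and covariance $\mathbf{A}\boldsymbol\Sigma_X\mathbf{A}^{\top}$, can be confirmed by sampling (a sketch using NumPy; $\bar{\mathbf{x}}$, $f_{\bar{\mathbf{x}}}$, $\mathbf{A}$, and $\boldsymbol\Sigma_X$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(3)
x_bar = np.array([1.0, 2.0])
Sigma_x = np.array([[1.0, 0.2],
                    [0.2, 0.5]])
A = np.array([[2.0, 0.0],
              [1.0, 3.0]])
f_xbar = np.array([5.0, -1.0])            # value of f at x_bar

x = rng.multivariate_normal(x_bar, Sigma_x, 200_000)
y = f_xbar + (x - x_bar) @ A.T            # affine transformation (2.28)

y_bar_emp = y.mean(axis=0)                # ~ f_xbar
Sigma_y_emp = np.cov(y, rowvar=False)     # ~ A Sigma_x A^T
```

The constant offset $f_{\bar{\mathbf{x}}}$ shifts only the mean; the covariance depends solely on $\mathbf{A}$.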

Nonlinear Transformations

The propagation of covariance in the nonlinear case is not easily obtained in closed form and is generally achieved only approximately. Techniques such as Monte Carlo simulation can be employed to simulate accurately the probability distribution resulting from a generic transformation, at various orders of precision. The linear approximation is still widely used in practical problems; however, as will be discussed in the next section, modern techniques allow the covariance to be estimated at higher orders of precision in a relatively straightforward manner.

Normally, for first-order statistics (first-order error propagation), the nonlinear transformation $f$ is approximated, through a series expansion, by an affine transformation

\begin{displaymath}
f(\mathbf{x}) \approx f(\bar{\mathbf{x}}) + \mathbf{J}_f (\mathbf{x} - \bar{\mathbf{x}})
\end{displaymath} (2.29)

with $\mathbf{J}_f$ being the matrix of partial derivatives (Jacobian) of the function $f$. With this approximation, the result from the previously shown linear affine case in equation (2.28) can be used to determine the covariance matrix of the variable $f(\mathbf{x})$, by replacing the matrix $\mathbf{A}$ with the Jacobian, yielding the covariance
\begin{displaymath}
\boldsymbol\Sigma_Y = \mathbf{J}_f \boldsymbol\Sigma_X \mathbf{J}_f^{\top}
\end{displaymath} (2.30)

and using the expected mean value $\bar{y} = f(\bar{\mathbf{x}})$.
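First-order propagation (2.29)-(2.30) can be compared against a Monte Carlo reference on a concrete nonlinear map, for example a polar-to-Cartesian conversion (a sketch using NumPy; the mean, covariance, and sample size are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)

# Nonlinear map f: polar (r, theta) -> Cartesian (x, y)
def f(p):
    r, th = p[..., 0], p[..., 1]
    return np.stack([r * np.cos(th), r * np.sin(th)], axis=-1)

mean = np.array([10.0, 0.5])              # x_bar = (r, theta)
Sigma = np.diag([0.01, 0.0004])           # small input uncertainties

# Jacobian of f evaluated at the mean
r, th = mean
J = np.array([[np.cos(th), -r * np.sin(th)],
              [np.sin(th),  r * np.cos(th)]])

Sigma_lin = J @ Sigma @ J.T               # first-order propagation, eq. (2.30)

# Monte Carlo reference: transform samples and measure their covariance
samples = rng.multivariate_normal(mean, Sigma, 200_000)
Sigma_mc = np.cov(f(samples), rowvar=False)
```

For small input uncertainties the linearization matches the Monte Carlo covariance closely; the agreement degrades as the input spread grows and higher-order terms of the expansion become significant.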



Paolo Medici
2025-10-22