Conditioning in Overparameterized Linear Systems

In section 2.6 and the following sections, we discussed how noise propagates through both linear and nonlinear transformations.

In this section we examine the complementary case, in which an estimate of the noise on the output variable of the linear system is known and we seek an estimate of the noise on the input variables, that is, of the quality with which the solution of the linear system has been obtained. Much of this section relies on the theory discussed in section [*], of which it is a continuation. In section 3.5 this material will then be integrated into the broader discussion of regression with nonlinear models.

Let therefore

\begin{displaymath}
\mathbf{A} \mathbf{x} = \mathbf{b}
\end{displaymath} (2.41)

be an ideal, noise-free linear system, with $\mathbf{x}$ the exact solution of the problem.

A perturbation $\delta \mathbf{b}$ of the vector of known terms (observations, outputs), as in

\begin{displaymath}
\mathbf{A} \mathbf{x} = \tilde{\mathbf{b}}
\end{displaymath} (2.42)

with $\tilde{\mathbf{b}} = \mathbf{b} + \delta \mathbf{b}$, causes a perturbation $\tilde{\mathbf{x}} = \mathbf{x} + \delta \mathbf{x}$ of the solution, of magnitude
\begin{displaymath}
\delta \mathbf{x} = \mathbf{A}^{-1} \delta \mathbf{b}
\end{displaymath} (2.43)

In this way we fall back to the previously discussed case of noise propagation through a linear system.
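
As a minimal numerical sketch of equation (2.43) (NumPy assumed; the matrix and vectors are purely illustrative, not from the text), solving the system with perturbed known terms yields exactly the perturbation $\mathbf{A}^{-1} \delta \mathbf{b}$:

\begin{verbatim}
import numpy as np

# For a square invertible A, the perturbation of the solution
# is exactly A^{-1} db, as in equation (2.43).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
db = np.array([1e-3, -2e-3])    # perturbation on the known terms

x = np.linalg.solve(A, b)
x_tilde = np.linalg.solve(A, b + db)
dx = x_tilde - x

print(np.allclose(dx, np.linalg.solve(A, db)))  # True: dx = A^{-1} db
\end{verbatim}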

An interesting index is the norm of the error relative to the norm of the exact value. This relative error satisfies


\begin{displaymath}
\frac{\Vert \delta \mathbf{x} \Vert}{\Vert \mathbf{x} \Vert} \le \kappa(\mathbf{A}) \frac{\Vert \delta \mathbf{b} \Vert}{\Vert \mathbf{b} \Vert}
\end{displaymath} (2.44)

having defined $\kappa(\mathbf{A})$ as the condition number of the coefficient matrix (sensitivity matrix) $\mathbf{A}$. In the particular case where $\mathbf{A}$ is singular, the conditioning of the matrix is set to $\kappa(\mathbf{A})=\infty$.
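
The bound (2.44) is easy to check numerically; the following is a minimal sketch (NumPy assumed, the random data purely illustrative):

\begin{verbatim}
import numpy as np

# Verify ||dx||/||x|| <= kappa(A) * ||db||/||b||, equation (2.44).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
b = A @ x

db = 1e-6 * rng.standard_normal(3)   # noise on the known terms
dx = np.linalg.solve(A, b + db) - x  # induced noise on the solution

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
print(lhs <= rhs)                    # True
\end{verbatim}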

Let us now extend the analysis to the case of an overdetermined linear system. For this purpose, the conditioning of a matrix can be derived from a further property of the SVD decomposition. Let


\begin{displaymath}
\mathbf{x} = \mathbf{V} \mathbf{S}^{-1} \mathbf{U}^{*} \mathbf{b}
\end{displaymath} (2.45)

be the solution of an overdetermined linear problem obtained through the SVD decomposition. Expanding equation (2.45), the solution of the linear system takes the form


\begin{displaymath}
\mathbf{x}=\sum_{i=1}^{n} \frac{ \mathbf{u}^{\top}_i \mathbf{b} } {\sigma_i} \mathbf{v}_i
\end{displaymath} (2.46)

From this formulation it is evident that when a singular value $\sigma_i$ is small, any small variation in the numerator is amplified. Under the Euclidean norm, the condition number of a matrix is precisely the ratio of the largest singular value to the smallest, $\kappa_2(\mathbf{A}) = \sigma_1 / \sigma_n$. The condition number is always greater than or equal to one, and a condition number close to one indicates a well-conditioned matrix.
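
To make this concrete, here is a minimal sketch (NumPy assumed, data illustrative) that solves an overdetermined system through the expansion (2.46) and computes the condition number as $\sigma_1/\sigma_n$:

\begin{verbatim}
import numpy as np

# Least-squares solution via the SVD: x = sum_i (u_i^T b / sigma_i) v_i,
# as in equation (2.46).
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))   # 10 equations, 3 unknowns
b = rng.standard_normal(10)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ b) / s)         # expansion (2.46)

print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))  # True
print("kappa(A) =", s[0] / s[-1])  # sigma_1 / sigma_n
\end{verbatim}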

In summary, conditioning has the following important properties:

\begin{itemize}
\item $\kappa(\mathbf{A}) \ge 1$, and a value close to one indicates a well-conditioned matrix;
\item under the Euclidean norm, $\kappa_2(\mathbf{A}) = \sigma_1/\sigma_n$, the ratio of the largest to the smallest singular value;
\item if $\mathbf{A}$ is singular, $\kappa(\mathbf{A}) = \infty$.
\end{itemize}

As noted in section [*], the solution via the normal equations tends to amplify errors compared to alternative solution methods. It is indeed straightforward to show that in this case

\begin{displaymath}
\kappa \left(\mathbf{A}^{\top}\mathbf{A} \right) = \left( \frac{\sigma_1}{\sigma_n} \right)^{2}
\end{displaymath} (2.47)
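
The squaring of the condition number is easy to observe numerically; a minimal sketch (NumPy assumed, the random matrix purely illustrative):

\begin{verbatim}
import numpy as np

# Forming the normal equations squares the conditioning:
# kappa(A^T A) = kappa(A)^2, as in equation (2.47).
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 3))

kA = np.linalg.cond(A)             # sigma_1 / sigma_n
kAtA = np.linalg.cond(A.T @ A)
print(kA**2, kAtA)                 # equal up to round-off
\end{verbatim}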
