The Cramer-Rao Bound

The Cramer-Rao Lower Bound (CRLB) establishes a lower limit on the variance of any unbiased estimator of the parameter $\theta $ (called $\beta$ in our case; we use $\theta $ here to stay consistent with the literature).

Let $X$ be a multidimensional random variable and $\theta $ an unknown deterministic parameter. Let $f^{\theta}_{x} (X)$ be the probability density of $X$ given $\theta $. We assume that this density exists and is twice differentiable with respect to $\theta $.

Theorem 1 (Cramer-Rao Inequality)   Let $T(\cdot)$ be an unbiased estimator of the scalar parameter $\theta $, and suppose that the observation space $X$ does not depend on $\theta $. Then, under suitable regularity conditions,
\begin{displaymath}
E^\theta \left[ \left( T(X) - \theta \right)^2 \right] \ge \left[ I_n(\theta) \right]^{-1} \qquad (3.1)
\end{displaymath}

where $I_n(\theta)=E^{\theta} \left[ \left( \frac{\partial \ln f^{\theta}_{x} (X) }{\partial \theta} \right)^2 \right]$ is the Fisher information of the sample.
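
As a worked example (the Gaussian case, added here for illustration and not part of the original derivation), take $X = (X_1, \dots, X_n)$ with $X_i \sim \mathcal{N}(\theta, \sigma^2)$ i.i.d. and $\sigma^2$ known. The Fisher information follows directly from the definition above:

\begin{displaymath}
\ln f^{\theta}_{x} (X) = -\frac{n}{2} \ln (2\pi\sigma^2) - \frac{1}{2\sigma^2} \sum_{i=1}^{n} (X_i - \theta)^2 ,
\qquad
\frac{\partial \ln f^{\theta}_{x} (X)}{\partial \theta} = \frac{1}{\sigma^2} \sum_{i=1}^{n} (X_i - \theta) ,
\end{displaymath}

so that

\begin{displaymath}
I_n(\theta) = E^{\theta} \left[ \left( \frac{1}{\sigma^2} \sum_{i=1}^{n} (X_i - \theta) \right)^2 \right] = \frac{n \sigma^2}{\sigma^4} = \frac{n}{\sigma^2} ,
\qquad
E^\theta \left[ \left( T(X) - \theta \right)^2 \right] \ge \frac{\sigma^2}{n} .
\end{displaymath}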

Since the true value of $\theta $ is not known, the Cramer-Rao bound cannot in general be evaluated to give the actual estimation error; its practical use is to check whether an unbiased estimator is optimal, i.e. whether its variance attains the bound $\left[ I_n(\theta) \right]^{-1}$.
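
Continuing the illustrative Gaussian example above, the sample mean attains the bound:

\begin{displaymath}
T(X) = \frac{1}{n} \sum_{i=1}^{n} X_i ,
\qquad
E^\theta \left[ \left( T(X) - \theta \right)^2 \right] = \frac{\sigma^2}{n} = \left[ I_n(\theta) \right]^{-1} ,
\end{displaymath}

so in this case the sample mean is an efficient estimator of $\theta $.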


