*Bayesian performance bounds* benchmark Bayesian estimators and detectors, which infer random parameters of interest from noisy measurements. These parameters are usually physical quantities such as temperature, position, etc.

Consider a *measurement model*

\[\boldsymbol{y} = C(\boldsymbol{x})~,\]

where a sensor modeled by a probabilistic mapping $C$ measures a random parameter vector $\boldsymbol{x}$. A vector-valued estimator $\hat{\boldsymbol{x}}(\boldsymbol{y})$ infers the parameter vector from the random measurements $\boldsymbol{y}$. A simple example adds noise to the parameter, i.e.

\[\boldsymbol{y} = \boldsymbol{x} + \boldsymbol{v}~,\]

where the random vector $\boldsymbol{v}$ models measurement noise.
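The additive-noise model can be sketched numerically. The following is a minimal illustration, assuming a hypothetical scalar setup with a Gaussian prior on $x$ and Gaussian noise $v$ (the variances are illustrative choices, not from the text); in this jointly Gaussian case the posterior mean, i.e. the MMSE estimator, is a known linear shrinkage of the measurement:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed scalar setup: Gaussian prior x ~ N(0, sx2), noise v ~ N(0, sv2).
sx2, sv2 = 2.0, 0.5          # prior and noise variances (illustrative values)
n = 100_000                  # Monte Carlo trials

x = rng.normal(0.0, np.sqrt(sx2), n)   # random parameter
v = rng.normal(0.0, np.sqrt(sv2), n)   # measurement noise
y = x + v                              # measurement model y = x + v

# For jointly Gaussian (x, y) the MMSE estimator is the posterior mean,
# a linear shrinkage of the measurement: E[x|y] = sx2 / (sx2 + sv2) * y.
x_hat = sx2 / (sx2 + sv2) * y

mse = np.mean((x_hat - x) ** 2)
print(f"empirical MSE:    {mse:.4f}")
print(f"theoretical MMSE: {sx2 * sv2 / (sx2 + sv2):.4f}")
```

The empirical mean-square error concentrates around the closed-form minimum $\sigma_x^2 \sigma_v^2 / (\sigma_x^2 + \sigma_v^2)$ as the number of trials grows.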

A *performance bound* [VB07] is a lower bound on the *mean-square-error matrix* $\mathrm{E}(\tilde{\boldsymbol{x}}\tilde{\boldsymbol{x}}^{\mathrm{T}})$ of the *estimation error* $\tilde{\boldsymbol{x}} = \hat{\boldsymbol{x}}(\boldsymbol{y}) - \boldsymbol{x}$ for *any* Bayesian estimator. This "any" is in contrast to the traditional frequentist Cramér-Rao bound, which applies only to unbiased estimators of a deterministic parameter.

A popular performance bound is the Van Trees (Bayesian Cramér-Rao) bound, the Bayesian counterpart of the traditional Cramér-Rao bound. It is a member of the family of Weiss-Weinstein bounds, which in turn is a subclass of the family of Bayesian lower bounds. These bounds make it possible to compare different Bayesian estimators. Note that $\tilde{\boldsymbol{x}}\tilde{\boldsymbol{x}}^{\mathrm{T}}$ is the squared-error loss whose minimization yields the minimum-mean-square-error (MMSE) estimator. Hence, with respect to the loss $\tilde{\boldsymbol{x}}\tilde{\boldsymbol{x}}^{\mathrm{T}}$, all other Bayesian estimators perform no better than the MMSE estimator (cf. Algebraic vs. Frequentist vs. Bayesian Inference).
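For the scalar linear-Gaussian case, the Bayesian Cramér-Rao (Van Trees) bound has a simple closed form, and the MMSE estimator attains it. The sketch below, under the same assumed Gaussian prior and noise as before (illustrative variances, not from the text), compares the bound against the Monte Carlo MSE of the MMSE estimator and of a deliberately suboptimal estimator $\hat{x} = y$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scalar linear-Gaussian model: x ~ N(0, sx2), y = x + v, v ~ N(0, sv2).
sx2, sv2 = 2.0, 0.5
n = 100_000
x = rng.normal(0.0, np.sqrt(sx2), n)
y = x + rng.normal(0.0, np.sqrt(sv2), n)

# Bayesian Cramér-Rao (Van Trees) bound for this model:
# BCRB = (1/sx2 + 1/sv2)^{-1}, the inverse of prior information
# plus Fisher information contributed by the data.
bcrb = 1.0 / (1.0 / sx2 + 1.0 / sv2)

mse_mmse = np.mean((sx2 / (sx2 + sv2) * y - x) ** 2)  # MMSE estimator
mse_naive = np.mean((y - x) ** 2)                     # naive estimator x_hat = y

print(f"BCRB:      {bcrb:.4f}")
print(f"MMSE MSE:  {mse_mmse:.4f}  (attains the bound)")
print(f"naive MSE: {mse_naive:.4f}  (above the bound)")
```

The bound lower-bounds the MSE of *any* Bayesian estimator: the MMSE estimator meets it here because the model is linear-Gaussian, while the naive estimator's MSE ($\approx \sigma_v^2$) stays strictly above it.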