Univariate density estimation

Bayesian analysis typically results in either a mathematical expression for the posterior density or, given MCMC treatment, a sample from the posterior. In the first case it is usually a trivial matter to obtain a sample from that density. In either case the sample will have dimensionality equal to that of the parameter space of the posterior, which is typically significantly greater than three. Some method is then needed to obtain reduced-dimensional ``views'' of the sample; usually this is the Projection Pursuit family of tools, which yields 1D or 2D projections of the sample of interest.
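
As a minimal sketch of such a projection (the matrix posterior_sample and the direction vector below are hypothetical stand-ins, not output from any analysis described here), a one-dimensional ``view'' can be obtained by projecting each posterior draw onto a chosen unit vector:

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an MCMC posterior sample:
# 5000 draws from a 6-dimensional parameter space.
posterior_sample = rng.multivariate_normal(
    mean=np.zeros(6), cov=np.eye(6), size=5000)

# A unit direction vector, e.g. one selected by projection pursuit,
# defining the 1D "view" of the sample.
direction = np.array([1.0, 0.5, 0.0, 0.0, -0.5, 0.0])
direction /= np.linalg.norm(direction)

# The 1D projection: one scalar per posterior draw.
projection_1d = posterior_sample @ direction
\end{verbatim}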

Having obtained a low-dimensional projection, the problem is how to represent it. Ideally that representation should consist of some form of estimate of the underlying density. Initially, consider a one-dimensional projection. This may be a marginal density for some univariate function of the parameter vector ${\mbox{\boldmath$\theta$}}$ or a predictive density for some future observation $y$. In either case the presentation, or density estimation, problem is the same.
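
Continuing the hypothetical posterior_sample and rng from the sketch above (the normal observation model used for $y$ is purely illustrative), a marginal sample and a predictive sample are both read off draw by draw:

\begin{verbatim}
# Marginal sample for a univariate function of theta:
# here simply the first coordinate of each posterior draw.
marginal_theta1 = posterior_sample[:, 0]

# Predictive sample for a future observation y, assuming for
# illustration that y | theta ~ Normal(theta_1, 1): draw one
# y per posterior draw of theta.
predictive_y = rng.normal(loc=marginal_theta1, scale=1.0)
\end{verbatim}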

A review of frequentist density estimation techniques is presented here. The issues arising from the use of these methods for presenting marginal posterior densities are also discussed, and a number of concerns and problems are identified. Kernel Density Estimation (KDE) would normally be used to recover the underlying density from a data sample; here the intention is to apply the same technique to the output obtained as a projection of a sample from a posterior density.
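
As a brief sketch of how this might look in practice (using SciPy's gaussian_kde with its default Scott's-rule bandwidth on a hypothetical 1D projection; these choices are illustrative rather than those examined below):

\begin{verbatim}
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
# Hypothetical 1D projection of a posterior sample; a bimodal
# mixture is used as a stand-in for a non-trivial marginal.
projection_1d = np.concatenate([
    rng.normal(-2.0, 0.7, size=2500),
    rng.normal(1.5, 1.0, size=2500),
])

# Kernel density estimate (Gaussian kernel, Scott's rule bandwidth).
kde = gaussian_kde(projection_1d)

# Evaluate the estimated density on a grid for plotting or summaries.
grid = np.linspace(projection_1d.min(), projection_1d.max(), 400)
density = kde(grid)
\end{verbatim}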


