The statistics group that I'm part of is publishing a book that details how we do Bayesian statistics, what it's used for, and how people can use it. I wrote a chapter on Bayesian splines, which is basically a few recipes for splines with some illustrative examples on small data sets. The work in the book chapter is currently being extended into something more useful as part of my PhD, but the chapter itself does a decent job of introducing a few simple spline types and showing how you can generate the basis and use it in an adaptive Metropolis-Hastings framework to fit a univariate regression model.

The director of my stats group is one of the editors of the book and has asked me to give a small lecture to her Bayesian Data Analysis class, an honours-level unit. I'll be walking the students through the chapter, hoping that they're familiar enough with the idea of priors to understand that a prior doesn't just have to mean "I think this parameter has this value". The idea of a prior on the differences between adjacent spline coefficients [1] is a really nice Bayesian analogue of the frequentist penalty approach [2]: it does away with GCV entirely, and rather than choosing the smoothing parameter by a grid-based GCV search, the amount of smoothing is estimated as part of the model, incorporated directly into the MCMC sampler (or INLA fit). To me that makes a lot more sense.
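To make the difference prior concrete, here is a minimal sketch of a Bayesian P-spline fit in the spirit of Lang and Brezger [1]: a second-order random-walk prior on the B-spline coefficients, with the smoothing variance updated inside the sampler via its conjugate inverse-gamma full conditional. Everything here is illustrative, not code from the chapter: the data are simulated, the observation variance is held fixed for simplicity, and the hyperparameters are assumptions chosen for the example.

```python
import numpy as np
from scipy.interpolate import BSpline

rng = np.random.default_rng(1)

# Simulated data (hypothetical example, not from the chapter)
n = 100
x = np.linspace(0.0, 1.0, n)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, n)

# Cubic B-spline design matrix on equally spaced knots
k = 3
interior = np.linspace(0.0, 1.0, 8)
t = np.concatenate(([0.0] * k, interior, [1.0] * k))
B = BSpline.design_matrix(x, t, k).toarray()
K = B.shape[1]

# Second-order difference matrix: the prior says D @ beta ~ N(0, tau2 * I),
# shrinking the *differences* between neighbouring coefficients, not their values
D = np.diff(np.eye(K), n=2, axis=0)
DtD = D.T @ D

sigma2 = 0.04        # observation variance, fixed here to keep the sketch short
a, b = 1.0, 0.005    # assumed IG hyperparameters for the smoothing variance tau2
tau2 = 0.1

n_iter = 2000
beta_draws = np.empty((n_iter, K))
for it in range(n_iter):
    # beta | tau2, y: Gaussian with precision B'B/sigma2 + D'D/tau2
    P = B.T @ B / sigma2 + DtD / tau2
    L = np.linalg.cholesky(P)
    mu = np.linalg.solve(P, B.T @ y / sigma2)
    beta = mu + np.linalg.solve(L.T, rng.standard_normal(K))
    # tau2 | beta: inverse-gamma (conjugate), rank(D) = K - 2
    rss = (D @ beta) @ (D @ beta)
    tau2 = 1.0 / rng.gamma(a + (K - 2) / 2.0, 1.0 / (b + 0.5 * rss))
    beta_draws[it] = beta

# Posterior-mean fitted curve, discarding the first half as burn-in
fit = B @ beta_draws[n_iter // 2:].mean(axis=0)
```

The point of the sketch is the last two updates in the loop: the smoothing parameter `tau2` is just another unknown with its own full conditional, so the amount of smoothing is sampled alongside the coefficients instead of being picked by an external GCV search.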

It’ll be good fun.

[1] Lang, S. and Brezger, A. (2004). Bayesian P-splines. *Journal of Computational and Graphical Statistics*, 13, 183-212.

[2] Eilers, P. H. C. and Marx, B. D. (1996). Flexible smoothing with B-splines and penalties. *Statistical Science*, 11 (2), 89-121.
