Posterior Samples

Thiago Martins has posted a neat little tutorial on using R to calculate and visualise Principal Components Analysis, using Fisher’s iris data. PCA is something I’ve struggled with as I’ve gone further into statistics, as it comes across as being based on mathematics rather than statistics. I’d like to learn more about the Indian Buffet Process and the associated non-parametric Bayesian methods, but if I’m going to be looking at long, wide data sets (say, the UPTECH questionnaire data) I’d like somewhere to start, and it looks like this may provide that.
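If you want to poke at it before reading the tutorial, here’s a minimal base-R sketch of the same idea (Thiago’s post may well use different packages and plotting code):

```r
# PCA on the four numeric measurements in Fisher's iris data.
# prcomp() centres the variables; scale. = TRUE also standardises them.
pca <- prcomp(iris[, 1:4], scale. = TRUE)

summary(pca)  # proportion of variance explained by each component
biplot(pca)   # quick look at the scores and the variable loadings
```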

Rasmus Bååth’s done a tutorial on Laplace Approximations in R (hat tip to Matt Moores for this one). Laplace Approximations are an alternative to MCMC simulation that can provide good approximations to well-behaved posterior densities in a fraction of the time. The tutorial also deals with reparameterisation for when you’ve got bounded parameters (such as binomial proportions). As a piece of trivia, Thiago (above) is based at NTNU, where R-INLA is developed.
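To give a flavour of the technique, here’s my own minimal sketch (with made-up data, not Rasmus’s code) of a Laplace approximation for a binomial proportion, reparameterised to the logit scale:

```r
# Hypothetical data: y successes out of n trials, uniform prior on theta.
y <- 7; n <- 20

# Log posterior of phi = logit(theta), including the Jacobian term
# log(theta) + log(1 - theta) from the change of variables.
log_post <- function(phi) {
  theta <- plogis(phi)
  dbinom(y, n, theta, log = TRUE) + log(theta) + log(1 - theta)
}

# Find the posterior mode and the curvature (Hessian) at the mode.
fit <- optim(0, function(phi) -log_post(phi), method = "BFGS", hessian = TRUE)
mode_phi <- fit$par
sd_phi   <- sqrt(1 / fit$hessian[1, 1])

# The approximation is Normal(mode_phi, sd_phi^2) on the logit scale;
# transform draws back to the probability scale.
theta_draws <- plogis(rnorm(1e4, mode_phi, sd_phi))
quantile(theta_draws, c(0.025, 0.5, 0.975))
```

Working on the unbounded logit scale is exactly the reparameterisation issue the tutorial covers: a Gaussian is a much better fit there than on the constrained (0, 1) scale.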

I’m at the emac2013 conference this week. We’re about halfway through day one (of three) of the talks and there’s already been some fascinating stuff. Professor Robert Mahony (ANU) gave a talk showing that the development of more advanced unmanned aerial vehicles (UAVs, drones) involves some quite complex but elegant mathematics based on Lie group symmetries, rather than just coming up with cooler robots. Hasitha Nayanajith Polwaththe Gallage (QUT) showed some really interesting particle-method (mesh-free) modelling, where forces and energies were used to determine the shape of a red blood cell that had just ejected its nucleus.


2 thoughts on “Posterior Samples”

  1. Dan Simpson

    I’m not 100% sure what you mean when you say that PCA is more maths than stats. I’ve always thought of it as the reasonably sound principle that most of what the data is telling you is going to be in the strongest part of the signal, so only using that part in some sense robustifies you against weird noise. You can even make it model-based if you want (although this needs Stiefel manifolds, so I don’t think anyone really wants to :p)

    You can also get all sorts of information about the sparsity of the resulting signal (e.g. PLS is better than PCA regression). You could also, probably, link it to ABC-type approaches (where you replace your data by a summary and do exact [epsilon = 0] inference).

    But it’s not Bayesian (unless you do the manifold rubbish), though that’s not always a bad thing.
