I didn’t have any one-on-one meetings with Professor Robert, but he did address two of my stats research group’s meetings and gave two seminars as part of his AMSI talk. In addition to this, there were a few hours one afternoon where QUT statisticians presented some of the Bayesian computation work done here.

Christian’s first talk to BRAG focused on model choice and comparison, and I wrote about it last week. The second talk to BRAG (Storify) finished off the themes on improving MCMC methods that he had mentioned. We picked up where we left off, with a discussion of simulated tempering. The choice of *α* is probably the key concept here, and Christian recommends choosing a scale based on the number of data points (if π_{0} is the original likelihood, try tempering with π = π_{0}^{1/n} for *n* data points). Allowing *α* to take multiple values (or even have a distribution) means that we now have a whole space of target posteriors. By moving around in this space of posteriors we are less likely to end up with sticky Markov chains, because a sharp peak in one posterior will likely not be as sharp in another. Robert gave a reference to a paper by Radford Neal which deals with tempering [1].
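To make the idea concrete for myself, here is a toy sketch of simulated tempering on a made-up bimodal target (everything here is my own invention, not from the talk). A careful implementation would also tune pseudo-prior weights for the temperature levels so the chain spends a sensible fraction of time at each one; I omit that and just use uniform weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bimodal target: mixture of two well-separated normal kernels.
def log_target(x):
    return np.logaddexp(-0.5 * (x + 4) ** 2, -0.5 * (x - 4) ** 2)

# Tempered family: log pi_alpha(x) = alpha * log pi(x); alpha = 1 is the
# original target, smaller alpha flattens the posterior.
alphas = np.array([1.0, 0.5, 0.2, 0.05])

def simulated_tempering(n_iter=50_000, step=1.0):
    x, k = 0.0, 0           # current state and temperature index
    samples = []
    for _ in range(n_iter):
        # Random-walk Metropolis update of x at the current temperature.
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < alphas[k] * (log_target(prop) - log_target(x)):
            x = prop
        # Propose moving to a neighbouring temperature level. With uniform
        # pseudo-prior weights the acceptance ratio is target(x)^(a' - a).
        k_prop = k + rng.choice([-1, 1])
        if 0 <= k_prop < len(alphas):
            if np.log(rng.uniform()) < (alphas[k_prop] - alphas[k]) * log_target(x):
                k = k_prop
        if k == 0:          # only keep draws made at the original target
            samples.append(x)
    return np.array(samples)

draws = simulated_tempering()
```

A plain random-walk chain with this step size would almost never cross the valley between the modes at −4 and +4; visiting the flatter tempered posteriors lets the chain wander across and return.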

Another way to ensure movement throughout the posterior space is either to subsample the data or to perturb it with a small amount of noise. Either change shifts the posterior (ever so slightly) and helps the sampler explore the space to find the modes. A change of variable is another way to flatten out the posterior.
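A tiny illustration of the perturbation idea, on a made-up normal-location example of my own (nothing here is from the talk): each perturbed copy of the data gives a slightly different posterior surface, so the mode a sampler is pulled towards jitters from replicate to replicate.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(3.0, 1.0, size=1000)   # hypothetical dataset
n = len(data)

# Posterior mode (flat prior, unit-variance normal likelihood) for several
# noise-perturbed replicates of the data, found by a crude grid search.
grid = np.linspace(2.5, 3.5, 501)
modes = []
for _ in range(5):
    noisy = data + 0.1 * rng.normal(size=n)          # small perturbation
    logpost = np.array([-0.5 * np.sum((noisy - g) ** 2) for g in grid])
    modes.append(grid[np.argmax(logpost)])
```

Subsampling works the same way: a random subset of the data (with the log-likelihood rescaled by n/m) gives a flatter, slightly shifted posterior that is easier to move around in.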

Parallel computation is another option: run the same model from different starting values and use the posterior density (a one-dimensional value for each simulation) to get a sense of what the maximum of the posterior might be. Low-density chains can be dropped in this way, as they are not near the global mode of the posterior. I find this approach quite interesting, and if I had a bit more time I’d try implementing it. Some exploration of low-density modes is just burn-in, but if a chain is trapped at a local mode (and tempering, etc. isn’t getting it out of there) then perhaps it ought to be dropped.
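If I did implement it, I imagine it would look something like this sketch (toy target, arbitrary cutoff, and all names my own invention): scatter chains over the space, run them independently, then rank them by their final log-posterior value and keep only the ones near the top.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bimodal target with one clearly dominant mode at +5.
def log_post(x):
    return np.logaddexp(np.log(0.99) - 0.5 * (x - 5) ** 2,
                        np.log(0.01) - 0.5 * (x + 5) ** 2)

def rw_chain(x0, n_iter=2000, step=0.5):
    """Random-walk Metropolis; return the final state and its log-density."""
    x = x0
    for _ in range(n_iter):
        prop = x + step * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
    return x, log_post(x)

# Run many chains from scattered starting values (these are independent,
# so in practice they would be farmed out to separate cores) ...
starts = rng.uniform(-10, 10, size=20)
finals = [rw_chain(s) for s in starts]

# ... then use the one-dimensional log-posterior values to drop chains
# stuck at low-density local modes. The cutoff of 2 nats is arbitrary.
best = max(lp for _, lp in finals)
kept = [x for x, lp in finals if lp > best - 2.0]
```

Chains that drift into the minor mode at −5 end up several nats below the best chain and get discarded; the survivors cluster around the dominant mode.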

Robert then went on to talk about using auxiliary variables to create joint densities in which we can move along level sets of the posterior to get large jumps in the parameter of interest. This is probably where I got lost, but I did manage to get that there’s a paper by Girolami and Calderhead [2] where this is formalised as Langevin and Hamiltonian Monte Carlo. Some of the members of BRAG will be presenting LMC/HMC at a future meeting and I look forward to it. It’s always interesting to find out that so much Bayesian computation started out as a branch of physics (the Metropolis(-Hastings) algorithm as a solution for the Ising/Potts model, for example).
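In the meantime, my rough understanding of the Langevin flavour is the Metropolis-adjusted Langevin algorithm (MALA): propose a gradient-informed step, then correct with a Metropolis accept/reject. Here is a toy sketch of my own on a standard normal target, chosen so the gradient is trivial; it is just my reconstruction of the textbook algorithm, not anything specific from the talk or the paper:

```python
import numpy as np

rng = np.random.default_rng(3)

# Standard normal target: log pi(x) = -x^2/2 (up to a constant).
def log_post(x):      return -0.5 * x ** 2
def grad_log_post(x): return -x

def mala(n_iter=20_000, eps=0.5):
    """Metropolis-adjusted Langevin: drift each proposal along the gradient
    of the log-posterior, then accept/reject to preserve the exact target."""
    x, out = 0.0, []
    for _ in range(n_iter):
        mean_fwd = x + 0.5 * eps**2 * grad_log_post(x)
        prop = mean_fwd + eps * rng.normal()
        mean_bwd = prop + 0.5 * eps**2 * grad_log_post(prop)
        # Proposal density is asymmetric, so the Hastings correction needs
        # both q(prop | x) and q(x | prop) (constants cancel).
        log_q_fwd = -0.5 * ((prop - mean_fwd) / eps) ** 2
        log_q_bwd = -0.5 * ((x - mean_bwd) / eps) ** 2
        if np.log(rng.uniform()) < log_post(prop) - log_post(x) + log_q_bwd - log_q_fwd:
            x = prop
        out.append(x)
    return np.array(out)

draws = mala()
```

HMC takes the same gradient information further by simulating Hamiltonian dynamics with an auxiliary momentum variable, which is where the physics connection comes in.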

I’ve used Storify to publish my notes from the two AMSI sessions (simulation, ABC for model choice) that were part of Christian’s tour. I missed the one at UQ about Rao-Blackwellisation, unfortunately. Some of the slides from his talks are available on Slideshare.

The open floor after the ABC talk was quite good, as it gave me a chance to see what people are working on. While I have seen a few pyMCMC talks before, there seems to be quite a lot of energy from Chris Strickland and Clair Alston for the development of new models in the pyMCMC framework. I’ve previously spoken with Chris about using it for spline models and spatial modelling, and I hope that once I finish my PhD I’ll have some time to check out pyMCMC in greater depth. If you want to have a look, you can clone it from the git repository on bitbucket. It was also interesting to see the work Ewan Cameron and Tony Pettitt have been doing on ABC for astronomical data (evolution of galaxies, etc.) and the work of Chris Drovandi, James McGree, Liz Ryan and others on using Sequential Monte Carlo with ABC for adaptive clinical study design.

All in all, a good week of seminars with lots of interesting ideas. Bayes on the Beach 2012 is coming up. This is one of my favourite events of the year because it’s a chance to get away from uni with a bunch of statisticians, discuss some interesting problems and do little workshops. The beach is also quite a nice part of it, and there’s usually some sort of games night (last year I contributed a game, basically Chinese Whispers mixed with Pictionary, and we all had a lot of fun trying to figure out what was going on). It’s also a really good chance for early PhD students to present a poster in a very friendly environment, so they can get used to talking to others about their work.

I’ve also managed to land myself on the list of presenters for the School of Mathematical Sciences’ “Postgrad Day”. While I’m not a member of the school, two of my supervisors are, and I’ve been working on statistical methods during my PhD. I will present the work I’ve been doing with my Finnish collaborators in a 20 minute slot. This will be a sort of practice run for my final seminar, and is what I should have submitted to ISBA 2012 (but when abstracts closed we weren’t nearly ready enough). It’ll be good to have an audience of mathematicians and statisticians, because I always feel awkward explaining statistical research to aerosol scientists, chemists, physicists and microbiologists. Perhaps I need to focus a bit more on pitching a particular aspect of my work to the audience I’ve got.