Educational testing and statistical testing

When the Australian Government introduced NAPLAN and the MySchool website, I was very worried about league tables being drawn up, schools being ranked, poorer schools being stigmatised as “bad” and parents opting not to send their kids there. I don’t have a problem with benchmarks for students, letting parents know how their kids are developing and ensuring that governments are able to target their resources where they’re needed. What I do have a problem with is the bizarre notion that “accountability” means the government throwing good policy, teachers and kids under a bus to appease parents or the more neoliberal elements of the national media.

If it’s not clear, I have grave concerns about the impact on our education system of publicly releasing nationwide summary statistics of how well students are doing at each school. I think the USA’s focus on standardised testing and the awful notion of “merit-based pay” threaten the integrity of public education. Having said that, the collection of this data and its appropriate analysis provide governments with a very good tool for assessing their policies and determining where to spend the finite amount of money they have.

This post from Quantum Forest shows how a naive analysis of literacy versus socioeconomic status in New Zealand can give a very misleading picture. To cut a long story short, there’s a lot of variability that plotting a trend line or a few averages doesn’t take into account. The post is worth a read; it doesn’t go into a huge amount of statistical detail, but it explains, with some well-described R code, how boxplots give quick summaries that show the variability inherent in the data. The author also discusses how this exploration can lead to appropriate modelling that takes the variability into account.
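
To give a flavour of that exploration, here is a rough sketch in R on simulated data (not the Quantum Forest author's code, and not the real New Zealand figures) showing why per-group boxplots are more honest than a single trend line:

```r
# Simulated stand-in for school-level literacy scores by socioeconomic decile
set.seed(42)
n_schools <- 500
decile <- sample(1:10, n_schools, replace = TRUE)
# Mean score rises with decile, but the within-decile spread is large
score <- 50 + 2 * decile + rnorm(n_schools, sd = 15)
dat <- data.frame(decile = factor(decile), score = score)

# Boxplots per decile: the medians creep upwards, but the boxes overlap heavily,
# which a fitted trend line or a table of averages completely hides
boxplot(score ~ decile, data = dat,
        xlab = "Socioeconomic decile", ylab = "Literacy score")

# The exploration then suggests a simple model, reported with interval estimates
fit <- lm(score ~ as.numeric(as.character(decile)), data = dat)
confint(fit)  # uncertainty around the slope, not just a point estimate
```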

This is the sort of exploration that should form the basis of any academic analysis of any data, and I’m grateful to the author for explaining it simply and providing R code and the publicly available data. To me, statistics is all about quantifying uncertainty; Bayesian statistics even more so. Confidence and credible intervals are not just something we calculate to check that something’s significant at the 5% level; they’re how we represent how certain we are about our parameter estimates. Not stating the uncertainty in one’s analysis may as well be a cardinal sin, and no one should get away with presenting a plot or a parameter value without an estimate of its variability. It doesn’t matter if you’re a first-year student, research scientist, public servant or journalist: you need to include uncertainty, or else you’re lying to your audience (a lie of omission, but still a lie).
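
As a concrete (and entirely made-up) illustration of reporting uncertainty rather than bare numbers, the sketch below computes group means together with 95% confidence intervals and draws the intervals on the plot. Small groups get wide intervals, which is exactly the information a lone average conceals:

```r
# Hypothetical data: three groups of very different sizes
set.seed(1)
groups <- rep(c("A", "B", "C"), times = c(8, 40, 15))
values <- rnorm(length(groups), mean = c(A = 60, B = 62, C = 64)[groups], sd = 10)

# Mean and 95% confidence interval for each group
summ <- do.call(rbind, lapply(split(values, groups), function(x) {
  ci <- t.test(x)$conf.int
  data.frame(mean = mean(x), lower = ci[1], upper = ci[2])
}))

# Plot the means with error bars instead of bare points
plot(1:3, summ$mean, ylim = range(summ$lower, summ$upper), xaxt = "n",
     pch = 19, xlab = "Group", ylab = "Mean value")
axis(1, at = 1:3, labels = rownames(summ))
arrows(1:3, summ$lower, 1:3, summ$upper, angle = 90, code = 3, length = 0.05)
```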

With a bit of luck, there’s a statistician at a major news service who can make use of this when the next MySchool report comes out, and some statisticians in the Department of Education who can apply good statistical modelling when advising the Minister on funding allocation.
