They say in football that the league table never lies.
Fans and players may begrudge a decision or two, or feel that luck did not go their way in a match. But even the most ardent supporter will accept their team’s league position after the final game as a true reflection of how good they were over the season.
Every year a range of ‘league tables’ or ‘world rankings’ of universities are published. It always surprises me how much news they generate, even though the standings don’t seem to alter that much. How much more fun would they be if there were a relegation zone? Or points deducted?
Research activity plays a significant role in determining the respective rankings of universities in many of these lists. Depending on the list, this activity is defined by research income, outputs or staff profile. None, as far as I am aware, looks at public engagement.
Why not?
In fact, why couldn’t a ranking be produced of all UK science organisations based on their public engagement in science activities?
It might be based on a range of factors: their annual investment in such work; this investment expressed as a proportion of their overall expenditure; how this has changed over time; their reach in terms of audience; their impact in terms of changing behaviour or generating knowledge; and whether they have an embedded programme and/or a strategic commitment to it at the highest level.
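Purely to make the idea concrete, here is a minimal sketch of how such a composite score might be computed, assuming each factor has already been normalised to a 0–1 scale. The factor names, weights and figures are all hypothetical placeholders of my own, not a proposed methodology.

```python
# Hypothetical sketch of a composite public-engagement score.
# Factor names, weights and figures are illustrative placeholders only.

FACTORS = {
    "investment": 0.25,           # annual spend on public engagement (normalised)
    "investment_share": 0.20,     # spend as a proportion of total expenditure
    "trend": 0.10,                # change in spend over time
    "reach": 0.20,                # audience reached
    "impact": 0.15,               # evidence of changed behaviour / new knowledge
    "strategic_commitment": 0.10, # embedded programme, commitment at the top
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each normalised to 0-1."""
    return sum(FACTORS[name] * scores.get(name, 0.0) for name in FACTORS)

# Two made-up organisations, ranked by score.
orgs = {
    "Org A": {"investment": 0.8, "investment_share": 0.6, "trend": 0.7,
              "reach": 0.5, "impact": 0.4, "strategic_commitment": 1.0},
    "Org B": {"investment": 0.5, "investment_share": 0.9, "trend": 0.4,
              "reach": 0.8, "impact": 0.6, "strategic_commitment": 0.5},
}
for name, factor_scores in sorted(orgs.items(),
                                  key=lambda kv: -composite_score(kv[1])):
    print(f"{name}: {composite_score(factor_scores):.2f}")
```

Even a toy exercise like this shows where the real arguments would lie: not in the arithmetic, but in agreeing the weights and in normalising each factor fairly across very different organisations.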
All of these factors raise complex issues – simply reaching an agreed definition for each of them is fraught with difficulty. And, as the variation in the quality of ‘impact’ evidence provided by institutions in the recent REF pilot exercise demonstrates, some institutions are not even on the same planet as the rest of us when it comes to what they think public engagement is.
But at least a league table or scorecard might help generate greater transparency about what science organisations are up to: whether their work represents good value for money, and indeed whether they are any good at it, or as good as they claim to be. And it might help us assess how they stack up against one another in terms of the effectiveness of their public engagement work.
A quick scan over the weekend of the various annual reports and accounts of the major funders was revealing, not least in how little depth one can reach.
The Wellcome Trust stands head and shoulders above the rest, perhaps because it has lived with public benefit reporting under charity law for some time. The big four universities are obscure, to say the least, despite some nice stories here and there. Even those universities with public engagement beacons shine a little dimmer than they should.
And then there’s the Royal Society, which continues to frustrate in its inability to give a sense of the whole as opposed to its individual parts. Until it says otherwise, one could be forgiven for thinking its much-lauded 350th anniversary – well, lauded by Melvyn Bragg anyway – was little more than a fundraising exercise to create the science equivalent of Downton Abbey (otherwise known as Chicheley Hall). But I suppose it does do a very good summer exhibition, so that’s OK.
In fact, when you look at the individual efforts of these institutions, it strikes you that, while they have every right and cause to invest in their own programmes, some benefit and greater impact could be achieved by pooling some of this investment, in money if not in effort. It all seems so… well, er, fragmented at times.
Time to get off my hobby horse. But I do think there is merit in an exercise aimed at producing an annual league table of public engagement by science organisations.
Simplistic, rudimentary, yes. But would it be lying at the end of the day? I very much doubt it.