What do university rankings and output measurement tell us?
I must first confess that I personally have a love-hate relationship with rankings. On the one hand, they can be informative when it comes to benchmarking and developing our activities. On the other hand, they tend to carry many validity and reliability problems in both the measurements applied and the underlying data, which in turn calls their whole existence into question: if they are not valid and reliable, why bother recording "random noise"? Do the rankings measure learning? No. Do the rankings measure research quality? Yes, but not in a consistent way. Such findings have long been reported by researchers in the Research Unit for the Sociology of Education (http://ruse.utu.fi/home), for instance.
So, is the tail wagging the dog? It feels as though the very basis of the rankings is to create competition, often unnecessarily: to stage a 'horse race' between universities that boosts their attractiveness in the eyes of prospective students, their parents, academic professionals, and external stakeholders, including funders. The Ministry of Education tends to be keen on all kinds of figures as well. While it is their task to make decisions based on facts, government officials do not always seem to recognize the problems of data collection regarding the quantitative and qualitative aspects of science in action. Instead of discussing and policing university and journal rankings, we should be discussing new ideas and theoretical and empirical contributions in our (inter)disciplinary fields, as well as what constitutes quality in the first place. In my view, at the University of Turku we have succeeded in avoiding an unnecessary emphasis on rankings and have chosen our own interdisciplinary path, recognizing the risks involved but also the enormous scientific potential underlying this choice.
There is plenty of evidence of the fundamental flaws of university rankings and of the perverse effects of performance measurement systems in the university sector, at least when they are applied in inappropriate ways. The incentive systems they create may lead to situations where the actual outcomes do not match the intended ones. For example, incentives that reward sheer publication counts may produce masses of substandard, incremental papers, degrade the quality of peer review, and ultimately foster 'bad science'. Emphasizing quantitative measures in education, in turn, has been shown to lead to reduced coursework, grade inflation, and an overall emphasis on short-term learning. Yet performance measurement systems remain a dominant management mechanism in universities globally. I think we at the University of Turku have skillfully managed to avoid most of these pitfalls.
The figures and rankings based on such systems are intrinsically never more than proxies, some more valid than others, and all somewhat problematic in terms of reliability. In the big picture, however, I think it is no coincidence that world-famous universities rank in the top tier across all the ranking systems: the rankings must tell us something about the quality of research, education, and overall impact. It is another question whether these rankings fit the idea of universities as the ultimate, and perhaps last, defenders of objective knowledge creation and value-free civilization. It is often forgotten that creativity and innovation require time; these virtues should not be subject to short-term measurement and hasty evaluations. Many non-academics nowadays tend to think that those who fund universities (including all taxpayers) have the right to decide their objectives, 'speed of delivery', and role in society. I disagree with that idea.
Markus Granlund
Dean, Professor
Turku School of Economics