European research funders, universities and other research organisations are currently engaged in an overdue discussion of how research assessment can be improved.
The hope is that this process, led by the European Commission, can generate a Memorandum of Understanding (MoU) by the summer. If enough institutions and research organisations sign it, the MoU could become one of the biggest levers of change for the European Research Area, designed to better align Europe’s research systems. But with that tight time frame in mind, it is not too soon to think about possible unintended consequences.
The MoU seeks to put into action the Commission’s recent scoping report on reforming how we assess research quality. The key drivers are the rapid changes in scientific practice. The task of scientists has become broader, embracing both public engagement and the generation of impact. Teamwork is becoming more important, as are interdisciplinarity and open science. And it has long been clear that quantitative indicators such as the Journal Impact Factor (designed originally to guide library acquisitions of journals) are misleading measures of the quality of individual academic papers.
The debate on how to reflect all this in research assessment has been raging in universities and research-performing organisations across the continent. How can we better capture high-quality research (and its impact) in all its aspects? And how can we foster qualitative peer evaluation while relying less on metrics and misleading quantitative indicators?
It is important that the Commission is trying to foster broad agreement on the answers. Common parameters (that still allow for institutional and national diversity) will boost the transparency and interoperability of scientific recognition across Europe. If we get it right, the reform can make research careers fairer – even if, inevitably, it is scorned by those who lose out from the changes.
But whether the MoU will genuinely change the system for the better is far from assured. To begin with, it will not create a single new post. Hence, it is unlikely to address dissatisfaction among excellent early career researchers over the difficulty of gaining permanent employment. And it is unlikely to reduce the frustrations of those deserving promotion but not getting there. The fact that there are simply not enough positions to go round will not change.
Nor will the fact that there is not enough grant funding to go around. Through the MoU, institutions will commit to recognising and evaluating contributions to high-quality output in all its dimensions, but for as long as underfunded universities rely on third-party financing, current financial pressures on researchers to attract grants will persist. External income (and perhaps other quantitative indicators, too) may continue to be valued more than other research contributions.
In many European systems, governments have incentivised universities to perform according to blunt metrics such as the h-index or the number of publications in the Web of Science. This is likely to be incompatible with the MoU, so adopting it will only be possible if these countries’ governments change their resource allocation models accordingly.
There are only three ways this could be done. One is to replace the metrics-driven approach with a full-scale qualitative review of research performance, such as the UK’s Research Excellence Framework (REF). However, it is hard to see any appetite on the continent for such a costly and intrusive exercise.
The second option is to stop allocating resources according to research performance altogether. However, this would ramp up the importance of other factors, such as student numbers, potentially disincentivising the pursuit of research excellence and diminishing the amount of funding made available by institutions for research.
The third option would be to increase the proportion of research funded through competitive calls based on quality assessment. But given the low success rates and the large amounts of time absorbed by grant applications, as well as the proliferation of short-term research positions this would entail, it is hard to see how this option could be in the interests of researchers or universities.
A final potential negative consequence of the MoU to be mindful of relates to language. There is significant support in the academic community (and in some member states) for making support for multilingualism explicit. This would be good news for researchers in small countries who publish in minority languages, since it would lift the existing pressure to publish in widely read languages and publications, which mostly means English.
But were this reform to decrease the drive to publish in major European languages, it would reduce the capacity of some European research systems to address wider continental and global issues. The relatively small number of researchers who can converse with and assess each other in minority languages could still, of course, attest to each other’s world-class quality. But this would not change the fact that the scientific disconnect across Europe would have deepened. It is an outcome we must avoid at all costs.
It is already hugely challenging to articulate the right criteria for assessing research quality. But the difficulty of this task should not prevent us from discussing what the wider consequences of change might be. If we want this reform to succeed, we must ensure that it strengthens not only researchers, but also the science ecosystem on which they rely.