I haven’t seen much commentary by nurses or midwives on the forthcoming Research Excellence Framework (REF), so I thought I’d make a start.
For those coming to this afresh, the REF has replaced the Research Assessment Exercise (RAE) as the mechanism through which the quality of research conducted in the UK’s universities will be weighed up. The results will provide the basis for the recurring allocation of quality-related (QR) research funding to higher education institutions for a period of years thereafter (until the whole exercise, or a version of it, is repeated). As has been the case with the RAE, the results from the REF will also be used to rank universities and the departments located within them.
Universities will make their formal submissions to REF 2014 by the end of November this year. These will be made to one of 36 ‘units of assessment’ (UoA), each of which is part of a larger main panel. Nursing had its own UoA in RAE 2008, but this time around is subsumed within a larger UoA also including the Allied Health Professions, Dentistry and Pharmacy.
Making a submission means providing information on the vitality and sustainability of the research environment. It also means giving details of individual researchers, and up to four separate research outputs for each, where an ‘output’ will typically (but not necessarily) be a paper published in a journal. For the first time the wider impact of research, judged in terms of its reach and significance beyond academia, will also be assessed.
Of these three components it is outputs which will carry the most weight, accounting for 65% of the overall quality profile to be awarded to each submission. Impact is weighted at 20%, and environment at 15%. Given their weighting, it is outputs that I want to concentrate on in this post.
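To illustrate how those weightings work, here is a minimal sketch of combining the three component sub-profiles into an overall quality profile. Only the 65/20/15 weights come from the REF documentation; the sub-profile percentages are invented numbers purely for the example:

```python
# Illustrative combination of REF sub-profiles into an overall quality profile.
# Only the 65/20/15 weights are from the REF rules; the sub-profile
# percentages below are hypothetical figures for demonstration.

weights = {"outputs": 0.65, "impact": 0.20, "environment": 0.15}

# Hypothetical sub-profiles: percentage of activity judged at each star level.
sub_profiles = {
    "outputs":     {"4*": 20, "3*": 40, "2*": 30, "1*": 10},
    "impact":      {"4*": 30, "3*": 50, "2*": 20, "1*": 0},
    "environment": {"4*": 25, "3*": 45, "2*": 25, "1*": 5},
}

# Weighted average at each star level gives the overall profile.
overall = {
    star: sum(weights[c] * sub_profiles[c][star] for c in weights)
    for star in ["4*", "3*", "2*", "1*"]
}
print(overall)
```

With these made-up inputs the overall profile works out at roughly 22.75% at 4*, 42.75% at 3*, 27.25% at 2* and 7.25% at 1*, which shows how heavily the outputs sub-profile dominates the result.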
Each UoA expert panel will have the task of reviewing outputs using this five-point scale:
| Rating | Definition |
|---|---|
| 4* | Quality that is world-leading in terms of originality, significance and rigour. |
| 3* | Quality that is internationally excellent in terms of originality, significance and rigour but which falls short of the highest standards of excellence. |
| 2* | Quality that is recognised internationally in terms of originality, significance and rigour. |
| 1* | Quality that is recognised nationally in terms of originality, significance and rigour. |
| Unclassified | Quality that falls below the standard of nationally recognised work, or work which does not meet the published definition of research for the purposes of this assessment. |
The Allied Health Professions, Dentistry, Nursing and Pharmacy UoA has a Chair (Professor Hugh McKenna, an academic mental health nurse at the University of Ulster and Chair of the Nursing and Midwifery UoA for RAE 2008), a Deputy Chair (Professor Julius Sim) plus 33 members and three assessors (there to ‘extend the breadth and depth of expertise on the sub-panels as required to carry out the assessment’). Of these 38 individuals I count 13 with nursing and/or midwifery backgrounds. Collectively this panel will be required to assess the quality of all outputs which come before them, and to do so ‘with a level of detail sufficient to contribute to the formation of a robust sub-profile for all the outputs in that submission’ (I’ve extracted this statement from the Panel Criteria and Working Methods document).
Expert review is fine, but in the context of the REF I think there are problems with how this is going to work. In Annex E of the RAE 2008 Manager’s Report the total number of outputs received by each RAE 2008 UoA is given. In the table below I’ve brought together the figures for each of the separate UoAs which, for REF 2014, are combined within UoA A3:
| RAE 2008 UoA | Outputs submitted to RAE 2008 |
|---|---|
| Nursing and Midwifery | … |
| Allied Health Professions and Studies | … |
Higher education institutions have already responded to a survey inviting them to indicate their intentions to return researchers to the REF, and a summary of the findings can be found here. This suggests that, across Main Panel A (which includes UoA A3 for Allied Health Professions, Dentistry, Nursing and Pharmacy), 2% fewer people will be returned than were returned in RAE 2008. So let’s assume a uniform 2% drop in outputs across all of Main Panel A’s UoAs compared with RAE 2008, which (based on the 12,598 figure above) suggests a total return to UoA A3’s expert reviewers of some 12,346 individual outputs. That’s 12,346 journal articles, book chapters, reports to funding bodies (and so on) to be read and quality graded by a panel of 38 people. If each output is considered by two panel members, then each person will have around 650 items to consider throughout the period from January to December 2014. For a cross-panel comparison, I note that this is a figure remarkably close to the 640 items the blogging physicist Peter Coles estimates will be read and reviewed by members of the Physics UoA.
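For anyone who wants to check the back-of-envelope arithmetic, here it is in a few lines. The inputs (the 12,598 RAE 2008 total, the assumed 2% drop, the 38-person panel, and double reading of each output) are exactly the assumptions stated above:

```python
# Rough reviewer workload estimate for REF UoA A3,
# using the figures assumed in the text above.

rae_2008_outputs = 12_598   # total outputs across the combined UoAs in RAE 2008
drop = 0.02                 # assumed uniform 2% fall in submissions
panel_size = 38             # chair, deputy chair, 33 members, 3 assessors
readers_per_output = 2      # assume each output is read by two panel members

ref_2014_outputs = round(rae_2008_outputs * (1 - drop))
items_per_reviewer = ref_2014_outputs * readers_per_output / panel_size

print(ref_2014_outputs)          # 12346
print(round(items_per_reviewer)) # 650
```

Spread over a twelve-month assessment period, 650 items works out at well over a dozen outputs per reviewer per week, before any other panel business.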
That’s a whole lot of reading, reviewing and ranking. It’s also only one part of the work that REF panel members will have to do. What chance, then, that all 12,000+ individual outputs will be examined in close detail? Very little indeed. Perhaps abstracts (the 200-or-so-word summaries appearing at the start of published papers) will be crucial pieces of information on which assessments are based? Or possibly papers will be sampled, with some being read in relatively greater depth than others? Who, at this stage of the process, knows? What we do know is that in undertaking their assessments of quality, expert reviewers will have access to supporting information, including citation data provided via Scopus. This suggests that the number of times submitted outputs have been cited in subsequent publications is likely to have a bearing on assessments, even though the relationship between citations and research quality is a complex one. And, whilst we know from the Panel Criteria and Working Methods document that ‘No sub-panel will make use of journal impact factors, rankings or lists, or the perceived standing of the publisher, in assessing the quality of research outputs’, it may in reality be difficult for hard-pressed reviewers not to take informal account of journal titles in giving a view.
So the sheer volume of outputs presents a challenge. I also happen to think that, even with the benefit of time, achieving consistency in quality assessment is incredibly hard. Nothing in my experience tells me that different reviewers, even with similar academic backgrounds, will necessarily agree on a journal article’s status as ‘world leading’, ‘internationally excellent’, ‘internationally recognised’ or ‘nationally recognised’. These are not self-evident categories, and the distinctions to be made on the grounds of ‘originality, significance and rigour’ are fine indeed. The problem of assessment inconsistency is also magnified in the case of REF UoA A3 as this is a panel bringing together reviewers from academic fields which are remarkably diverse. Unless I have missed something important, I see no stated process for the alignment of UoA A3 reviewers to outputs based on disciplinary background. So papers by nurses reporting explorations of service user experiences using qualitative methods might (for example) be read and reviewed by pharmacists with expertise in the laboratory development of new drugs. This is odd, to say the least. So odd, in fact, that I’m now wondering whether, when panel A3 begins its work, members will do something to make sure that each output is assessed by people who really know the area within which it sits. How else can the reviews be considered ‘expert’?
That, I think, will do it for today, and thanks for reading. Perhaps I’ll say something in a later post on the ‘impact’ component of REF.