I very much hope that UK readers of this blog have enjoyed this year’s summer (which, at least, coincided with the early May bank holiday weekend). Right now it feels as though we’ve been plunged back into autumn, at least here in South Wales. Wind and rain are everywhere.
Here’s a wordcloud used during Friday morning’s teaching with students of mental health nursing, during which I shared something about COCAPP and other (past and present) research projects involving people working in the Cardiff School of Nursing and Midwifery Studies. One of the things I did was to draw students’ attention to my paper on complex trajectories in community mental health, as previously blogged about here. Unrelatedly, towards the end of Friday I also caught sight of some newly delivered reviewers’ feedback on a grant proposal on which I am a co-applicant. One of the points the reviewers made was to encourage us, as a research team, to plan to do more to get future findings into services and practice.
The first of these otherwise unconnected events was a modest attempt to close the gap between research and education. The second was a reminder of the importance of closing the gap between research and the world of health and social care. So with both experiences in mind this post is about getting research out of the hands of academics and into the hands of others who might use it: practitioners and students, service managers, policymakers, users, carers. Coming not long after my recent post on the assessment of outputs in the Research Excellence Framework, this post might also be thought of as an excursion into ‘impact’.
Within single university departments it ought to be reasonably straightforward to bring research and teaching closer together. This said, I can still clearly remember co-presenting with Cardiff colleagues at a nursing research conference in London in the late 1990s only to be told, by a student who had travelled from our own school, that she had had no previous idea who we were or that the research projects we had discussed were ongoing. That was a salutary moment, and since then I have taken opportunities to directly bring research (mine, my colleagues’, other people’s) into the modules I have led and contributed to. And of course, I am hardly alone in doing this kind of thing. But across the whole higher education sector demarcations are growing between ‘teachers’ and ‘researchers’, with universities routinely differentiating between staff on the basis of their expected roles. If researchers become less involved in teaching, then we run the risk that naturally occurring opportunities for projects to be brought into the classroom, by those who are running them, will dwindle.
But if integrating research and teaching can be challenging, then getting research findings out of universities’ doors for the benefit of all is harder still. In the health and social care fields the publication of findings in peer reviewed journals comes with no guarantee that these will be read, or used to inform anything which happens outside of academia. In nursing (and I imagine in many other practitioner disciplines too) this has often been seen as part of the ‘theory/practice gap’ problem. Nurses have spent a long time agonising over this, and typing some suitable search terms into Google Scholar produces some 200,000 documents (that’s the slightly obscured number circled in red in this screenshot) evidently devoted to its examination.
Nurses are not alone in having concerns of this type. The Cooksey review of UK health research funding talked about tackling the ‘translation gap’ through getting ‘ideas from basic and clinical research into the development of new products and approaches to treatment of disease and illness’, and at the same time ‘implementing those new products and approaches into clinical practice’. Universities are increasingly urged to do better with their ‘knowledge exchange’ activities. And, as we know, the Research Excellence Framework 2014 has introduced the idea of assessing ‘impact’.
‘Impact’ in the REF2014 Assessment framework and guidance on submissions document is defined ‘as an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’. It’s about research being ‘felt’ beyond universities, and assessing this. The assessed bit is important in the formal REF exercise because impact (presented using case studies, and counting for 20% of the overall quality profile to be awarded to each individual submission) will be graded using this scale:
Four star: Outstanding impacts in terms of their reach and significance.
Three star: Very considerable impacts in terms of their reach and significance.
Two star: Considerable impacts in terms of their reach and significance.
One star: Recognised but modest impacts in terms of their reach and significance.
Unclassified: The impact is of little or no reach and significance; or the impact was not eligible; or the impact was not underpinned by excellent research produced by the submitted unit.
As in the case of the assessment of outputs, I am struck by the fine judgements that will be required of the REF’s experts. I suggest that one time-pressed person’s ‘very considerable’ may well turn out to be another’s ‘considerable’, or even ‘modest’.
Issues of reliability aside, the inclusion of ‘impact’ in REF2014 has got people thinking, again, about how to close some of the gaps I have referred to above. For researchers in health and social care there has been new work to do to demonstrate how findings have been felt in policymaking, in services and in the provision of care and treatment. Who would object to the idea that research for nursing practice should have benefits beyond academia? But as many of the documents I identified when searching for papers on the theory/practice gap (along with newer materials on ‘knowledge exchange’) will no doubt confirm, demonstrably getting research into policy, organisations and practice can be fiendishly hard.
There are many reasons why this is so. Not all research findings have immediate and direct applications to everyday health and social care. Even when findings do have clear and obvious application, university-based researchers may not be best-placed to do the necessary ‘mobilisation’ (to use the currently fashionable phrase), including in relation to knowledge which they themselves have created. And by the time peer reviewed findings have reached the public domain, policy and services in fickle, fast-moving, environments may have moved on. In cases where we think research has made a difference there is also the small matter, in the context of the REF, of marshalling the evidence necessary to demonstrate this to the satisfaction of an expert panel. In any event research is often incremental, with knowledge growing cumulatively as new insights are added over time. Given this we should, perhaps, have rather modest expectations of the likely influence of single papers or projects.
Beyond this it is always good to hear of new ways in which wider attention might be drawn to research and its benefits, and a rich resource for people with interests in this area is the multi-author blog and associated materials on the impact of the social sciences run by the LSE. This is a suitably interdisciplinary initiative, which can be followed on Twitter at @LSEImpactBlog. I recommend it (and not just to social scientists), and as a starting point its Maximising the impacts of your research document. This sets out to provide ‘a large menu of sound and evidence-based advice and guidance on how to ensure that your work achieves its maximum visibility and influence with both academic and external audiences’, and as such has lots of useful observations and suggestions.