Last week, for reasons best known to one of my clients, I was reading a bunch of systematic reviews and meta-analyses. A systematic review is a way of assessing a whole lot of research at once. A researcher picks a topic, say the effectiveness of befriending services in reducing the isolation of housebound people, then searches all the databases they can for relevant research. That usually yields tens of thousands of results, which of course is far more than anyone can read, so the researcher has to devise inclusion and/or exclusion criteria. Some of these may be about the quality of the research. Does it have a good enough sample size? Is the methodology robust? And some may be about the topic. Would the researcher include research into befriending services for people who have learning disabilities but are not housebound? Would they include research into befriending services for people in prison?
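Just to make that screening step concrete, here is a toy sketch in Python. The records, field names, and thresholds are all invented for illustration – real reviews involve specialist software, multiple screeners, and a great deal of human judgement.

```python
# Hypothetical search results: a handful of records with the fields a
# screener might check. All invented for illustration.
records = [
    {"title": "Befriending for housebound adults", "n": 120, "design": "RCT"},
    {"title": "Befriending services in prisons", "n": 45, "design": "survey"},
    {"title": "Housebound befriending pilot", "n": 8, "design": "case study"},
]

def include(record):
    """Apply the review's inclusion/exclusion criteria to one record."""
    # A quality criterion: minimum sample size.
    if record["n"] < 30:
        return False
    # A topic criterion: the population of interest.
    if "housebound" not in record["title"].lower():
        return False
    return True

included = [r for r in records if include(r)]
print([r["title"] for r in included])  # -> ['Befriending for housebound adults']
```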
These decisions are not always easy to make. Researcher discretion is variable and fallible, and this means that systematic reviews themselves can vary in quality. One thing they almost all have in common, though, is a despairing paragraph about the tremendous variability of the research they have assessed and a plea to other researchers to work more carefully and consistently.
One of the systematic reviews I read last week reported an earlier meta-analysis on the same topic. A meta-analysis is similar to a systematic review but uses statistical techniques to assess the combined numerical results of the studies, and may even re-analyse the underlying data where they are available. The systematic review’s account of that meta-analysis contained a sentence which jumped out at me: ‘…differences in study design explained much of the heterogeneity [in findings], with studies using randomised designs showing weaker results.’
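For readers who like to see the machinery, here is a minimal sketch of the arithmetic a meta-analysis performs: inverse-variance pooling of effect sizes, plus Cochran’s Q and the I² statistic to quantify heterogeneity. The studies and numbers below are entirely invented to illustrate the calculation – they are not taken from the review I read, or from anywhere else.

```python
# Hypothetical studies: (label, effect size, standard error).
# All numbers are invented for illustration.
studies = [
    ("RCT A", 0.10, 0.08),
    ("RCT B", 0.15, 0.10),
    ("Quasi-experiment C", 0.40, 0.12),
    ("Cohort study D", 0.55, 0.15),
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so that more precise studies count for more.
weights = [1 / se**2 for _, _, se in studies]
effects = [es for _, es, _ in studies]
pooled = sum(w * es for w, es in zip(weights, effects)) / sum(weights)

# Cochran's Q and I^2 quantify heterogeneity: how much the studies
# disagree beyond what chance alone would produce.
q = sum(w * (es - pooled) ** 2 for w, es in zip(weights, effects))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled effect: {pooled:.3f}")
print(f"Cochran's Q: {q:.2f} on {df} df, I^2 = {i_squared:.0f}%")
```

In this toy example the randomised studies pull the pooled estimate down and I² comes out high, flagging substantial heterogeneity – the same pattern the quoted sentence describes, though with invented numbers it proves nothing.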
Randomised designs sit at the top of the hierarchy of evidence, which rests on the theory that the methods at the top are free from bias. I don’t subscribe to this theory. I think all research methods are subject to bias, and different methods are subject to different biases. Take, for example, the randomised controlled trial or RCT. This is an experimental design in which participants are randomly assigned to the treatment or intervention group (i.e. they receive some kind of service) or to the control group (i.e. they don’t). This design assumes that random allocation alone can iron out all the differences between people. It also assumes that the treatment/intervention/service is the only relevant factor that changes in people’s lives. Clearly, neither assumption is guaranteed to hold in practice.
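The first of those assumptions is easy to probe with a quick simulation. The sketch below (invented data, purely illustrative) shows that random allocation balances a background trait on average, across many repeated allocations, while any single allocation – that is, any single trial – can still produce noticeably imbalanced groups.

```python
import random
import statistics

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical participants, each with one background trait (say, a
# baseline isolation score) that randomisation is supposed to balance.
participants = [random.gauss(50, 10) for _ in range(40)]

def allocate(people):
    """Randomly split people into equal treatment and control groups."""
    shuffled = random.sample(people, len(people))
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# Across many repeated allocations, the two group means agree on average...
gaps = []
for _ in range(10_000):
    treatment, control = allocate(participants)
    gaps.append(statistics.mean(treatment) - statistics.mean(control))
print(f"Mean gap over 10,000 allocations: {statistics.mean(gaps):+.3f}")

# ...but any single allocation can still be noticeably lopsided.
print(f"Largest single-allocation gap: {max(abs(g) for g in gaps):.2f}")
```

On average the gap is close to zero, but the worst single allocation is not – which is one reason trial reports include baseline characteristics tables, so readers can check how even the split actually was.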
Now don’t get me wrong, I’m not anti-RCTs. After all, every research method is based on assumptions, and in the right context an RCT is a great tool. But I am against bias in favour of any particular method per se. And the sentence in the systematic review stood out for me because I know the current UK Government is heavily biased towards randomised designs. It got me wondering, do randomised designs always show weaker results? If so, is that because the method is more robust – or less? And does the UK Government, which is anti-public spending, prefer randomised designs because they show weaker results, and therefore are less likely to lead to conclusions that investment is needed?
And that got me thinking we really don’t know enough about how research methods influence research results. I went looking for work on this and found none, just the occasional assertion that methods do affect results. Which seems like common sense… but how do they? Does the systematic review I read hold a clue, or is it a red herring? The authors didn’t say any more on the subject.
We can’t always do an RCT, even when the context means it would be useful, because (for example) in some circumstances it would be unethical to withhold provision of a treatment/intervention/service. So what about other methods? Do we understand the implications of asking a survey question that a participant has never thought about and doesn’t care about – or cares about a great deal? I know that taking part in an interview or focus group can lead people to think and feel in ways they would not otherwise have done. What impact does that have on our research? Can we trust participants to tell us the truth, or at least something useful?
This is troubling me and I have more questions than answers. I fear I may be up an epistemological creek without an ontological paddle. But I think that bias in favour of – or against – a particular research method, without good evidence of its benefits and disadvantages, is poor research practice. And it’s not only the positivists who are subject to this. Advocates of participatory research are every bit as biased, albeit in the opposite direction. The way some participatory researchers write, you’d think their research caused bluebirds to sing and rainbows to gleam and all to be well in the world.
It seems to me that we all need to be more discerning about method. And that’s not easy when there are so many available, and a plethora of arguments about what works in which circumstances. So I think we may need to go meta here and do some research on the research. But ‘further research needed’ is a very researcher-y way of thinking, and I’m a researcher, so… does my bias look big in this?