Last week, for reasons best known to one of my clients, I was reading a bunch of systematic reviews and meta-analyses. A systematic review is a way of assessing a whole lot of research at once. A researcher picks a topic, say the effectiveness of befriending services in reducing the isolation of housebound people, then searches all the databases they can for relevant research. That usually yields tens of thousands of results, which of course is far more than anyone can read, so the researcher has to devise inclusion and/or exclusion criteria. Some of these may be about the quality of the research. Does it have a good enough sample size? Is the methodology robust? And some may be about the topic. Would the researcher include research into befriending services for people who have learning disabilities but are not housebound? Would they include research into befriending services for people in prison?
These decisions are not always easy to make. Researcher discretion is variable and fallible, and this means that systematic reviews themselves can vary in quality. One thing they almost all have in common, though, is a despairing paragraph about the tremendous variability of the research they have assessed and a plea to other researchers to work more carefully and consistently.
One of the systematic reviews I read last week reported an earlier meta-analysis on the same topic. A meta-analysis is similar to a systematic review but uses statistical techniques to assess the combined numerical results of the studies, and may even re-analyse data if available. The systematic review’s report of that meta-analysis contained a sentence which jumped out at me: ‘…differences in study design explained much of the heterogeneity [in findings], with studies using randomised designs showing weaker results.’
Randomised designs are at the top of the hierarchy of evidence. The theory behind the hierarchy of evidence is that the methods at the top are free from bias. I don’t subscribe to this theory. I think all research methods are subject to bias, and different methods are subject to different biases. For example, take the randomised controlled trial or RCT. This is an experimental design where participants are randomly assigned to the treatment or intervention group (i.e. they receive some kind of service) or to the control group (i.e. they don’t). This design assumes that random allocation alone can iron out all the differences between people. It also assumes that the treatment/intervention/service is the only factor that changes in people’s lives. Clearly, each of those may not in fact be the case.
Now don’t get me wrong, I’m not anti-RCTs. After all, every research method is based on assumptions, and in the right context an RCT is a great tool. But I am against bias in favour of any particular method per se. And the sentence in the systematic review stood out for me because I know the current UK Government is heavily biased towards randomised designs. It got me wondering, do randomised designs always show weaker results? If so, is that because the method is more robust – or less? And does the UK Government, which is anti-public spending, prefer randomised designs because they show weaker results, and therefore are less likely to lead to conclusions that investment is needed?
And that got me thinking we really don’t know enough about how research methods influence research results. I went looking for work on this and found none, just the occasional assertion that methods do affect results. Which seems like common sense… but how do they? Does the systematic review I read hold a clue, or is it a red herring? The authors didn’t say any more on the subject.
We can’t always do an RCT, even when the context means it would be useful, because (for example) in some circumstances it would be unethical to withhold provision of a treatment/intervention/service. So what about other methods? Do we understand the implications of asking a survey question that a participant has never thought about and doesn’t care about – or cares about a great deal? I know that taking part in an interview or focus group can lead people to think and feel in ways they would not otherwise have done. What impact does that have on our research? Can we trust participants to tell us the truth, or at least something useful?
This is troubling me and I have more questions than answers. I fear I may be up an epistemological creek without an ontological paddle. But I think that bias in favour of – or against – a particular research method, without good evidence of its benefits and disadvantages, is poor research practice. And it’s not only the positivists who are subject to this. Advocates of participatory research are every bit as biased, albeit in the opposite direction. The way some participatory researchers write, you’d think their research caused bluebirds to sing and rainbows to gleam and all to be well in the world.
It seems to me that we all need to be more discerning about method. And that’s not easy when there are so many available, and a plethora of arguments about what works in which circumstances. So I think we may need to go meta here and do some research on the research. But ‘further research needed’ is a very researcher-y way of thinking, and I’m a researcher, so… does my bias look big in this?
Ha ‘does my bias look big in this’ … in Australia that’s what we call ‘a pearler’!
Glad you liked it, Jen 🙂 also happy to know that at least one person has read to the end!
Helen, you’ve raised a very important question. It seems obvious that the choice of a research method affects the results, but hard to think of studies that show how this pans out in practice.
My studies of nonviolent action led to a somewhat different perspective and question. It is important to say what the purpose of research is, because the purpose can affect the most suitable choice of research methods. Serving military purposes is usually linked to traditional research methods whereas serving the purposes of nonviolent struggle is linked to more participatory methods. At least that’s my argument: http://www.bmartin.cc/pubs/01tnvs/tnvs09.html
Brian Martin, firstname.lastname@example.org
Brian, thanks for your comment and the links to your interesting work. Useful to know it’s not just me who finds it hard to think of studies that show how research methods affect results. I did some web searching and couldn’t come up with anything, but I always wonder whether I’m looking in the wrong places. I agree that the purpose of research affects the choice of methods, as well as (of course) the research questions; I haven’t thought about that enough so I will give it more consideration.
My open access paper with Pam Alldred analysed all kinds of methods and research techniques and showed how each affects the results. http://journals.sagepub.com/doi/abs/10.5153/sro.3578
Every method affects results in some way or another, but they do this in different ways. We advocate mixing methods. This doesn’t get rid of the effects, but may allow you to cancel some of them out.
Hi Nick, thanks so much for alerting me to this. For some annoying techie reason I can’t access the paper; I’ve tweeted SAGE and I’m hoping they will sort it out (probably tomorrow now, I guess). I see you guys have written a few papers on the topic so I will follow those up. Looking forward to reading!
Sorry you can’t access. It’s also on ResearchGate at
Hope it proves useful
Oddly enough, I’ve just been able to download it from SAGE. Whatever is going on there is evidently an intermittent problem. I also have your IJSRM paper now. Thanks again.