I have been a peer reviewer of journal articles for the last eight years. I documented my first peer review, in late 2014, on this blog. Peer reviewing has never seemed easy to me – and I don’t think it should. Reviewing original work by other scholars is bound to be intellectually and emotionally demanding. But I feel as if peer reviewing has become more difficult, even over the comparatively short time I have been involved. There are several reasons for this, and I will focus on three of them here: hoaxes, malpractice and complexity.
Academic hoaxes pre-date my reviewing experience. In 2005, three US-based doctoral students in computer science, Jeremy Stribling, Max Krohn and Dan Aguayo, created SCIgen. SCIgen is a computer program which can generate whole computer science journal articles, complete with graphs, figures and citations, that look credible but are in fact nonsensical. A lot of articles generated by SCIgen have been accepted by, and published in, academic journals, despite the use of peer reviewers.
And such hoaxes are not limited to computer science. In 2017–18, three scholars based in the US and UK, James Lindsay, Helen Pluckrose and Peter Boghossian, wrote 20 fake articles using social science jargon. They were able to get several of these articles published in academic journals, even though some of them promoted morally questionable acts. The aim of these three scholars was apparently to highlight what they saw as poor quality work in some areas of the social sciences. However, I am not sure this intended end justifies the questionable means of duping reviewers and editors into publishing bogus research.
Sadly, though, it seems that academic journals are regularly duped into publishing bogus research by researchers themselves. Retraction Watch, based in the US, has been keeping track of retracted journal articles for the last 12 years. Some articles are retracted because their authors made honest mistakes. But the Retraction Watch database lists a lot of other reasons for retraction, including falsification or fabrication of data, and falsification, fabrication or manipulation of images or results. And the numbers are staggering. At the time of writing, there are over 1,500 articles listed on the database as retracted due to the falsification and/or fabrication of data, and over 1,000 due to the manipulation of images. Also, the database only includes those articles in which fabrication, falsification or manipulation have been detected and reported. By its own admission, Retraction Watch is biased towards the life sciences, so problematic journal articles in other sectors will be even less visible.
A bunch of people make it their business to find and publicise these problematic articles. One even does it under her own name: Elisabeth Bik. Others use pseudonyms such as Clare Francis, Smut Clyde, Cheshire, and TigerBB8.
Bik specialises in identifying manipulated images, and has found through empirical research that their prevalence is increasing. However, Bik has a particular talent for pattern recognition that most of us lack. Of course it is useful to know that images may be manipulated, and Bik regularly shares examples on social media and elsewhere which can help others understand what to look for. But even so, spotting manipulated images can be difficult for the average, harassed, unpaid peer reviewer. And catching fabricated or falsified images, data or results may be almost impossible without inside information. Most journal articles have strict word limits, which can work against reviewers here. These restrictions mean researchers are used to some aspects of their research processes receiving a cursory mention at best, and this can enable cheating to pass undetected.
When reviewing goes wrong, the consequences can be disastrous. The link is to a recent controversy over a published article promoting a morally questionable act; I am deliberately not using any of its keywords in this article. I think there are some particularly interesting aspects of this case. It is not the first published article to feature morally questionable acts. I have read the article; it is well written, and I can see how a peer reviewer could regard it as worthy of publication – as its own peer reviewers did. The problem, for me, lay in the background of the author, who promotes morally questionable acts outside of academia. He may have written this article in the hope that publication would lend legitimacy to his actions. Even if he did not, publication might be perceived to confer such legitimacy, which could cause reputational damage to the publisher and the university concerned.
So, the article you are reviewing may be a hoax, and/or may contain data, images, and/or results that have been manipulated, fabricated or falsified, in ways that are difficult or impossible to detect, and/or may have been written by someone with a dodgy agenda. But that’s not all. Academic work – and, indeed, the world around us – is becoming more complex. More research is transdisciplinary, pushes methodological boundaries, is multi-lingual, and so on. The process of peer review was devised when people worked in neat, tidy, single disciplines and fields. In that landscape people could act as experts on other people’s work in its entirety. These days that is not so easy. Topics such as sustainability, the climate crisis, and food security transcend disciplines and methods. This means that nobody, really, is an expert any more, so peer review is effectively obsolete. Yet it is still being used.
This means we need not only peer review before publication, but also after publication. Luckily there is a tool for this: PubPeer, a website where you can comment on published journal articles, anonymously if you wish. This enables researchers with inside information to whistleblow without risking the loss of their jobs. Also, you can use PubPeer to check articles you are intending to cite, to make sure nobody has raised any concerns about the work you want to use. At the moment PubPeer focuses mostly on laboratory and clinical research, but there is also (not surprisingly) some computer science. In fact PubPeer can be used for any published journal article as long as the article has a recognisable ID such as a DOI. Also, there is a PubPeer browser plugin which enables PubPeer comments to be visible on other websites besides PubPeer itself.
This blog and the videos on my YouTube channel are funded by my beloved Patrons. Patrons receive exclusive content and various rewards, depending on their level of support, such as access to my special private Patreon-only blog posts, bi-monthly Q&A sessions on Zoom, free e-book downloads and signed copies of my books. Patrons can also suggest topics for my blogs and videos. If you want to support me by becoming a Patron click here. Whilst ongoing support would be fantastic you can make a one-time donation instead, through the PayPal button on this blog, if that works better for you. If you are not able to support me financially, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!
Thank you for raising these issues. I have also written about peer review recently and hadn’t really considered some of these issues. I will definitely be thinking more about them.
Thanks, Jo. I don’t imagine I have thought of everything either. It’s a really complex picture.
When reviewing for conferences in particular, I find I have to use the same mindset as for marking student papers. I have to ask myself: did the author write this, and is it original? The most common problem is the same paper being submitted to another conference, which could easily be dealt with using the same plagiarism detection tools used for student papers. A more difficult problem is papers which just don’t have anything new and significant to say. I suggest we need to boost the prestige of practice papers, which are not about finding new scientific discoveries, but about improving the ways we work.
Thanks, Tom. Completely agree about the importance of practice papers, whether for conferences or journals.
Thanks, Helen, for your thoughtful comments about peer reviewing. The problems you describe are real and important, yet there are complications and bigger issues.
A complication is that some retractions result from complaints to publishers by those who don’t like the findings, so just because an article has been retracted doesn’t necessarily mean there’s anything wrong with it.
A bigger issue is the ghost-writing of articles, most notably by pharmaceutical companies. On this, see Sergio Sismondo’s eye-opening book Ghost-Managed Medicine (https://www.matteringpress.org/books/ghost-managed-medicine). Thousands of lives are at stake. More generally, there are reviewers who torpedo submissions because they don’t like the conclusions and wave through submissions by friends and allies.
From what I’ve read, in most cases it is difficult for reviewers to detect fraud, and anyway being a fraud-detector should not be a reviewer’s main role. Rather, I think it should be to help authors improve their work. For my approach on how to do this, see “Writing a helpful referee’s report”, https://www.bmartin.cc/pubs/08jspwhrr.html
Thank you Brian for your helpful comment. As I said to Jo above, it’s a complex picture. Thanks, too, for the resource you shared. The COPE ethical guidelines for peer reviewers are also good, and complementary, I think https://publicationethics.org/resources/guidelines/cope-ethical-guidelines-peer-reviewers