Creative Methods in Unlikely Places 

I am finding creative research methods in more and more unexpected locations. I stumbled across a fascinating example while researching the ethics of project management for a book I’m co-writing. It appeared in the International Journal of Project Management, which is not a journal I had read much until recently. The author is Jan Bröchner, an Emeritus Professor at Chalmers University of Technology in Sweden, who specialises in facilities management in construction organisations. Does he sound like a creative methods person to you? He didn’t sound like one to me – but my stereotyping was soon overturned as I began to read his article.

Bröchner gathered and analysed fictional accounts of construction project management. He was particularly interested in the way project managers’ individual values were expressed in these accounts. He cites a book from 1994 called ‘Good Novels, Better Management’ as foundational to the idea that fiction can be relevant for organisational research. This is a book I have on my shelves from my doctoral student days 20 years ago! My PhD focused on storytelling and organisations. Though I have lost touch with this area of research since then, it is good to see how it has developed. Bröchner cites other relevant work from 1995 to 2019 to support his contention that studying fiction is more useful than conventional research methods for investigating the ethical dilemmas project managers face and for gaining insights ‘into less desirable managerial behaviours’ (Bröchner 2021:594).

Bröchner drew on the values identified by the Polish-American psychologist Milton Rokeach in the 1970s as the internal reference points people use to formulate their attitudes and opinions. (I’m not sure how universal these are, as values can be influenced by society, religion and so on, and may change over time, but most of them seem reasonably widespread.) Bröchner used a five-step method for finding and working with his fictional data. First, he defined the criteria for selecting his data: they had to be novels, short stories, or plays; available in English, French, or German; with at least one construction project as a prominent feature; and a character who is the construction project manager. Using these criteria, he found fourteen novels, two short stories, and four plays for his dataset. The literature he selected ranged from Aristophanes’ play The Birds, written in 414 BC, to The Victoria System, a novel by Éric Reinhardt published in 2011.

In the second and third steps, Bröchner wrote two short summaries for each work: one highlighting the relevant action, and another briefly summarising any relevant background information. Here is an example:

Peter Ackroyd: Hawksmoor (1985) 

Novel. Two intertwined murder stories. One featuring a satanist clerk of works (or supervisor) responsible for building seven churches in 18th-century London. The other concerning the same churches and a 1980s detective.  

Background: Career of Nicholas Hawksmoor (d. 1736) as assistant to Christopher Wren and supervising architect. (Bröchner 2021:600) 

Bröchner found that all the authors had relevant personal experiences, and many worked hard to understand construction project management (Bröchner 2021:601). 

In the fourth step, Bröchner used the Rokeach values as pre-determined codes and applied those codes to his data. And in the fifth step, he used the results of that coding process to assess how each of the values was represented by the authors, and how frequently each occurred.  

Bröchner found that the top five values were Imagination, (Mature) Love, Ambition, Courage, and Happiness (Bröchner 2021:600). These may not be the first five values you would expect a construction project manager to have. Values such as capability, logic, self-respect, politeness, and a sense of accomplishment seem more likely (and yes, those are Rokeach values too). So Bröchner’s findings are surprising and therefore interesting. He expresses a hope that his pioneering work will shift the focus from ‘management methods that are intended to lead to successful project outcomes to an acceptance [of] project managers as human beings’ with their own values and personal commitments to balance with their work and ethical considerations (Bröchner 2021:602). 

I think this is one of the key benefits of creative research methods: they facilitate people being accepted as people. We are slowly moving away from the idea that people should compartmentalise themselves so that when you are at work your personal life is irrelevant, and only your work-related knowledge and skills can be of use. Creative methods offer an opportunity for people to bring all their knowledge, and skills, and imagination, and ideas, and courage, and love into their research work. With creative methods, there is no need to exclude anything except whatever is not useful for the task in hand. 

Why Peer Reviewing Is More Difficult These Days

I have been a peer reviewer of journal articles for the last eight years. I documented my first peer review, in late 2014, on this blog. Peer reviewing has never seemed easy to me – and I don’t think it should. Reviewing original work by other scholars is bound to be intellectually and emotionally demanding. But I feel as if peer reviewing has become more difficult, even over the comparatively short time I have been involved. There are several reasons for this, and I will focus on three of them here: hoaxes, malpractice and complexity.

Academic hoaxes pre-date my reviewing experience. In 2005, three US-based doctoral students in computer science, Jeremy Stribling, Max Krohn and Dan Aguayo, created SCIgen. SCIgen is a computer program which can generate whole computer science journal articles including graphs, figures and citations, that look credible but are in fact nonsensical. A lot of articles generated by SCIgen have been accepted by, and published in, academic journals, despite the use of peer reviewers.

And such hoaxes are not limited to computer science. In 2017–18, three scholars – James Lindsay, Helen Pluckrose and Peter Boghossian – wrote 20 fake articles using social science jargon. They were able to get several of these articles published in academic journals, even though some of them promoted morally questionable acts. The aim of these three scholars was apparently to highlight what they saw as poor quality work in some areas of the social sciences. However, I am not sure this intended end justifies the questionable means of duping reviewers and editors into publishing bogus research.

Sadly, though, it seems that academic journals are regularly duped into publishing bogus research by researchers themselves. Retraction Watch, based in the US, has been keeping track of retracted journal articles for the last 12 years. Some articles are retracted because their authors made honest mistakes. But the Retraction Watch database lists a lot of other reasons for retraction, including falsification or fabrication of data, and falsification, fabrication or manipulation of images or results. And the numbers are staggering. At the time of writing, there are over 1,500 articles listed on the database as retracted due to the falsification and/or fabrication of data, and over 1,000 due to the manipulation of images. Also, the database only includes those articles in which fabrication, falsification or manipulation have been detected and reported. By its own admission, Retraction Watch is biased towards the life sciences, so problematic journal articles in other fields will be even less visible.

A bunch of people make it their business to find and publicise these problematic articles. One even does it under her own name: Elisabeth Bik. Others use pseudonyms such as Clare Francis, Smut Clyde, Cheshire, and TigerBB8.

Bik specialises in identifying manipulated images, and has found through empirical research that their prevalence is increasing. Bik, though, has a particular talent for pattern recognition. Of course it is useful to know that images may be manipulated, and Bik regularly shares examples on social media and elsewhere which can help others understand what to look for. But even so, spotting manipulated images can be difficult for the average, harassed, unpaid peer reviewer. And catching fabricated or falsified images, data or results may be almost impossible without inside information. Most journal articles have strict word limits, which can work against reviewers here. These restrictions mean researchers are used to some aspects of their processes receiving a cursory mention at best, and this can enable cheating to pass undetected.

When reviewing goes wrong, the consequences can be disastrous. A recent controversy concerned a published article promoting a morally questionable act; I am deliberately not using any of its keywords here. I think there are some particularly interesting aspects of this case. It is not the first article to be published that features morally questionable acts. I have read the article; it is well written, and I can see how a peer reviewer could regard it as worthy of publication – as its own peer reviewers did. The problem, for me, lay in the background of the author, who promotes morally questionable acts outside of academia. He may have written this article in the hope that publication would lend legitimacy to his actions. Even if he did not, publication might be perceived to confer such legitimacy, which could cause reputational damage to the publisher and the university concerned.

So, the article you are reviewing may be a hoax, and/or may contain data, images, and/or results that have been manipulated, fabricated or falsified, in ways that are difficult or impossible to detect, and/or may have been written by someone with a dodgy agenda. But that’s not all. Academic work – and, indeed, the world around us – is becoming more complex. More research is transdisciplinary, pushes methodological boundaries, is multi-lingual, and so on. The process of peer review was devised when people worked in neat, tidy, single disciplines and fields. In that landscape people could act as experts on other people’s work in its entirety. These days that is not so easy. Topics such as sustainability, the climate crisis, and food security transcend disciplines and methods. This means that nobody, really, is an expert any more, so peer review is effectively obsolete. Yet it is still being used.

This means we need not only peer review before publication, but also after publication. Luckily there is a tool for this: PubPeer, a website where you can comment on published journal articles, anonymously if you wish. This enables researchers with inside information to blow the whistle without risking their jobs. Also, you can use PubPeer to check articles you are intending to cite, to make sure nobody has raised any concerns about the work you want to use. At the moment PubPeer focuses mostly on laboratory and clinical research, but there is also (not surprisingly) some computer science. In fact PubPeer can be used for any published journal article, as long as the article has a recognisable ID such as a DOI. Also, there is a PubPeer browser plugin which enables PubPeer comments to be visible on other websites besides PubPeer itself.

This blog and the videos on my YouTube channel are funded by my beloved Patrons. Patrons receive exclusive content and various rewards, depending on their level of support, such as access to my special private Patreon-only blog posts, bi-monthly Q&A sessions on Zoom, free e-book downloads and signed copies of my books. Patrons can also suggest topics for my blogs and videos. If you want to support me by becoming a Patron, click here. Whilst ongoing support would be fantastic, you can make a one-time donation instead, through the PayPal button on this blog, if that works better for you. If you are not able to support me financially, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!