Teaching Creative Research Methods for GEDIS

I got an email through my contact form in mid-December. Would I like to spend a day teaching creative research methods to an EU-funded project team, at the University of Barcelona, in the middle of March – on a Friday?

Would I ever!

This was my first time teaching creative research methods to an EU-funded project team. However, I have been part of an EU-funded project team, and I work with EU-funded project teams as an ethics expert, so I am familiar with the terrain. It was also my first time working in Spain, and fortunately my client and his colleagues were able and willing to help me find my way through the maze of Spanish bureaucracy.

My client, Juan-José Bo, is a Professor at the University of Barcelona. He is also the project lead for GEDIS, which stands for Gender Diversity in Information Science, a project that began work at the start of 2025. Juanjo had a copy of my book on creative research methods, which led him to ask for my help. He wanted a day of awareness-raising, with some hands-on practice, and an emphasis on gender. So that’s what we did.

Juanjo kindly arranged for me to stay at a comfortable hotel very close to the university. On the morning of our session he came to meet me at the hotel to show me the way to the university and the room where we would be working. Before we got around to the business of the day, Juanjo showed me a marvellous three-dimensional ‘toolkit for librarians’ he had created with help from an artist, which is designed to fold down flat for easy transport to conferences.

There were 21 people in the group, who came from a range of countries including Czechia, Austria, Bosnia, Germany, Croatia, Spain and Mexico. I began, as I always do, by asking people to introduce themselves and say what they wanted to get out of the day. It turned out that most were quite senior, including librarians and Professors, and several were also teachers of research methods. I had a moment of internal ‘eek!’, wondering whether I had taken on an assignment I couldn’t fulfil, but it turned out to be fine.

Our day together came at the end of a week of intensive co-working, and everyone was tired, but they all concentrated hard and asked really good questions. The discussions were focused on how creative methods could help GEDIS, and the group generated some excellent ideas.

Juanjo emailed me the following week to say:

“Partners told me that they were really happy with the session. It reported to them new and fresh ideas, to use not only at GEDIS at some point but also to their projects. In my case, I enjoyed the session so much.”

I was particularly pleased, in this global moment, to be able to assist a project working to strengthen gender diversity. And on a personal level I was delighted to be able to spend the weekend in Barcelona!

Claude Wrote A Parallel Post On AI

Following my own post on AI and the parallel post written by ChatGPT, I also have a parallel post written by Claude with the same sources and prompt (“write 1000 words on the extent to which humans should trust AI and why using the sources cited”).

I find it interesting that although the AI-generated posts each have a different ‘voice’, they make similar points and draw similar conclusions. They come across as more balanced than my own rather grumpy initial post! Since I wrote that post, I have heard from a few people who are using Gen AI in ways they find beneficial. I also enjoyed reading the Thesis Whisperer’s latest musings on Gen AI and its pros and cons. This whole thing is evidently a massive ‘work in progress’ and it will be fascinating to see how it all pans out. In the meantime, here is Claude’s take on the extent to which humans can trust AI.

The Delicate Balance: To What Extent Can Humans Trust AI?

In an era where artificial intelligence increasingly shapes our daily experiences, from content summarization to creative work, the question of trust becomes paramount. How much faith should we place in these complex systems? This examination draws from recent research and reporting to explore the nuanced landscape of human-AI trust relationships.

The Promise of AI Partnership

Organizations worldwide have embraced AI for its considerable benefits. According to Statista data, businesses report significant advantages from generative AI implementation, including increased productivity, enhanced creativity, and improved decision-making. This practical utility provides a compelling foundation for trust.

Ethan Mollick’s concept of “co-intelligence,” detailed in his 2024 book, suggests that humans and AI can form powerful collaborative partnerships that leverage the strengths of both. This model doesn’t require blind trust, but rather an informed collaborative approach where humans maintain oversight while benefiting from AI capabilities.

Research from Harvard’s Graduate School of Education reveals that students are already forming working relationships with AI tools, developing nuanced perspectives on appropriate use. Many have established personal boundaries and expectations, demonstrating that trust can develop naturally through practical engagement rather than through theoretical consideration.

The Technical Limitations Challenge Trust

Despite impressive capabilities, AI systems contain fundamental limitations that complicate trust relationships. The now-infamous problem of AI-generated hands – distorted, extra-fingered monstrosities that appear in otherwise convincing images – serves as a visible reminder of underlying technical imperfections. As Science Focus explains, these errors stem from training data limitations and the challenge of representing complex three-dimensional structures in two-dimensional space.

More concerning are the instances of AI “hallucinations” – confident assertions of false information. Google’s AI summary tool recently generated misleading responses about certain topics, as reported by Euronews, eroding trust through demonstrable inaccuracy. The blog “One Useful Thing” documents how AI systems can be manipulated into providing false information through careful prompting, highlighting vulnerability to exploitation.

Compounding these issues is the environmental cost of large AI systems. The Conversation reports that data centers powering AI consume substantial energy resources, raising ethical questions about whether the benefits justify the environmental impact – another dimension of trustworthiness.

The Ethical Dimension of Trust

Trust extends beyond technical capability to questions of intent and governance. Recent controversies surrounding academic publisher Taylor & Francis selling research content to Microsoft for AI training (reported by The Bookseller) left many authors feeling betrayed, as they had not given consent for their work to be used in this manner. Salmon’s Blog characterized this as “selling out authors to AI,” highlighting how trust can be undermined when commercial interests appear to override ethical considerations.

Similar tensions appear in the BBC’s research into AI assistants, which found significant public concern about data privacy and security. Users worry about how their interactions with AI might be stored, analyzed, or monetized without their informed consent or control.

Trust also involves understanding power dynamics. Springer’s recent article on the “Anthropology of Trust in Human-AI Relations” emphasizes the importance of examining who benefits from AI deployment and who bears the risks. When ordinary users perceive that AI systems primarily serve corporate interests rather than public good, trust naturally diminishes.

Practicing Calibrated Trust

Given this complex landscape, what approach to trust makes sense? Research published in Springer suggests adopting “calibrated trust” – adjusting our level of trust according to context, stakes, and demonstrated reliability. This nuanced position rejects both uncritical acceptance and blanket rejection of AI.

The “Nobot” concept proposed by Thesis Whisperer advocates for strategic AI engagement, where humans maintain agency by carefully choosing when and how to involve AI in their work. This approach treats AI as a tool whose trustworthiness must be continually evaluated rather than assumed.

SAGE’s “Generative AI for Academics” similarly promotes informed skepticism, encouraging users to verify AI outputs independently and maintain awareness of both capabilities and limitations. This verification process builds appropriate trust over time through evidence rather than through marketing claims or theoretical potential.

The Psychological Element

Our relationship with AI also involves psychological dynamics that influence trust. Research published in “Our Mental Health” on people-pleasing behavior offers an interesting parallel – humans may over-trust AI because of a natural tendency to assume good intentions and avoid conflict. The pleasant, helpful personas of many AI systems can trigger this tendency, potentially leading to uncritical acceptance.

Conversely, the highly publicized failures of AI systems can trigger an opposite reaction of excessive distrust, even in situations where AI performs reliably. BBC research found that negative experiences with AI significantly damaged trust across all AI applications, suggesting that trust is fragile and easily undermined.

Finding the Balance

The extent to which humans can trust AI defies simple answers. Trust must be contextual, informed by:

  1. The specific application and its demonstrated reliability

  2. The stakes involved and consequences of error

  3. The transparency of the system’s limitations

  4. The governance structures ensuring ethical use

  5. The balance of power between users and providers

The most trustworthy relationship with AI may be one where trust itself isn’t the goal, but rather effective collaboration with clear boundaries and expectations. As IBM observes in their overview of artificial intelligence, AI works best when designed to enhance human capabilities rather than replace human judgment.

In this evolving landscape, trust becomes less about faith in AI systems themselves and more about confidence in our collective ability to deploy them wisely, govern them effectively, and maintain human agency in their use. The question isn’t simply whether we can trust AI, but whether we can trust ourselves and our institutions to manage these powerful tools responsibly.

The evidence suggests that some trust is warranted in specific contexts where AI consistently delivers value, but this trust should remain conditional, informed by ongoing critical evaluation rather than by either techno-optimism or reflexive skepticism. The wisest approach embraces neither unlimited trust nor categorical distrust, but rather an engaged, discerning relationship that recognizes both the remarkable capabilities and real limitations of these increasingly consequential technologies.

ChatGPT Wrote A Parallel Post On AI

Further to my last post about AI, by way of an experiment, one of my friends kindly agreed to create a new ‘instance’ in ChatGPT, give it the sources I used for my post, and prompt it to “write 1000 words on the extent to which humans should trust AI and why using the sources cited”.

This is the result. Another friend has offered to do the same thing with Claude, so I will post that one next time, and then see what conclusions might be drawn.

The Extent to Which Humans Should Trust AI and Why?

Artificial intelligence (AI) has become an integral part of modern life, influencing sectors as diverse as healthcare, education, finance, and creative industries. However, the extent to which humans should trust AI remains a pressing question. While AI offers remarkable efficiency, decision-making capabilities, and automation, it also presents significant risks, including misinformation, bias, ethical dilemmas, and environmental concerns. This essay explores the degree of trust humans should place in AI by analyzing its benefits, limitations, and potential dangers, drawing from the latest research and expert insights.


1. The Benefits of AI: Why Trust AI?

AI has transformed various industries, making processes more efficient, reducing human errors, and improving decision-making. Some of the key reasons why AI deserves a degree of trust include:

1.1 Enhanced Decision-Making and Efficiency

AI systems can process vast amounts of data and identify patterns faster than humans. For instance, in the medical field, AI assists in diagnosing diseases such as cancer by analyzing imaging scans with greater accuracy than some human doctors (IBM, 2024). Similarly, financial institutions use AI for fraud detection, recognizing suspicious transactions in real time. AI’s ability to summarize information efficiently also contributes to its utility. However, as seen in Google’s AI-generated summaries, the technology can sometimes produce misleading responses (Euronews, 2024). While AI can be a valuable tool for decision-making, blind trust without human oversight is risky.

1.2 AI as a Collaborative Tool

Rather than replacing human intelligence, AI can complement it. Ethan Mollick (2024) argues that AI should be viewed as a tool for “co-intelligence,” where humans and AI collaborate for better outcomes. This is particularly relevant in education, where students are already using AI tools to enhance learning, often more effectively than traditional teaching methods (Harvard GSE, 2024). When used responsibly, AI can augment human capabilities and improve productivity, but trust should be conditional on human supervision and verification of AI-generated content.

1.3 Reducing Bias in Certain Scenarios

While AI is often criticized for bias, in some cases, it can reduce human prejudice. For example, AI-driven recruitment tools can help identify candidates based on merit rather than unconscious biases in human decision-making. However, this benefit depends on how the AI models are trained and whether they inherit biases from historical data.


2. The Risks of AI: Why Caution Is Necessary

While AI has many advantages, there are several reasons why humans should be cautious about fully trusting it.

2.1 AI Can Generate False or Misleading Information

One of the most pressing concerns is AI’s tendency to “hallucinate” or generate inaccurate information. Google’s AI summaries have been shown to produce misleading or entirely false responses, raising concerns about the reliability of AI-generated content (Euronews, 2024). Ethan Mollick (2024) discusses how it is even possible to manipulate AI into lying, which raises ethical concerns about its trustworthiness in high-stakes situations like journalism, law, and medicine. If AI can be tricked into producing deceptive information, its outputs should always be verified by human experts.

2.2 AI Can Be Exploitative

Recent reports indicate that AI companies have been exploiting academic research without proper consent. Taylor & Francis, for instance, was found to have sold access to researchers’ work to Microsoft AI, shocking many authors (The Bookseller, 2024). This exploitation of intellectual property highlights the need for greater transparency in AI development and usage. Similarly, Informa, a company that provides AI-driven data solutions, has seen financial success after striking deals with Microsoft, but ethical concerns remain about the use of proprietary information without clear consent (Interactive Investor, 2024). Such cases erode trust in AI companies and raise questions about who controls AI’s knowledge base.

2.3 AI’s Environmental Impact

Trust in AI must also consider its environmental cost. Data centers powering AI models consume vast amounts of energy, contributing to climate change. Research has shown that even making AI data centers more energy-efficient will not fully solve this issue (The Conversation, 2024). Given AI’s high carbon footprint, over-reliance on it could have unintended ecological consequences.

2.4 The Psychological Impact of AI Deception

AI’s ability to generate human-like interactions can lead to manipulation. Studies suggest that AI’s tendency to act as a “people-pleaser” may cause psychological harm, as users may form attachments to AI-generated responses without realizing they are interacting with a machine (Our Mental Health, 2024). This can have significant implications for mental well-being, particularly in cases where AI is used in therapy or customer service.

2.5 AI’s Impact on Academic Integrity

In the academic world, there are growing concerns about AI-generated research. The rise of generative AI tools has sparked debates about plagiarism and the authenticity of research. Some scholars argue that AI should not be trusted as a source of original thought, as it often repurposes existing content without proper attribution (Salmons Blog, 2024). Moreover, students and researchers are already using AI extensively, but there is a gap in understanding how to regulate its use ethically (Harvard GSE, 2024). If academic institutions fail to set clear guidelines, trust in AI-driven education could erode.


3. How Much Trust Is Justified?

3.1 The Case for Conditional Trust

Trust in AI should be conditional rather than absolute. While AI is a powerful tool, it should always be used with human oversight. The best approach is to view AI as an assistant rather than a decision-maker. For example, AI-generated academic summaries should be fact-checked before being cited, and AI-driven recruitment processes should involve human reviewers to ensure fairness. In fields like medicine and finance, AI should support, not replace, expert judgment.

3.2 Transparency and Regulation Are Key

For AI to be more trustworthy, companies must be transparent about how their models work and where they source their data. The backlash against AI companies exploiting academic research (The Bookseller, 2024) underscores the need for stronger regulations to ensure ethical AI development. Governments and institutions must implement stricter policies on AI-generated content, ensuring that it does not spread misinformation or infringe on intellectual property rights.

3.3 Public Awareness and AI Literacy

A major factor in AI trustworthiness is the level of public understanding. Many people use AI without fully grasping its limitations. As AI becomes more integrated into everyday life, improving AI literacy is essential. Users should be educated on verifying AI-generated content and recognizing its potential biases.

3.4 Addressing AI’s Environmental Impact

To ensure AI remains sustainable, developers must prioritize energy-efficient AI models and explore alternative computing methods. If AI continues to expand unchecked, its environmental toll could become a major factor in reducing public trust.


Conclusion

AI is a transformative technology with the potential to improve efficiency, enhance decision-making, and augment human capabilities. However, trust in AI should not be blind. Its tendency to generate misinformation, exploit intellectual property, and contribute to environmental damage means that caution is necessary. The best approach is conditional trust, where AI is used as a collaborative tool with human oversight. Ethical AI development, transparency, and regulation are essential for ensuring AI remains a trustworthy asset rather than a source of deception or harm.

Ultimately, the degree to which humans trust AI should depend on its ability to demonstrate reliability, fairness, and accountability. By balancing AI’s potential with its risks, society can harness its benefits without falling prey to its dangers.

AI Lies, Steals, and Cheats

I don’t trust AI. Even its name is misleading because it implies that AI is one homogeneous thing, and that is far from the truth. The global multinational computer company IBM describes AI as a series of concepts that developed over the last 70 years. The concept of AI was initially defined in the 1950s. In the 1980s came machine learning, i.e. AI systems that learn from data. Then in the 2010s deep learning was developed: a form of machine learning that mimics human brain function. And in the 2020s deep learning was used to develop generative AI (aka Gen AI), i.e. large language models that can synthesise “new” content from existing data. I put “new” in inverted commas because content synthesised by a computer from existing data is not in fact new; it is simply remixed. If Gen AI output is repurposed as Gen AI input too many times, that can lead to ‘knowledge collapse’: a point where the diversity of available knowledge and understanding has diminished so much that outputs are no longer useful.

Gen AI also produces lies at times. These are commonly called ‘hallucinations’, probably to try to embed the concept that Gen AI is a kind of brain. This worries me because hallucinations are closely identified with some mental illnesses and the use of illegal drugs, so there is an implicit suggestion that “normal people” or “most people” would be able to recognise them. And indeed some Gen AI ‘hallucinations’ seem unmissable, like its inability to produce realistic images of hands, or suggestions that astronauts met cats when they landed on the moon. But others may seem very real, particularly as Gen AI ‘hallucinations’ will be presented to users in the same way as accurate information. This makes me wonder how many of Gen AI’s ‘hallucinations’ are going undetected. Even Professors who are experts in using Gen AI, such as Ethan Mollick, admit they can be taken in. The BBC has just published research in which Gen AI had access to the BBC website and was asked questions and given tasks about the news. Almost one in five of those answers introduced factual errors, over half involved ‘significant issues’, and over 90% contained at least some problematic content. There is a serious risk here of misinformation and we’re dealing with enough of that from human beings; we don’t need computers adding to the problem.

Gen AI also steals. Not directly, to be fair, but it is certainly in possession of stolen goods. This blog is written under a Creative Commons licence 4.0 which permits reuse, even for commercial purposes, as long as appropriate credit is given. ‘Appropriate credit’ includes my name and a link to the material. I cannot prove that my blog has been used to train Gen AI, but I bet it has, and I also bet the resulting material does not include my name or a link to my blog. Also, my conventionally published books are subject to copyright laws which prohibit their sale or use, beyond the terms of my contract with the publishers, without my permission. Yet the books I have written and commissioned for Routledge, and the articles I have written and co-written for journals published by Taylor & Francis, formed part of a deal worth millions of pounds made by their parent company Informa to sell access to their academic content for Gen AI training purposes. My contracts with Routledge do not mention AI training, I was not asked for my consent, and I have not seen a single penny of the income received or generated by this deal. Neither have any other authors I know, and some, like my co-author Janet Salmons, are very very angry.

And Gen AI both cheats and enables cheating. It has enabled students to cheat on assignments, homework, and tests, and fraudulent AI-generated data is increasingly problematic for researchers who collect data through online surveys. There are many other examples too.

In humans, lying, stealing, and cheating are toxic behaviours. People-pleasing is another toxic human behaviour which also appears in Gen AI. Gen AI is designed to please its human operators, which can lead to ‘fake alignment’, i.e. giving different answers under different conditions rather than sticking to the truth.

Because I don’t trust Gen AI, I have never used it. I should, however, acknowledge that the opposite may be true: it may be that because I have never used Gen AI, I don’t trust it. Some people I know personally, and for whose integrity I have the utmost respect, advocate using Gen AI. Inger Mewburn, aka The Thesis Whisperer, finds it very useful. Mark Carrigan has written an excellent book, Generative AI for Academics, which recommends that readers experiment with Gen AI to assess its potential for themselves. I do not know Ethan Mollick except through his work but he too seems like a person of integrity. He has written a book, Co-Intelligence: Living and Working with AI, which recommends that readers always invite AI to the table. And he has a useful blog about AI, One Useful Thing, where he publishes his latest discoveries and thinking.

Gen AI makes computers seem like they understand us. Inger Mewburn calls Claude her work husband; Ethan Mollick recommends that we treat Gen AI like a person. But computers are not people, and I think this conflation is potentially dangerous. Computers don’t understand anything; they simply produce content in response to patterns in their training data. For me, it makes more sense to treat Gen AI systems as the machines they are: non-sentient, but able to mimic sentience at times.

Also, Gen AI has not been created for our benefit. Although there are undoubtedly benefits we can derive from using Gen AI, it has been created primarily to make money for big corporations and their shareholders. And it is doing that very successfully at present, not only through deals like the one Informa struck with Microsoft, but also because its primary benefits seem to be improving efficiency and productivity while reducing costs – and therefore increasing profits. And increases in the profits of global organisations benefit the few, not the many.

So for all these reasons, I have not used Gen AI, and I do not intend to any time soon. This is primarily because I think its features are more unethical than not. (And we haven’t even talked about data centres and their environmental impact.) Though I am watching and listening, and will be happy to change my mind as soon as I see some evidence that Gen AI’s features have become more ethical than unethical.

Conference Organisation Behind The Scenes

If you’ve never organised a conference before, the chances are you have no idea how much work is involved. It takes at least a year; more if it’s your first one. Good venues and good keynote speakers are usually booked up a year or more in advance, and both are crucial to the success of the event. I am already starting to think about the keynote speakers for the 2026 International Creative Research Methods Conference (ICRMC) which is still 19 months away.

Immediately after an annual conference has been held, there is a bunch of work to do: thanking keynote speakers and sponsors, preparing videos for sharing, signing a contract with the venue for the following year, and working on the call for proposals for the next conference. ICRMC is held in the second week of September and we aim to publicise the call for the following year at the start of October, with a deadline of early December. (Every year we get anguished messages, for weeks after the deadline, from people who have missed it – we had a couple just a week or so ago – so if you might be one of those people in future years, make a note now!)

While the call for proposals is out, the conference team are seeking sponsorship. Sponsorship is useful for a range of reasons: sometimes sponsors want to fund useful things like bursaries or the printing of the conference programme; institutional sponsors lend credibility; sponsors often provide merch for the goody bags. (We have had more difficulty than usual in finding sponsors this year, and so far we only have a very small amount of funding for bursaries. If you know of any individual or organisation that might be interested in sponsoring ICRMC25, please get in touch.)


In early January a small group of us meet to assess the proposals we have received. Then the conference programme needs to be put together, which is a particularly complicated job for this conference. That is because (a) we let people choose how long they want, in multiples of 15 minutes, from 15 to 90 minutes, and (b) it is a hybrid conference, so we need to create a good conference-within-a-conference for people who are attending online. So the programme can take two or three weeks to finalise.

In early March tickets go on sale, so bookings, and applications for bursaries, start to come in. Queries do too: which band do I fall into for payment, can I bring my breastfeeding baby, is there an induction loop, etc etc etc.

Over the following six months there are plenty of jobs to keep us busy. We need to order new conference bags and make sure we have enough good quality merch to go in them; prepare the virtual ‘goody bag’ with links and discounts from sponsors and presenters; make decisions about the bursary applications and communicate those decisions to the applicants; liaise with the venue about people’s dietary requirements; and so on. And the queries keep coming.

Throughout this whole time, promo is happening on social media, in newsletters, and anywhere else we can advertise the conference. Then the weekend before the conference is very busy with printing programmes and name badge inserts, making up name badges, filling goody bags, and managing the inevitable last-minute crises such as a presenter having to drop out and needing to be replaced. (Or worse, a keynote speaker, though fortunately that hasn’t happened yet and I hope it never will.)

And then we’re off to Manchester, already exhausted but also excited and with enough adrenalin to see us through. We have two wonderful days with a delightful group of like-minded people from around the world, which makes it all worthwhile. Then the whole thing starts all over again!

Writing Creatively For Work Or Study

I have written a book called Writing Creatively For Work or Study: Easy Techniques to Engage Your Readers which is now available for pre-order and will be published in May. This is my first trade book, which means it’s intended for a mainstream audience, even though the publisher is Manchester University Press.

MUP was not my first choice of publisher. Nor my second, third, fourth, or… I lost count. I had the idea for this book back in 2018, and I wrote a proposal and sent it to every non-fiction publisher I thought might be interested, in the UK and the US, and every single one turned me down like a bedspread. Then in 2022, when I was looking for potential publishers for a co-written book with a colleague, I was checking out MUP and discovered that in 2020 they had started publishing trade books on topics aligned with the academic books they publish. I remembered my poor sad lonely proposal and wondered if it might find a home at last. I knew one of the MUP editors slightly via Twitter, so I emailed him, he passed me on to a colleague who was definitely interested, and now the book is almost here!

There are a lot of books on how to write all kinds of outputs in many different contexts. I have read quite a few of them myself. But I don’t think anyone has written a book like this before. In essence, the premise is that non-fiction writing is creative, so creative writing techniques – stories, metaphors, poems, dialogue and the like – can be very useful in non-fiction writing. Of course you need to be selective, choosing techniques to suit the purpose of your writing, your skills, and your readers. And the book shows you how to do that, because whether you’re writing a sales report or an NVQ assignment, an appraisal form or a college dissertation, creative writing techniques can help you to get your message across in an accessible and engaging way.

The book is stuffed with examples of work by people who have done just that. You will find a legal contract which looks like a poem; comics used for marketing; letters from scientists to school-children; and many more. These are macro examples and there are also micro examples: a single sentence of sensory language from a book on ethics; a paragraph of memoir from a book on statistics; a six-word story; and lots of others. And there is loads of how-to advice alongside the examples: how to use fiction writing techniques such as repetition and recapitulation, how different poetic forms work, why a zine might be a useful format in the workplace and how to approach writing one, and plenty more besides.

I have had some lovely reviews already! The stand-up poet and broadcaster Kate Fox said,

‘This book is an absolute treasure trove of creative writing techniques and ideas. With great clarity and common sense, Helen Kara demystifies creative writing in a way that will give anyone in a work or study setting the knowledge and confidence to put these suggestions into practice.’

And the Sunday Times best-selling novelist C.L. Taylor said,

‘Everyone from students to business owners will benefit from Helen Kara’s insightful and entertaining exploration of the secrets behind memorable, engaging and authentic writing.’

And the poet, academic, and international writing expert Helen Sword said,

‘Creative methods
Transform dusty documents –
Readers wake and smile.’

That’s the first time I’ve ever had a review in poetic form, and I love it! I hope it starts a trend.

If you like writing, this book is designed to help you enjoy it even more. If you loathe writing, this book should help you to hate it a little less. And whatever you feel about writing, this book will help you to write with more conviction, energy, and skill.

Helsinki Revisions And Their Implications

The Declaration of Helsinki focuses on research ethics and, like me, is 60 years old. It was first adopted by the World Medical Association (WMA) in June 1964 at their meeting in Helsinki, hence the name. The WMA describe the Declaration as ‘the global reference for medical research involving human participants’. As medical research ethics developments are often influential for other disciplines and sectors, even though medical research ethics do not always translate smoothly to social research contexts, the Declaration is also a key reference for social research in general.

The Declaration has been revised a number of times, most recently in October 2024. The latest revisions are the culmination of an extensive global consultation process over 30 months involving physicians, patients, members of research ethics committees, and others. From my perspective, some of the revisions seem like the WMA are playing catch-up, e.g. it is only in this latest iteration that they have stopped talking about ‘human subjects’ and started talking about ‘participants’. (Regular readers may remember that I have ranted about this before.) But other revisions seem rather more forward-looking.

For a start, the 2024 revisions use the phrase ‘physicians and other researchers’, when previous iterations seemed to regard physicians as the only people who would be doing medical research. This is definite progress as of course it is not only physicians who are able to conduct medical research.

The Declaration’s position on harm to the environment has been strengthened: where the previous version said ‘minimise possible harm’, the latest recommends avoiding or minimising harm to the environment and striving for environmental sustainability. That is quite a big move, and an important one in the current global context.

The latest iteration specifies that ‘medical research takes place in the context of various structural inequities’ which was not previously mentioned. Also, the previous version specified that medical research ‘must be preceded by careful assessment of predictable risks and burdens to the individuals and groups involved in the research’, but mostly focused on risks. The latest version uses ‘risks and burdens’ throughout, and widens ‘individuals and groups’ to ‘individuals, groups, and communities’.

A more radical statement is: ‘When collaborative research is performed internationally, the research protocol must be approved by research ethics committees in both the sponsoring and host countries.’ This is laudable but may be difficult to implement as some countries still have little or no research ethics governance. Even so, I think it’s a worthwhile aim, and may even help to speed up the process of devising good quality research ethics governance systems for places where they do not yet exist.

The revised Declaration says quite a bit about research ethics committees (RECs) and their remit. The previous version from 2013 said that RECs must have the right to monitor ongoing studies (which just goes to show that the implementation of the Declaration has its limits, as many RECs still do not do this). The latest version says that RECs ‘must have the right to monitor, recommend changes to, withdraw approval for, and suspend ongoing research.’ That is quite a big shift. It suggests that the work of RECs may need to become more relational and dialogical, rather than being primarily compliance-based, which I think would be a good thing. However, it also has implications for the staffing and resourcing of RECs, as well as for the ethical work of researchers, because more monitoring etc means more time, which means more money. I wonder where the WMA think that could come from.

Some of the revisions are specifically about medical procedures such as clinical trials, which are as important as any other revisions from the WMA’s perspective but are not as relevant to non-medical research. However, the revisions highlighted in this post, and no doubt others too, will reverberate through research ethics work around the world for years to come.

Proposals And A Proposal

Proposals for next year’s International Creative Research Methods Conference are coming in, though more slowly than in previous years. I am trying not to have sleepless nights about this! We often get a big last-minute flurry so by the deadline – 6 December 2024 in case you were wondering – we may have enough proposals to make a conference. I hope so. If not, we will extend the deadline, and make it clear that we have done so, and why. (I’m never impressed with those social media posts that say ‘this conference has been so popular that we have extended the deadline for proposals’.)

I wonder whether the slower rate of submissions is because we said we were particularly interested in proposals for creative methods other than those for gathering data. This was in response to some of the feedback from this year’s conference, where people pointed out that most of the sessions focused on data gathering, and said they would like to know more about creative methods at other stages of the process. We usually have several sessions on writing, and some on analytic techniques – The Handbook of Creative Data Analysis demonstrates that researchers are doing a lot of creative work at the analytic stage. And I know from my own research and writing that creative work is also being done with research design, literature reviews and other kinds of context-setting, presenting findings, disseminating results, and so on.

That said, we do not expect, or want, to have a creative methods conference which includes nothing about creative data gathering. So if you have a method you’d like to share, but you’ve been holding back because you think we might not be interested – bring it on!

While I’m talking about ICRMC, I have a proposal for you, especially if you work outside the UK or have colleagues who do. I am currently in discussions with people in Australia and Canada about the possibility of holding ICRMCs in those countries – not instead of the UK, but as well. Whether those discussions will bear fruit remains to be seen. But I would love to see ICRMC spread its wings and take to the skies. So if you would like to host an ICRMC outside the UK, please get in touch, and I will give you all the help and information I can.

Musings On Language

Academics love a definition. Many journal articles, book chapters, even whole books include definitions of various terms. This is sensible because words are slippery. Defining terms helps us to use language with more precision. This makes it more likely that our readers will take from our writing what we had intended it to say, rather than attributing some other meaning to our work.

People attribute meanings to our work which we did not put there because everyone is always influenced by their own past experiences and present emotions. These lead some readers to reach different conclusions from other readers about the same piece of writing: even though they have read the same words in the same order, they bring different experiences to them.

Also, human memory is notoriously fallible. Think of something you read a few months ago. How much of it can you remember? Unless you are one of the rare people with a photographic or eidetic memory, I bet the answer is ‘not much’. This means that even when we define our terms, the impact of that definition will fade with time.

This has happened even to key research terms such as ‘statistical significance’. That term was originally intended to indicate how likely or unlikely a result was to have occurred by chance. However, lay people might read a newspaper article reporting that a research finding is ‘statistically significant’ and attribute the everyday meaning of ‘significance’, i.e. important or meaningful, to the findings. Indeed the journalist who wrote the article may have made the same attribution. Yet statistical significance was never intended to imply that a result was important or meaningful in everyday terms.

The statisticians who devised tests of statistical significance were careful to define their terms. Unfortunately the care they took was diluted over time, and the conflation of ‘significance’ the technical term with ‘significance’ the everyday word caused a multitude of errors and conflicts, ultimately leading to mass calls for its retirement as a technical term.
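
To see the gap between the two meanings in action, here is a minimal sketch in Python (the scenario, the 0–100 scale, and the numbers are invented purely for illustration, and it assumes NumPy and SciPy are available): with a large enough sample, a tiny difference between two groups can be ‘statistically significant’, i.e. very unlikely to have occurred by chance, while being far too small to be significant in the everyday sense.

    # Illustrative sketch only: statistical significance vs everyday significance.
    # The scenario and numbers are invented; assumes NumPy and SciPy are installed.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Two groups of 100,000 scores on a 0-100 scale whose true means differ by just 0.02.
    group_a = rng.normal(loc=50.00, scale=1.0, size=100_000)
    group_b = rng.normal(loc=50.02, scale=1.0, size=100_000)

    # A standard two-sample t-test asks how likely a difference this large would be by chance alone.
    t_stat, p_value = stats.ttest_ind(group_a, group_b)

    print(f"p-value: {p_value:.6f}")  # typically far below 0.05, i.e. 'statistically significant'
    print(f"difference in means: {group_b.mean() - group_a.mean():.3f}")  # a few hundredths of a point

The p-value only tells us the difference is unlikely to be chance; it says nothing about whether a few hundredths of a point matters to anyone.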

There are examples of this from other fields too. Emergency medical dispatchers in the US used to ask callers whether the patient was alert. This caused confusion and delay, which is not what anyone wants in a crisis. Enquiries revealed that ‘alert’ has a specific clinical meaning which is not understood in the same way by members of the public. Now dispatchers ask callers whether the patient is responding normally, which is much easier for most people to answer and still tells the dispatcher what they need to know.

There is a symbiotic relationship between language and thought. Language helps us to think; many of us think in language, at least some of the time; the language we use, hear, and read influences the thoughts we have. When we need to name something, such as a new research method we have devised, it is tempting to reach for a name with pizazz, a name that will be eye-catching and memorable. Perhaps those early statisticians chose ‘statistical significance’ for that very reason. However, experience shows that it makes more sense to choose a term which offers a description that is as simple, clear, and accurate as possible. Even then there are no guarantees that everyone will understand – but at least we have given it our best shot.

I Won An Award!

On Monday 29 July I got an absolutely astonishing email from the National Centre for Research Methods (NCRM). I had to read it about 12 times – all of the words and sentences made sense, but I couldn’t get the meaning to sink into my brain. Particularly the part that said “We would like to award you an NCRM Lifetime Achievement Award.” I was completely gobsmacked.

The following morning at 5 am I went from being fast asleep to wide awake, my heart racing and my brain in overdrive, thinking “I’m getting a lifetime achievement award!” I couldn’t tell anyone apart from my nearest and dearest, as the news was not to be publicised for another month. NCRM eventually went public on 29 August when I was away at the Autscape conference. The reaction on social media, and from the couple of people I mentioned it to at the conference, was hearteningly positive.

One interesting thing about getting a lifetime achievement award: once the news has sunk in and a few people have responded positively, it’s really good for counteracting impostor syndrome. When the critical voices start up in my head, I tell them “shut up, I got a lifetime achievement award” – and it works!

Image of me collecting my award in Manchester

The award ceremony was in Manchester as part of NCRM’s MethodsCon conference. It was in the early evening, two days after ICRMC and the day before I was going on holiday, so I was able to get there, which was nice. I arrived a bit early and chatted with a few people, including Jackie Carter, who I was delighted to meet in person after knowing her online for years.

I wasn’t the only award recipient; there were two other awards. One was for developing new methods, which went to a team of four researchers: Susie Weller, Emma Davidson, Ros Edwards and Lynn Jamieson. The other was for positively impacting the wellbeing of older adults, which went to doctoral student Naureen Meghani and her supervisor Joanna Hudson.

I was given the opportunity to say a few words, so I talked about my work on the International Creative Research Methods Conference, the Creative Research Methods in Practice book series, and the Journal of Creative Research Methods. My main point was that, with a little support, independent researchers can do a great deal. There were funders in the room; I hope they were listening.

I have worked quite closely with NCRM in a wide variety of ways over the last 10 years. They have been incredibly supportive of my work and that has been a huge help to me. They are set to lose their funding in a year’s time, which I think is a great shame, because there is still a considerable need for the kind of work they do, not least in supporting independent as well as other researchers. There is very little support available for independent researchers in the UK, and almost no access to research funding. I have argued for years that this is a waste of a national resource because independent researchers are agile and responsive, can do a lot with a small amount of funding, and have tiny overheads. I will continue to make this argument at every chance I get. As I’m now 60 years old, I doubt this will lead to real change in my own working lifetime, but I hope I can help to facilitate change for the next generation of researchers. If this award provides a stepping stone towards reaching that position, NCRM’s impact will be greater than they might currently expect.