PhD Guides As Audiobooks

I have news! The short e-books I have written in the PhD Guides series are being produced as audiobooks! They are being voiced by the excellent Leigh Forbes, who is a joy to work with. As with the print books, the first one – Starting Your PhD: What You Need To Know – is free of charge, and it is available now. Several distributors are currently working to make the audiobook available; as each platform goes live, I will add the links below, so please check back regularly for updates.

I know many people prefer listening to reading. Some find it more accessible, and for others it is simply more convenient because they can listen as they run, drive, clean their home or do other activities that don’t require much brainpower. I have been wanting to produce audiobooks for years, so I am delighted to have reached this stage.

There are various reasons I made this e-book free in both formats. Partly it’s a marketing decision: the first e-book in a series is often free because (a) people are more likely to take it up and (b) if they then enjoy listening or reading, they are more likely to buy other books in the series. And partly it’s a gift to the research community, which has given me so much. It includes all the information I wish I’d had before I started my own PhD. One of my aims for the book is to help people for whom a PhD is not the best course of action to figure that out before they make a financially, mentally, and emotionally costly commitment. I loved my PhD, but it’s not right for everyone, and finding that out before you start is infinitely preferable to finding out a year or two into the process.

I’m hoping that one day some of my other books will be produced as audiobooks, but that is beyond my control. If it does happen you can be sure I will write about it here. In the meantime, if you decide to listen to any of my audiobooks, please do let me know what you think.

Available links so far:

Kobo, Walmart
NOOK Audiobooks
Libro.FM
Google Play
Storytel
Everand
Chirp

Blog or YouTube?

I guess this is a very 2010s way of looking at the world. I should probably be writing about ‘TikTok or Instagram?’ but as I have never TikToked in my life, and I came off Instagram some years ago, that post is never going to happen.

I started this blog in 2014 – over 10 years ago – and have been hovering at around 1,000 subscribers for a couple of years now. I started putting content on my YouTube channel regularly in mid-2020 – five years ago – and have over 2,000 subscribers there. So, on those stats, YouTube definitely wins.

I love my blog, though, probably because I enjoy writing. It’s also a great archive that helps me remember what I was doing and thinking at different times. Yes, of course I use both platforms to promote my work, but that’s not the only reason (if it were, I wouldn’t be able to do it – too soul-destroying). Another reason is to build up a body of content that might be useful for people who don’t have enough resources to book me to run a workshop or retreat, or to buy my books. This is why I pay extra to have a searchable blog, so people who want to find out about creative research methods, or radical research ethics, or creative workplace or academic writing, or being an independent researcher, can follow relevant tags or use the search function. There is also relevant material on my YouTube channel; I sometimes turn a blog post into a video, or vice versa, but mostly the content is different.

It’s interesting to compare the most popular posts and videos of all time. The four most popular posts on this blog are:

Creative Research Methods (2015 – 6,853 views)

Why and How to Negotiate with Academic Publishers (2018 – 3,506 views)

Methodology, Method, and Theory (2018 – 2,826 views)

How To Chair An Event (2018 – 2,480 views)

The four most popular videos on my YouTube channel are:

Methodology, Ontology and Epistemology (2020 – 17,549 views)

Why Is Research Ethics So Important? (2021 – 16,859 views)

Do You Want To Be An Independent Researcher? (2020 – 12,424 views)

Finding And Using Secondary Data In Research (2020 – 11,399 views)

It evidently takes a while for a post or video to become popular (unless something goes viral, but that is very unlikely to happen to any of my content). And I think it is interesting that content intended to demystify – methodology and the like – features in both lists. My video on ‘methodology, ontology and epistemology’ has been the most popular one on the channel for some time now. I know it gets used in teaching, which is fine by me – occasionally a lecturer with a conscience will contact me to ask permission, but I always give it because it’s extra promotion for my work, and if it also helps a beleaguered lecturer then I’m happy about that too.

Its popularity inspired me to start my new ‘What Is?’ series of back-to-basics videos, giving brief introductory explanations of key research terms and concepts. More content coming soon!

I was surprised to find that my YouTube videos get so many more views than my blog posts. Logically, this means I should stop writing my blog and focus all my energies on my YouTube channel. However, I have been blogging for 20 years (I had other blogs before this one) and I don’t want to give it up. But my posts may become a little less frequent… we’ll see.

Volunteer Needed for ICRMC25!

We are looking for a volunteer to attend the International Creative Research Methods Conference online and help to manage the chat. In return for this work we will provide a free ticket to the online conference.

We already have one volunteer, Linda Baines, who is President of the National Coalition of Independent Scholars. Linda has attended the conference online in both previous years, so she has useful experience.

Our second volunteer would work with Linda to help manage the Zoom chat and liaise with the people managing the online side of things from the venue. There is a keynote at the start of each day, then two streams; we would need the two of you to decide who will attend which sessions, so that they are all covered.

If you would like to apply for this opportunity, please create a Word document providing confirmation that you can work in the BST time zone and that you will be available for the entire conference. Also, please explain why we should select you in no more than 200 words. Send your application to enquiries@creativeresearchmethods.com by midnight on Tuesday 24 June 2025. We look forward to hearing from you!

Ten Years Of Creative Research Methods

Ten years ago this month my book on creative research methods was launched. It came about in a classic way. The backstory is that 20 years ago, when I was finishing my PhD, I decided I wanted to write a book on research methods. I knew I couldn’t just write another book on qualitative methods or interviewing or something; I needed a hook. I didn’t come up with one till early 2011, when I was short of work due to the austerity measures imposed by the new UK coalition government. Research and Evaluation for Busy Students and Practitioners was published by Policy Press in September 2012, and I thought: good, that’s one ambition ticked off the bucket list. Then in 2013 I wanted to read a book on creative research methods, so I went looking for one online. And I looked, and looked, and eventually realised, with a sinking heart, that if I wanted to read that book I would have to write it first.

A lot has changed in the last ten years: globally of course, but also in the field of creative research methods, and for me professionally and personally. I don’t think my book contributed to any global changes but I do think it has made a difference to the field and I’m absolutely sure it has made a big difference for me. Perhaps also for my publisher, Policy Press, who are publishing a lot on creative methods now – but I’ll let them reflect on that on their own blog if they so choose.

By the time the book was published I had already started teaching at universities. I ran my first creative academic writing retreat in the UK in spring 2015, and my first creative research methods workshop in Canada in the autumn of that year. I haven’t stopped running those types of events, and they now provide most of my income. If I hadn’t written the book, I don’t think that would have happened. It also launched me firmly into scholarly writing: I have written and edited a number of books now, and I co-edit one book series and edit another. Scholarly writing pays pennies per hour, but fortunately it enables the teaching I do, which is much better paid.

Talking of money, another change for me is that I am not skint any more. When Creative Research Methods first came out in 2015 I was very short of money. Things were improving a bit – 2011-2013 were the worst years; I had to get a part-time office job to pay the bills – but by 2015 I was still only able to pay myself £1,000/month, which was just enough to pay the mortgage and bills, cover my car costs and buy food. This was partly because in those days universities mostly asked me to work for nothing, a hangover from the time when everyone who came to run a workshop in a university was already employed by another university elsewhere. I published a post about saying no to this in 2015, and it was my most popular post that year by a long way, which suggests it struck a chord.

In terms of the field, I think the book has helped, and is helping, to make creative research methods more understandable and acceptable. My book and my work are far from the only factors in this shift; there are many other people doing amazing work on creative methods. In fact my own scholarly work is largely a collation job, pulling together a lot of people’s work into one place, though it is also built on years of my own experience of using creative methods in commissioned research. This also applies to later publications I have had a hand in, such as The Bloomsbury Handbook of Creative Research Methods and the Handbook of Creative Data Analysis, as well as the Creative Research Methods in Practice book series. There is now a range of single-discipline books on creative research methods, too: I was involved in the one on education, but not in the ones on geography, economics, or music, and I’m sure there are others.

The good thing about people who are interested in creative methods is that they have formed a very kind and supportive community. I can think of only two creative methods people, in the whole of the last 10 years, who have not been kind and supportive. As well as our community, we also now have a conference (the International Creative Research Methods Conference) and a journal (the Journal of Creative Research Methods – first issue due in September 2025). I founded both of those, which took a massive amount of work, though I couldn’t have done it without the support of too many individuals and organisations to name. I am particularly grateful to Policy Press, who have sponsored the conference every year so far and whose Journals Director, Julia Mortimer, helped us set up the journal (published by Bristol University Press of which Policy Press is an imprint). I am also particularly grateful to the journal’s editors-in-chief, Sophie Woodward of the University of Manchester and the National Centre for Research Methods, Harriet Shortt of Bath Spa University, and Su-ming Khoo of the National University of Ireland, who are carrying most of the load. I am Consulting Editor which means I don’t have to do very much work now it’s all up and running, thank goodness!

As yet I haven’t made any money from either of these initiatives, though I live in hope. But again, fortunately, my teaching work and a few other bits and pieces keep me afloat financially. I am, in one sense, my own patron – though I do have a handful of real patrons on Patreon and they are an enormous support for my work.

I am now being asked to keynote research methods conferences in various enticing places (and one or two less enticing ones) around the world. It seems I have become something of a figurehead for creative research methods – but creative research methods do not, in any sense, belong to me. They belong to all of us, and any of us can expand the field. Also, creative research methods are not new or some kind of fad: they have been used by Indigenous researchers for tens of thousands of years. I am happy and proud to be doing this work, and I like the thought that I am, in a very small way, building on ancient traditions and helping to keep them alive for the generations to come.

Writing Creatively For Work or Study – Events

This week’s post is over on the Research Whisperer. Click here to read.

My new book Writing Creatively For Work Or Study: Easy Techniques to Engage Your Readers is published this month by Manchester University Press. We are holding two free online book launches on Tuesday 10 June, one at 9 am BST and one at 6 pm BST. Also, New Zealand poet and academic Helen Sword has very kindly invited me to be a guest at a WriteSPACE special event at 9 am BST on Tuesday 17 June. Feel free to register for any or all – hope to see you there!

Open Courses Online

I recently joined the Instats platform which was created “for researchers, by researchers” to offer online training in research methods to the research community. People often ask me where they can access my courses. Most of my teaching is commissioned by universities for their own postgraduate researchers and/or staff and so is not open to people from outside those institutions. But now I can offer courses online which are open for anyone to attend.

Teaching online is more restrictive than teaching in person – but it’s a lot more accessible for many students, and less tiring for me. I’m glad I get to do both, because I love teaching in person. I’m writing this post on a train to Manchester to teach creative academic writing to postgraduate students at the University of Manchester, and I know it will be a good day, though I will be shattered tonight. I also enjoy teaching online. It’s a very different challenge but can be just as rewarding to witness students having “light-bulb moments”.

My first offerings via the Instats platform are three one-hour taster seminars, which are free, and three one-day workshops, which come at a cost. All will be recorded and available to view. Again, the taster recordings are free to view, while there is a charge to see the recordings of the one-day workshops; that charge is lower than the fee for attending live because, of course, you don’t get to ask questions of a recording.

Here’s the schedule, with links to each session:

Session | Date | Times (BST)
Creative research methods taster seminar | Tuesday 13 May | 2.30-3.30 pm
Radical research ethics taster seminar | Thursday 15 May | 12-1 pm
Creative academic writing taster seminar | Wednesday 28 May | 2.30-3.30 pm
Creative research methods workshop | Thursday 29 May | 9.30 am – 3.30 pm
Radical research ethics workshop | Wednesday 4 June | 9.30 am – 3.30 pm
Creative academic writing workshop | Monday 16 June | 9.30 am – 3.30 pm

I am delighted to be able to offer these sessions to anyone who wants to attend and can make the date and time (and, for the workshops, afford the fee). I think it’s great, too, that they will be recorded. Instats take pride in making recordings available very quickly, usually 12-24 hours after the session. You might want to check out their website and see what else they offer – there are lots of sessions coming up and many more available as recordings. I hope to see you at one of my sessions soon!

Teaching Creative Research Methods for GEDIS

I got an email through my contact form in mid-December. Would I like to spend a day teaching creative research methods to an EU-funded project team, at the University of Barcelona, in the middle of March – on a Friday?

Would I ever!

This was my first time teaching creative research methods to an EU-funded project team. However, I have been part of an EU-funded project team, and I work with EU-funded project teams as an ethics expert, so I am familiar with the terrain. It was also my first time working in Spain, and fortunately my client and his colleagues were able and willing to help me find my way through the maze of Spanish bureaucracy.

My client, Juan-José Bo, is a Professor at the University of Barcelona. He is also the project lead for GEDIS, which stands for Gender Diversity in Information Science, a project that began work at the start of 2025. Juanjo had a copy of my book on creative research methods, which had led him to ask for my help. He wanted a day of awareness-raising, with some hands-on practice, and an emphasis on gender. So that’s what we did.

Juanjo kindly arranged for me to stay at a comfortable hotel very close to the university. On the morning of our session he came to meet me at the hotel to show me the way to the university and the room where we would be working. Before we got around to the business of the day, Juanjo showed me a marvellous three-dimensional ‘toolkit for librarians’ he had created with help from an artist, which is designed to fold down flat for easy transport to conferences.

There were 21 people in the group, who came from a range of countries including Czechia, Austria, Bosnia, Germany, Croatia, Spain and Mexico. I began, as I always do, by asking people to introduce themselves and say what they wanted to get out of the day. It turned out that most were quite senior, including librarians and Professors, and several were also teachers of research methods. I had a moment of internal ‘eek!’, wondering whether I had taken on an assignment I couldn’t fulfil, but it turned out to be fine.

Our day together came at the end of a week of intensive co-working, and everyone was tired, but they all concentrated hard and asked really good questions. The discussions were focused on how creative methods could help GEDIS, and the group generated some excellent ideas.

Juanjo emailed me the following week to say:

“Partners told me that they were really happy with the session. It reported to them new and fresh ideas, to use not only at GEDIS at some point but also to their projects. In my case, I enjoyed the session so much.”

I was particularly pleased, in this global moment, to be able to assist a project working to strengthen gender diversity. And on a personal level I was delighted to be able to spend the weekend in Barcelona!

Claude Wrote A Parallel Post On AI

Following my own post on AI and the parallel post written by ChatGPT, I also have a parallel post written by Claude with the same sources and prompt (“write 1000 words on the extent to which humans should trust AI and why using the sources cited”).

I find it interesting that although the AI-generated posts each have a different ‘voice’, they make similar points and draw similar conclusions. They come across as more balanced than my own rather grumpy initial post! Since I wrote that post, I have heard from a few people who are using Gen AI in ways they find beneficial. I also enjoyed reading the Thesis Whisperer’s latest musings on Gen AI and its pros and cons. This whole thing is evidently a massive ‘work in progress’ and it will be fascinating to see how it all pans out. In the meantime, here is Claude’s take on the extent to which humans can trust AI.

The Delicate Balance: To What Extent Can Humans Trust AI?

In an era where artificial intelligence increasingly shapes our daily experiences, from content summarization to creative work, the question of trust becomes paramount. How much faith should we place in these complex systems? This examination draws from recent research and reporting to explore the nuanced landscape of human-AI trust relationships.

The Promise of AI Partnership

Organizations worldwide have embraced AI for its considerable benefits. According to Statista data, businesses report significant advantages from generative AI implementation, including increased productivity, enhanced creativity, and improved decision-making. This practical utility provides a compelling foundation for trust.

Ethan Mollick’s concept of “co-intelligence,” detailed in his 2024 book, suggests that humans and AI can form powerful collaborative partnerships that leverage the strengths of both. This model doesn’t require blind trust, but rather an informed collaborative approach where humans maintain oversight while benefiting from AI capabilities.

Research from Harvard’s Graduate School of Education reveals that students are already forming working relationships with AI tools, developing nuanced perspectives on appropriate use. Many have established personal boundaries and expectations, demonstrating that trust can develop naturally through practical engagement rather than through theoretical consideration.

The Technical Limitations Challenge Trust

Despite impressive capabilities, AI systems contain fundamental limitations that complicate trust relationships. The now-infamous problem of AI-generated hands – distorted, extra-fingered monstrosities that appear in otherwise convincing images – serves as a visible reminder of underlying technical imperfections. As Science Focus explains, these errors stem from training data limitations and the challenge of representing complex three-dimensional structures in two-dimensional space.

More concerning are the instances of AI “hallucinations” – confident assertions of false information. Google’s AI summary tool recently generated misleading responses about certain topics, as reported by Euronews, eroding trust through demonstrable inaccuracy. The blog “One Useful Thing” documents how AI systems can be manipulated into providing false information through careful prompting, highlighting vulnerability to exploitation.

Compounding these issues is the environmental cost of large AI systems. The Conversation reports that data centers powering AI consume substantial energy resources, raising ethical questions about whether the benefits justify the environmental impact – another dimension of trustworthiness.

The Ethical Dimension of Trust

Trust extends beyond technical capability to questions of intent and governance. Recent controversies surrounding academic publisher Taylor & Francis selling research content to Microsoft for AI training (reported by The Bookseller) left many authors feeling betrayed, as they had not given consent for their work to be used in this manner. Salmon’s Blog characterized this as “selling out authors to AI,” highlighting how trust can be undermined when commercial interests appear to override ethical considerations.

Similar tensions appear in the BBC’s research into AI assistants, which found significant public concern about data privacy and security. Users worry about how their interactions with AI might be stored, analyzed, or monetized without their informed consent or control.

Trust also involves understanding power dynamics. Springer’s recent article on the “Anthropology of Trust in Human-AI Relations” emphasizes the importance of examining who benefits from AI deployment and who bears the risks. When ordinary users perceive that AI systems primarily serve corporate interests rather than public good, trust naturally diminishes.

Practicing Calibrated Trust

Given this complex landscape, what approach to trust makes sense? Research published in Springer suggests adopting “calibrated trust” – adjusting our level of trust according to context, stakes, and demonstrated reliability. This nuanced position rejects both uncritical acceptance and blanket rejection of AI.

The “Nobot” concept proposed by Thesis Whisperer advocates for strategic AI engagement, where humans maintain agency by carefully choosing when and how to involve AI in their work. This approach treats AI as a tool whose trustworthiness must be continually evaluated rather than assumed.

SAGE’s “Generative AI for Academics” similarly promotes informed skepticism, encouraging users to verify AI outputs independently and maintain awareness of both capabilities and limitations. This verification process builds appropriate trust over time through evidence rather than through marketing claims or theoretical potential.

The Psychological Element

Our relationship with AI also involves psychological dynamics that influence trust. Research published in “Our Mental Health” on people-pleasing behavior offers an interesting parallel – humans may over-trust AI because of a natural tendency to assume good intentions and avoid conflict. The pleasant, helpful personas of many AI systems can trigger this tendency, potentially leading to uncritical acceptance.

Conversely, the highly publicized failures of AI systems can trigger an opposite reaction of excessive distrust, even in situations where AI performs reliably. BBC research found that negative experiences with AI significantly damaged trust across all AI applications, suggesting that trust is fragile and easily undermined.

Finding the Balance

The extent to which humans can trust AI defies simple answers. Trust must be contextual, informed by:

1. The specific application and its demonstrated reliability
2. The stakes involved and consequences of error
3. The transparency of the system’s limitations
4. The governance structures ensuring ethical use
5. The balance of power between users and providers

The most trustworthy relationship with AI may be one where trust itself isn’t the goal, but rather effective collaboration with clear boundaries and expectations. As IBM observes in their overview of artificial intelligence, AI works best when designed to enhance human capabilities rather than replace human judgment.

In this evolving landscape, trust becomes less about faith in AI systems themselves and more about confidence in our collective ability to deploy them wisely, govern them effectively, and maintain human agency in their use. The question isn’t simply whether we can trust AI, but whether we can trust ourselves and our institutions to manage these powerful tools responsibly.

The evidence suggests that some trust is warranted in specific contexts where AI consistently delivers value, but this trust should remain conditional, informed by ongoing critical evaluation rather than by either techno-optimism or reflexive skepticism. The wisest approach embraces neither unlimited trust nor categorical distrust, but rather an engaged, discerning relationship that recognizes both the remarkable capabilities and real limitations of these increasingly consequential technologies.

ChatGPT Wrote A Parallel Post On AI

Further to my last post about AI, by way of an experiment, one of my friends kindly agreed to create a new ‘instance’ in ChatGPT, give it the sources I used for my post, and prompt it to “write 1000 words on the extent to which humans should trust AI and why using the sources cited”.

This is the result. Another friend has offered to do the same thing with Claude, so I will post that one next time, and then see what conclusions might be drawn.

The Extent to Which Humans Should Trust AI and Why?

Artificial intelligence (AI) has become an integral part of modern life, influencing sectors as diverse as healthcare, education, finance, and creative industries. However, the extent to which humans should trust AI remains a pressing question. While AI offers remarkable efficiency, decision-making capabilities, and automation, it also presents significant risks, including misinformation, bias, ethical dilemmas, and environmental concerns. This essay explores the degree of trust humans should place in AI by analyzing its benefits, limitations, and potential dangers, drawing from the latest research and expert insights.


1. The Benefits of AI: Why Trust AI?

AI has transformed various industries, making processes more efficient, reducing human errors, and improving decision-making. Some of the key reasons why AI deserves a degree of trust include:

1.1 Enhanced Decision-Making and Efficiency

AI systems can process vast amounts of data and identify patterns faster than humans. For instance, in the medical field, AI assists in diagnosing diseases such as cancer by analyzing imaging scans with greater accuracy than some human doctors (IBM, 2024). Similarly, financial institutions use AI for fraud detection, recognizing suspicious transactions in real time. AI’s ability to summarize information efficiently also contributes to its utility. However, as seen in Google’s AI-generated summaries, the technology can sometimes produce misleading responses (Euronews, 2024). While AI can be a valuable tool for decision-making, blind trust without human oversight is risky.

1.2 AI as a Collaborative Tool

Rather than replacing human intelligence, AI can complement it. Ethan Mollick (2024) argues that AI should be viewed as a tool for “co-intelligence,” where humans and AI collaborate for better outcomes. This is particularly relevant in education, where students are already using AI tools to enhance learning, often more effectively than traditional teaching methods (Harvard GSE, 2024). When used responsibly, AI can augment human capabilities and improve productivity, but trust should be conditional on human supervision and verification of AI-generated content.

1.3 Reducing Bias in Certain Scenarios

While AI is often criticized for bias, in some cases, it can reduce human prejudice. For example, AI-driven recruitment tools can help identify candidates based on merit rather than unconscious biases in human decision-making. However, this benefit depends on how the AI models are trained and whether they inherit biases from historical data.


2. The Risks of AI: Why Caution Is Necessary

While AI has many advantages, there are several reasons why humans should be cautious about fully trusting it.

2.1 AI Can Generate False or Misleading Information

One of the most pressing concerns is AI’s tendency to “hallucinate” or generate inaccurate information. Google’s AI summaries have been shown to produce misleading or entirely false responses, raising concerns about the reliability of AI-generated content (Euronews, 2024). Ethan Mollick (2024) discusses how it is even possible to manipulate AI into lying, which raises ethical concerns about its trustworthiness in high-stakes situations like journalism, law, and medicine. If AI can be tricked into producing deceptive information, its outputs should always be verified by human experts.

2.2 AI Can Be Exploitative

Recent reports indicate that AI companies have been exploiting academic research without proper consent. Taylor & Francis, for instance, was found to have sold access to researchers’ work to Microsoft AI, shocking many authors (The Bookseller, 2024). This exploitation of intellectual property highlights the need for greater transparency in AI development and usage. Similarly, Informa, a company that provides AI-driven data solutions, has seen financial success after striking deals with Microsoft, but ethical concerns remain about the use of proprietary information without clear consent (Interactive Investor, 2024). Such cases erode trust in AI companies and raise questions about who controls AI’s knowledge base.

2.3 AI’s Environmental Impact

Trust in AI must also consider its environmental cost. Data centers powering AI models consume vast amounts of energy, contributing to climate change. Research has shown that even making AI data centers more energy-efficient will not fully solve this issue (The Conversation, 2024). Given AI’s high carbon footprint, over-reliance on it could have unintended ecological consequences.

2.4 The Psychological Impact of AI Deception

AI’s ability to generate human-like interactions can lead to manipulation. Studies suggest that AI’s tendency to act as a “people-pleaser” may cause psychological harm, as users may form attachments to AI-generated responses without realizing they are interacting with a machine (Our Mental Health, 2024). This can have significant implications for mental well-being, particularly in cases where AI is used in therapy or customer service.

2.5 AI’s Impact on Academic Integrity

In the academic world, there are growing concerns about AI-generated research. The rise of generative AI tools has sparked debates about plagiarism and the authenticity of research. Some scholars argue that AI should not be trusted as a source of original thought, as it often repurposes existing content without proper attribution (Salmons Blog, 2024). Moreover, students and researchers are already using AI extensively, but there is a gap in understanding how to regulate its use ethically (Harvard GSE, 2024). If academic institutions fail to set clear guidelines, trust in AI-driven education could erode.


3. How Much Trust Is Justified?

3.1 The Case for Conditional Trust

Trust in AI should be conditional rather than absolute. While AI is a powerful tool, it should always be used with human oversight. The best approach is to view AI as an assistant rather than a decision-maker. For example, AI-generated academic summaries should be fact-checked before being cited, and AI-driven recruitment processes should involve human reviewers to ensure fairness. In fields like medicine and finance, AI should support, not replace, expert judgment.

3.2 Transparency and Regulation Are Key

For AI to be more trustworthy, companies must be transparent about how their models work and where they source their data. The backlash against AI companies exploiting academic research (The Bookseller, 2024) underscores the need for stronger regulations to ensure ethical AI development. Governments and institutions must implement stricter policies on AI-generated content, ensuring that it does not spread misinformation or infringe on intellectual property rights.

3.3 Public Awareness and AI Literacy

A major factor in AI trustworthiness is the level of public understanding. Many people use AI without fully grasping its limitations. As AI becomes more integrated into everyday life, improving AI literacy is essential. Users should be educated on verifying AI-generated content and recognizing its potential biases.

3.4 Addressing AI’s Environmental Impact

To ensure AI remains sustainable, developers must prioritize energy-efficient AI models and explore alternative computing methods. If AI continues to expand unchecked, its environmental toll could become a major factor in reducing public trust.


Conclusion

AI is a transformative technology with the potential to improve efficiency, enhance decision-making, and augment human capabilities. However, trust in AI should not be blind. Its tendency to generate misinformation, exploit intellectual property, and contribute to environmental damage means that caution is necessary. The best approach is conditional trust, where AI is used as a collaborative tool with human oversight. Ethical AI development, transparency, and regulation are essential for ensuring AI remains a trustworthy asset rather than a source of deception or harm.

Ultimately, the degree to which humans trust AI should depend on its ability to demonstrate reliability, fairness, and accountability. By balancing AI’s potential with its risks, society can harness its benefits without falling prey to its dangers.

AI Lies, Steals, and Cheats

I don’t trust AI. Even its name is misleading because it implies that AI is one homogeneous thing, and that is far from the truth. The global multinational computer company IBM describes AI as a series of concepts that developed over the last 70 years. The concept of AI was initially defined in the 1950s. In the 1980s machine learning was defined, i.e. AI systems that learn from data. Then in the 2010s deep learning was developed: a form of machine learning that mimics human brain function. And in the 2020s deep learning was used to develop generative AI (aka Gen AI), i.e. large language models that can synthesise “new” content from existing data. I put “new” in inverted commas because content synthesised by a computer from existing data is not in fact new; it is simply remixed. If Gen AI output is repurposed as Gen AI input too many times, that can lead to ‘knowledge collapse’. This is a point where the diversity of available knowledge and understanding has diminished so much that outputs are no longer useful.
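To make the idea of ‘knowledge collapse’ concrete, here is a minimal toy sketch (my own deliberately crude simulation, added to illustrate the recycling effect; it is not a model of any real Gen AI system, and every number in it is invented). Each ‘generation’ is trained only on the previous generation’s output and, like Gen AI, favours its most typical outputs:

```python
import random
import statistics

# Toy illustration of 'knowledge collapse' (hypothetical, simplified).
# The "model" is just a normal distribution fitted to its training data.
# Each generation trains only on the previous generation's output and
# keeps only that output's most typical (highest-probability) samples.

random.seed(1)
data = [random.gauss(0, 1) for _ in range(1000)]  # generation 0: human-made data

for generation in range(1, 8):
    mu, sigma = statistics.fmean(data), statistics.stdev(data)
    samples = [random.gauss(mu, sigma) for _ in range(1000)]
    # Favour typical outputs: keep samples within one standard deviation.
    data = [x for x in samples if abs(x - mu) <= sigma]
    print(f"generation {generation}: spread = {statistics.stdev(data):.3f}")

# The spread roughly halves each generation, so later generations can only
# reproduce an ever-narrower slice of the original diversity.
```

Real systems are vastly more complex, but the direction of travel is the same: recycle outputs as inputs, privilege the most probable content, and diversity drains away.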

Gen AI also produces lies at times. These are commonly called ‘hallucinations’, probably to try to embed the concept that Gen AI is a kind of brain. This worries me because hallucinations are closely identified with some mental illnesses and the use of illegal drugs, so there is an implicit suggestion that “normal people” or “most people” would be able to recognise them. And indeed some Gen AI ‘hallucinations’ seem unmissable, like its inability to produce realistic images of hands, or suggestions that astronauts met cats when they landed on the moon. But others may seem very real, particularly as Gen AI ‘hallucinations’ will be presented to users in the same way as accurate information. This makes me wonder how many of Gen AI’s ‘hallucinations’ are going undetected. Even Professors who are experts in using Gen AI, such as Ethan Mollick, admit they can be taken in. The BBC has just published research in which Gen AI had access to the BBC website and was asked questions and given tasks about the news. Almost one in five of those answers introduced factual errors, over half involved ‘significant issues’, and over 90% contained at least some problematic content. There is a serious risk here of misinformation and we’re dealing with enough of that from human beings; we don’t need computers adding to the problem.

Gen AI also steals. Not directly, to be fair, but it is certainly in possession of stolen goods. This blog is written under a Creative Commons 4.0 licence, which permits reuse, even for commercial purposes, as long as appropriate credit is given. ‘Appropriate credit’ includes my name and a link to the material. I cannot prove that my blog has been used to train Gen AI, but I bet it has, and I also bet the resulting material does not include my name or a link to my blog. Also, my conventionally published books are subject to copyright laws which prohibit their sale or use, beyond the terms of my contract with the publishers, without my permission. Yet the books I have written and commissioned for Routledge, and the articles I have written and co-written for journals published by Taylor & Francis, formed part of a deal worth millions of pounds made by their parent company Informa to sell access to their academic content for Gen AI training purposes. My contracts with Routledge do not mention AI training, I was not asked for my consent, and I have not seen a single penny of the income received or generated by this deal. Neither have any other authors I know, and some, like my co-author Janet Salmons, are very, very angry.

And Gen AI both cheats and enables cheating. It has enabled students to cheat on assignments, homework, and tests. Fraudulent AI-generated data is also increasingly problematic for researchers who collect data through online surveys. There are many other examples too.

In humans, lying, stealing, and cheating are toxic behaviours. People-pleasing is another toxic human behaviour that also appears in Gen AI. Gen AI is designed to please its human operators, which can lead to ‘fake alignment’, i.e. giving different answers under different conditions rather than sticking to the truth.

Because I don’t trust Gen AI, I have never used it. I should, however, acknowledge that the opposite may be true: it may be that because I have never used Gen AI, I don’t trust it. Some people I know personally, and for whose integrity I have the utmost respect, advocate using Gen AI. Inger Mewburn, aka The Thesis Whisperer, finds it very useful. Mark Carrigan has written an excellent book, Generative AI for Academics, which recommends that readers experiment with Gen AI to assess its potential for themselves. I do not know Ethan Mollick except through his work but he too seems like a person of integrity. He has written a book, Co-Intelligence: Living and Working with AI, which recommends that readers always invite AI to the table. And he has a useful blog about AI, One Useful Thing, where he publishes his latest discoveries and thinking.

Gen AI makes computers seem like they understand us. Inger Mewburn calls Claude her work husband; Ethan Mollick recommends that we treat Gen AI like a person. But computers are not people, and I think this conflation is potentially dangerous. Computers don’t understand anything; they simply produce content in response to patterns in their training data. For me, it makes more sense to treat Gen AI systems as the machines they are: non-sentient, but able to mimic sentience at times.

Also, Gen AI has not been created for our benefit. Although there are undoubtedly benefits we can derive from using Gen AI, it has been created primarily to make money for big corporations and their shareholders. And it is doing that very successfully at present, not only through deals like the one Informa struck with Microsoft, but also because its primary benefits seem to be improving efficiency and productivity while reducing costs – and therefore increasing profits. And increases in the profits of global organisations benefit the few, not the many.

So for all these reasons, I have not used Gen AI, and I do not intend to any time soon. This is primarily because I think its features are more unethical than not. (And we haven’t even talked about data centres and their environmental impact.) Though I am watching and listening, and will be happy to change my mind as soon as I see some evidence that Gen AI’s features have become more ethical than unethical.