AI Lies, Steals, and Cheats

I don’t trust AI. Even its name is misleading, because it implies that AI is one homogeneous thing, and that is far from the truth. The global computer company IBM describes AI as a series of concepts that have developed over the last 70 years. The concept of AI was first defined in the 1950s. Machine learning, i.e. AI systems that learn from data, followed in the 1980s. Then in the 2010s came deep learning, a form of machine learning loosely modelled on the human brain. And in the 2020s deep learning was used to develop generative AI (aka Gen AI), i.e. large language models that can synthesise “new” content from existing data. I put “new” in inverted commas because content synthesised by a computer from existing data is not in fact new; it is simply remixed. If Gen AI output is repurposed as Gen AI input too many times, that can lead to ‘knowledge collapse’: a point where the diversity of available knowledge and understanding has diminished so much that outputs are no longer useful.

Gen AI also produces lies at times. These are commonly called ‘hallucinations’, probably to try to embed the idea that Gen AI is a kind of brain. This worries me because hallucinations are closely identified with some mental illnesses and the use of illegal drugs, so there is an implicit suggestion that “normal people” or “most people” would be able to recognise them. And indeed some Gen AI ‘hallucinations’ seem unmissable, like its inability to produce realistic images of hands, or suggestions that astronauts met cats when they landed on the moon. But others may seem very real, particularly as Gen AI ‘hallucinations’ are presented to users in exactly the same way as accurate information. This makes me wonder how many of Gen AI’s ‘hallucinations’ go undetected. Even professors who are experts in using Gen AI, such as Ethan Mollick, admit they can be taken in. The BBC recently published research in which Gen AI assistants were given access to the BBC website and asked questions about the news. Almost one in five of their answers introduced factual errors, over half involved ‘significant issues’, and over 90% contained at least some problematic content. There is a serious risk of misinformation here, and we’re dealing with enough of that from human beings; we don’t need computers adding to the problem.

Gen AI also steals. Not directly, to be fair, but it is certainly in possession of stolen goods. This blog is published under a Creative Commons Attribution 4.0 licence, which permits reuse, even for commercial purposes, as long as appropriate credit is given. ‘Appropriate credit’ includes my name and a link to the material. I cannot prove that my blog has been used to train Gen AI, but I bet it has, and I also bet the resulting material does not include my name or a link to my blog. Also, my conventionally published books are subject to copyright laws which prohibit their sale or use, beyond the terms of my contracts with the publishers, without my permission. Yet the books I have written and commissioned for Routledge, and the articles I have written and co-written for journals published by Taylor & Francis, formed part of a deal worth millions of pounds in which their parent company Informa sold access to its academic content for Gen AI training purposes. My contracts with Routledge do not mention AI training, I was not asked for my consent, and I have not seen a single penny of the income received or generated by this deal. Neither have any other authors I know, and some, like my co-author Janet Salmons, are very, very angry.

And Gen AI both cheats and enables cheating. It has enabled students to cheat on assignments, homework, and tests, and fraudulent AI-generated responses are an increasing problem for researchers who collect data through online surveys. There are many other examples too.

In humans, lying, stealing, and cheating are toxic behaviours. People-pleasing is another toxic human behaviour that also appears in Gen AI. Gen AI is designed to please its human operators, which can lead to ‘fake alignment’, i.e. giving different answers under different conditions rather than sticking to the truth.

Because I don’t trust Gen AI, I have never used it. I should, however, acknowledge that the opposite may be true: it may be that because I have never used Gen AI, I don’t trust it. Some people I know personally, and for whose integrity I have the utmost respect, advocate using Gen AI. Inger Mewburn, aka The Thesis Whisperer, finds it very useful. Mark Carrigan has written an excellent book, Generative AI for Academics, which recommends that readers experiment with Gen AI to assess its potential for themselves. I do not know Ethan Mollick except through his work, but he too seems like a person of integrity. He has written a book, Co-Intelligence: Living and Working with AI, which recommends that readers always invite AI to the table. And he has a useful blog about AI, One Useful Thing, where he publishes his latest discoveries and thinking.

Gen AI makes computers seem like they understand us. Inger Mewburn calls Claude her work husband; Ethan Mollick recommends that we treat Gen AI like a person. But computers are not people, and I think this conflation is potentially dangerous. Computers don’t understand anything; they simply produce content in response to patterns in their training data. For me, it makes more sense to treat Gen AI tools as the machines they are: non-sentient, but able to mimic sentience at times.

Also, Gen AI has not been created for our benefit. Although there are undoubtedly benefits we can derive from using Gen AI, it has been created primarily to make money for big corporations and their shareholders. And it is doing that very successfully at present, not only through deals like the one Informa struck with Microsoft, but also because its primary benefits seem to be improving efficiency and productivity while reducing costs – and therefore increasing profits. And increases in the profits of global organisations benefit the few, not the many.

So for all these reasons, I have not used Gen AI, and I do not intend to any time soon. This is primarily because I think its features are more unethical than ethical. (And we haven’t even talked about data centres and their environmental impact.) But I am watching and listening, and I will be happy to change my mind as soon as I see some evidence that Gen AI’s features have become more ethical than unethical.
