Reviewing Work by Indigenous Scholars

A year ago I launched my book on research ethics which draws on the work of Indigenous researchers from around the world, setting the Indigenous research paradigm and literature side-by-side with the Euro-Western research paradigm and literature. I state in the book that I am not an expert on Indigenous research or ethics. And I never will be – I am a student of the literature, aiming to decolonise my own thought and practice. When I was writing the book I realised that after it came out, Euro-Western institutions would try to position me as an expert by offering invitations to speak about and review the work of Indigenous scholars. And indeed they have.

I have turned down all invitations to speak that would constitute me speaking for Indigenous scholars, and I will continue to do so. I tell whoever has invited me that Indigenous researchers and scholars need to take these assignments, and give them pointers on how to contact suitable people. I can, and do, speak about Indigenous research and ethics in the keynotes and workshops I give. For example, when I was asked to focus on the history of creative research methods in my keynote for the recent Manchester Methods Fair, I included what I know of the history of Indigenous research. I know some Indigenous scholars think I shouldn’t speak on this topic at all, while others call for inclusion of their work in Euro-Western scholarly spaces. I am working to respond to these calls because my own view, currently, is that dialogue is more important than segregation.

Reviewing written work is a different matter. I have been asked, by prominent Euro-Western academic journals, to review articles by Indigenous scholars. Here is an example of actual email correspondence I have had with such a journal (which I will not name as that seems unfair):

Me: Hi. I can review this if you can confirm that at least one of the other reviews will be done by an Indigenous scholar/researcher. I’m not sure COPE has caught up with the ethical aspects of Indigenous scholarship, research, and publishing. Essentially, it’s not ethical for non-Indigenous people to make pronouncements about anything to do with Indigenous issues without Indigenous input. I’m hoping you’re already aware of this and have one or more Indigenous reviewers lined up – in which case, I’m in.

Journal editor: We understand the importance of this and I have forwarded your concerns on to the internal editorial team to ask them for further information. However, please be aware that we do operate our external peer reviews on a largely blind basis in terms of names, background etc. For example, when the authors receive the reviewer’s comments they do not see the names of the reviewers. I only receive the names of the reviewers from the editorial team who are the ones with the in depth knowledge concerning the reviewer’s research specialties etc. Thus, at this point I’m unsure as to the backgrounds of the reviewers we have invited as I only communicate with them through the manuscript and e-mail system we use.

Me: I take your point about blind peer reviews. This of course is in direct opposition to the Indigenous ethical principle of accountability which I expect your author has addressed in their article. My own engagement with the Indigenous methods literature, plus a small amount of work directly with Indigenous researchers and scholars, has brought me to my current position. This is that I won’t act as any kind of authority on Indigenous issues unless I know for sure that Indigenous people are involved at the same level. And ‘authority’ includes peer reviewing.

Having solely non-Indigenous people act as authorities on Indigenous issues is analogous to having solely men act as authorities on women’s issues. I’ve fought against the latter all my life. It would be hypocritical of me then to take an equivalent stance in another arena.

Internal editorial team rep: Thank you for this, and for the important point you raise. Supporting and finding a space for indigenous scholarship and methodological discussion is something that, as a new editorial team, we take seriously and are currently discussing. We will endeavour to recruit an indigenous reviewer (with the recognition with all that is bound up with this category and how often it is a little too broad for each context) and will be reviewing policy on this matter at our next meeting.

This did nothing to reassure me so I declined to review.

From dr.whomever on Instagram, aka Em Rabelais from the University of Illinois in the US, I have recently learned that the preoccupation of Euro-Westerners with the ‘evidence base’ is colonialist and gets in the way of a lot of anti-oppressive work. Many Indigenous peoples have a different view of evidence: for example, if someone has lived through a phenomenon, event, or relationship, they know about it and so can provide evidence. In Euro-Western cultures, we accept this kind of evidence to convict people of crimes and call it a ‘witness statement’, but we will not accept it in research where we dismiss it as ‘anecdote’. This seems to me an anomaly, and one I have never understood.

Imposing this approach to research on people from other cultures who take a different view, as dr.whomever says, is epistemic violence. Last week I spotted this tweet by Grieve Chelwa from the University of Cape Town in South Africa:

I’m now wondering whether I should accept invitations to review because at least I understand this and have some knowledge of Indigenous ethical principles. But I’m also aware that the little knowledge I have can be a dangerous thing. And I don’t want to end up being seen as “an expert” on Indigenous scholarship, or even “a go-to person”.

In an ideal world, I would like Euro-Western and Indigenous scholars to review each other’s work with a good understanding of each other’s perspectives. I was very grateful to receive a review of the draft manuscript of my book on research ethics from Indigenous scholar Deborah McGregor from York University in Canada, who waived anonymity to enable dialogue, and was helpfully constructive with her criticism and generous with her praise. However, in our far-from-ideal world, I recognise that Indigenous scholars have higher priorities than reviewing the work of a privileged Euro-Western scholar.

I think waiving anonymity would help a lot in these situations. I would be happier to review the work of Indigenous scholars if I knew they were happy for me to review their work, and that we could have a dialogue to ensure mutual understanding.

Having said all that, I definitely want to support Indigenous researchers (and other marginalised researchers) whenever I can do so ethically. But figuring out when and how to do that is not straightforward. If you have any ideas or suggestions to contribute I’d love to read them in the comments.

This blog, and the monthly #CRMethodsChat on Twitter, is funded by my beloved patrons. It takes me at least one working day per month to post here each week and run the Twitterchat. At the time of writing I’m receiving funding from Patrons of $44 per month. If you think a day of my time is worth more than $44 – you can help! Ongoing support would be fantastic but you can also make a one-time donation through the PayPal button on this blog if that works better for you. Support from Patrons and donors also enables me to keep this blog ad-free. If you are not able to support me financially, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!

To Cite Or Not To Cite Your Friends

One of the things I love about my scholarly activity is reading the work of people I know and like. I tweeted about this a while ago:

And that was indeed how I felt. The people I tagged in that tweet are all people I have shared social as well as professional space with, and I would count them, more or less, as friends. But I’ve been thinking about this recently, and wondering… is it a good thing to cite your friends’ work? Or is it a form of cronyism?

Cronyism is a dirty word, hurled at politicians and others who are seen to be giving jobs to friends or relatives. Yet in the small businesses I see around me, it seems absolutely natural to give jobs to people you know and have faith in, and those people tend to be friends or family. Why would you trust a stranger with your livelihood? In normal human terms it doesn’t make sense.

Yet we’re supposed to treat people and their work equally and on merit. Even the law says so, here in the UK at least, and in many other countries too. But I’m sure plenty of my readers, like me, have tales from inside and outside academia of times when this hasn’t happened. For example, I know an IT expert, I’ll call her Jade, who was asked by a local charity to help them recruit an IT professional. The charity had about 60 staff and really needed in-house IT support. Jade worked with them to prepare a job description, person specification, and advertisement, then she helped with shortlisting and interviewing. I saw her soon after the interviews and she was fuming. ‘I don’t know why they even asked me,’ she said. ‘They took no notice of what I said, they just appointed the person they already knew. Who was not the best person for the job.’

In theory scholars should treat academic literature equally and on merit, though there are debates about what ‘equal’ means here. I regularly see – and support – calls for positive discrimination, to ensure that women, people of colour, and others who struggle to get their voices heard are cited by those with more privilege. And I try to do this. But when I am writing myself, I feel a real pull to cite work by my friends. I like spending time in their company, whether across a café table or as a reader of their work. I want to share their ideas which are often kin to my own. I feel encouraged by them; they inspire me to do my best, whether through their physical presence or their written words.

I know that I should find and read and cite writing which contradicts my own and which I disagree with. This is necessary intellectual work. I tell students how important it is, and when I do it myself I feel clever and a bit smug. But when I cite my friends I feel loving and loved, which are much nicer feelings. And I hate it when I read something by a friend which I can’t cite, not because it’s poor quality (my friends don’t write bad stuff!) but because it doesn’t fit with the work I’m doing.

We can’t separate our emotion from our intellect, whether we’re interviewing people for a job, or reading scholarly writing with a view to citing it ourselves, or simply taking a walk. So maybe we should stop pretending we can make that separation, or even that it’s somehow desirable. Perhaps it’s time to give feelings and thoughts equal billing in our decision-making, and to acknowledge this in our writing and other work. Those who practise reflexivity advocate this, but I don’t remember anyone I’ve read writing about the ethical and emotional aspects of citing (or not citing) work by your friends. I had a look online and there’s very little written about this. I did find one interesting recent open access article from the field of economics, by fellow independent researcher Steven Payson. He points out that if you cite your friends in academic journal articles, the editors are more likely to pick them as reviewers, which can work in your favour. His article also states that close friends may ‘cross an ethical line’ and game the metrics system by citing each other as much as possible for mutual gain.

These are interesting perspectives on academia, but as an independent researcher I find they don’t apply directly to me. Also I’m working on a book, not a journal article. So I guess what I need to do is get my emotion and my intellect working in tandem. They already do, to some extent; however much I love a friend, if they write rubbish I’m not going to cite their work. Also it’s not as if I only cite my friends. But I do recognise that the pull to spend time with the written work of people I like is strong, as is the wish to cite their work. This may be skewing me away from other potentially useful sources. So I need to aim for a balance: cite my friends’ work where relevant, be sure to seek out opposing views, and cite the work of lots of people I don’t know. Especially women and people of colour. That’s what I think I’ll do. As always, though, alternative views and counter-arguments are welcome in the comments.

This blog, and the monthly #CRMethodsChat on Twitter, is funded by my beloved patrons. It takes me at least one working day per month to post here each week and run the Twitterchat. At the time of writing I’m receiving funding from Patrons of $23 per month. If you think a day of my time is worth more than $23 – you can help! Ongoing support would be fantastic but you can also make a one-time donation through the PayPal button on this blog if that works better for you. Support from Patrons and donors also enables me to keep this blog ad-free. If you are not able to support me financially, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!

The Ethics of Independent Research Work #1

I guess we all know by now that I bang on a fair bit about research ethics, but I haven’t written about the ethical aspects of working as an independent researcher. I have come up with ten ethical principles for indie researchers. Many of these no doubt apply to other forms of self-employment too, but they definitely all apply to independent research work. This post contains the first five principles; I will post the other five next week.

  1. Be honest about what you don’t know

If a client says, ‘You know the legislation that…’ and you don’t, it’s best to say so. It can be tempting to nod while making a mental note to look it up online later, but that can lead to disaster. People often fear that saying they don’t know something will make them look stupid, but paradoxically the reverse is true. If you are clear about what you do know and honest about what you don’t, you will build trust with your clients much more quickly and effectively.

  2. Be clear about your capacity

Allied to this: don’t take on work you haven’t got time to do, because that won’t do anyone any favours. You won’t produce your best work for your clients, and you’ll end up burned out. OK, there are times when you may choose to work at maximum capacity for a short time, e.g. as one contract ends while another begins, or to fit in a quick piece of work for a valued client. But keep these brief and infrequent, and make sure you build in recovery time. Independent research is a great career (at least, in my view), but no career is worth damage to your health and relationships.

  3. Charge a fair rate for the job

If possible, find out what the going rate is, and charge that. The going rate will vary across sectors and between countries. I have written before about how I charge for work: in brief, I charge less for charities and longer projects, more for universities, governments, and work I don’t really want to do.

Also, don’t take on jobs with inadequate budgets, unless you’re desperate for the money and prepared to accept a very low day rate. I’ve been offered a three-year national evaluation with a total budget of £5,000. Perhaps someone ended up doing that work for that money, but they would either have done a very poor job or effectively accepted an extremely low day rate.

  4. Don’t accept work on an unethical basis

One potential client rang me towards the end of the financial year to ask if I could invoice her for several thousand pounds that she had left in her budget. She said she was a bit busy, so could we sort out what I would do for the money at a later date? I didn’t know her so I asked why she had rung me. She told me she had wanted person A, but they were too busy so they suggested person B, who couldn’t take it on either and suggested me. Nowadays I would probably say a simple ‘no’, but it was early in my career, and person B was quite influential. I agreed to invoice, but only after meeting with my potential client to decide whether we could work together and what I would do for her.

Another time a commissioner rang me to ask me to evaluate a service because he wanted to close it down. I said I would evaluate the service if he wished, but I would not pre-determine the findings; they would be based on my analysis of the data I gathered. He agreed to this. I did the evaluation, and found – unequivocally – that the service was highly valued and doing necessary work. The commissioner paid my invoice, then found someone else to do another evaluation saying the service should be closed down, whereupon he closed it down. Again, with the benefit of hindsight I probably should have said ‘no’ to the assignment, but I naïvely thought that if I did the research the commissioner would abide by the findings.

  5. Don’t take work outside your areas of expertise

You may have more than one area of expertise. I have a few: children/young people/families, housing/homelessness, substance misuse, volunteering, service user involvement, third sector, training. Each of these areas formed part of my professional work before I became an independent researcher.

Earlier this decade I got an email asking me to do some work around learning disability. I replied, explaining that it was not one of my areas of expertise, and saying I didn’t think I was the best person for the job. The potential client came back saying they thought I was right and apologising for having bothered me. (I didn’t mind. I never mind answering queries about possible paid work.)

Oddly enough, a few weeks later I got another email, from someone completely different, asking me to do some work around learning disability. After rolling my eyes and thinking about buses, I sent a similar reply. This time the potential client came back saying that I sounded perfect for the piece of work they wanted to commission. They thought someone with a good knowledge of research methods but little knowledge of learning disability would bring a usefully fresh perspective to the problems they were trying to solve. Which is further evidence for (1) above.

So there you have the first five principles of ethical research work, according to me. Come back next week for the other five.

Academic taboos #1: what cannot be said

An earlier version of this article first appeared in Funding Insight in summer 2017; this updated version is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Academia is a community with conventions, customs, and no-go areas. These vary, to some extent, between disciplines. For example, in most STEM subjects it is taboo for research authors to refer to themselves in writing in the first person. This leads to some astonishing linguistic contortions. Conversely, in arts disciplines, and increasingly in the humanities and social sciences, it is permissible to use more natural language.

It seems, though, that some conventions exist across all disciplines. For example, conference “provocations” are rarely provocative, though they may stretch the discussion’s comfort zone by a millimetre or two. Similarly, conference “questions” are rarely questions that will draw more interesting and useful material from the speaker. Instead, they are taken as opportunities for academic grandstanding. Someone will seize the floor, and spend as long as they can get away with, effectively saying: “Look at me, aren’t I clever?” I have found, through personal experiment, that asking an actual question at a conference can cause consternation. I confess it amuses me to do this.

Perhaps the most interesting conventions are those around what cannot be said. Rosalind Gill, Professor of Cultural and Social Analysis at City University of London, UK, has noted the taboo around admitting how difficult, even impossible, it can be to cope with the pressures of life as an academic (2010:229). Consider the airy tone when a colleague is heard to say: “I’m so shattered. The jobs on my to-do list seem to be multiplying. Haha, you know how it is.” Such statements can be a smokescreen for serious mental health problems.

A journal article published in 2017 by the theoretical physicist Oliver Rosten made a heartfelt statement about this in its acknowledgements, dedicating the article to the memory of a late colleague, and referring to “the psychological brutality of the post-doctoral system”. Several journals accepted the article for its scientific quality but refused to publish the acknowledgements in full; it took Rosten years to find a journal that would publish what he wrote. He has left academia and now works as a Senior Software Developer at Future Facilities Ltd in Brighton, UK.

Another thing that cannot be said, identified by Tseen Khoo, a Lecturer in Research Education and Development at La Trobe University, Melbourne, Australia, is that some academic research doesn’t need funding; it just needs time. This is anathema because everyone accepts that external funding makes the academic world go round. But what if it didn’t? What if student fees, other income (e.g. from hiring out university premises in the holidays), and careful stewardship were enough? What if all the time academics spent on funding applications, and making their research fit funders’ priorities, was actually spent on independent scholarship? It seems this is not only unsayable but also unthinkable. One of Khoo’s interlocutors described this as “a failure of the imagination”.

Another unspeakable truth I’m aware of is that the system of research ethics governance is itself unethical. Ethics governance is something to comply with, not to question. That has led us to the situation where most research training contains little or no time spent on research ethics itself. Instead, young researchers learn that working ethically equates to filling in an audit form about participant welfare and data storage. They don’t receive the detailed reflective instruction necessary to equip them to manage the manifold ethical difficulties any researcher will encounter in the field.

I wonder what role the lack of research ethics education plays in the increasing number of journal articles that are retracted each year. I would argue that we need to separate ethical audit from ethical research, because they have different aims. The former exists to protect institutions, the latter to promote the quality of research and ensure the well-being of all concerned.

These areas of silence are particularly interesting given that academia exists to enable and develop conversations. However, I think that as well as acknowledging what academia enables, we also need to take a long hard look at what academia silences.

I Finished The Book!

For the last three-and-a-quarter years I have been writing a book on research ethics. It has been like doing another PhD, only with reviewers instead of supervisors. Four sets of reviewers: two sets of proposal reviews and two sets of typescript reviews. I have to thank my lovely publisher, Policy Press (part of Bristol University Press), for giving me so much support to get this book right.

This has been the hardest book I’ve written and I hope never to write another as difficult. On the plus side, I’m happy with the result. It is different from other books on research ethics in three main ways. First, it doesn’t treat research ethics as though they exist in isolation. I look at the relationships between research ethics and individual, social, institutional, professional, and political ethics, and how those relationships play out in practice in the work of research ethics committees and in evaluation research. That makes up part 1 of the book.

Second, it demonstrates the need for ethical thinking and action throughout the research process. In part 2 there is a chapter covering the ethical aspects of each stage of the research process, from planning a research project through to aftercare. There is also a chapter on researcher well-being.

Third, the book sets the Indigenous and Euro-Western research paradigms side by side. This is not to try to decide which is ‘better’, but is intended to increase researchers’ ethical options and vocabularies. I am writing primarily for Euro-Western readers, though the book may be of use to some Indigenous researchers. There is a sizeable and growing body of literature on Indigenous research and ethics, including books, journals, and journal articles. Using this literature requires care – as indeed using all literature requires care (see chapter 7 of my forthcoming book for more on that). But Indigenous literature, as with other literatures by marginalised peoples, requires particular care to avoid tokenism or appropriation.

Many Euro-Western researchers are completely ignorant of Indigenous research. Some know of it but are under the misapprehension that it is an offshoot of Euro-Western research. In fact it is a separate paradigm that stands alone and predates Euro-Western research by tens of thousands of years. Some Indigenous researchers and scholars are now calling for Euro-Western academics to recognise this and use Indigenous work alongside their own. My book is, in part, a response to these calls.

It was so, so hard to cram all of that into 75,000 words – and that includes the bibliography which, as you can imagine, is extensive. There was so much to read that I was still reading, and incorporating, new material on the morning of the day I finished the book. I’ve found more work, since, that I’d love to include – but I had to stop somewhere.

I awaited my final review with great trepidation, aware of the possibility that the reviewer might loathe my book – some previous reviewers had – and that that could put an end to my hopes of publication. Was I looking at three-and-a-quarter years of wasted work? I was so relieved when my editor emailed to say the review was positive. Then the reviewer’s comments blew me away. Here’s one of my favourite parts: “In my view the author through excellent writing skills has covered very dense material (a ton of content) in a very accessible way.”

I was even more delighted because this review came from an Indigenous researcher. She waived anonymity, so I have been able to credit and thank her in the book. I will not name her here, as I do not have her permission to do so; you’ll have to read the book if you want to find out.

Finishing a book feels great, and also weird. It’s like losing a part of your identity, particularly with a book you’ve lived with for so long. Though there’s still lots of work to do: I have to write the companion website, give input on the book’s design, read the proofs, start marketing… publication is due on 1 November, which feels a long way off but I know how quickly five months can pass.

I think this book will be controversial. A senior and very knowledgeable academic told me that one reason I could write such a book is because I’m not in academia. I’m glad if I can use my independence to say things others cannot say – as long as I’m saying useful things, at least.

More than anything else, I hope the book helps to make a difference. In particular, I would like to make a difference to the current system of ethical regulation which is too focused on institutional protection and insufficiently focused on ethical research. It is also terrible at recognising and valuing the work of Indigenous research and of Euro-Western community-based or participatory research.

When I was preparing to write the book, I interviewed 18 people around the world and promised them anonymity. Some were research ethics committee members and others had sought formal approval from ethics committees (or institutional review boards in the US). I heard tales of people completing ethical approval forms with information that committees wanted to see rather than with actual facts; people teaching students how to get through the ethical approval system instead of teaching them how to conduct ethical research; people acting ethically yet in complete contravention of their committee’s instructions; people struggling to balance ethical research with Indigenous communities with the inflexible conditions set by ethics committees.

Although many of the people who serve on ethics committees are highly ethical, the system within which they are forced to work often prevents them from acting in entirely ethical ways. It seems to me that this system is not currently fit for purpose, and there are many other people who think the same. I hope the evidence I have gathered and presented will help to create much-needed change.

As an independent researcher, I am self-employed. This means I do all my writing in my own time; I don’t have a salary to support my work. Do you like what I do on this blog, or in my books, or anywhere else, enough that you might buy me a coffee now and again if we were co-located? If so, please consider supporting my independent work through Patreon for as little as one dollar per month. In exchange you’ll get exclusive previews of, and insights into, my work. Thank you.

How Open Is Open Access?

This article first appeared in Funding Insight on 18 January 2018 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Those outside the UK probably won’t be aware of Jisc. The non-profit organisation’s role is to provide technological solutions to academic problems, including researching and developing new ways of working supported by technology. (Full disclosure: they are also one of my clients.) Jisc is publicly funded by UK taxpayers’ money and member subscriptions. Its members are from the UK, and its objectives are designed to create benefit for the staff and students of adult education institutions in the UK.

But its strategy includes a stated intention of ‘growing our offering internationally to further benefit our members’.

Jisc is also very keen on open sharing of information and resources. It advocates open access to research publications, which it says means making them ‘freely available so anyone can benefit from reading and using research’. It promotes the sharing of research data, and the use of non-restrictive Creative Commons licensing to enable re-use of resources. Jisc identifies various potential benefits of this, one of which is that ‘researchers in developing countries can see your work’.

So far, so many good intentions. I’m sure most of my Euro-Western colleagues will be nodding their heads and thinking yes, marvellous, jolly well done Jisc. And indeed I am not writing this post to criticise those at Jisc, who are doing their best to be good guys, and who after all exist in the UK for the benefit of their UK members. My point here is to critique our more general Euro-Western academic mindset, which Jisc’s example illustrates.

You see, until very recently, I would have been one of those people nodding along, with a satisfied expression, thinking “oh Jisc you are doing well”. But my eyes have been opened by a recent blog post written by Andy Nobes of international development charity INASP, featuring the work of Florence Piron from Université Laval in Québec and her colleagues from around the world. Piron and her colleagues have written – in French – some publications that offer new perspectives and ideas to Euro-Western advocates of open access. In particular, they challenge the idea that Euro-Western researchers simply making their work visible to ‘researchers in developing countries’, as Jisc suggests, equates to open access. By contrast, they see it as an extension of colonialism and a form of ‘epistemic alienation’. This is because it does nothing to make knowledge generated in other parts of the world equally visible to researchers in Euro-Western countries. In turn, that serves to reinforce the use of Euro-Western theories and models as normative, which is to the detriment of local epistemologies in other parts of the world (Piron et al 2017).

We don’t think of these things, do we, us Euro-Western researchers? We’re too confident that we’re doing OK as long as we’re making some kind of gesture towards those with fewer privileges.

Piron and her colleagues point out that many Euro-Western academics are unable even to think that ‘valid and relevant knowledge’ could exist in other places and other ways; they can be ‘blind to epistemological diversity’ and regard Western science as universal (ibid). Even those Euro-Western academics who do respect other forms of knowledge are unlikely to engage in truly reciprocal knowledge exchanges. Collaborative projects often involve Euro-Western academics acting as principal investigators while researchers from other parts of the world are restricted to data-gathering and administrative work (Sherwood 2013, Yantio 2013).

In some Euro-Western academic circles there are moves afoot to ‘decolonise the curriculum’. But this is not only needed in Euro-Western establishments. Some teachers in other parts of the world also choose exclusively Euro-Western literature and examples for their students (Mboa Nkoudou 2016). This is a direct real-world consequence of the pervasive Euro-Western conviction that any other way of thinking must be inferior to our own. It makes it harder – not easier – for people in other parts of the world to solve their own local problems in appropriate and sustainable ways (ibid).

Truly open access will involve a two-way exchange of – and respect for – knowledge and the epistemological positions on which it is based. Obviously this is beyond the power of a single organisation, such as Jisc, or a single individual, such as you or me. However, all Euro-Western researchers, and those who work with them, need to be aware of the difference between open access as we tend to purvey it, and genuinely open access. Only with such awareness will we find ways to move from our one-way, take-it-or-leave-it approach to a true openness and sharing with other academics around the world.

The Ethics of Expertise

Last week I wrote about the ethics of research evidence, in which I cited Charles Knight’s contention that evidence should be used by people with expertise. Knight also questions how we can identify people with expertise. He suggests they would ‘have to do the sorts of things experts do – read the literature, do research, have satisfied clients, mentor novices, and so on’. He adds, ‘This approach is not likely to concentrate expertise in a few hands.’ (Knight 2004:2)

I like Knight’s attempts to widen the pool of acknowledged experts. He is evidently aware of the scope for tension between expert privilege and democracy. Conventionally, experts are few in number, specialists, and revered or at least respected for their expertise. However, this can also be viewed as exclusionary, particularly as most experts of this kind are older white men. Also, I’m not sure Knight goes far enough.

Knight was writing at the start of the century and, more recently, different definitions of ‘expert’ have begun to creep into the lexicon. For example, the UK’s Care Quality Commission (CQC), which inspects and regulates health and social care services, has defined ‘experts by experience’. These are people with personal experience of using, or caring for someone who uses, services that the CQC oversees. Experts by experience take an active part in service inspections, and their findings are used to support the work of the CQC’s professional inspectors.

In research, there is a specific participatory approach known as critical communicative methodology (CCM) which was developed around 10 years ago. CCM takes the view that everyone is an expert in something, everyone has something to teach others, and everyone is capable of critical analysis. This is a fully egalitarian methodology which uses respectful dialogue as its main method.

However, in most of research and science, experts are still viewed as those rare beings who have developed enough knowledge of a specialist area to be able to claim mastery of their subject. There is a myth that experts are infallible, which of course they’re not; they are human, with all the associated incentives and pressures that implies. It seems that experts are falling from grace daily at present for committing social sins from fraud to sexual harassment (and getting caught).

Perhaps more worryingly, the work of scientific experts is also falling from grace, in the form of the replication crisis. This refers to the finding that scientific discoveries are not as easy to replicate as was once supposed. As replication is one of the key criteria scientists use to validate the quality of each other’s work, this is a Big Problem. There is an excellent explanation of the replication crisis, in graphic form, online here.

My own view is that replication is associated with positivism, objectivity, the neutrality of the researcher, and associated ideas which have now been fairly thoroughly discredited. I think this ‘crisis’ could be a really good moment for science, as it may lead more people to understand that realities are multiple, researchers influence and are influenced by their work, and the wider context inevitably plays a supporting and sometimes a starring role.

As a result of various factors, including the replication crisis, it seems that the conventional concept of an expert is under threat. This too may be no bad thing, if it leads us to value everyone’s expertise. Perhaps it could also help to overturn the ‘deficit model’ which still prevails in so much social science, where (expert) researchers focus on people’s deficits – their poverty, ill-health, low educational attainment, unemployment, inadequate housing, and so on – rather than on their strengths and the positive contributions they make to our society. The main argument in favour of the deficit model is that these are problems research can help to solve, but if that were true, I think they would have been solved long since.

For sure, at times you need an expert you can trust. For example, if your car goes wrong, you’ll want to take it to an expert mechanic; if you develop a health problem, you’ll want to seek advice from an expert medic. It doesn’t seem either ethical or sensible, to me, to try to discard the conventional role of the expert altogether. But it does seem sensible to attack the links between expertise and privilege. After all, experts can’t exercise their expertise without input from others. At its simplest, the mechanic needs you to tell them what kind of a funny noise your car is making, and under what circumstances; the medic needs you to explain where and when you feel pain. Also, it doesn’t seem sensible to restrict conventional experts to a single area of expertise. That mechanic may also be an expert bassoon player; the medic may know more about antique jewellery than you ever thought possible.

In my view, the ethical approach to expertise is to treat everyone as an expert in matters relating to their own life, and beyond that, as someone who has a positive contribution to make to a specific task at hand and/or wider society in general. Imagine a world in which we all acknowledged and valued each other’s knowledge, experience, and skills. You may say I’m a dreamer – but I’m not the only one.

The Ethics of Research Evidence

Like so many of the terms used in research, ‘evidence’ has no single agreed meaning. Nor does there seem to be much consensus about what constitutes good or reliable evidence. The differing approaches of other professions may confuse the picture. For example, evidence that would convince a judge to hand down a life sentence would be dismissed by many researchers as anecdote.

Given that evidence is such a slippery, contentious topic, how can researchers begin to address its ethical aspects? A working definition might help: evidence is ‘information or data that people select to help them answer questions’ (Knight 2004:1). Using that definition, we can look at the ethical aspects of our relationship with evidence: how we choose, use, and apply the evidence we gather and construct.

Evidence is often talked and written about as though it is something neutral that simply exists, like a brick or a table, to be used by researchers at will. Knight’s definition is helpful because it highlights the fact that researchers select the evidence they use. Evidence, in the form of facts or artefacts, is neither ethical nor unethical. But in the process of selection, there is always room for bias, and that is where ethical considerations come into play.

To choose evidence ethically, I would argue that first you need to recognise the role of choice in the process, and the associated potential for bias. Then you need to consider some key questions, such as:

  • What is the question you want to answer?
  • What are your existing thoughts and feelings about that topic?
  • How might they affect your choices about evidence?
  • What can you do to make those choices open and defensible?

The aim is to be able to demonstrate that you have chosen the information or data you intend to define as ‘evidence’ in as ethical a way as possible.

Once you have chosen your evidence, you need to use it ethically within the research process. This means subjecting all your evidence to rigorous analysis, interpreting your findings accurately, and reporting in ways that will communicate effectively with your audiences. These are some of the key responsibilities of ethical researchers.

Research is a process that converts evidence into research evidence. It starts with the information or data that researchers choose to use as evidence, which may be anything from statistics to artworks. Then, through the process of (one would hope) diligent research, that evidence becomes research evidence. Whether and how research evidence is applied in the wider world is the third ethical aspect.

Sadly, there is a great deal of evidence that evidence is not applied well, or not applied at all. Most professional researchers have tales to tell of evidence being buried by research funders or commissioners. This seems particularly likely where findings conflict with political or money-making ambitions. In some sectors, such as third sector evaluation, this is widespread (Fiennes 2014). How can anyone make an evidence-based decision if the evidence collected by researchers has not been converted into evidence they can use?

The use of research evidence is often beyond the control of researchers. One practical action a researcher can take is to suggest a dissemination plan at the outset. This can be regarded as ethical, because such a plan should increase the likelihood of research evidence being used. But it could also be regarded as manipulative: using the initial excitement around a new project to persuade people to sign up to a plan they might later regret.

It seems that ethics and evidence are uneasy bedfellows. Again, Knight tries to help us here, by suggesting that research evidence should be used by people with expertise. This raises a further, pertinent question: what is the ethics of expertise? I will address that next week.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

Dissemination, Social Media, and Ethics

I inadvertently caused a minor Twitterstorm last week, and am considering what I can learn from this.

I spotted a tweet from @exerciseworks reporting some research. It said “One in 12 deaths could be prevented with 30 minutes of exercise five times a week” (originally tweeted by @exerciseworks on 22 Sept, retweeted on the morning of 10 October). The tweet also included this link but I didn’t click through; I just responded directly to the content of the tweet.

Here’s their tweet and my reply:

 

The @exerciseworks account replied saying it wasn’t their headline. This was true; the article is in the prestigious British Medical Journal (BMJ) which should know better. And so should I: in retrospect, I should have checked the link, and overtly aimed my comment at the BMJ as well.

Then @exerciseworks blocked me on Twitter. Perhaps they felt I might damage their brand, or they just didn’t like the cut of my jib. It is of course their right to choose who to engage with on Twitter, though I’m a little disappointed that they weren’t up for debate.

I was surprised how many people picked up the tweet and retweeted it, sometimes with comment, such as this:

[Tweet from Rajat Chauhan]

and this:

[Tweet from Alan J Taylor]

which was ‘liked’ by the BMJ itself – presumably they are up for debate; I would certainly hope so. (It also led me to check out @AdamMeakins, a straight-talking sports physiotherapist who I was pleased to be bracketed with.)

When I talked to people about this, the most common reaction was to describe @exerciseworks as a snowflake or similar, and to say they should get over themselves. This is arguable, of course, though I think it is important to remember that we never know what – sometimes we don’t know who – is behind a Twitter account. Even with individual accounts where people disclose personal information, we should not assume that the struggles someone discloses are all the struggles they face. And with corporate or other collective accounts, we should remember that there is an individual person reading and responding to tweets, and that person has their own feelings and struggles.

Twitter is a fast-moving environment and it’s easy to make a point swiftly then move on. Being blocked has made me pause for thought, particularly as @exerciseworks is an account I’ve been following and interacting with for some time.

I stand by the point I made. It riles me when statistical research findings are reported as evidence that death is preventable. Yes, of course lives can be saved, and so death avoided at that particular time. Also, sensible life choices such as taking exercise are likely to help postpone death. But prevent death? No chance. To suggest that is inaccurate and therefore unethical. However, forgetting that there is an actual person behind each Twitter account is also unethical, so I’m going to try to take a little more time and care in future.

How to evaluate excellence in arts-based research

This article first appeared in Funding Insight on 19 May 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Researchers, research commissioners, and research funders all struggle with identifying good quality arts-based research. ‘I know it when I see it’ just doesn’t pass muster. Fortunately, Sarah J Tracy of Arizona State University has developed a helpful set of criteria that are now being used extensively to assess the quality of qualitative research, including arts-based and qualitative mixed-methods research.

Tracy’s conceptualisation includes eight criteria: worthy topic, rich rigour, sincerity, credibility, resonance, significant contribution, ethics, and meaningful coherence. Let’s look at each of those in a bit more detail.

A worthy topic is likely to be significant, meaningful, interesting, revealing, relevant, and timely. Such a topic may arise from contemporary social or personal phenomena, or from disciplinary priorities.

Rich rigour involves care and attention, particularly to sampling, data collection, and data analysis. It is the antithesis of the ‘quick and dirty’ research project, requiring diligence on the part of the researcher and leaving no room for short-cuts.

Sincerity involves honesty and transparency. Reflexivity is the key route to honesty, requiring researchers to interrogate and display their own impact on the research they conduct. Transparency focuses on the research process, and entails researchers disclosing their methods and decisions, the challenges they faced, any unexpected events that affected the research, and so on. It also involves crediting all those who have helped the researcher, such as funders, participants, or colleagues.

Credibility is a more complex criterion which, when achieved, produces research that can be perceived as trustworthy and on which people are willing to base decisions. Tracy suggests that there are four dimensions to achieving credibility: thick description, triangulation/crystallization, multiple voices, and participant input beyond data provision. Thick description means lots of detail and illustration to elucidate meanings which are clearly located in terms of theoretical, cultural, geographic, temporal, and other such location markers. Triangulation and crystallisation are both terms that refer to the use of multiplicity within research, such as through using multiple researchers, theories, methods, and/or data sources. The point of multiplicity is to consider the research question in a variety of ways, to enable the exploration of different facets of that question and thereby create deeper understanding. The use of multiple voices, particularly in research reporting, enables researchers more accurately to reflect the complexity of the research situation. Participant input beyond data provision provides opportunities for verification and elaboration of findings, and helps to ensure that research outputs are understandable and implementable.

Although all eight criteria are potentially relevant to arts-based research, resonance is perhaps the most directly relevant. It refers to the ability of research to have an emotional impact on its audiences or readers. Resonance has three aspects: aesthetic merit, generalisability, and transferability. Aesthetic merit means that style counts alongside, and works with, content, such that research is presented in a beautiful, evocative, artistic and accessible way. Generalisability refers to the potential for research to be valuable in a range of contexts, settings, or circumstances. Transferability is when an individual reader or audience member can take ideas from the research and apply them to their own situation.

Research can contribute to knowledge, policy, and/or practice, and will make a significant contribution if it extends knowledge or improves policy or practice. Research may also make a significant contribution to the development of methodology; there is a lot of scope for this with arts-based methods.

Several of the other criteria touch on ethical aspects of research. For example, many researchers would argue that reflexivity is an ethical necessity. However, ethics in research is so important that it also requires a criterion of its own. Tracy’s conceptualisation of ethics for research evaluation involves procedural, situational, relational, and exiting ethics. Procedural ethics refers to the system of research governance – or, for those whose research is not subject to formal ethical approval, the considerations therein such as participant welfare and data storage. Situational ethics requires consideration of the specific context for the research and how that might or should affect ethical decisions. Relational ethics involve treating others well during the research process: offering respect, extending compassion, keeping promises, and so on. And exiting ethics cover the ways in which researchers present and share findings, as well as aftercare for participants and others involved in the research.

Research that has meaningful coherence effectively does what it sets out to do. It will tell a clear story. That story may include paradox and contradiction, mess and disturbance. Nevertheless, it will bring together theory, literature, data and analysis in an interconnected and comprehensible way.

These criteria are not an unarguable rubric to which every qualitative researcher must adhere. Indeed there are times when they will conflict in practice. For example, you may have a delightfully resonant vignette, but be unable to use it because it would identify the participant concerned; participants may not be willing or able to be involved beyond data provision; and all the diligence in the world can’t guarantee a significant contribution. So, as always, researchers need to exercise their powers of thought, creativity, and improvisation in the service of good quality research, and use the criteria flexibly, as guidelines rather than rules. However, what these criteria do offer is a very helpful framework for assessing the likely quality of research at the design stage, and the actual quality of research on completion.

Next week I will post a case study demonstrating how these criteria can be used.