Researching Research Ethics

I have written on this blog before about my book launch, which is now only four weeks away (or less, if you’re reading this after 11 October). It’s a free event and you’re welcome to come along if you’re in London that day; details here. Copies of the book itself should arrive in the next 2-3 weeks. Exciting times!

I’ve written this week’s blog post on SAGE MethodSpace, talking about the research I did into research ethics around the world as background for writing the book. Head on over and have a read, and please feel free to leave a comment there or here.

I Finished The Book!

For the last three-and-a-quarter years I have been writing a book on research ethics. It has been like doing another PhD, only with reviewers instead of supervisors. Four sets of reviewers: two sets of proposal reviews and two sets of typescript reviews. I have to thank my lovely publisher, Policy Press (part of Bristol University Press), for giving me so much support to get this book right.

This has been the hardest book I’ve written and I hope never to write another as difficult. On the plus side, I’m happy with the result. It is different from other books on research ethics in three main ways. First, it doesn’t treat research ethics as though they exist in isolation. I look at the relationships between research ethics and individual, social, institutional, professional, and political ethics, and how those relationships play out in practice in the work of research ethics committees and in evaluation research. That makes up part 1 of the book.

Second, it demonstrates the need for ethical thinking and action throughout the research process. In part 2 there is a chapter covering the ethical aspects of each stage of the research process, from planning a research project through to aftercare. There is also a chapter on researcher well-being.

Third, the book sets the Indigenous and Euro-Western research paradigms side by side. This is not to try to decide which is ‘better’, but is intended to increase researchers’ ethical options and vocabularies. I am writing primarily for Euro-Western readers, though the book may be of use to some Indigenous researchers. There is a sizeable and growing body of literature on Indigenous research and ethics, including books, journals, and journal articles. Using this literature requires care – as indeed using all literature requires care (see chapter 7 of my forthcoming book for more on that). But Indigenous literature, as with other literatures by marginalised peoples, requires particular care to avoid tokenism or appropriation.

Many Euro-Western researchers are completely ignorant of Indigenous research. Some know of it but are under the misapprehension that it is an offshoot of Euro-Western research. In fact it is a separate paradigm that stands alone and predates Euro-Western research by tens of thousands of years. Some Indigenous researchers and scholars are now calling for Euro-Western academics to recognise this and use Indigenous work alongside their own. My book is, in part, a response to these calls.

It was so, so hard to cram all of that into 75,000 words – and that includes the bibliography which, as you can imagine, is extensive. There was so much to read that I was still reading, and incorporating, new material on the morning of the day I finished the book. I’ve found more work, since, that I’d love to include – but I had to stop somewhere.

I awaited my final review with great trepidation, aware of the possibility that the reviewer might loathe my book – some previous reviewers had – and that that could put an end to my hopes of publication. Was I looking at three-and-a-quarter years of wasted work? I was so relieved when my editor emailed to say the review was positive. Then the reviewer’s comments blew me away. Here’s one of my favourite parts: “In my view the author through excellent writing skills has covered very dense material (a ton of content) in a very accessible way.”

I was even more delighted because this review came from an Indigenous researcher. She waived anonymity, so I have been able to credit and thank her in the book. I will not name her here, as I do not have her permission to do so; you’ll have to read the book if you want to find out.

Finishing a book feels great, and also weird. It’s like losing a part of your identity, particularly with a book you’ve lived with for so long. Though there’s still lots of work to do: I have to write the companion website, give input on the book’s design, read the proofs, start marketing… publication is due on 1 November, which feels a long way off but I know how quickly five months can pass.

I think this book will be controversial. A senior and very knowledgeable academic told me that one reason I could write such a book is because I’m not in academia. I’m glad if I can use my independence to say things others cannot say – as long as I’m saying useful things, at least.

More than anything else, I hope the book helps to make a difference. In particular, I would like to make a difference to the current system of ethical regulation, which is too focused on institutional protection and insufficiently focused on ethical research. It is also terrible at recognising and valuing the work of Indigenous research and of Euro-Western community-based or participatory research.

When I was preparing to write the book, I interviewed 18 people around the world and promised them anonymity. Some were research ethics committee members and others had sought formal approval from ethics committees (or institutional review boards in the US). I heard tales of:

  • people completing ethical approval forms with information that committees wanted to see rather than with actual facts;
  • people teaching students how to get through the ethical approval system instead of teaching them how to conduct ethical research;
  • people acting ethically yet in complete contravention of their committee’s instructions;
  • people struggling to balance ethical research with Indigenous communities against the inflexible conditions set by ethics committees.

Although many of the people who serve on ethics committees are highly ethical, the system within which they are forced to work often prevents them from acting in entirely ethical ways. It seems to me that this system is not currently fit for purpose, and there are many other people who think the same. I hope the evidence I have gathered and presented will help to create much-needed change.

As an independent researcher, I am self-employed. This means I do all my writing in my own time; I don’t have a salary to support my work. Do you like what I do on this blog, or in my books, or anywhere else, enough that you might buy me a coffee now and again if we were co-located? If so, please consider supporting my independent work through Patreon for as little as one dollar per month. In exchange you’ll get exclusive previews of, and insights into, my work. Thank you.

Free Online Research Ethics Resources

Are you grappling with research ethics? If so, fear not, for there are numerous free resources online to help you. Here are some examples.

Ethical codes and guidelines

There are loads of ethical codes and guidelines online. For example, some countries have national codes of research ethics, such as the Australian Code for the Responsible Conduct of Research, or the Canadian Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. The latter was developed jointly by the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and the Social Sciences and Humanities Research Council of Canada.

There are also codes of research ethics produced by Indigenous peoples who want their own ethical principles to be followed by any researchers wishing to work with them. Examples include Te Ara Tika, Guidelines for Māori Research Ethics, from New Zealand, and the San Code of Research Ethics from South Africa.

Then there are professional and disciplinary codes of research ethics. Examples include the UK-based Market Research Society’s Code of Conduct, and the Code of Ethics of the Australian Association for Research in Education.

There must be a huge number of these kinds of codes and guidelines worldwide. They are not all the same, and the careful reader can find places where one code or guideline may contradict another. This is because of cultural (in its widest sense) differences in ideas of what is ethical. Nevertheless, they can be useful to read for learning, ideas, or of course specific contextual information.

Applying to a research ethics committee

If you have to apply to a research ethics committee for formal ethical approval, you might find it useful to see other researchers’ successful application forms. You can find examples on The Research Ethics Application Database (TREAD), originally set up by Martin Tolich at the University of Otago in New Zealand and now hosted by The Global Health Network and the Social Research Association. The database holds copies of successful ethics applications from around the world, which you can search and use for inspiration and learning. Applications are anonymised, though the researcher(s) must be named. Researchers often submit accompanying documents, such as consent forms and participant information sheets, which can be very useful to look through for ideas. The database managers are keen to add more applications, to help make formal ethical approval processes more accessible and less onerous. If you have an application you could submit, there is information on the website about how to share it via the database.

General guidance

The Research Ethics Guidebook is intended to provide general guidance for social scientists, but may also be useful for people from other fields. The Guidebook is supported by the UK’s Economic and Social Research Council, with the Researcher Development Initiative of the National Centre for Research Methods and the Institute of Education, University of London. Like TREAD, the Research Ethics Guidebook holds useful information about applying for formal ethical approval. However, it also covers other areas such as ethics in research design, conducting research, reporting, and dissemination. The Guidebook is ideal for reference at the start of a project, and also during research as unforeseen ethical dilemmas arise.

Ethics training

There are two free online courses in research ethics which are primarily geared towards health researchers, and so focus heavily on participant wellbeing. Both have been through peer review and other quality assurance processes, and both offer certificates to students who complete the course successfully with a score of 80% or more. One is called Research Ethics Online Training, and is adapted from an e-learning course and resource package designed and produced by the World Health Organisation. It contains 14 individual modules, plus resources in the form of a glossary, a “resource library” (aka bibliography), some case studies, examples of ethics guidelines, videos on research ethics, and links to other ethics websites. The second is Essential Elements of Ethics, adapted from an ethics toolkit created to support researchers at Harvard University in America. This course contains 11 modules, plus resources including a workbook, a checklist of points to consider, and a discussion forum, though the forum is not very active.

Free research ethics modules with a wider perspective are offered by Duke University in America. These cover topics such as cultural awareness and humility, ethical photography, power and privilege, and working with children. They are delivered through videos with transcripts also available.

Online research

For internet-based research, the Association of Internet Researchers has some useful resources free for download. The British Psychological Society offers Ethics Guidelines for Internet-mediated Research. The South East European Network for Professionalization of Media has produced Social Media Research: A Guide to Ethics.

Visual research

The International Visual Sociology Association has produced a Code of Research Ethics and Guidelines covering visual research.

Ethics of research publication

The Committee on Publication Ethics has a whole range of downloadable resources covering how to detect, prevent and handle misconduct, responsible publication standards for editors and authors, ethical guidelines for peer reviewers, and much more.

This list of resources is by no means exhaustive; there are loads more out there, and it would be a huge task to identify them all. These are the ones I have found particularly useful. If there are any you like to use that aren’t in this post, please add them in the comments below.

The Ethics of Expertise

Last week I wrote about the ethics of research evidence, in which I cited Charles Knight’s contention that evidence should be used by people with expertise. Knight also questions how we can identify people with expertise. He suggests they would ‘have to do the sorts of things experts do – read the literature, do research, have satisfied clients, mentor novices, and so on’. He adds, ‘This approach is not likely to concentrate expertise in a few hands.’ (Knight 2004:2)

I like Knight’s attempts to widen the pool of acknowledged experts. He is evidently aware of the scope for tension between expert privilege and democracy. Conventionally, experts are few in number, specialists, and revered or at least respected for their expertise. However, this can also be viewed as exclusionary, particularly as most experts of this kind are older white men. Also, I’m not sure Knight goes far enough.

Knight was writing at the start of the century and, more recently, different definitions of ‘expert’ have begun to creep into the lexicon. For example, the UK’s Care Quality Commission (CQC), which inspects and regulates health and social care services, has defined ‘experts by experience’. These are people with personal experience of using, or caring for someone who uses, services that the CQC oversees. Experts by experience take an active part in service inspections, and their findings are used to support the work of the CQC’s professional inspectors.

In research, there is a specific participatory approach known as critical communicative methodology (CCM), which was developed around 10 years ago. CCM takes the view that everyone is an expert in something, everyone has something to teach others, and everyone is capable of critical analysis. It is a fully egalitarian methodology which uses respectful dialogue as its main method.

However, in most of research and science, experts are still viewed as those rare beings who have developed enough knowledge of a specialist area to be able to claim mastery of their subject. There is a myth that experts are infallible, which of course they’re not; they are human, with all the associated incentives and pressures that implies. It seems that experts are falling from grace daily at present for committing social sins from fraud to sexual harassment (and getting caught).

Perhaps more worryingly, the work of scientific experts is also falling from grace, in the form of the replication crisis. This refers to the finding that scientific discoveries are not as easy to replicate as was once supposed. As replication is one of the key criteria scientists use to validate the quality of each other’s work, this is a Big Problem. There is an excellent explanation of the replication crisis, in graphic form, online here.

My own view is that replication is associated with positivism, objectivity, the neutrality of the researcher, and associated ideas which have now been fairly thoroughly discredited. I think this ‘crisis’ could be a really good moment for science, as it may lead more people to understand that realities are multiple, researchers influence and are influenced by their work, and the wider context inevitably plays a supporting and sometimes a starring role.

As a result of various factors, including the replication crisis, it seems that the conventional concept of an expert is under threat. This too may be no bad thing, if it leads us to value everyone’s expertise. Perhaps it could also help to overturn the ‘deficit model’ which still prevails in so much social science, where (expert) researchers focus on people’s deficits – their poverty, ill-health, low educational attainment, unemployment, inadequate housing, and so on – rather than on their strengths and the positive contributions they make to our society. The main argument in favour of the deficit model is that these are problems research can help to solve, but if that were true, I think they would have been solved long since.

For sure, at times you need an expert you can trust. For example, if your car goes wrong, you’ll want to take it to an expert mechanic; if you develop a health problem, you’ll want to seek advice from an expert medic. It doesn’t seem either ethical or sensible, to me, to try to discard the conventional role of the expert altogether. But it does seem sensible to attack the links between expertise and privilege. After all, experts can’t exercise their expertise without input from others. At its simplest, the mechanic needs you to tell them what kind of a funny noise your car is making, and under what circumstances; the medic needs you to explain where and when you feel pain. Also, it doesn’t seem sensible to restrict conventional experts to a single area of expertise. That mechanic may also be an expert bassoon player; the medic may know more about antique jewellery than you ever thought possible.

In my view, the ethical approach to expertise is to treat everyone as an expert in matters relating to their own life, and beyond that, as someone who has a positive contribution to make to a specific task at hand and/or wider society in general. Imagine a world in which we all acknowledged and valued each other’s knowledge, experience, and skills. You may say I’m a dreamer – but I’m not the only one.

The Ethics of Research Evidence

Like so many of the terms used in research, ‘evidence’ has no single agreed meaning. Nor does there seem to be much consensus about what constitutes good or reliable evidence. The differing approaches of other professions may confuse the picture. For example, evidence that would convince a judge to hand down a life sentence would be dismissed by many researchers as anecdote.

Given that evidence is such a slippery, contentious topic, how can researchers begin to address its ethical aspects? A working definition might help: evidence is ‘information or data that people select to help them answer questions’ (Knight 2004:1). Using that definition, we can look at the ethical aspects of our relationship with evidence: how we choose, use, and apply the evidence we gather and construct.

Evidence is often talked and written about as though it is something neutral that simply exists, like a brick or a table, to be used by researchers at will. Knight’s definition is helpful because it highlights the fact that researchers select the evidence they use. Evidence, in the form of facts or artefacts, is neither ethical nor unethical. But in the process of selection, there is always room for bias, and that is where ethical considerations come into play.

To choose evidence ethically, I would argue that first you need to recognise the role of choice in the process, and the associated potential for bias. Then you need to consider some key questions, such as:

  • What is the question you want to answer?
  • What are your existing thoughts and feelings about that topic?
  • How might they affect your choices about evidence?
  • What can you do to make those choices open and defensible?

The aim is to be able to demonstrate that you have chosen the information or data you intend to define as ‘evidence’ in as ethical a way as possible.

Once you have chosen your evidence, you need to use it ethically within the research process. This means subjecting all your evidence to rigorous analysis, interpreting your findings accurately, and reporting in ways that will communicate effectively with your audiences. These are some of the key responsibilities of ethical researchers.

Research is a process that converts evidence into research evidence. It starts with the information or data that researchers choose to use as evidence, which may be anything from statistics to artworks. Then, through the process of (one would hope) diligent research, that evidence becomes research evidence. Whether and how research evidence is applied in the wider world is the third ethical aspect.

Sadly, there is a great deal of evidence that evidence is not applied well, or not applied at all. Most professional researchers have tales to tell of evidence being buried by research funders or commissioners. This seems particularly likely where findings conflict with political or money-making ambitions. In some sectors, such as third sector evaluation, this is widespread (Fiennes 2014). How can anyone make an evidence-based decision if the evidence collected by researchers has not been converted into evidence they can use?

The use of research evidence is often beyond the control of researchers. One practical action a researcher can take is to suggest a dissemination plan at the outset. This can be regarded as ethical, because such a plan should increase the likelihood of research evidence being used. But it could also be regarded as manipulative: using the initial excitement around a new project to persuade people to sign up to a plan they might later regret.

It seems that ethics and evidence are uneasy bedfellows. Again, Knight tries to help us here, by suggesting that research evidence should be used by people with expertise. This raises a further, pertinent question: what is the ethics of expertise? I will address that next week.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

Dissemination, Social Media, and Ethics

I inadvertently caused a minor Twitterstorm last week, and am considering what I can learn from this.

I spotted a tweet from @exerciseworks reporting some research. It said “One in 12 deaths could be prevented with 30 minutes of exercise five times a week” (originally tweeted by @exerciseworks on 22 Sept, retweeted on the morning of 10 October). The tweet also included a link, but I didn’t click through; I just responded directly to the content of the tweet.

Here’s their tweet and my reply:


The @exerciseworks account replied saying it wasn’t their headline. That was true: the headline came from an article in the prestigious British Medical Journal (BMJ), which should know better. And so should I: in retrospect, I should have checked the link and overtly aimed my comment at the BMJ as well.

Then @exerciseworks blocked me on Twitter. Perhaps they felt I might damage their brand, or they just didn’t like the cut of my jib. It is of course their right to choose who to engage with on Twitter, though I’m a little disappointed that they weren’t up for debate.

I was surprised how many people picked up the tweet and retweeted it, sometimes with comment, such as this:

[Rajat Chauhan’s tweet]

and this:

[Alan J Taylor’s tweet]

which was ‘liked’ by the BMJ itself – presumably they are up for debate; I would certainly hope so. (It also led me to check out @AdamMeakins, a straight-talking sports physiotherapist who I was pleased to be bracketed with.)

When I talked to people about this, the most common reaction was to describe @exerciseworks as a snowflake or similar, and to say they should get over themselves. This is arguable, of course, though I think it is important to remember that we never know what – sometimes we don’t know who – is behind a Twitter account. Even with individual accounts where people disclose personal information, we should not assume that the struggles someone discloses are all the struggles they face. And with corporate or other collective accounts, we should remember that there is an individual person reading and responding to tweets, and that person has their own feelings and struggles.

Twitter is a fast-moving environment and it’s easy to make a point swiftly then move on. Being blocked has made me pause for thought, particularly as @exerciseworks is an account I’ve been following and interacting with for some time.

I stand by the point I made. It riles me when statistical research findings are reported as evidence that death is preventable. Yes, of course lives can be saved, and so death avoided at that particular time. Also, sensible life choices such as taking exercise are likely to help postpone death. But prevent death? No chance. To suggest that is inaccurate and therefore unethical. However, forgetting that there is an actual person behind each Twitter account is also unethical, so I’m going to try to take a little more time and care in future.

Why Research Participants Rock

I wrote last week about the creative methods Roxanne Persaud and I used in our research into diversity and inclusion at Queen Mary University of London last year. One of those was screenplay writing, which we thought would be particularly useful if it depicted an interaction between a student and a very inclusive lecturer, or between a student and a less inclusive lecturer.

I love to work with screenplay writing. I use play script writing too, sometimes, though less often. With play script writing, you’re bound by theatre rules, so everything has to happen in one room, with minimal special effects. This can be really helpful when you’re researching something that happens in a specific place such as a parent and toddler group or a team sport. Screenplay, though, is more flexible: you can cut from private to public space, or include an army of mermaids if you wish. Also, screenplay writing offers more scope for descriptions of settings and characters, which, from a researcher’s point of view, can provide very useful data.

Especially when participants do their own thing! Our screenplay-writing participants largely ignored our suggestions about interactions between students and lecturers. Instead, we learned about a south Asian woman, the first in her family to go to university, who was lonely, isolated, and struggling to cope. We found out about a non-binary student’s experience of homophobia, sexism and violence in different places on campus. We saw how difficult it can be for Muslim students to join in with student life when alcohol plays a central role. Scenes like these gave us a much richer picture of facets of student inclusion and exclusion than we would have had if our participants had kept to their brief.

Other researchers using creative techniques have found this too. For example, Shamser Sinha and Les Back did collaborative research with young migrants in London. One participant, whom they call Dorothy, wanted to use a camera, but wasn’t sure what to capture. Sinha suggested exploring how her immigration status affected where she went and what she could buy. Instead, Dorothy went sightseeing, and took pictures of Buckingham Palace. The stories she told about what this place and experience meant to her enriched the researchers’ perceptions of migrant life: not just the ‘aggrieved’ life they were initially interested in, but ‘her free life’ (Sinha and Back 2013:483).

Katy Vigurs aimed to use photo-elicitation to explore different generations’ perceptions of the English village where they lived. She worked with a ladies’ choir, a running club, and a youth project. Vigurs asked her participants to take pictures that would show how they saw and experienced their community. The runners did as she asked. The singers, who were older, took a few photos and also, unprompted, provided old photographs of village events and landmarks, old and new newspaper cuttings, photocopied and hand-drawn maps of the area with added annotations, and long written narratives about their perceptions and experiences of the village. The young people also took some photos, mostly of each other, but then spent a couple of hours with a map of the village, tracing the routes they used and talking with the researcher about where and how they spent time. Rather than standard photo-elicitation, this became ‘co-created mixed-media elicitation’ as Vigurs puts it (Vigurs and Kara 2016:520) (yes, I am the second author of this article, but all the research and much of the writing is hers). Again, this provided insights for the researcher that she could not have found using the method she originally planned.

Research ethics committees might frown on this level of flexibility. I would argue that it is more ethical than the traditional prescriptive approach to research. Our participants have knowledge and ideas and creativity to share. They don’t need us to teach them how to interact and work with others. In fact, our participants have a great deal to teach us, if we are only willing to listen and learn.

How to evaluate excellence in arts-based research

This article first appeared in Funding Insight on 19 May 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Researchers, research commissioners, and research funders all struggle with identifying good quality arts-based research. ‘I know it when I see it’ just doesn’t pass muster. Fortunately, Sarah J Tracy of Arizona State University has developed a helpful set of criteria that are now being used extensively to assess the quality of qualitative research, including arts-based and qualitative mixed-methods research.

Tracy’s conceptualisation includes eight criteria: worthy topic, rich rigour, sincerity, credibility, resonance, significant contribution, ethics, and meaningful coherence. Let’s look at each of those in a bit more detail.

A worthy topic is likely to be significant, meaningful, interesting, revealing, relevant, and timely. Such a topic may arise from contemporary social or personal phenomena, or from disciplinary priorities.

Rich rigour involves care and attention, particularly to sampling, data collection, and data analysis. It is the antithesis of the ‘quick and dirty’ research project, requiring diligence on the part of the researcher and leaving no room for short-cuts.

Sincerity involves honesty and transparency. Reflexivity is the key route to honesty, requiring researchers to interrogate and display their own impact on the research they conduct. Transparency focuses on the research process, and entails researchers disclosing their methods and decisions, the challenges they faced, any unexpected events that affected the research, and so on. It also involves crediting all those who have helped the researcher, such as funders, participants, or colleagues.

Credibility is a more complex criterion which, when achieved, produces research that can be perceived as trustworthy and on which people are willing to base decisions. Tracy suggests that there are four dimensions to achieving credibility: thick description, triangulation/crystallization, multiple voices, and participant input beyond data provision. Thick description means lots of detail and illustration to elucidate meanings which are clearly located in terms of theoretical, cultural, geographic, temporal, and other such location markers. Triangulation and crystallisation are both terms that refer to the use of multiplicity within research, such as through using multiple researchers, theories, methods, and/or data sources. The point of multiplicity is to consider the research question in a variety of ways, to enable the exploration of different facets of that question and thereby create deeper understanding. The use of multiple voices, particularly in research reporting, enables researchers more accurately to reflect the complexity of the research situation. Participant input beyond data provision provides opportunities for verification and elaboration of findings, and helps to ensure that research outputs are understandable and implementable.

Although all eight criteria are potentially relevant to arts-based research, resonance is perhaps the most directly relevant. It refers to the ability of research to have an emotional impact on its audiences or readers. Resonance has three aspects: aesthetic merit, generalisability, and transferability. Aesthetic merit means that style counts alongside, and works with, content, such that research is presented in a beautiful, evocative, artistic and accessible way. Generalisability refers to the potential for research to be valuable in a range of contexts, settings, or circumstances. Transferability is when an individual reader or audience member can take ideas from the research and apply them to their own situation.

Research can contribute to knowledge, policy, and/or practice, and will make a significant contribution if it extends knowledge or improves policy or practice. Research may also make a significant contribution to the development of methodology; there is a lot of scope for this with arts-based methods.

Several of the other criteria touch on ethical aspects of research. For example, many researchers would argue that reflexivity is an ethical necessity. However, ethics in research is so important that it also requires a criterion of its own. Tracy’s conceptualisation of ethics for research evaluation involves procedural, situational, relational, and exiting ethics. Procedural ethics refers to the system of research governance – or, for those whose research is not subject to formal ethical approval, the considerations therein such as participant welfare and data storage. Situational ethics requires consideration of the specific context for the research and how that might or should affect ethical decisions. Relational ethics involve treating others well during the research process: offering respect, extending compassion, keeping promises, and so on. And exiting ethics cover the ways in which researchers present and share findings, as well as aftercare for participants and others involved in the research.

Research that has meaningful coherence effectively does what it sets out to do. It will tell a clear story. That story may include paradox and contradiction, mess and disturbance. Nevertheless, it will bring together theory, literature, data and analysis in an interconnected and comprehensible way.

These criteria are not an unarguable rubric to which every qualitative researcher must adhere. Indeed, there are times when they will conflict in practice. For example, you may have a delightfully resonant vignette, but be unable to use it because it would identify the participant concerned; participants may not be willing or able to be involved beyond data provision; and all the diligence in the world can't guarantee a significant contribution. So, as always, researchers need to exercise their powers of thought, creativity, and improvisation in the service of good quality research, and use the criteria flexibly, as guidelines rather than rules. What these criteria do offer, however, is a very helpful framework for assessing the likely quality of research at the design stage, and the actual quality of research on completion.

Next week I will post a case study demonstrating how these criteria can be used.

The Variety Of Indie Research Work

One of the things I love about being an independent researcher is the sheer variety of projects I work on and tasks I might do in a day. Yesterday, I was only in the office for the afternoon, yet I worked on at least seven different things. Here's what I did.

First, I checked Twitter, and found a tweet with a link to a blog post I wrote about an event that is part of a project I’m working on with and for the forensic science community. This is a new departure for me, in that I haven’t worked with forensic scientists before, though the work itself is straightforward. I’m supporting a small group of people with research to identify the best way to create a repository for good quality student research data, and it’s surprisingly interesting. So I retweeted the tweet.

Second, I dealt with the morning's emails. The arrival of a purchase order I'd been waiting for weeks to receive – hurrah! I formulated the invoice and sent it off to the client. Then some correspondence about the creative research methods summer school I'm facilitating at Keele in early July – just three weeks away now, so the planning is hotting up (and there are still some places left if you'd like to join us – it'll be informative and fun). The most interesting email was a blog post from Naomi Barnes, an Australian education scholar who is considering what it means to be a white educator in the Australian school system. This chimes with the work I am doing on my next book, so I left a comment and tweeted the link.

While on Twitter, I got side-tracked by a tweet announcing #AuthorsForGrenfell, an initiative set up by authors for authors to donate items for auction to raise funds for the Red Cross London Fire Relief Fund to help survivors of the Grenfell Tower fire. I’d been wanting to help: my father is a Londoner, I have always had family in London, I lived in London myself from 1982-1997, and one member of my family is working in the tower right now to recover bodies. So it feels very close to home. But I’m not in a position to give lots of money, so I was delighted to find this option which I hope will enable me to raise more money than I could give myself. I have offered one copy of each of my books plus a Skype consultation with each one. My items aren’t yet up on the site, but I hope they will be soon because bidding is open already. If you’re one of my wealthy readers, please go over there and make a bid!

Then I spent some time researching aftercare for data. Yes, indeed there is such a thing. So far I’ve come up with two ways to take care of your data after your project is finished: secure storage and open publication. They are of course diametrically opposed, and which you choose depends on the nature of your data. Open publication is the ethical choice in most cases, enabling your data to be reused and cited, increasing your visibility as a researcher, and reducing the overall burden on potential research participants. In some cases, though, personal or commercial sensitivities will require secure storage of data. There may be other ways to take care of data after the end of a project, and I’ll be on the lookout for those as I work on my next book.

By now it was 6 pm so I did a last trawl of the emails, and found one from Sage Publishing with a link to a Dropbox folder containing 20 research methods case studies for me to review. They publish these cases online as part of their Methodspace website. I like this work: it’s flexible enough to fit around other commitments and, like other kinds of review, it tests my knowledge of research methods while also helping me to stay up to date. Best of all, unlike other kinds of review, Sage pay for my expertise. So I downloaded all the documents, checked and signed the contract, and emailed it back with a ‘thank you’. By then it was 6.30 pm and time to go home.

As the old saying goes, variety is the spice of life. I certainly like the flavour it gives to my work. Some days I work on a single project all day; those days are fun too. Yesterday I worked in my own office, today I’m out at meetings locally, tomorrow I’m off to London. It’s always ‘all change’ and I wouldn’t have it any other way.

Let’s Talk About Research Misconduct

Research misconduct is on the rise, certainly within hard science subjects, quite possibly elsewhere. Researchers around the world are inventing data, falsifying findings, and plagiarising the work of others. Part of this is due to the pressure on some researchers to publish their findings in academic journals. There is also career-related pressure on researchers to conduct accurate polls, produce statistically significant results, and get answers to questions, among other things. Some clients, managers, funders and publishers have a low tolerance for findings that chime with common sense or the familiar conclusion of 'more research is needed'. They may expect researchers to produce interesting or novel findings that will direct action or support change.

Publishers are working to counteract misconduct in a variety of ways. Plagiarism detection software is now routinely used by most big publishers. Also, journal articles can be retracted (i.e. de-published) and this is on the increase, most commonly as a result of fraud. However, the effectiveness of retraction is questionable. The US organisation Retraction Watch has a ‘leaderboard’ of researchers with the most retracted papers, some of whom have had more papers retracted than you or I will ever write, which suggests that retraction of a paper – even for fraud – does not necessarily discredit a researcher or prevent them from working.

Some research misconduct can have devastating effects on people, organisations, and professions. People may lose their jobs, be stripped of prizes or honours, and be prosecuted in criminal courts. Organisations lose money, such as the cost of wasted research, disciplinary hearings, and recruitment to fill vacancies left by fraudulent researchers. And whole professions can suffer, as misconduct slows progress based on research. For example, in 2012 the Journal of Medical Ethics published a study showing that thousands of patients had been treated on the basis of research published in papers that were subsequently retracted. Retraction Watch shows that some papers receive hundreds of citations even after they have been retracted, which suggests that retraction may not be communicated effectively.

Yet even the potentially devastating consequences of misconduct are clearly not much of a deterrent – and in many cases may not occur at all. Let’s examine a case in more detail. Hwang Woo-Suk is a researcher from South Korea. In the early 2000s he was widely regarded as an eminent scientist. Then in 2006 he was found to have faked much of his research, and he admitted fraud. Hwang’s funding was withdrawn, criminal charges were laid against him, and in 2009 he received a suspended prison sentence. Yet he continued to work as a researcher (albeit in a different specialism) and to contribute to publications as a named author.

Closer to home, a survey of over 2,700 medical researchers published by the British Medical Journal in 2012 found that one in seven had ‘witnessed colleagues intentionally altering or fabricating data during their research or for the purposes of publication’. Given the pressures on researchers, perhaps this is not surprising – though it is deeply shocking.

The examples given in this article are from hard science rather than social research. Evidence of misconduct in social research is hard to find, so it would be tempting to conclude that it happens less and perhaps that social researchers are somehow more ethical and virtuous than other researchers. I feel very wary about making such assumptions. It is also possible that social research is less open about misconduct than other related disciplines, or that it’s easier to get away with misconduct in social research.

So what is the answer? Ethics books, seminars, conferences and the like frequently exhort individual researchers to think and act ethically, but I'm not sure this provides sufficient safeguards. Should we watch each other, as well as ourselves? Maybe we should, at least up to a point. Working collaboratively can be a useful guard against unethical practice – but many researchers work alone or unsupervised. I don't think formal ethical approval is much help here, either; it is certainly no safeguard against falsifying findings or plagiarism. Perhaps all we can do at present is to maintain awareness of the potential for, and dangers of, misconduct.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.