The Ethics of Research Evidence

Like so many of the terms used in research, ‘evidence’ has no single agreed meaning. Nor does there seem to be much consensus about what constitutes good or reliable evidence. The differing approaches of other professions may confuse the picture. For example, evidence that would convince a judge to hand down a life sentence would be dismissed by many researchers as anecdote.

Given that evidence is such a slippery, contentious topic, how can researchers begin to address its ethical aspects? A working definition might help: evidence is ‘information or data that people select to help them answer questions’ (Knight 2004:1). Using that definition, we can look at the ethical aspects of our relationship with evidence: how we choose, use, and apply the evidence we gather and construct.

Evidence is often talked and written about as though it is something neutral that simply exists, like a brick or a table, to be used by researchers at will. Knight’s definition is helpful because it highlights the fact that researchers select the evidence they use. Evidence, in the form of facts or artefacts, is neither ethical nor unethical. But in the process of selection, there is always room for bias, and that is where ethical considerations come into play.

To choose evidence ethically, I would argue that first you need to recognise the role of choice in the process, and the associated potential for bias. Then you need to consider some key questions, such as:

  • What is the question you want to answer?
  • What are your existing thoughts and feelings about that topic?
  • How might they affect your choices about evidence?
  • What can you do to make those choices open and defensible?

The aim is to be able to demonstrate that you have chosen the information or data you intend to define as ‘evidence’ in as ethical a way as possible.

Once you have chosen your evidence, you need to use it ethically within the research process. This means subjecting all your evidence to rigorous analysis, interpreting your findings accurately, and reporting in ways that will communicate effectively with your audiences. These are some of the key responsibilities of ethical researchers.

Research is a process that converts evidence into research evidence. It starts with the information or data that researchers choose to use as evidence, which may be anything from statistics to artworks. Then, through the process of (one would hope) diligent research, that evidence becomes research evidence. Whether and how research evidence is applied in the wider world is the third ethical aspect.

Sadly, there is a great deal of evidence that evidence is not applied well, or not applied at all. Most professional researchers have tales to tell of evidence being buried by research funders or commissioners. This seems particularly likely where findings conflict with political or money-making ambitions. In some sectors, such as third sector evaluation, this is widespread (Fiennes 2014). How can anyone make an evidence-based decision if the evidence collected by researchers has not been converted into research evidence they can use?

The use of research evidence is often beyond the control of researchers. One practical action a researcher can take is to suggest a dissemination plan at the outset. This can be regarded as ethical, because such a plan should increase the likelihood of research evidence being used. But it could also be regarded as manipulative: using the initial excitement around a new project to persuade people to sign up to a plan they might later regret.

It seems that ethics and evidence are uneasy bedfellows. Again, Knight tries to help us here, by suggesting that research evidence should be used by people with expertise. This raises a further, pertinent question: what is the ethics of expertise? I will address that next week.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

Dissemination, Social Media, and Ethics

I inadvertently caused a minor Twitterstorm last week, and am considering what I can learn from this.

I spotted a tweet from @exerciseworks reporting some research. It said “One in 12 deaths could be prevented with 30 minutes of exercise five times a week” (originally tweeted by @exerciseworks on 22 Sept, retweeted on the morning of 10 October). The tweet also included this link, but I didn’t click through; I just responded directly to the content of the tweet.

Here’s their tweet and my reply:

 

The @exerciseworks account replied saying it wasn’t their headline. This was true; the article is in the prestigious British Medical Journal (BMJ), which should know better. And so should I: in retrospect, I should have checked the link, and overtly aimed my comment at the BMJ as well.

Then @exerciseworks blocked me on Twitter. Perhaps they felt I might damage their brand, or they just didn’t like the cut of my jib. It is of course their right to choose who to engage with on Twitter, though I’m a little disappointed that they weren’t up for debate.

I was surprised how many people picked up the tweet and retweeted it, sometimes with comment, such as this:

[Tweet from Rajat Chauhan]

and this:

[Tweet from Alan J Taylor]

which was ‘liked’ by the BMJ itself – presumably they are up for debate; I would certainly hope so. (It also led me to check out @AdamMeakins, a straight-talking sports physiotherapist who I was pleased to be bracketed with.)

Talking to people about this, the most common reaction was to describe @exerciseworks as a snowflake or similar, and say they should get over themselves. This is arguable, of course, though I think it is important to remember that we never know what – sometimes we don’t know who – is behind a Twitter account. Even with individual accounts where people disclose personal information, we should not assume that the struggles someone discloses are all the struggles they face. And with corporate or other collective accounts, we should remember that there is an individual person reading and responding to tweets, and that person has their own feelings and struggles.

Twitter is a fast-moving environment and it’s easy to make a point swiftly then move on. Being blocked has made me pause for thought, particularly as @exerciseworks is an account I’ve been following and interacting with for some time.

I stand by the point I made. It riles me when statistical research findings are reported as evidence that death is preventable. Yes, of course lives can be saved, and so death avoided at that particular time. Also, sensible life choices such as taking exercise are likely to help postpone death. But prevent death? No chance. To suggest that is inaccurate and therefore unethical. However, forgetting that there is an actual person behind each Twitter account is also unethical, so I’m going to try to take a little more time and care in future.

Why Research Participants Rock

I wrote last week about the creative methods Roxanne Persaud and I used in our research into diversity and inclusion at Queen Mary University of London last year. One of those was screenplay writing, which we thought would be particularly useful if it depicted an interaction between a student and a very inclusive lecturer, or between a student and a less inclusive lecturer.

I love to work with screenplay writing. I use play script writing too, sometimes, though less often. With play script writing, you’re bound by theatre rules, so everything has to happen in one room, with minimal special effects. This can be really helpful when you’re researching something that happens in a specific place such as a parent and toddler group or a team sport. Screenplay, though, is more flexible: you can cut from private to public space, or include an army of mermaids if you wish. Also, screenplay writing offers more scope for descriptions of settings and characters, which, from a researcher’s point of view, can provide very useful data.

Especially when participants do their own thing! Our screenplay-writing participants largely ignored our suggestions about interactions between students and lecturers. Instead, we learned about a south Asian woman, the first in her family to go to university, who was lonely, isolated, and struggling to cope. We found out about a non-binary student’s experience of homophobia, sexism and violence in different places on campus. We saw how difficult it can be for Muslim students to join in with student life when alcohol plays a central role. Scenes like these gave us a much richer picture of facets of student inclusion and exclusion than we would have had if our participants had kept to their brief.

Other researchers using creative techniques have found this too. For example, Shamser Sinha and Les Back did collaborative research with young migrants in London. One participant, who they call Dorothy, wanted to use a camera, but wasn’t sure what to capture. Sinha suggested exploring how her immigration status affected where she went and what she could buy. Instead, Dorothy went sightseeing, and took pictures of Buckingham Palace. The stories she told about what this place and experience meant to her enriched the researchers’ perceptions of migrant life, not just the ‘aggrieved’ life they were initially interested in, but ‘her free life’ (Sinha and Back 2013:483).

Katy Vigurs aimed to use photo-elicitation to explore different generations’ perceptions of the English village where they lived. She worked with a ladies’ choir, a running club, and a youth project. Vigurs asked her participants to take pictures that would show how they saw and experienced their community. The runners did as she asked. The singers, who were older, took a few photos and also, unprompted, provided old photographs of village events and landmarks, old and new newspaper cuttings, photocopied and hand-drawn maps of the area with added annotations, and long written narratives about their perceptions and experiences of the village. The young people also took some photos, mostly of each other, but then spent a couple of hours with a map of the village, tracing the routes they used and talking with the researcher about where and how they spent time. Rather than standard photo-elicitation, this became ‘co-created mixed-media elicitation’ as Vigurs puts it (Vigurs and Kara 2016:520) (yes, I am the second author of this article, but all the research and much of the writing is hers). Again, this provided insights for the researcher that she could not have found using the method she originally planned.

Research ethics committees might frown on this level of flexibility. I would argue that it is more ethical than the traditional prescriptive approach to research. Our participants have knowledge and ideas and creativity to share. They don’t need us to teach them how to interact and work with others. In fact, our participants have a great deal to teach us, if we are only willing to listen and learn.

How to evaluate excellence in arts-based research

This article first appeared in Funding Insight on 19 May 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Researchers, research commissioners, and research funders all struggle with identifying good quality arts-based research. ‘I know it when I see it’ just doesn’t pass muster. Fortunately, Sarah J Tracy of Arizona State University has developed a helpful set of criteria that are now being used extensively to assess the quality of qualitative research, including arts-based and qualitative mixed-methods research.

Tracy’s conceptualisation includes eight criteria: worthy topic, rich rigour, sincerity, credibility, resonance, significant contribution, ethics, and meaningful coherence. Let’s look at each of those in a bit more detail.

A worthy topic is likely to be significant, meaningful, interesting, revealing, relevant, and timely. Such a topic may arise from contemporary social or personal phenomena, or from disciplinary priorities.

Rich rigour involves care and attention, particularly to sampling, data collection, and data analysis. It is the antithesis of the ‘quick and dirty’ research project, requiring diligence on the part of the researcher and leaving no room for short-cuts.

Sincerity involves honesty and transparency. Reflexivity is the key route to honesty, requiring researchers to interrogate and display their own impact on the research they conduct. Transparency focuses on the research process, and entails researchers disclosing their methods and decisions, the challenges they faced, any unexpected events that affected the research, and so on. It also involves crediting all those who have helped the researcher, such as funders, participants, or colleagues.

Credibility is a more complex criterion which, when achieved, produces research that can be perceived as trustworthy and on which people are willing to base decisions. Tracy suggests that there are four dimensions to achieving credibility: thick description, triangulation/crystallization, multiple voices, and participant input beyond data provision. Thick description means lots of detail and illustration to elucidate meanings which are clearly located in terms of theoretical, cultural, geographic, temporal, and other such location markers. Triangulation and crystallisation are both terms that refer to the use of multiplicity within research, such as through using multiple researchers, theories, methods, and/or data sources. The point of multiplicity is to consider the research question in a variety of ways, to enable the exploration of different facets of that question and thereby create deeper understanding. The use of multiple voices, particularly in research reporting, enables researchers more accurately to reflect the complexity of the research situation. Participant input beyond data provision provides opportunities for verification and elaboration of findings, and helps to ensure that research outputs are understandable and implementable.

Although all eight criteria are potentially relevant to arts-based research, resonance is perhaps the most directly relevant. It refers to the ability of research to have an emotional impact on its audiences or readers. Resonance has three aspects: aesthetic merit, generalisability, and transferability. Aesthetic merit means that style counts alongside, and works with, content, such that research is presented in a beautiful, evocative, artistic and accessible way. Generalisability refers to the potential for research to be valuable in a range of contexts, settings, or circumstances. Transferability is when an individual reader or audience member can take ideas from the research and apply them to their own situation.

Research can contribute to knowledge, policy, and/or practice, and will make a significant contribution if it extends knowledge or improves policy or practice. Research may also make a significant contribution to the development of methodology; there is a lot of scope for this with arts-based methods.

Several of the other criteria touch on ethical aspects of research. For example, many researchers would argue that reflexivity is an ethical necessity. However, ethics in research is so important that it also requires a criterion of its own. Tracy’s conceptualisation of ethics for research evaluation involves procedural, situational, relational, and exiting ethics. Procedural ethics refers to the system of research governance – or, for those whose research is not subject to formal ethical approval, the considerations therein such as participant welfare and data storage. Situational ethics requires consideration of the specific context for the research and how that might or should affect ethical decisions. Relational ethics involve treating others well during the research process: offering respect, extending compassion, keeping promises, and so on. And exiting ethics cover the ways in which researchers present and share findings, as well as aftercare for participants and others involved in the research.

Research that has meaningful coherence effectively does what it sets out to do. It will tell a clear story. That story may include paradox and contradiction, mess and disturbance. Nevertheless, it will bring together theory, literature, data and analysis in an interconnected and comprehensible way.

These criteria are not an unarguable rubric to which every qualitative researcher must adhere. Indeed there are times when they will conflict in practice. For example, you may have a delightfully resonant vignette, but be unable to use it because it would identify the participant concerned; participants may not be willing or able to be involved beyond data provision; and all the diligence in the world can’t guarantee a significant contribution. So, as always, researchers need to exercise their powers of thought, creativity, and improvisation in the service of good quality research, and use the criteria flexibly, as guidelines rather than rules. However, what these criteria do offer is a very helpful framework for assessing the likely quality of research at the design stage, and the actual quality of research on completion.

Next week I will post a case study demonstrating how these criteria can be used.

The Variety Of Indie Research Work

One of the things I love about being an independent researcher is the sheer variety of projects I work on and tasks I might do in a day. Yesterday, I was only in the office for the afternoon, yet I worked on at least seven different things. Here’s what I did.

First, I checked Twitter, and found a tweet with a link to a blog post I wrote about an event that is part of a project I’m working on with and for the forensic science community. This is a new departure for me, in that I haven’t worked with forensic scientists before, though the work itself is straightforward. I’m supporting a small group of people with research to identify the best way to create a repository for good quality student research data, and it’s surprisingly interesting. So I retweeted the tweet.

Second, I dealt with the morning’s emails. A purchase order I’d been waiting weeks to receive had arrived – hurrah! I drew up the invoice and sent it off to the client. Then there was some correspondence about the creative research methods summer school I’m facilitating at Keele in early July – just three weeks away now, so the planning is hotting up (and there are still some places left if you’d like to join us – it’ll be informative and fun). The most interesting email linked to a blog post by Naomi Barnes, an Australian education scholar who is considering what it means to be a white educator in the Australian school system. This chimes with the work I am doing on my next book, so I left a comment and tweeted the link.

While on Twitter, I got side-tracked by a tweet announcing #AuthorsForGrenfell, an initiative set up by authors for authors to donate items for auction to raise funds for the Red Cross London Fire Relief Fund to help survivors of the Grenfell Tower fire. I’d been wanting to help: my father is a Londoner, I have always had family in London, I lived in London myself from 1982 to 1997, and one member of my family is working in the tower right now to recover bodies. So it feels very close to home. But I’m not in a position to give lots of money, so I was delighted to find this option, which I hope will enable me to raise more money than I could give myself. I have offered a copy of each of my books, each with a Skype consultation. My items aren’t yet up on the site, but I hope they will be soon because bidding is open already. If you’re one of my wealthy readers, please go over there and make a bid!

Then I spent some time researching aftercare for data. Yes, indeed there is such a thing. So far I’ve come up with two ways to take care of your data after your project is finished: secure storage and open publication. They are of course diametrically opposed, and which you choose depends on the nature of your data. Open publication is the ethical choice in most cases, enabling your data to be reused and cited, increasing your visibility as a researcher, and reducing the overall burden on potential research participants. In some cases, though, personal or commercial sensitivities will require secure storage of data. There may be other ways to take care of data after the end of a project, and I’ll be on the lookout for those as I work on my next book.

By now it was 6 pm so I did a last trawl of the emails, and found one from Sage Publishing with a link to a Dropbox folder containing 20 research methods case studies for me to review. They publish these cases online as part of their Methodspace website. I like this work: it’s flexible enough to fit around other commitments and, like other kinds of review, it tests my knowledge of research methods while also helping me to stay up to date. Best of all, unlike other kinds of review, Sage pay for my expertise. So I downloaded all the documents, checked and signed the contract, and emailed it back with a ‘thank you’. By then it was 6.30 pm and time to go home.

As the old saying goes, variety is the spice of life. I certainly like the flavour it gives to my work. Some days I work on a single project all day; those days are fun too. Yesterday I worked in my own office, today I’m out at meetings locally, tomorrow I’m off to London. It’s always ‘all change’ and I wouldn’t have it any other way.

Let’s Talk About Research Misconduct

Research misconduct is on the rise, certainly within hard science subjects, quite possibly elsewhere. Researchers around the world are inventing data, falsifying findings, and plagiarising the work of others. Part of this is due to the pressure on some researchers to publish their findings in academic journals. There is also career-related pressure on researchers to conduct accurate polls, produce statistically significant results, and get answers to questions, among other things. Some clients, managers, funders and publishers have a low tolerance for findings that chime with common sense or the familiar conclusion of ‘more research is needed’. They may expect researchers to produce interesting or novel findings that will direct action or support change.

Publishers are working to counteract misconduct in a variety of ways. Plagiarism detection software is now routinely used by most big publishers. Also, journal articles can be retracted (i.e. de-published) and this is on the increase, most commonly as a result of fraud. However, the effectiveness of retraction is questionable. The US organisation Retraction Watch has a ‘leaderboard’ of researchers with the most retracted papers, some of whom have had more papers retracted than you or I will ever write, which suggests that retraction of a paper – even for fraud – does not necessarily discredit a researcher or prevent them from working.

Some research misconduct can have devastating effects on people, organisations, and professions. People may lose their jobs, be stripped of prizes or honours, and be prosecuted in criminal courts. Organisations lose money: the cost of wasted research, of disciplinary hearings, and of recruitment to fill vacancies left by fraudulent researchers. And whole professions can suffer, as misconduct slows research-based progress. For example, in 2012 the Journal of Medical Ethics published a study showing that thousands of patients had been treated on the basis of research published in papers that were subsequently retracted. Retraction Watch shows that some papers receive hundreds of citations even after they have been retracted, which suggests that retraction may not be communicated effectively.

Yet even the potentially devastating consequences of misconduct are clearly not much of a deterrent – and in many cases may not occur at all. Let’s examine a case in more detail. Hwang Woo-Suk is a researcher from South Korea. In the early 2000s he was widely regarded as an eminent scientist. Then in 2006 he was found to have faked much of his research, and he admitted fraud. Hwang’s funding was withdrawn, criminal charges were laid against him, and in 2009 he received a suspended prison sentence. Yet he continued to work as a researcher (albeit in a different specialism) and to contribute to publications as a named author.

Closer to home, a survey of over 2,700 medical researchers published by the British Medical Journal in 2012 found that one in seven had ‘witnessed colleagues intentionally altering or fabricating data during their research or for the purposes of publication’. Given the pressures on researchers, perhaps this is not surprising – though it is deeply shocking.

The examples given in this article are from hard science rather than social research. Evidence of misconduct in social research is hard to find, so it would be tempting to conclude that it happens less and perhaps that social researchers are somehow more ethical and virtuous than other researchers. I feel very wary about making such assumptions. It is also possible that social research is less open about misconduct than other related disciplines, or that it’s easier to get away with misconduct in social research.

So what is the answer? Ethics books, seminars, conferences etc frequently exhort individual researchers to think and act ethically, but I’m not sure this provides sufficient safeguards. Should we watch each other, as well as ourselves? Maybe we should, at least up to a point. Working collaboratively can be a useful guard against unethical practice – but many researchers work alone or unsupervised. I don’t think formal ethical approval is much help here, either; it is certainly no safeguard against falsifying findings or plagiarism. Perhaps all we can do at present is to maintain awareness of the potential for, and dangers of, misconduct.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

Research Ethics – Can You Help?

Dear Internet

I wonder whether you can help me. I need people, from outside the UK, to talk to about research ethics. They can be academics or practitioners, from any discipline or field, and they need to have some interest in research ethics. If they fit that specification, and they’re based outside the UK, I’d like to talk to them.

This is for my next book which will be on research ethics. I’ve done a bunch of interviews with people from the UK, but I want to take a more global look at the topic. I’ve asked everyone in my networks but I’m getting nowhere, so I’m throwing this out to you in the hope that you may know one or more people who might be willing to talk.

Interviews are taking about an hour and can be done by phone or Skype. The interviews will be kept confidential, and I’m happy to email over my questions for someone to look at before they decide whether or not they want to take part. I won’t use anyone’s name in connection with the book. I am compiling a list of the roles and countries of origin of the people I’ve spoken to, to give my readers some idea of the breadth of contributions, with interviewees choosing their own designation e.g. ‘senior lecturer in social work, British university’ or ‘independent researcher, UK’, but if someone doesn’t even want to go that far, that’s OK with me. Some of the people I’ve spoken to have held several roles in connection with research ethics but have chosen to speak to me from just one of their roles, which is fine. Some have chosen not to answer all my questions, or to answer several in one; that’s fine too.

I would be particularly interested in talking to people doing, or who have done, research in countries with authoritarian rather than democratic governments – though I’d also love to talk to Australians, Americans, Canadians, New Zealanders and so on. In fact I’d like to talk to people from pretty much anywhere outside the UK, though I only speak English. There’s no big rush; I’m hoping to get the interviews done some time in the next 3-4 months.

Obviously I wouldn’t expect you to give me someone’s contact details, but if you know people you think might be willing and able to help, perhaps you could draw their attention to this post and then, if they want to get in touch, they could email me through the contact form.

Here’s hoping…

Cross-Cultural Research Ethics

Last week I presented at a seminar at the University of Nottingham hosted by BAICE, aka the British Association for International and Comparative Education. Like the UK and Ireland Social Research Association (SRA), on whose Board I sit, BAICE is a learned society and an organisational member of the UK’s Academy of Social Sciences (AcSS). I was presenting, in my SRA role, on behalf of the AcSS. This always makes me slightly uncomfortable as I’m not a Fellow of the AcSS and don’t really feel qualified to speak for the Academy. Luckily another of my SRA colleagues, who is a Fellow, was at the seminar and was able to help me out.

The seminar was on ‘cross-cultural research ethics in international and comparative education’. Presenting for the AcSS on this topic was an interesting exercise, as the Academy is not a very cross-cultural organisation: the Fellows are 93% professors, 69% male, and my contacts with them suggest that the white middle classes are in a massive majority. My presentation focused on the five generic ethical principles the AcSS has developed for its member societies to use. I’ve been working on a redraft of the SRA’s ethical guidelines based on these principles, and had already registered that they centre on concepts which are not culturally neutral, such as democracy and inclusivity. There are cultures that despise democracy, seeing it as a discredited belief system, and others that either do not practise inclusivity or practise a very different version from that which the UK educational and social research culture espouses.

Perhaps because BAICE is focused on international matters, ‘culture’ was in danger of being conflated with ‘nationality’, so I argued that it is a much wider issue. The previous day I had been in a workshop for a piece of evaluation research that had included service users, volunteers, staff, partners, and evaluators. That’s five different cultures, right there. Then of course those professionally defined cultures intersect with people’s race, gender, religion, sexual orientation, etc, to create a whole world of cultural complexity.

The other presentations covered a wide range of related questions. How should we manage cultural conflicts within and beyond academic departments? How ethical is it to use RCTs in educational or social research when you know that members of control groups will be disadvantaged? How can we be inclusive as researchers in situations where including marginalised people, or those living in difficult circumstances, may put them at risk? How can we support researchers and teachers who are operating in a global environment, whether physical or virtual, to work in ethical ways?

Then we were asked to discuss whether we thought it would be possible to formulate generic ethical principles for cross-cultural research. We didn’t reach firm conclusions, but we did agree that if such principles were to be devised, the fundamental value should be respect, and the key process would be dialogue. Any generic principles would need to be broad, neither prescriptive nor vacuous, and should be tested in a variety of locations. Generic principles will always be open to interpretation, and may in some contexts conflict with each other, so they would need to be constantly negotiated. But generic principles could be useful in overturning the current myth of cultural neutrality in some academic mechanisms such as anonymous peer review.

We also agreed that research ethics is not, and should not be, only or predominantly about data collection; ethical considerations are relevant to all stages of the research process. And we agreed that it is not only students, researchers, and teachers who need educating in ethics, but also funders and members of ethical review committees.

As researchers and educators, we have an ethical duty to keep educating ourselves, because ethical approaches to research change as the world changes. It is essential to take a reflexive approach to this, including locating ourselves culturally. It helps to realise that the same ethical issues arise in lots of different types of work in different disciplines and locations, so if you look beyond your professional and geographic boundaries, you can often learn from others rather than re-inventing the ethical wheel.

We concluded that, from an ethical perspective, the quality of human interactions should be fundamental to the quality of research and teaching. This is especially the case in cross-cultural work, where people may be operating with very different assumptions. However, this is not considered relevant by the current arbiters of quality in research or teaching. Our view, though, is that it would be more ethical all round to shift the focus away from regulations and bureaucracy and towards human well-being.

While I am, generally speaking, irrepressibly optimistic, I do wonder whether that will happen in my lifetime.

Putting Research Ethics Into Practice

Doing research ethically is not about finding a set of rules to follow or ticking boxes on a form. It’s about learning to think and act in an ethical way. How ethical an action is, or is not, usually depends on its context. Therefore, everything must be thought through as far as possible, because even standard ‘ethical’ actions may not always be right. For example, many researchers regard anonymity as a basic right for participants. However, if your participants have lived under a repressive regime where their voices were silenced, they may feel very upset at the thought of being anonymised, and want any information they provide to be attributed to them using their real names. In such a context, claiming that they must be anonymised because of research ethics would in fact be unethical, because it would cause unnecessary stress to your participants.

In my role as ethics lead for the UK’s Social Research Association, I’ve been helping a group of people from the Academy of Social Sciences who have been developing some common ethical principles for social science. This has involved a long and multi-faceted consultation process, during which a number of people spoke in favour of ‘virtue ethics’, or the idea that a good person will be an ethical person.

I fundamentally disagree with this position. As demonstrated in my last post, we are all subject to cultural conditioning which is bound to influence us as researchers. We are also all vulnerable to cognitive biases such as confirmation bias (giving more weight to views or phenomena that support what we already believe) and hindsight bias (seeing events as having been predictable after they have occurred). Given this, it doesn’t matter how virtuous we are: we’re not going to be as ethical as we could be unless we put some simple safeguards in place.

The first step is to acknowledge, and try to identify, your own cultural conditioning, and to learn about the cognitive biases that may affect you. Although we’re notoriously bad at identifying our own cognitive biases, we are better at spotting other people’s, so if you’re working with others it can be helpful to look out for each other’s biases.

Then articulate the value base for your research. If you’re working alone, you need to devise this for yourself; if in a team, produce it collectively. And don’t just write a list of words; think through the meanings of the values you choose. For example, if you want your research to be ‘honest’, what does that mean in practice? We all tell lies all the time, even to ourselves, and research is no different. For example, researchers think it’s perfectly OK to lie in the interests of maintaining participant confidentiality. So if you want your research to be honest, you need to consider how honest you think it can actually be.

Try to identify your own assumptions. While it’s important to try not to make assumptions about other people, research is usually based on some assumptions, and it helps to act ethically if you know what these are. For example, are you assuming that your research is not intrusive? Or that it will be as high a priority for others as it is for you? Are you assuming that your sample is representative? Or that your data is accurate? Why are you making each assumption? What are the implications of your assumptions for your research?

Grounded theorists Strauss and Corbin suggested watching out for absolutes as a useful way to guard against biases and unhelpful assumptions. So if you find yourself, or a participant, using words like ‘never’ or ‘always’, or phrases like ‘couldn’t possibly’ or ‘everyone knows’, take time to work out what is behind the statement. You may well discover an obstructive bias or assumption, and then you can begin to search for a way to counteract that bias or assumption.

As social scientists, we try to include a wide range of people as research participants, but we can forget to take the same approach to literature. So another step is, when you’re reading, to try to find relevant work by people with different backgrounds and perspectives from yours. This could include people from different nationalities, disciplines, genders, professions, and so on. Then, when you’re writing, try to draw on the work of a wide range of people too – though only if that work is relevant and worth citing, otherwise you are being tokenistic, which is not ethical.

It is of course impossible to write a full set of ethical guidelines in a blog post. However, following these suggestions will lead you to a wider, more fully ethical approach to your research. If you want to delve further into the whys and wherefores of ethical research, there is plenty of material online. Here are some useful links:

Economic and Social Research Council (ESRC) Framework for Research Ethics – the ESRC is one of the UK’s biggest research funders, and this Framework was updated in January 2015.

The Research Ethics Guidebook – actually a website with a wealth of information, linked to the ESRC principles.

Association of Internet Researchers Ethics Guide – a wiki containing useful pointers for doing ethical research online.