Australasian Research Ethics

Systems of research ethics regulation differ around the world. Some countries have no research ethics regulation system at all. Others may have a system, but one documented only in their home language, so people like me who speak and read only English are unable to study it (Israel 2015:45). The main English-speaking countries tend to have formal systems of research ethics regulation, which grew out of biomedical research in response to ethical crises such as Nuremberg and Tuskegee. These systems are usually implemented through research ethics committees or their equivalents, such as institutional review boards in the US.

One big difference in Australasia is that work on research ethics by and for Indigenous communities seems to be further ahead in Australia and New Zealand than in any other continental region as a whole. Australia has the Guidelines for Ethical Research in Australian Indigenous Studies, produced by the Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS). AIATSIS is a statutory organisation, set up by white settlers in the 1960s and governed by a Council; the first Aboriginal Council member joined in 1970, and the Council is now predominantly made up of Aboriginal people and Torres Strait Islanders. The latest edition of the Guidelines is dated 2012, but they are under review at the time of writing. In New Zealand, Māori people with experience of research ethics committees came together to write Te Ara Tika, a document offering guidelines for Māori research ethics, published in 2010. These kinds of guidelines help Indigenous peoples to claim their right of research sovereignty, i.e. control over the conduct of, and participation in, research that affects them. However, they are not necessarily aligned with each other, or with other systems of ethical governance for research that may exist in the same jurisdictions. This may hamper collaborative or multi-area research and lead to increased separation rather than reconciliation between peoples (Ríos, Dion and Leonard 2018).

So it’s a complex and fascinating picture. I am fortunate to be working at present on a project with three experts in Australasian research ethics: Gary Allen, Mark Israel, and Colin Thomson. (The sharp-eyed among you may notice that I cited Israel in the first paragraph above. He has written a rather good book on research ethics, subtitled Beyond Regulatory Compliance and now in its second edition.) Together they are the senior consultants of the Australasian Human Research Ethics Consultancy (AHRECS), established in 2007 to provide expert consultancy services on research ethics in Australasia and the Asia-Pacific. AHRECS also works with Indigenous consultants from both Australia and New Zealand, one of the latter being Barry Smith, a co-author of Te Ara Tika.

The amount of expertise in AHRECS is enormous. Better still, they share some of this expertise with anyone who signs up for their free monthly e-newsletter on research ethics (and I can confirm from experience that they don’t spam you). You can sign up on the AHRECS website (scroll down; it’s on the right). Their blog provides a useful archive and they accept guest posts on relevant topics; I just wrote one for them on The Ethics of Evaluation Research. So you get two for the price of one this week!

This blog is funded by my beloved patrons. Writing a post each week takes me around one working day per month. At the time of writing I’m receiving funding of $17 per month. If you think 4-5 of my blog posts are worth more than $17, you can help! Ongoing support would be fantastic, but you can also support me for a single month if that works better for you. Support from patrons also enables me to keep this blog ad-free. If you are not able to support me financially, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!

Evaluating excellence in arts-based research: a case study

This article first appeared in Funding Insight on 16 June 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

I recently wrote on this topic citing the work of Sarah J Tracy from Arizona State University, who developed a set of eight criteria for assessing the quality of arts-based and other forms of qualitative and mixed-methods research. Now I propose to apply those criteria to an example of arts-based research, to find out how they can work in practice.

The research example I have chosen is by Jennifer Lapum and her colleagues from Toronto in Canada, who investigated 16 patients’ experiences of open-heart surgery. Their work is methodologically interesting because they used arts-based techniques not only for data generation, but also for data analysis and dissemination. They published an account of their work in Qualitative Inquiry, which I will interrogate here.

Lapum gathered narrative data from two interviews with post-operative patients: one while they were still in hospital, the other some weeks after they had returned home. Patients also kept journals between the two interviews. She then put together a multi-disciplinary team, including artists, researchers, designers, and medical staff, who spent a year doing arts-based analysis of the patients’ stories. This included metaphor analysis, poetic inquiry, sketching, concept mapping, and construction of photographic images. The team then developed an installation, covering 1,739 square feet, with seven sections representing the seven stages of a patient’s journey. These sections were arranged along a labyrinthine route, with the operating room at the centre, all hung with textile compositions incorporating poems and photographic images that had been generated at the analytic stage. As further dissemination, a short video on YouTube gives some idea of how it would be to visit the installation.

So how does this research fit with Tracy’s eight criteria? First we ask: is the research topic worthy? I would argue that in this case the answer is yes. Open-heart surgery must be a daunting prospect, even though the rewards can be immense. Lapum’s work offers potential patients and carers some insight into the journey they may take, and offers medical and other relevant staff an increased understanding of patients’ experiences. This is likely to improve outcomes for patients.

Second, is this project richly rigorous? The sample size is small, but the data was carefully constructed. Also, the analytic process was extremely thorough, with a multi-disciplinary team spending a year working with the data. Therefore I would conclude that this criterion has been met.

Do we have sincerity? Is the research reflexive, honest, and transparent? The published article is quite explicit about the methods used, and credits several people who were involved in the process. The article asserts that the research was reflexive, though the article itself is not, and nor do the writers outline all the decisions they took in the course of analysis and dissemination. Space in a journal article is limited, of course, but there is no mention of what was left out and why. So the research as presented here is sincere up to a point, though there is scope for more reflexivity and transparency.

What about credibility? There is certainly thick description and multiplicity of voices and perspectives in this research. Also, while the research team did not include participants as such, contributions were made by ‘knowledge users’ including cardiovascular health practitioners and former heart surgery patients. So, in Tracy’s terms, this research is definitely credible.

The next criterion is resonance. The installation certainly had aesthetic merit. It was generalisable to some extent: certainly to heart surgery patients and practitioners from other geographic locations, and perhaps to patients and practitioners of other kinds of major organ surgery. And it was also transferable: ‘we found people of diverse backgrounds not only resonated with the work but were also able to consider the application of these ideas to their lives and/or professional field’ (Lapum et al 2012:221). So, yes, it was resonant.

Did this research make a significant contribution? It evidently extended the knowledge, and may have improved the practice, of the research team. The project was methodologically unusual, and explicitly aimed to engage the audience’s aesthetic and emotional faculties, as well as their intellectual abilities, in responding to the research findings. There is no report of the installation’s impact on its audience, but, again, this may be due to lack of space. So I would argue that this criterion was met, and the research may in fact have made a more significant contribution than we can discern from one journal article.

How ethical was the research? The article does not mention ethics, though the research must surely have received formal ethical approval. The level of thought and care applied to the research suggests that it was ethical, though this is implicit rather than explicit. Once again, this may be due to space constraints.

And finally, does the research have meaningful coherence? The article tells an engaging and comprehensible story, so yes, it does.

It is perhaps unfair to judge a long and complex research project on the basis of a single journal article of just a few thousand words. Lapum and her colleagues have published several articles about their research; to make a full judgement I should really read them all. However, if the authors had carried out an analysis of their article based on Tracy’s criteria, they might have chosen to add a sentence or two about what they left out, a paragraph or two on reflexivity, a short description of the impact of the installation on its audience, and some information about ethics. The article as it stands is excellent; with these amendments, it could have been outstanding. This demonstrates that Tracy’s criteria are useful for assessing not only research itself, but also reports of research.

How to evaluate excellence in arts-based research

This article first appeared in Funding Insight on 19 May 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Researchers, research commissioners, and research funders all struggle with identifying good quality arts-based research. ‘I know it when I see it’ just doesn’t pass muster. Fortunately, Sarah J Tracy of Arizona State University has developed a helpful set of criteria that are now being used extensively to assess the quality of qualitative research, including arts-based and qualitative mixed-methods research.

Tracy’s conceptualisation includes eight criteria: worthy topic, rich rigour, sincerity, credibility, resonance, significant contribution, ethics, and meaningful coherence. Let’s look at each of those in a bit more detail.

A worthy topic is likely to be significant, meaningful, interesting, revealing, relevant, and timely. Such a topic may arise from contemporary social or personal phenomena, or from disciplinary priorities.

Rich rigour involves care and attention, particularly to sampling, data collection, and data analysis. It is the antithesis of the ‘quick and dirty’ research project, requiring diligence on the part of the researcher and leaving no room for short-cuts.

Sincerity involves honesty and transparency. Reflexivity is the key route to honesty, requiring researchers to interrogate and display their own impact on the research they conduct. Transparency focuses on the research process, and entails researchers disclosing their methods and decisions, the challenges they faced, any unexpected events that affected the research, and so on. It also involves crediting all those who have helped the researcher, such as funders, participants, or colleagues.

Credibility is a more complex criterion which, when achieved, produces research that can be perceived as trustworthy and on which people are willing to base decisions. Tracy suggests that there are four dimensions to achieving credibility: thick description, triangulation/crystallisation, multiple voices, and participant input beyond data provision. Thick description means lots of detail and illustration to elucidate meanings which are clearly located in terms of theoretical, cultural, geographic, temporal, and other such location markers. Triangulation and crystallisation are both terms that refer to the use of multiplicity within research, such as through using multiple researchers, theories, methods, and/or data sources. The point of multiplicity is to consider the research question in a variety of ways, to enable the exploration of different facets of that question and thereby create deeper understanding. The use of multiple voices, particularly in research reporting, enables researchers more accurately to reflect the complexity of the research situation. Participant input beyond data provision provides opportunities for verification and elaboration of findings, and helps to ensure that research outputs are understandable and implementable.

Although all eight criteria are potentially relevant to arts-based research, resonance is perhaps the most directly relevant. It refers to the ability of research to have an emotional impact on its audiences or readers. Resonance has three aspects: aesthetic merit, generalisability, and transferability. Aesthetic merit means that style counts alongside, and works with, content, such that research is presented in a beautiful, evocative, artistic and accessible way. Generalisability refers to the potential for research to be valuable in a range of contexts, settings, or circumstances. Transferability is when an individual reader or audience member can take ideas from the research and apply them to their own situation.

Research can contribute to knowledge, policy, and/or practice, and will make a significant contribution if it extends knowledge or improves policy or practice. Research may also make a significant contribution to the development of methodology; there is a lot of scope for this with arts-based methods.

Several of the other criteria touch on ethical aspects of research. For example, many researchers would argue that reflexivity is an ethical necessity. However, ethics in research is so important that it also requires a criterion of its own. Tracy’s conceptualisation of ethics for research evaluation involves procedural, situational, relational, and exiting ethics. Procedural ethics refers to the system of research governance – or, for those whose research is not subject to formal ethical approval, the considerations such a system covers, such as participant welfare and data storage. Situational ethics requires consideration of the specific context for the research and how that might or should affect ethical decisions. Relational ethics involves treating others well during the research process: offering respect, extending compassion, keeping promises, and so on. And exiting ethics covers the ways in which researchers present and share findings, as well as aftercare for participants and others involved in the research.

Research that has meaningful coherence effectively does what it sets out to do. It will tell a clear story. That story may include paradox and contradiction, mess and disturbance. Nevertheless, it will bring together theory, literature, data and analysis in an interconnected and comprehensible way.

These criteria are not an unarguable rubric to which every qualitative researcher must adhere. Indeed there are times when they will conflict in practice. For example, you may have a delightfully resonant vignette, but be unable to use it because it would identify the participant concerned; participants may not be willing or able to be involved beyond data provision; and all the diligence in the world can’t guarantee a significant contribution. So, as always, researchers need to exercise their powers of thought, creativity, and improvisation in the service of good quality research, and use the criteria flexibly, as guidelines rather than rules. However, what these criteria do offer is a very helpful framework for assessing the likely quality of research at the design stage, and the actual quality of research on completion.

Next week I will post a case study demonstrating how these criteria can be used.

Creative Methods for Evaluation: A Frustration

Evaluation is a particular type of applied research designed to assess the value of a service, intervention, policy or other such phenomenon. This is relevant to all of us as it forms the basis for many decisions about public service provision. Despite being applied research, evaluation also has a significant academic profile, with dedicated journals, many books, and university departments with professors of evaluation in countries around the world.

There is a range of types of, and approaches to, evaluation research. They all have some things in common: they start with the desired outcomes of the service, intervention, etc.; formulate indicators that would show those outcomes had been met; then collect data in line with those indicators and analyse it to identify the extent to which the outcomes have been met. So, for example, if a community service aims to reduce loneliness, its evaluators might decide that one indicator could be a reduction in reports of loneliness to community-based doctors and nurses, then work with health colleagues to collect information from health records before and after the provision of the service to show whether there was any difference. Evaluators also write recommendations for ways to improve the service, intervention, etc. The intention is that these recommendations are implemented, then later reviewed in another cycle of evaluation research.
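To make the indicator idea concrete, here is a minimal sketch in Python of the kind of before-and-after comparison an evaluator might run on the loneliness example above. The monthly counts are invented for illustration, and the simple percentage-change calculation is an assumption on my part; a real evaluation would draw on actual health records and would probably also test whether any difference is meaningful rather than chance.

```python
# Illustrative sketch only: invented monthly counts of loneliness reports,
# standing in for data extracted from (hypothetical) health records.
before = [14, 17, 15, 16, 18, 15]  # reports per month before the service began
after = [12, 11, 9, 10, 8, 9]      # reports per month after the service began

mean_before = sum(before) / len(before)
mean_after = sum(after) / len(after)

# The indicator: has the average number of monthly reports fallen?
change_pct = (mean_after - mean_before) / mean_before * 100

print(f"Mean monthly reports before: {mean_before:.1f}")
print(f"Mean monthly reports after:  {mean_after:.1f}")
print(f"Change: {change_pct:+.1f}% (negative means fewer reports, i.e. indicator met)")
```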

The basics of general research practice also apply to evaluation: plan thoroughly, collect and analyse data, produce written and other outputs, and publish your findings. From time to time I teach a course called ‘Creative Research Methods for Evaluation’, usually as part of the UK and Ireland Social Research Association’s open training programme. All sorts of people come on this course: central and local government researchers, charity researchers, health researchers, researchers in private practice, research funders – a real mix, which makes it great fun.

I know quite a bit about creative methods; after all, I wrote a book on the subject. I tell my students about arts-based methods, research using technology, mixed methods, and transformative research frameworks. We talk about when these methods are appropriate to use, and how they can work side-by-side with more established methods. I give them lots of examples of creative methods in use.

And here is the huge frustration. While I have plenty of examples of creative methods in practice, very few come from evaluation research. I have some examples from my own practice, though only as verbal stories because the written and other outputs are subject to client confidentiality. This is a big problem with evaluation research: because it is applied, i.e. often conducted by and for individual organisations, it is rarely published beyond its immediate audience. When it is published, it is often simply uploaded to a web page and so disappears into the depths of the internet. And if it is both published and findable, it is not likely to include the use of creative methods.

There are many examples of perfectly competent evaluations using well-established methods. However, evaluators today are working on complex projects and benefit from having more methodological options at their fingertips. I know my course helps, because former students have told me so, but during the course someone always asks why I’m not using examples from evaluation research. (Even though I explain this problem at the start!) I wish I could use such examples; I’m sure they’re out there; but even though I have searched, and asked, and searched again, I can’t find them. So this is by way of an appeal: do you know of any good resources that showcase creative methods in evaluation research? By ‘good resources’ I mean well written outputs or short engaging videos (3-4 minutes at most) that are not too basic, as my students are generally quite experienced. If you have anything to suggest, please let me know in the comments.

When A Contract Ends

I’m putting the finishing touches to the report of a research project that’s been running for the last 18 months. And then it’ll be over. Which is a bit sad, for a number of reasons.

First, the work is for a national organisation, but unusually that organisation is based close to where I live in the Midlands of England. So, unlike most of my contracts, this job hasn’t involved a lot of travelling: much of the work has been done within half an hour’s drive of my office.

Second, I’ve been working with another researcher, a colleague I met for the first time on the day we went to be interviewed for this job. I liked him then and my respect and appreciation for him has grown throughout the project. He’s responsive, thoughtful, caring, creative, and generally a terrific collaborator. I will miss working with him.

Third, it’s been an interesting, complex project, evaluating a community-based advocacy service for older people with cancer. The work is multi-faceted and that makes it a real challenge to investigate it fully and come up with suitable recommendations for taking the work forward.

Fourth, it’s paid some of the bills. These kinds of longer-term contracts, which provide a basic level of income for a period of time, don’t come along very often but are invaluable for indie researchers.

Letting go of a project can be hard for anyone, but there are some specific areas of difficulty for indie researchers. Commissioners don’t think to get back in touch to tell us how our work is being used, and seem surprised if we email or phone to ask. We have very little say in how our work is disseminated, and sometimes it’s not disseminated at all, which can be really frustrating. And unlike our academic colleagues, we don’t have the requirement to publish that can keep the relationships formed during a project alive for months and years after completion.

So in many ways I’m sorry to see this contract end, but the pill is very thoroughly sugared by the new contract I landed earlier this month. Without that I think I’d be in deep mourning. But this time it really does feel as though, as one door closes, another opens.