Why Research Participants Rock

I wrote last week about the creative methods Roxanne Persaud and I used in our research into diversity and inclusion at Queen Mary University of London last year. One of those was screenplay writing, which we thought would be particularly useful if it depicted an interaction between a student and a very inclusive lecturer, or between a student and a less inclusive lecturer.

I love to work with screenplay writing. I use play script writing too, sometimes, though less often. With play script writing, you’re bound by theatre rules, so everything has to happen in one room, with minimal special effects. This can be really helpful when you’re researching something that happens in a specific place such as a parent and toddler group or a team sport. Screenplay, though, is more flexible: you can cut from private to public space, or include an army of mermaids if you wish. Also, screenplay writing offers more scope for descriptions of settings and characters, which, from a researcher’s point of view, can provide very useful data.

Especially when participants do their own thing! Our screenplay-writing participants largely ignored our suggestions about interactions between students and lecturers. Instead, we learned about a south Asian woman, the first in her family to go to university, who was lonely, isolated, and struggling to cope. We found out about a non-binary student’s experience of homophobia, sexism and violence in different places on campus. We saw how difficult it can be for Muslim students to join in with student life when alcohol plays a central role. Scenes like these gave us a much richer picture of facets of student inclusion and exclusion than we would have had if our participants had kept to their brief.

Other researchers using creative techniques have found this too. For example, Shamser Sinha and Les Back did collaborative research with young migrants in London. One participant, whom they call Dorothy, wanted to use a camera, but wasn’t sure what to capture. Sinha suggested exploring how her immigration status affected where she went and what she could buy. Instead, Dorothy went sightseeing, and took pictures of Buckingham Palace. The stories she told about what this place and experience meant to her enriched the researchers’ perceptions of migrant life, not just the ‘aggrieved’ life they were initially interested in, but ‘her free life’ (Sinha and Back 2013:483).

Katy Vigurs aimed to use photo-elicitation to explore different generations’ perceptions of the English village where they lived. She worked with a ladies’ choir, a running club, and a youth project. Vigurs asked her participants to take pictures that would show how they saw and experienced their community. The runners did as she asked. The singers, who were older, took a few photos and also, unprompted, provided old photographs of village events and landmarks, old and new newspaper cuttings, photocopied and hand-drawn maps of the area with added annotations, and long written narratives about their perceptions and experiences of the village. The young people also took some photos, mostly of each other, but then spent a couple of hours with a map of the village, tracing the routes they used and talking with the researcher about where and how they spent time. Rather than standard photo-elicitation, this became ‘co-created mixed-media elicitation’ as Vigurs puts it (Vigurs and Kara 2016:520) (yes, I am the second author of this article, but all the research and much of the writing is hers). Again, this provided insights for the researcher that she could not have found using the method she originally planned.

Research ethics committees might frown on this level of flexibility. I would argue that it is more ethical than the traditional prescriptive approach to research. Our participants have knowledge and ideas and creativity to share. They don’t need us to teach them how to interact and work with others. In fact, our participants have a great deal to teach us, if we are only willing to listen and learn.

Creative Research In Practice

It’s not often I get to share an output from the commissioned research I do. Sometimes clients don’t want to share publicly for reasons of confidentiality, and sometimes there are other reasons they don’t publish. As a commissioned researcher, I can’t publish the work someone else has paid for without their agreement. But I’m glad to say that Queen Mary University of London (QMUL) has published the full report of the research I did for them last year with my colleague Roxanne Persaud.

The research question was: How can QMUL improve students’ experience with respect to the inclusivity of their teaching, learning, and curricula? The original brief focused on the protected characteristics covered by the UK Equality Act 2010: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion and belief, and sexual orientation. Roxanne and I advised QMUL to take a more holistic approach to inclusivity, as the protected characteristics don’t cover some factors that we know can lead to discrimination and disadvantage, such as socioeconomic status and caring responsibilities. We recommended Appreciative Inquiry as a methodological framework, because it doesn’t start from a deficit perspective emphasising problems and complaints, but focuses on what an organisation does well and what it could do better. (It doesn’t ignore or sideline problems and complaints; it simply starts from the standpoint that there are assets to build on.) And of course we suggested creative techniques, particularly for data-gathering and sense-making, alongside more conventional methods.

Roxanne and I were both keen to do this piece of work because we share an interest in diversity and inclusion. Neither of us had worked with QMUL before and we weren’t sure whether they would appreciate our approach to their brief. Sometimes commissioners want to recruit people who will do exactly what they specify. Even so, I’d rather say how I think a piece of work needs to be done; if the commissioner doesn’t want it done that way, then I don’t want the job.

QMUL shortlisted six sets of applicants. The interview was rigorous. Roxanne and I came out feeling we’d done ourselves justice, but with no clue as to whether we might have got the work or not. But we did!

The research was overseen by a Task & Finish group, made up of staff from different departments, who approved the methods we had put forward. We conducted a targeted literature review to identify key issues and best practice for inclusivity in the UK and overseas, and set the research in an institutional, societal, and theoretical context. The theoretical perspectives we used began with the theory of intersectionality developed by the law professor Kimberlé Crenshaw, which we then built on using the diffraction methodology of the physicist and social theorist Karen Barad. These two theories together provided a binocular lens for looking at a very complex phenomenon.

The timescale for the research was tight, and data gathering collided with Ramadan, exams, and the summer holidays. So, not surprisingly, we struggled with recruitment, despite strenuous efforts by us and by helpful colleagues at QMUL. We were able to involve 17 staff and 22 students from a wide range of departments. We conducted semi-structured telephone interviews with the staff, and gave students the option of participating in face-to-face interviews or group discussions using creative methods. These methods included:

  • The life-sized lecturer: an outline figure on a large sheet of paper, with a label indicating what kind of person they are, e.g. ‘a typical QMUL lecturer’ and ‘an ideally inclusive lecturer’, which students could write and draw on.
  • Sticker maps: a map of organisational inclusivity, which we developed for QMUL, on which students could place small green stickers to indicate areas of good practice and small red stickers to indicate areas for further improvement.
  • Empathy maps: tools to help participants consider how other students or staff in different situations think and feel; what they might see, say, and do; and where they might experience ‘pain or gain’ with respect to inclusive learning.
  • Screenplay writing: a very short screenplay depicting an interaction between a student and a very inclusive lecturer, or between a student and a less inclusive lecturer. The screenplays included dialogue and could also include information about characters’ attributes, the setting, and so on.

We generated over 50,000 words of data, which we imported into NVivo. Roxanne and I spent a day working together on emergent data coding, discussing excerpts from different interviews and group sessions, with the aim of extracting maximum richness. Then I finished the coding and carried out a thematic analysis while Roxanne finished the literature review.

We wrote a draft report, and then had two ‘review and refine’ meetings for sense-making, which were attended by 24 people. The first meeting was with members of the Task & Finish group, and the second was an open meeting for participants and other interested people. We presented the draft findings, and put up sheets on the walls listing 37 key factors identified in the draft report. We gave participants three sticky stars to mark their top priorities, and 10 sticky dots to show where they would allocate resources. People took the resource allocation incredibly seriously, and it was interesting to see how collaboratively they worked on this. I heard people saying things like, ‘That’s important, but it’s already got five dots on, so I’m going to put another one here.’ I wish I could have recorded all their conversations! We did collect some further data at these meetings, including touch-typed notes of group discussions and information about the relative frequency and importance of the 37 key factors. All of this was synthesised with the previously collected data in the final report and its recommendations.
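The star-and-dot exercise above is, in effect, a simple weighted tally. As a minimal sketch of how such votes could be aggregated once transcribed from the wall sheets (the factor names and ballots below are invented for illustration, not data from the QMUL study):

```python
from collections import Counter

# Hypothetical ballots: each attendee places their sticky dots on key factors.
# These factor names are made up; the real study had 37 factors.
ballots = [
    ["assessment feedback", "curriculum diversity", "curriculum diversity"],
    ["staff training", "assessment feedback"],
    ["curriculum diversity", "staff training", "staff training"],
]

# Tally every dot across all attendees.
tally = Counter(dot for ballot in ballots for dot in ballot)

# Rank factors by total dots, highest first.
for factor, dots in tally.most_common():
    print(f"{factor}: {dots}")
```

In the actual sessions the tallying happened visually on the wall, with participants responding to each other’s placements; a script like this only captures the final counts, not that collaborative process.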

The comparatively small number of participants was a limitation, though we did include people from all faculties and most schools, and we certainly collected enough data for a solid qualitative study. We would have liked some quantitative data too, but the real limitation was that most of the people we reached were already concerned about inclusivity. We didn’t reach enough people to be able to say with certainty whether this was, or was not, the case more widely at QMUL. Also, while none of our participants objected to our methodology or methods, others at QMUL may have done so. In a university including physicists, mathematicians, engineers, social scientists, artists, doctors, dentists and lawyers, among others, it seems highly unlikely that anyone could come up with an approach to research that would receive universal approval.

Yet I’m proud of this research. It’s not perfect – for example, I’ve realised, in the course of writing this blog post, that we didn’t explicitly include the research question in the research report! But its title is Inclusive Curricula, Teaching, and Learning: Adaptive Strategies for Inclusivity, which seems clear enough. I’m sure there are other ways it could be improved. But I’m really happy with the central features: the methodology, the methods, and the flexibility Roxanne and I offered to our client.

Evaluating excellence in arts-based research: a case study

This article first appeared in Funding Insight on 16 June 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

I recently wrote on this topic citing the work of Sarah J Tracy from Arizona State University, who developed a set of eight criteria for assessing the quality of arts-based and other forms of qualitative and mixed-methods research. Now I propose to apply those criteria to an example of arts-based research, to find out how they can work in practice.

The research example I have chosen is by Jennifer Lapum and her colleagues from Toronto in Canada, who investigated 16 patients’ experiences of open-heart surgery. Their work is methodologically interesting because they used arts-based techniques, not only for data generation, but also for data analysis and dissemination. They published an account of their work in Qualitative Inquiry, which I will interrogate here.

Lapum gathered narrative data from two interviews with post-operative patients, one while they were still in hospital and the other some weeks after returning home. Patients also kept journals between the two interviews. She then put together a multi-disciplinary team of people, including artists, researchers, designers, and medical staff, and they spent a year doing arts-based analysis of the patients’ stories. This included metaphor analysis, poetic inquiry, sketching, concept mapping, and construction of photographic images. The team then developed an installation, covering 1,739 square feet, with seven sections representing the seven stages of a patient’s journey. These sections were arranged along a labyrinthine route, with the operating room at the centre, all hung with textile compositions incorporating poems and photographic images that had been generated at the analytic stage. Further dissemination via a short video on YouTube gives some idea of how it would be to visit this installation.

So how does this research fit with Tracy’s eight criteria? First we ask: is the research topic worthy? I would argue that in this case the answer is yes. Open-heart surgery must be a daunting prospect, even though the rewards can be immense. Lapum’s work offers potential patients and carers some insight into the journey they may take, and offers medical and other relevant staff an increased understanding of patients’ experiences. This is likely to improve outcomes for patients.

Second, is this project richly rigorous? The sample size is small, but the data was carefully constructed. Also, the analytic process was extremely thorough, with a multi-disciplinary team spending a year working with the data. Therefore I would conclude that this criterion has been met.

Do we have sincerity? Is the research reflexive, honest, and transparent? The published article is quite explicit about the methods used, and credits several people who were involved in the process. The article asserts that the research was reflective, though the article itself is not. Nor do the writers outline all the decisions they took in the course of analysis and dissemination. Space in a journal article is limited, admittedly, but there is no mention of what was left out and why. So the research as presented here is sincere up to a point, but there is scope for more reflexivity and transparency.

What about credibility? There is certainly thick description and multiplicity of voices and perspectives in this research. Also, while the research team did not include participants as such, contributions were made by ‘knowledge users’ including cardiovascular health practitioners and former heart surgery patients. So, in Tracy’s terms, this research is definitely credible.

The next criterion is resonance. The installation certainly had aesthetic merit. It was generalisable to some extent: certainly to heart surgery patients and practitioners from other geographic locations, and perhaps to patients and practitioners of other kinds of major organ surgery. And it was also transferable: ‘we found people of diverse backgrounds not only resonated with the work but were also able to consider the application of these ideas to their lives and/or professional field’ (Lapum et al 2012:221). So, yes, it was resonant.

Did this research make a significant contribution? It evidently extended the knowledge, and may have improved the practice, of the research team. The project was methodologically unusual, and explicitly aimed to engage the audience’s aesthetic and emotional faculties, as well as their intellectual abilities, in responding to the research findings. There is no report of the installation’s impact on its audience, though, again, this may be due to lack of space. So I would argue that this criterion was met, and the research may in fact have made a more significant contribution than we can discern from one journal article.

How ethical was the research? The article does not mention ethics, though it seems inevitable that the research must have received formal ethical approval. The level of thought and care applied to the research suggests that it was ethical, though this is implicit rather than explicit. But, once again, this may be due to space constraints.

And finally, does the research have meaningful coherence? The article tells an engaging and comprehensible story, so yes, it does.

It is perhaps unfair to judge a long and complex research project on the basis of a single journal article of just a few thousand words. Lapum and her colleagues have published several articles about their research; to make a full judgement I should really read them all. However, if the authors had carried out an analysis of their article based on Tracy’s criteria, they might have chosen to add a sentence or two about what they left out, a paragraph or two on reflexivity, a short description of the impact of the installation on its audience, and some information about ethics. The article as it stands is excellent; with these amendments, it could have been outstanding. This demonstrates that Tracy’s criteria are useful for assessing not only research itself, but also reports of research.