How To Market Your Academic Journal Article


What can you do to stand out in such a crowd?

Last week I wrote about how to market your academic book. Journal articles, too, benefit from marketing. If you’ve ever had one published, you have probably received one or more emails from the publisher encouraging you to help market your article. It is in the publisher’s interest for you to help them with marketing, because higher visibility usually leads to more citations, and more citations (within two years of publication) help the journal concerned increase its impact factor. It may, though, be in your interest too.
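Roughly speaking, a journal’s two-year impact factor for year Y works like this:

\[
\text{IF}_Y = \frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

So every citation your article attracts within its first two years feeds straight into the numerator – which is why publishers are so keen for you to boost visibility early.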

Around two and a half million academic journal articles are published each year on this planet. Generally speaking, if you want your one journal article to be noticed and read among these millions, you need to help it along.

Marketing your journal article begins before you finish writing. A clear and descriptive title, and the most relevant keywords or phrases, will help your article to be visible online. Use as many keywords or phrases as the journal permits. If you find it difficult to come up with enough keywords or phrases, think about what your readers might search for. Also, make sure at least three or four of your keywords or phrases appear at least once in your abstract. This all helps to make your article easier for search engines to find.

The abstract, too, is important. It should tell a clear story in itself, and should include the key ‘take-away’ point you have made. To figure out what that is, it may help to think about what headline a journalist would give to a piece based on your article. A well-written and well-structured abstract will entice more people to read further.

Once your article is published, there is plenty more work to be done if you want your research to have an impact. Some publishers make your article free to access online for a specific period or give you a limited number of free eprints. Either way, you can advertise this through email, discussion lists, and social media. Talking of email, it can be useful to add a link to your article as part of your email signature. Also, you can add the link to any online presence, such as your institutional web page and your LinkedIn profile.

If you teach a course for which your article would be suitable material, add it to the reading list. Also, unless it’s open access, check whether your university library subscribes to the journal concerned; if not, recommend it to them.

It’s helpful to write a blog post about your research with a link to your article. This could be on your own blog (if you have one), or on a blog in your field with a wide readership, or on your publisher’s blog. Again, you can advertise the link through social media.

An infographic can be useful too, either as part of a blog post or as a stand-alone information source – or both. There is information here on how to create an infographic.

You can make a short video abstract of your journal article and upload it to a video sharing site such as YouTube or Vimeo. This is becoming an increasingly popular way to share information, and there are some great examples online such as this one on social jetlag. It’s not difficult to do, and can be done using a smartphone; there is a tutorial here.

Another option is to create a press release to alert the mainstream media. This is generally only worth doing if the journal article contains information that will interest a lot of people. It also needs to be ‘newsworthy’, e.g. relevant to current news coverage, providing a new perspective on past news coverage, or coinciding with an anniversary. A press release is a short document, usually only a page or at most two, with a specific format; details here.

If this all sounds like a lot of extra work: it is. I’m not suggesting you should do everything listed above; nor am I suggesting that this post is exhaustive. But if you want to make your journal article visible to potential readers, you will almost certainly need to take one or more of these steps.

How To Market Your Academic Book


Norwich Market by talented artist Lane Mathias

If you’re going to write an academic book, you need to be prepared to do some marketing. Otherwise it will sink, without so much as a bubble, deep into the ocean of published academic books. Of course if all you need is the publication on your CV, then don’t waste your time on marketing. But if you’ve written something you actually want people to read and use, you need to get to grips with the whole marketing thing.


There are three main categories of sole-authored academic book: monograph, textbook, and trade book. A monograph usually has quite a narrow topic, perhaps just one research project. Its audience will be small, primarily academic peers and perhaps a few doctoral students, and its royalties will be low or non-existent. A textbook is probably for undergraduates, maybe also early-stage postgraduates, with a potential audience of millions and, if you’re lucky, significant royalties. A trade book sits anywhere in between. You need to know which yours is to help you figure out who your readers might be and so how to market your book.

Your publisher’s marketing department should help you. After all, it’s in their interest to sell as many copies of your book as possible. But they can only help up to a point, because they have a lot of other books to try to sell as well as yours. It’s worth having a chat with them, and finding out what they can and can’t do to help you. For example, they should:

  • Post information about your book online well ahead of its publication date
  • Market your book to relevant retailers and wholesalers, including bookshops and online sellers, and to academic libraries
  • Include your book in their catalogue and on their flyers for specific events such as conferences in your field
  • Send out review copies, including to people you find who are willing to write reviews or can otherwise promote the book to a significant number of people
  • Take your book to academic conferences, display it along with other books on their stand, and offer a conference discount
  • Promote your book via their e-newsletter and social media channels
  • Give you a jpeg of the cover for your own use
  • Make flyers for you to take to conferences and seminars

Realistically, though, a lot of this will happen around the time of publication. They won’t ignore your book thereafter, but they simply can’t push all of their books all of the time. So, if you want your book to be widely read and used, you need to market it too.

I have no background or training in marketing; I’ve been learning on the job since my first research methods book came out in 2012. I’ve been lucky to have had terrific support from the marketing department at my lovely publisher, Policy Press, though I know not every academic writer has this experience. I have learned some things you can do to help raise awareness of your book. These include:

  • Add information about the book to your email signature
  • Add information about the book to any web pages featuring you, such as your profile on your employer’s website and your LinkedIn page
  • Send information about the book to any e-lists you subscribe to
  • Send information about the book to your professional association(s) to include in their e-newsletter
  • Ask your employer for help publicising your book through their website, newsletter, and other publicity channels
  • Write one or more blog posts featuring the book for blogs with big readerships in your field, and publicise the blog post(s) at and after publication through your social media channels
  • Create a video about the book or some aspect of the book, upload to YouTube or Vimeo and publicise through your social media channels
  • Create a podcast about the book or some aspect of the book, upload and publicise through your social media channels
  • Publicise the book itself through social media – don’t keep saying ‘buy my book’, but promote any good reviews or positive comments you receive
  • Write an article for the mainstream media based on, or featuring, your book
  • Make sure your book cover appears on any PowerPoint or other presentation you give, and mention it in the presentation

Then there’s the more unofficial kind of marketing. This blog is, in one sense, a marketing tool. It’s other things too – a place to keep my professional musings, for a start – but marketing is part of its purpose. This is marketing by providing something of value (or at least doing my best to do so!). Another method I use is to mail signed bookplates to people who have bought copies of my books. That’s counter-intuitive marketing: in theory, I should be wooing people who haven’t yet bought copies. But I think it can help, because it will improve the likelihood of people talking to others about my work.

Another unofficial kind is marketing through networks. This is unpredictable and you always need to be alert for opportunities. For example, at an academic event recently I met a Prof from a university where I don’t have any contacts. We were talking about graphic novels in research, and I remarked that I’d written about that in my last book on creative research methods. The Prof was interested and asked me to email over details of my book. I did so a few days later, and received a reply saying, ‘Thank you for this. I will raise it with other staff for dissertations as it looks useful.’ So that should at least have sold a copy or two for their library, and with luck it’ll make its way onto more course lists.

I need to figure out what else to do, though, because my royalties this year were lower than last year: £1,236.70 as against £1,627.20. That’s quite a drop – £390.50, or almost a quarter – and disappointing in a year when I published a second edition and had lots of positive feedback on both books. There are two tried-and-tested ways of increasing royalties that I know of. One is to write more books, and I’m working on that. The other is to do more marketing: not only for my books, but also for the journal articles I’ve written and co-written. More on marketing those next week.

The Ethics of Expertise

Last week I wrote about the ethics of research evidence, in which I cited Charles Knight’s contention that evidence should be used by people with expertise. Knight also questions how we can identify people with expertise. He suggests they would ‘have to do the sorts of things experts do – read the literature, do research, have satisfied clients, mentor novices, and so on’. He adds, ‘This approach is not likely to concentrate expertise in a few hands.’ (Knight 2004:2)

I like Knight’s attempts to widen the pool of acknowledged experts. He is evidently aware of the scope for tension between expert privilege and democracy. Conventionally, experts are few in number, specialists, and revered or at least respected for their expertise. However, this can also be viewed as exclusionary, particularly as most experts of this kind are older white men. Also, I’m not sure Knight goes far enough.

Knight was writing at the start of the century and, more recently, different definitions of ‘expert’ have begun to creep into the lexicon. For example, the UK’s Care Quality Commission (CQC), which inspects and regulates health and social care services, has defined ‘experts by experience’. These are people with personal experience of using, or caring for someone who uses, services that the CQC oversees. Experts by experience take an active part in service inspections, and their findings are used to support the work of the CQC’s professional inspectors.

In research, there is a specific participatory approach known as critical communicative methodology (CCM) which was developed around 10 years ago. CCM takes the view that everyone is an expert in something, everyone has something to teach others, and everyone is capable of critical analysis. This is a fully egalitarian methodology which uses respectful dialogue as its main method.

However, in most of research and science, experts are still viewed as those rare beings who have developed enough knowledge of a specialist area to be able to claim mastery of their subject. There is a myth that experts are infallible, which of course they’re not; they are human, with all the associated incentives and pressures that implies. It seems that experts are falling from grace daily at present for committing social sins from fraud to sexual harassment (and getting caught).

Perhaps more worryingly, the work of scientific experts is also falling from grace, in the form of the replication crisis. This refers to the finding that scientific discoveries are not as easy to replicate as was once supposed. As replication is one of the key criteria scientists use to validate the quality of each other’s work, this is a Big Problem. There is an excellent explanation of the replication crisis, in graphic form, online here.

My own view is that replication is associated with positivism, objectivity, the neutrality of the researcher, and associated ideas which have now been fairly thoroughly discredited. I think this ‘crisis’ could be a really good moment for science, as it may lead more people to understand that realities are multiple, researchers influence and are influenced by their work, and the wider context inevitably plays a supporting and sometimes a starring role.

As a result of various factors, including the replication crisis, it seems that the conventional concept of an expert is under threat. This too may be no bad thing, if it leads us to value everyone’s expertise. Perhaps it could also help to overturn the ‘deficit model’ which still prevails in so much social science, where (expert) researchers focus on people’s deficits – their poverty, ill-health, low educational attainment, unemployment, inadequate housing, and so on – rather than on their strengths and the positive contributions they make to our society. The main argument in favour of the deficit model is that these are problems research can help to solve, but if that were true, I think they would have been solved long since.

For sure, at times you need an expert you can trust. For example, if your car goes wrong, you’ll want to take it to an expert mechanic; if you develop a health problem, you’ll want to seek advice from an expert medic. It doesn’t seem either ethical or sensible, to me, to try to discard the conventional role of the expert altogether. But it does seem sensible to attack the links between expertise and privilege. After all, experts can’t exercise their expertise without input from others. At its simplest, the mechanic needs you to tell them what kind of a funny noise your car is making, and under what circumstances; the medic needs you to explain where and when you feel pain. Also, it doesn’t seem sensible to restrict conventional experts to a single area of expertise. That mechanic may also be an expert bassoon player; the medic may know more about antique jewellery than you ever thought possible.

In my view, the ethical approach to expertise is to treat everyone as an expert in matters relating to their own life, and beyond that, as someone who has a positive contribution to make to a specific task at hand and/or wider society in general. Imagine a world in which we all acknowledged and valued each other’s knowledge, experience, and skills. You may say I’m a dreamer – but I’m not the only one.

The Ethics of Research Evidence

Like so many of the terms used in research, ‘evidence’ has no single agreed meaning. Nor does there seem to be much consensus about what constitutes good or reliable evidence. The differing approaches of other professions may confuse the picture. For example, evidence that would convince a judge to hand down a life sentence would be dismissed by many researchers as anecdote.

Given that evidence is such a slippery, contentious topic, how can researchers begin to address its ethical aspects? A working definition might help: evidence is ‘information or data that people select to help them answer questions’ (Knight 2004:1). Using that definition, we can look at the ethical aspects of our relationship with evidence: how we choose, use, and apply the evidence we gather and construct.

Evidence is often talked and written about as though it is something neutral that simply exists, like a brick or a table, to be used by researchers at will. Knight’s definition is helpful because it highlights the fact that researchers select the evidence they use. Evidence, in the form of facts or artefacts, is neither ethical nor unethical. But in the process of selection, there is always room for bias, and that is where ethical considerations come into play.

To choose evidence ethically, I would argue that first you need to recognise the role of choice in the process, and the associated potential for bias. Then you need to consider some key questions, such as:

  • What is the question you want to answer?
  • What are your existing thoughts and feelings about that topic?
  • How might they affect your choices about evidence?
  • What can you do to make those choices open and defensible?

The aim is to be able to demonstrate that you have chosen the information or data you intend to define as ‘evidence’ in as ethical a way as possible.

Once you have chosen your evidence, you need to use it ethically within the research process. This means subjecting all your evidence to rigorous analysis, interpreting your findings accurately, and reporting in ways that will communicate effectively with your audiences. These are some of the key responsibilities of ethical researchers.

Research is a process that converts evidence into research evidence. It starts with the information or data that researchers choose to use as evidence, which may be anything from statistics to artworks. Then, through the process of (one would hope) diligent research, that evidence becomes research evidence. Whether and how research evidence is applied in the wider world is the third ethical aspect.

Sadly, there is a great deal of evidence that evidence is not applied well, or not applied at all. Most professional researchers have tales to tell of evidence being buried by research funders or commissioners. This seems particularly likely where findings conflict with political or money-making ambitions. In some sectors, such as third sector evaluation, this is widespread (Fiennes 2014). How can anyone make an evidence-based decision if the evidence collected by researchers has not been converted into evidence they can use?

The use of research evidence is often beyond the control of researchers. One practical action a researcher can take is to suggest a dissemination plan at the outset. This can be regarded as ethical, because such a plan should increase the likelihood of research evidence being used. But it could also be regarded as manipulative: using the initial excitement around a new project to persuade people to sign up to a plan they might later regret.

It seems that ethics and evidence are uneasy bedfellows. Again, Knight tries to help us here, by suggesting that research evidence should be used by people with expertise. This raises a further, pertinent question: what is the ethics of expertise? I will address that next week.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

Dissemination, Social Media, and Ethics

I inadvertently caused a minor Twitterstorm last week, and am considering what I can learn from this.

I spotted a tweet from @exerciseworks reporting some research. It said “One in 12 deaths could be prevented with 30 minutes of exercise five times a week” (originally tweeted by @exerciseworks on 22 Sept, retweeted on the morning of 10 October). The tweet also included this link, but I didn’t click through; I just responded directly to the content of the tweet.

Here’s their tweet and my reply:


The @exerciseworks account replied saying it wasn’t their headline. This was true; the article is in the prestigious British Medical Journal (BMJ), which should know better. And so should I: in retrospect, I should have checked the link and overtly aimed my comment at the BMJ as well.

Then @exerciseworks blocked me on Twitter. Perhaps they felt I might damage their brand, or they just didn’t like the cut of my jib. It is of course their right to choose who to engage with on Twitter, though I’m a little disappointed that they weren’t up for debate.

I was surprised how many people picked up the tweet and retweeted it, sometimes with comment, such as this:

Rajat Chauhan tweet

and this:

Alan J Taylor tweet

which was ‘liked’ by the BMJ itself – presumably they are up for debate; I would certainly hope so. (It also led me to check out @AdamMeakins, a straight-talking sports physiotherapist who I was pleased to be bracketed with.)

When I talked to people about this, the most common reaction was to describe @exerciseworks as a snowflake or similar, and to say they should get over themselves. This is arguable, of course, though I think it is important to remember that we never know what – sometimes we don’t know who – is behind a Twitter account. Even with individual accounts where people disclose personal information, we should not assume that the struggles someone discloses are all the struggles they face. And with corporate or other collective accounts, we should remember that there is an individual person reading and responding to tweets, and that person has their own feelings and struggles.

Twitter is a fast-moving environment and it’s easy to make a point swiftly then move on. Being blocked has made me pause for thought, particularly as @exerciseworks is an account I’ve been following and interacting with for some time.

I stand by the point I made. It riles me when statistical research findings are reported as evidence that death is preventable. Yes, of course lives can be saved, and so death avoided at that particular time. Also, sensible life choices such as taking exercise are likely to help postpone death. But prevent death? No chance. To suggest that is inaccurate and therefore unethical. However, forgetting that there is an actual person behind each Twitter account is also unethical, so I’m going to try to take a little more time and care in future.

Crowdfunding For Academia

Crowdfunding is a way of raising money, from anyone you can persuade to give you money, for anything you like. You can crowdfund for personal needs, projects, charities, disaster appeals, creative endeavours – anything from pet food to space travel. Some projects that have been successfully funded through Kickstarter alone include combat cookware, amusing rap songs about the iconic television character Doctor Who, and bacon-scented soap.

There are quite a few crowdfunding sites now and they have different USPs. For example, Teespring was set up specifically for crowdfunding unique t-shirt designs, though it now also enables the design and creation of other products such as beach towels, phone cases, and mugs. Unbound is for publishing books (though not academic ones, sadly). GoFundMe is mostly used for medical, memorial, and charitable fundraising, though it is also used by a lot of doctoral students around the world to help fund part or all of their studies.

Kickstarter is for creative projects, including those related to academia. Indiegogo is for innovations in technology and design; its links with academia seem more tenuous, but they do exist: unlike GoFundMe and Kickstarter, it includes quite a few research projects. All of these websites take a small percentage of any funds raised, to cover their costs.

Although people doing academic work are free to use any crowdfunding website, one that seems particularly applicable is Patreon. This is for ‘creators’ who can crowdfund per ‘thing’ they create (song, podcast, etc), or per month (which is more predictable for donors). Patreon is increasingly being used by researchers, such as Brian Danielak, who creates free open source software for research; Asia Murphy, who researches wildlife in remote forests (with great photos and videos!); and Kylie Budge, who is researching creativity in cities.

Crowdfunding is not a soft option. Yes, you can slap together a web page, sit back, and wait for the donations to roll in. But if you do that, they won’t. For any chance of success, you need an appealing offer, a well-made fundraising page, healthy personal and professional networks, and no shame at all about asking people for money, over and over again. On Patreon, your offer is made up of goals and rewards. Goals need to be intriguing and credible, and rewards need to be enticing (to potential funders), achievable (for you), and ongoing rather than one-offs, with at least one reward per year even for people who fund you at the lowest level. This all takes a lot of thought and research. Then, once you have your page up, you need to promote, promote, promote.

Talking of which: I am launching my own Patreon page this very day! I am lucky to have a great mentor for this project, Jonathan O’Donnell of RMIT University in Melbourne, Australia. He is currently doing a PhD in academic crowdfunding, and will be producing a guide to this in due course. If you appreciate my blog, please consider supporting me for one dollar per month – or more, if you wish. Whether or not you think you might want to support me, I’d be grateful if you could take a look at my page. All feedback welcome, either here or there. Thank you.

Why Research Participants Rock

I wrote last week about the creative methods Roxanne Persaud and I used in our research into diversity and inclusion at Queen Mary University of London last year. One of those was screenplay writing, which we thought would be particularly useful if it depicted an interaction between a student and a very inclusive lecturer, or between a student and a less inclusive lecturer.

I love to work with screenplay writing. I use play script writing too, sometimes, though less often. With play script writing, you’re bound by theatre rules, so everything has to happen in one room, with minimal special effects. This can be really helpful when you’re researching something that happens in a specific place such as a parent and toddler group or a team sport. Screenplay, though, is more flexible: you can cut from private to public space, or include an army of mermaids if you wish. Also, screenplay writing offers more scope for descriptions of settings and characters, which, from a researcher’s point of view, can provide very useful data.

Especially when participants do their own thing! Our screenplay-writing participants largely ignored our suggestions about interactions between students and lecturers. Instead, we learned about a south Asian woman, the first in her family to go to university, who was lonely, isolated, and struggling to cope. We found out about a non-binary student’s experience of homophobia, sexism and violence in different places on campus. We saw how difficult it can be for Muslim students to join in with student life when alcohol plays a central role. Scenes like these gave us a much richer picture of facets of student inclusion and exclusion than we would have had if our participants had kept to their brief.

Other researchers using creative techniques have found this too. For example, Shamser Sinha and Les Back did collaborative research with young migrants in London. One participant, who they call Dorothy, wanted to use a camera, but wasn’t sure what to capture. Sinha suggested exploring how her immigration status affected where she went and what she could buy. Instead, Dorothy went sightseeing, and took pictures of Buckingham Palace. The stories she told about what this place and experience meant to her enriched the researchers’ perceptions of migrant life, not just the ‘aggrieved’ life they were initially interested in, but ‘her free life’ (Sinha and Back 2013:483).

Katy Vigurs aimed to use photo-elicitation to explore different generations’ perceptions of the English village where they lived. She worked with a ladies’ choir, a running club, and a youth project. Vigurs asked her participants to take pictures that would show how they saw and experienced their community. The runners did as she asked. The singers, who were older, took a few photos and also, unprompted, provided old photographs of village events and landmarks, old and new newspaper cuttings, photocopied and hand-drawn maps of the area with added annotations, and long written narratives about their perceptions and experiences of the village. The young people also took some photos, mostly of each other, but then spent a couple of hours with a map of the village, tracing the routes they used and talking with the researcher about where and how they spent time. Rather than standard photo-elicitation, this became ‘co-created mixed-media elicitation’ as Vigurs puts it (Vigurs and Kara 2016:520) (yes, I am the second author of this article, but all the research and much of the writing is hers). Again, this provided insights for the researcher that she could not have found using the method she originally planned.

Research ethics committees might frown on this level of flexibility. I would argue that it is more ethical than the traditional prescriptive approach to research. Our participants have knowledge and ideas and creativity to share. They don’t need us to teach them how to interact and work with others. In fact, our participants have a great deal to teach us, if we are only willing to listen and learn.

Creative Research In Practice

It’s not often I get to share an output from the commissioned research I do. Sometimes clients don’t want to share publicly for reasons of confidentiality, and sometimes there are other reasons they don’t publish. As a commissioned researcher, I can’t publish the work someone else has paid for without their agreement. But I’m glad to say that Queen Mary University of London (QMUL) has published the full report of the research I did for them last year with my colleague Roxanne Persaud.

The research question was: How can QMUL improve students’ experience with respect to the inclusivity of their teaching, learning, and curricula? The original brief focused on the protected characteristics covered by the UK Equality Act 2010: age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion and belief, and sexual orientation. Roxanne and I advised QMUL to take a more holistic approach to inclusivity, as the protected characteristics don’t cover some factors that we know can lead to discrimination and disadvantage, such as socioeconomic status and caring responsibilities. We recommended Appreciative Inquiry as a methodological framework, because it doesn’t start from a deficit perspective emphasising problems and complaints, but focuses on what an organisation does well and what it could do better. (It doesn’t ignore or sideline problems and complaints, either; it simply starts from the standpoint that there are assets to build on.)  And of course we suggested creative techniques, particularly for data-gathering and sense-making, alongside more conventional methods.

Roxanne and I were both keen to do this piece of work because we share an interest in diversity and inclusion. Neither of us had worked with QMUL before and we weren’t sure whether they would appreciate our approach to their brief. Sometimes commissioners want to recruit people who will do exactly what they specify. Even so, I’d rather say how I think a piece of work needs to be done; if the commissioner doesn’t want it done that way, then I don’t want the job.

QMUL shortlisted six sets of applicants. The interview was rigorous. Roxanne and I came out feeling we’d done ourselves justice, but with no clue as to whether we might have got the work or not. But we did!

The research was overseen by a Task & Finish group, made up of staff from different departments, who approved the methods we had put forward. We conducted a targeted literature review to identify key issues and best practice for inclusivity in the UK and overseas, and set the research in an institutional, societal, and theoretical context. The theoretical perspectives we used began with the theory of intersectionality developed by the law professor Kimberlé Crenshaw, which we then built on using the diffraction methodology of the physicist and social theorist Karen Barad. These two theories together provided a binocular lens for looking at a very complex phenomenon.

The timescale for the research was tight, and data gathering collided with Ramadan, exams, and the summer holidays. So, not surprisingly, we struggled with recruitment, despite strenuous efforts by us and by helpful colleagues at QMUL. We were able to involve 17 staff and 22 students from a wide range of departments. We conducted semi-structured telephone interviews with the staff, and gave students the option of participating in face-to-face interviews or group discussions using creative methods. These methods included:

  • The life-sized lecturer: an outline figure on a large sheet of paper, with a label indicating what kind of person they are e.g. ‘a typical QMUL lecturer’ and ‘an ideally inclusive lecturer’, which students could write and draw on.
  • Sticker maps: a map of organisational inclusivity, which we developed for QMUL, on which students could place small green stickers to indicate areas of good practice and small red stickers to indicate areas for further improvement.
  • Empathy maps: tools to help participants consider how other students or staff in different situations think and feel; what they might see, say, and do; and where they might experience ‘pain or gain’ with respect to inclusive learning.
  • Screenplay writing: a very short screenplay depicting an interaction between a student and a very inclusive lecturer, or between a student and a less inclusive lecturer. The screenplay would include dialogue and might also include information about characters’ attributes, the setting, and so on.

We generated over 50,000 words of data, which we imported into NVivo. Roxanne and I spent a day working together on emergent data coding, discussing excerpts from different interviews and group sessions, with the aim of extracting maximum richness. Then I finished the coding and carried out a thematic analysis while Roxanne finished the literature review.

We wrote a draft report, and then had two ‘review and refine’ meetings for sense-making, which were attended by 24 people. The first meeting was with members of the Task & Finish group, and the second was an open meeting, for participants and other interested people. We presented the draft findings, and put up sheets on the walls listing 37 key factors identified in the draft report. We gave participants three sticky stars to mark their top priorities, and 10 sticky dots to show where they would allocate resources. People took the resource allocation incredibly seriously, and it was interesting to see how collaboratively they worked on this. I heard people saying things like, ‘That’s important, but it’s already got five dots on, so I’m going to put another one here.’ I wish I could have recorded all their conversations! We did collect some further data at these meetings, including touch-typed notes of group discussions and information about the relative frequency of occurrence, and importance, of the 37 key factors. All of this data was synthesised with the previously collected data in the final report and its recommendations.
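To make the counting concrete: the sticker exercise boils down to a simple tally per factor. Here is a minimal sketch in Python – the factor names are invented for illustration, and a tally like this is only one way the frequency-and-importance information could be derived; the real synthesis was qualitative:

```python
from collections import Counter

# Each participant had three sticky stars (top priorities) and
# ten sticky dots (resource allocation); each sticker counts as
# one vote for one of the 37 key factors. Names here are invented.
star_votes = ["staff training", "inclusive reading lists", "staff training"]
dot_votes = ["staff training", "mentoring", "inclusive reading lists",
             "mentoring", "mentoring"]

stars = Counter(star_votes)  # how often each factor was starred
dots = Counter(dot_votes)    # how many resource dots each factor received

# Rank factors by stars, with dots as a tie-breaker.
for factor in sorted(set(star_votes) | set(dot_votes),
                     key=lambda f: (stars[f], dots[f]), reverse=True):
    print(f"{factor}: {stars[factor]} stars, {dots[factor]} dots")
```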

The comparatively small number of participants was a limitation, though we did include people from all faculties and most schools, and we certainly collected enough data for a solid qualitative study. We would have liked some quantitative data too, but the real limitation was that most of the people we reached were already concerned about inclusivity. We didn’t reach enough people to be able to say with certainty whether this was, or was not, the case more widely at QMUL. Also, while none of our participants disagreed unduly with our methodology or methods, others at QMUL may have done so. In a university including physicists, mathematicians, engineers, social scientists, artists, doctors, dentists and lawyers, among others, it seems highly unlikely that anyone could come up with an approach to research that would receive universal approval.

Yet I’m proud of this research. It’s not perfect – for example, I’ve realised, in the course of writing this blog post, that we didn’t explicitly include the research question in the research report! But its title is Inclusive Curricula, Teaching, and Learning: Adaptive Strategies for Inclusivity, which seems clear enough. I’m sure there are other ways it could be improved. But I’m really happy with the central features: the methodology, the methods, and the flexibility Roxanne and I offered to our client.

Evaluating excellence in arts-based research: a case study

This article first appeared in Funding Insight on 16 June 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

I recently wrote on this topic citing the work of Sarah J Tracy from Arizona State University, who developed a set of eight criteria for assessing the quality of arts-based and other forms of qualitative and mixed-methods research. Now I propose to apply those criteria to an example of arts-based research, to find out how they can work in practice.

The research example I have chosen is by Jennifer Lapum and her colleagues from Toronto in Canada, who investigated 16 patients’ experiences of open-heart surgery. Their work is methodologically interesting because they used arts-based techniques, not only for data generation, but also for data analysis and dissemination. They published an account of their work in Qualitative Inquiry, which I will interrogate here.

Lapum gathered narrative data from two interviews with post-operative patients, one while they were still in hospital and the other some weeks after returning home. Patients also kept journals between the two interviews. She then put together a multi-disciplinary team including artists, researchers, designers, and medical staff, and they spent a year doing arts-based analysis of the patients’ stories. This included metaphor analysis, poetic inquiry, sketching, concept mapping, and construction of photographic images. The team then developed an installation, covering 1,739 square feet, with seven sections representing the seven stages of a patient’s journey. These sections were arranged along a labyrinthine route, with the operating room at the centre, all hung with textile compositions incorporating poems and photographic images generated at the analytic stage. A short video on YouTube, produced for further dissemination, gives some idea of what it would be like to visit the installation.

So how does this research fit with Tracy’s eight criteria? First we ask: is the research topic worthy? I would argue that in this case the answer is yes. Open-heart surgery must be a daunting prospect, even though the rewards can be immense. Lapum’s work offers potential patients and carers some insight into the journey they may take, and offers medical and other relevant staff an increased understanding of patients’ experiences. This is likely to improve outcomes for patients.

Second, is this project richly rigorous? The sample size is small, but the data was carefully constructed. Also, the analytic process was extremely thorough, with a multi-disciplinary team spending a year working with the data. Therefore I would conclude that this criterion has been met.

Do we have sincerity? Is the research reflexive, honest, and transparent? The published article is quite explicit about the methods used, and credits several people who have been involved with the process. The article asserts that the research was reflective, though the article itself is not. Nor do the writers outline all the decisions they took in the course of analysis and dissemination. Space in a journal article is, of course, limited, but there is no mention of what was left out and why. So the research as presented here is sincere up to a point, but there is scope for more reflexivity and transparency.

What about credibility? There is certainly thick description and multiplicity of voices and perspectives in this research. Also, while the research team did not include participants as such, contributions were made by ‘knowledge users’ including cardiovascular health practitioners and former heart surgery patients. So, in Tracy’s terms, this research is definitely credible.

The next criterion is resonance. The installation certainly had aesthetic merit. It was generalisable to some extent: certainly to heart surgery patients and practitioners from other geographic locations, and perhaps to patients and practitioners of other kinds of major organ surgery. And it was also transferable: ‘we found people of diverse backgrounds not only resonated with the work but were also able to consider the application of these ideas to their lives and/or professional field’ (Lapum et al 2012:221). So, yes, it was resonant.

Did this research make a significant contribution? It evidently extended the knowledge, and may have improved the practice, of the research team. The project was methodologically unusual, and explicitly aimed to engage the audience’s aesthetic and emotional faculties, as well as their intellectual abilities, in responding to the research findings. However, there is no report of the installation’s impact on its audience; again, this may be due to lack of space. So I would argue that this criterion was met, and the research may in fact have made a more significant contribution than we can discern from one journal article.

How ethical was the research? The article does not mention ethics, though it seems inevitable that the research must have received formal ethical approval. The level of thought and care applied to the research suggests that it was ethical, though this is implicit rather than explicit. But, once again, this may be due to space constraints.

And finally, does the research have meaningful coherence? The article tells an engaging and comprehensible story, so yes, it does.

It is perhaps unfair to judge a long and complex research project on the basis of a single journal article of just a few thousand words. Lapum and her colleagues have published several articles about their research; to make a full judgement I should really read them all. However, if the authors had carried out an analysis of their article based on Tracy’s criteria, they might have chosen to add a sentence or two about what they left out, a paragraph or two on reflexivity, a short description of the impact of the installation on its audience, and some information about ethics. The article as it stands is excellent; with these amendments, it could have been outstanding. This demonstrates that Tracy’s criteria are useful for assessing not only research itself, but also reports of research.

How to evaluate excellence in arts-based research

This article first appeared in Funding Insight on 19 May 2016 and is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Researchers, research commissioners, and research funders all struggle with identifying good quality arts-based research. ‘I know it when I see it’ just doesn’t pass muster. Fortunately, Sarah J Tracy of Arizona State University has developed a helpful set of criteria that are now being used extensively to assess the quality of qualitative research, including arts-based and qualitative mixed-methods research.

Tracy’s conceptualisation includes eight criteria: worthy topic, rich rigour, sincerity, credibility, resonance, significant contribution, ethics, and meaningful coherence. Let’s look at each of those in a bit more detail.

A worthy topic is likely to be significant, meaningful, interesting, revealing, relevant, and timely. Such a topic may arise from contemporary social or personal phenomena, or from disciplinary priorities.

Rich rigour involves care and attention, particularly to sampling, data collection, and data analysis. It is the antithesis of the ‘quick and dirty’ research project, requiring diligence on the part of the researcher and leaving no room for short-cuts.

Sincerity involves honesty and transparency. Reflexivity is the key route to honesty, requiring researchers to interrogate and display their own impact on the research they conduct. Transparency focuses on the research process, and entails researchers disclosing their methods and decisions, the challenges they faced, any unexpected events that affected the research, and so on. It also involves crediting all those who have helped the researcher, such as funders, participants, or colleagues.

Credibility is a more complex criterion which, when achieved, produces research that can be perceived as trustworthy and on which people are willing to base decisions. Tracy suggests that there are four dimensions to achieving credibility: thick description, triangulation/crystallisation, multiple voices, and participant input beyond data provision. Thick description means lots of detail and illustration to elucidate meanings which are clearly located in terms of theoretical, cultural, geographic, temporal, and other such location markers. Triangulation and crystallisation are both terms that refer to the use of multiplicity within research, such as through using multiple researchers, theories, methods, and/or data sources. The point of multiplicity is to consider the research question in a variety of ways, to enable the exploration of different facets of that question and thereby create deeper understanding. The use of multiple voices, particularly in research reporting, enables researchers more accurately to reflect the complexity of the research situation. Participant input beyond data provision provides opportunities for verification and elaboration of findings, and helps to ensure that research outputs are understandable and implementable.

Although all eight criteria are potentially relevant to arts-based research, resonance is perhaps the most directly relevant. It refers to the ability of research to have an emotional impact on its audiences or readers. Resonance has three aspects: aesthetic merit, generalisability, and transferability. Aesthetic merit means that style counts alongside, and works with, content, such that research is presented in a beautiful, evocative, artistic and accessible way. Generalisability refers to the potential for research to be valuable in a range of contexts, settings, or circumstances. Transferability is when an individual reader or audience member can take ideas from the research and apply them to their own situation.

Research can contribute to knowledge, policy, and/or practice, and will make a significant contribution if it extends knowledge or improves policy or practice. Research may also make a significant contribution to the development of methodology; there is a lot of scope for this with arts-based methods.

Several of the other criteria touch on ethical aspects of research. For example, many researchers would argue that reflexivity is an ethical necessity. However, ethics in research is so important that it also requires a criterion of its own. Tracy’s conceptualisation of ethics for research evaluation involves procedural, situational, relational, and exiting ethics. Procedural ethics refers to the system of research governance – or, for those whose research is not subject to formal ethical approval, the considerations therein such as participant welfare and data storage. Situational ethics requires consideration of the specific context for the research and how that might or should affect ethical decisions. Relational ethics involve treating others well during the research process: offering respect, extending compassion, keeping promises, and so on. And exiting ethics cover the ways in which researchers present and share findings, as well as aftercare for participants and others involved in the research.

Research that has meaningful coherence effectively does what it sets out to do. It will tell a clear story. That story may include paradox and contradiction, mess and disturbance. Nevertheless, it will bring together theory, literature, data and analysis in an interconnected and comprehensible way.

These criteria are not an unarguable rubric to which every qualitative researcher must adhere. Indeed there are times when they will conflict in practice. For example, you may have a delightfully resonant vignette, but be unable to use it because it would identify the participant concerned; participants may not be willing or able to be involved beyond data provision; and all the diligence in the world can’t guarantee a significant contribution. So, as always, researchers need to exercise their powers of thought, creativity, and improvisation in the service of good quality research, and use the criteria flexibly, as guidelines rather than rules. However, what these criteria do offer is a very helpful framework for assessing the likely quality of research at the design stage, and the actual quality of research on completion.

Next week I will post a case study demonstrating how these criteria can be used.