How To Give Feedback On Academic Writing – Twelve Top Tips

A recent discussion on Facebook reminded me that I’ve written about how to deal with feedback from reviewers, but I haven’t written about how to give feedback to peers and colleagues. There is an art to this, which I have learned, paradoxically, from receiving feedback: it taught me what helps and what does not.

Feedback is a fairly neutral word but what we’re actually dealing with is criticism. Some people call it ‘critique’ to make it sound better but it’s still criticism. Criticism is not neutral and so it has lots of emotion attached.

In the last decade I joined a closed online short story writing group of around a dozen fiction writers. We all knew each other online through blogging and wanted to improve our writing. The idea was that we would each write and share a story once a fortnight. The stories were posted anonymously by one of the group – we took turns – and the others would give feedback. To begin with we only gave positive feedback until one of us pointed out that we weren’t going to get very far that way. We were a bit scared about being more critical, but gradually our feedback became more robust, with honesty about the elements of each story that didn’t work for us and why, as well as praise for the parts that did and suggestions for how to overcome weaknesses. We built up a lot of trust in that group and it helped us to give better feedback and so become better writers.

This experience taught me that trust is important to effective feedback. In the group we built trust over time. If you’re writing an anonymous peer review, you need to create trust all at once.

Another thing that is important is blending praise where possible, or at least advice, with your criticism. I had a review for the typescript of my last book which was entirely critical. Essentially, it said the book was rubbish and should never be published. The reviewer is entitled to their opinion, and I have been a writer for far too long to be upset by critical feedback, but the problem was that the review gave me no help at all. There was nothing in it which I could use to improve my writing. (Luckily I had two other reviewers at that stage who took a more balanced approach and did give me constructive criticism, advice, and some praise.)

So, from all my years of experience of receiving and giving feedback on writing in several genres, here are my twelve top tips for giving good quality feedback that others will trust.

  1. Be honest in all the feedback you give.
  2. Read the piece you’re giving feedback on carefully, thoroughly, at least twice.
  3. While you read, make notes of thoughts that occur to you. As a minimum, these should include: aspects of the work you think are good; where you think there is room for improvement; anything you don’t understand; references the author might find helpful.
  4. Be sure to praise the good points in the author’s work. This helps to build trust and also lets the author know what they can relax about.
  5. Be open about anything you don’t understand. Doing this worries some people because they think they may look stupid, particularly if they’re giving feedback to a peer or colleague rather than writing an anonymous review. But it’s really helpful feedback for writers because it may be that they haven’t written clearly enough.
  6. Give a straightforward assessment of areas where you think there is room for improvement.
  7. Tell the author how you think they can improve their work. This is crucial. If you’re only saying where improvement is needed, you’re only doing half the job.
  8. Where relevant, suggest references the author has missed.
  9. If you think extra references would be helpful but nothing specific springs to mind, have a quick look on a website such as Google Scholar or the Directory of Open Access Journals and see if you can find something to point the author towards.
  10. Don’t worry if you can only offer a certain amount of help because of the limits to your own knowledge. It’s fine to say, for example, that a quick online search suggests there is more relevant literature in the area of X; you’re not certain because X lies outside your own areas of interest but you think it would be worth the author taking a look.
  11. Acknowledge the author’s emotions. For example, after giving quite critical feedback, you might say something like, “I realise that implementing my suggestions will involve a fair amount of extra work and this may seem discouraging. I hope you won’t be put off because I do think you have a solid basis here and you are evidently capable of producing an excellent piece of writing.” (Though remember #1 above and don’t say this if it’s not true.)
  12. Be polite throughout, even if your review is anonymous. Anonymity is not an excuse for rudeness.

If there’s anything I’ve missed, please add it in the comments.

This blog is funded by my beloved patrons. It takes me around one working day per month to post here each week. At the time of writing I’m receiving funding of $12 per month. If you think four of my blog posts are worth more than $12 – you can help! Ongoing support would be fantastic but you can also support for a single month if that works better for you. Support from Patrons also enables me to keep this blog ad-free. If you are not able to give financial support at this time, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!

Independent Research, Writing, and Financial Reality

Every so often I post about how much money I make. As I’m just finishing my 2017-18 accounts, it seems a good time to update this.

I have written before about the difficulties the recession caused to my business and the bumpy road back to reasonable prosperity. In 2017-18 I invoiced for £34,338.54 of business, a bit down on the 2016-17 figure of £39,939, though that was partly because I took on a sizeable contract in the spring of 2018 but didn’t receive my first payment instalment until after my year end on 31.7.18.

The amount I invoice for is representative of the amount of work I do, not the amount of money I have in my pockets. In 2016-17 my post-tax profit was £14,057 – and I was able to pay myself a bit more than that because I’d had an even better year in 2015-16, as reported in my earlier post. In fact, 2015-16 was by far the best year of the last 8 years.

So it’s still bumpy, but the bumps are evening out, and I’m beginning to feel that I’m back on my financial feet (except when I think about my pension plans, eek, must do something about that). It helps that my mortgage is paid off, I’m happily child-free, and I don’t have expensive tastes. Also, I have plenty of work scheduled in for early 2018. For the first time in eight years, I don’t feel as if I should spend every spare moment trying to generate work.

Also, my research business doesn’t represent the whole of my income. There is also the income I derive from writing, which in 2017-18 was royalties of £1,663.70 from my trade published books and £306.25 from my self-published books, plus £268.64 from the wonderful ALCS. That’s a total of £2,238.59 for the year – though again there were outgoings to set against that: memberships of the Society of Authors and the Textbook and Academic Authors’ Association, royalties to Nathan Ryder who co-authored Self-Publishing for Academics, and all the books I bought. Altogether that comes to £593.48 and brings down my writing-related income to £1,645.11. Which is enough to pay for a month of writing time. I have to look at it that way, and not think in terms of an hourly rate, or I’d never write another word… if I wasn’t a writing addict.
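For anyone who likes to check figures, the sums above can be verified with a quick Python sketch. Nothing here is new; these are simply the numbers quoted in this post:

```python
# Writing-related income for 2017-18, using the figures quoted in this post.
trade_royalties = 1663.70    # trade published books
self_pub_royalties = 306.25  # self-published books
alcs = 268.64                # payment from ALCS

gross = trade_royalties + self_pub_royalties + alcs

# Outgoings: memberships, co-author royalties, and books bought.
outgoings = 593.48
net = gross - outgoings

print(f"Gross: £{gross:.2f}")  # £2238.59
print(f"Net:   £{net:.2f}")    # £1645.11
```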

Writing income is bumpy too. As my trade royalties arrive annually in October, I already know that they are lower in 2018-19 (£947.46) and I don’t really understand why. But I have a new book out this month, and I’ll have two short books out next month in the new series I’m working on for SAGE, plus two more next July, and I’m also co-editing and writing for a new series for Routledge, and have three other book proposals in the pipeline. The SAGE and Routledge books come with small advances totalling £1,250 so far, so in this financial year I’ve already made more from those than from the royalties on my published books. I’m hopeful that perhaps by 2021 I’ll make enough to buy myself out for two months of writing time. At that rate it should only take another 30 years of work to be able to write full-time, so it doesn’t look as though I’ll achieve that dream, as I’ll be 87 in 2051!

Sometimes people think that because my day rates are comparatively high, I must be rich. In fact, my day rates don’t only cover a day’s work, they also cover holidays, sickness and bereavement leave, time spent on unpaid but essential work such as admin and accounts, travelling time, business expenses such as heat and light and IT equipment and accountants’ fees and so on, and of course tax to be paid.

There are independent researchers who make more money than me – I know of one who is registered for VAT, which suggests they turn over more than £85,000 per year, but they work very hard for that, travelling all around the world for most of the year. That may sound delightful and glamorous but I can assure you that travelling for work, while it does have lovely moments, is mostly about trains, planes, taxis, hotel rooms and classrooms or meeting rooms. I like to work overseas, and could probably make more money if I did more of it, but once or twice a year is about right for me.

I think it is important to be open about how much money I make overall, not least because so many people ask me what it’s like to be an independent researcher. For me, it’s a terrific lifestyle, but it wouldn’t suit everyone. I’d say it’s probably as difficult as being an academic or practice-based researcher but the difficulties are in different places. If it’s an option you’re considering, you need to be as realistic as possible about the financial side.

The Ethics of Independent Research Work #1

I guess we all know by now that I bang on a fair bit about research ethics, but I haven’t written about the ethical aspects of working as an independent researcher. I have come up with ten ethical principles for indie researchers. Many of these no doubt apply to other forms of self-employment too, but they definitely all apply to independent research work. This post contains the first five principles; I will post the other five next week.

  1. Be honest about what you don’t know

If a client says, ‘You know the legislation that…’ and you don’t, it’s best to say so. It can be tempting to nod while making a mental note to look it up online later, but that can lead to disaster. People often fear that saying they don’t know something will make them look stupid, but paradoxically the reverse is true. If you are clear about what you do know and honest about what you don’t, you will build trust with your clients much more quickly and effectively.

  2. Be clear about your capacity

Allied to this: don’t take on work you haven’t got time to do, because that won’t do anyone any favours. You won’t produce your best work for your clients, and you’ll end up burned out. OK, there are times when you may choose to work at maximum capacity for a short while, e.g. as one contract ends while another begins, or to fit in a quick piece of work for a valued client. But keep these brief and infrequent, and make sure you build in recovery time. Independent research is a great career (at least, in my view), but no career is worth damage to your health and relationships.

  3. Charge a fair rate for the job

If possible, find out what the going rate is, and charge that. The going rate will vary across sectors and between countries. I have written before about how I charge for work: in brief, I charge less for charities and longer projects, more for universities, governments, and work I don’t really want to do.

Also, don’t take on jobs with inadequate budgets, unless you’re desperate for the money and prepared to accept a very low day rate. I’ve been offered a three-year national evaluation with a total budget of £5,000. Perhaps someone ended up doing that work for that money, but they would either have done a very poor job or effectively accepted an extremely low day rate.

  4. Don’t accept work on an unethical basis

One potential client rang me towards the end of the financial year to ask if I could invoice her for several thousand pounds that she had left in her budget. She said she was a bit busy, so could we sort out what I would do for the money at a later date? I didn’t know her so I asked why she had rung me. She told me she had wanted person A, but they were too busy so they suggested person B, who couldn’t take it on either and suggested me. Nowadays I would probably say a simple ‘no’, but it was early in my career, and person B was quite influential. I agreed to invoice, but only after meeting with my potential client to decide whether we could work together and what I would do for her.

Another time a commissioner rang me to ask me to evaluate a service because he wanted to close it down. I said I would evaluate the service if he wished, but I would not pre-determine the findings; they would be based on my analysis of the data I gathered. He agreed to this. I did the evaluation, and found – unequivocally – that the service was highly valued and doing necessary work. The commissioner paid my invoice, then found someone else to do another evaluation saying the service should be closed down, whereupon he closed it down. Again, with the benefit of hindsight I probably should have said ‘no’ to the assignment, but I naïvely thought that if I did the research the commissioner would abide by the findings.

  5. Don’t take work outside your areas of expertise

You may have more than one area of expertise. I have a few: children/young people/families, housing/homelessness, substance misuse, volunteering, service user involvement, third sector, training. Each of these areas formed part of my professional work before I became an independent researcher.

Earlier this decade I got an email asking me to do some work around learning disability. I replied, explaining that it was not one of my areas of expertise, and saying I didn’t think I was the best person for the job. The potential client came back saying they thought I was right and apologising for having bothered me. (I didn’t mind. I never mind answering queries about possible paid work.)

Oddly enough, a few weeks later I got another email, from someone completely different, asking me to do some work around learning disability. After rolling my eyes and thinking about buses, I sent a similar reply. This time the potential client came back saying that I sounded perfect for the piece of work they wanted to commission. They thought someone with a good knowledge of research methods but little knowledge of learning disability would bring a usefully fresh perspective to the problems they were trying to solve. Which is further evidence for (1) above.

So there you have the first five principles of ethical research work, according to me. Come back next week for the other five.

How Do Research Methods Affect Results?

Last week, for reasons best known to one of my clients, I was reading a bunch of systematic reviews and meta-analyses. A systematic review is a way of assessing a whole lot of research at once. A researcher picks a topic, say the effectiveness of befriending services in reducing the isolation of housebound people, then searches all the databases they can for relevant research. That usually yields tens of thousands of results, which of course is far more than anyone can read, so the researcher has to devise inclusion and/or exclusion criteria. Some of these may be about the quality of the research. Does it have a good enough sample size? Is the methodology robust? And some may be about the topic. Would the researcher include research into befriending services for people who have learning disabilities but are not housebound? Would they include research into befriending services for people in prison?
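The screening stage described above is, in essence, a filter applied to every search result. Here is a minimal sketch of that logic; the records, field names, and the sample-size threshold of 30 are all invented for illustration, not drawn from any real review:

```python
# A minimal, illustrative screen for a systematic review.
# All records and thresholds below are made up for this sketch.
records = [
    {"title": "Befriending for housebound adults", "sample_size": 240, "topic_match": True},
    {"title": "Befriending services in prisons", "sample_size": 80, "topic_match": False},
    {"title": "Small befriending pilot", "sample_size": 12, "topic_match": True},
]

def include(record, min_sample=30):
    """Apply a quality criterion (sample size) and a topic criterion."""
    return record["sample_size"] >= min_sample and record["topic_match"]

included = [r for r in records if include(r)]
print([r["title"] for r in included])  # only the first study survives the screen
```

Even in this toy version, the researcher’s discretion is visible: change the threshold, or the definition of a topic match, and a different body of evidence survives.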

These decisions are not always easy to make. Researcher discretion is variable and fallible, and this means that systematic reviews themselves can vary in quality. One thing they almost all have in common, though, is a despairing paragraph about the tremendous variability of the research they have assessed and a plea to other researchers to work more carefully and consistently.

One of the systematic reviews I read last week reported an earlier meta-analysis on the same topic. A meta-analysis is similar to a systematic review but uses statistical techniques to assess the combined numerical results of the studies, and may even re-analyse data if available. The report of the meta-analysis I read, in the systematic review, contained a sentence which jumped out at me: ‘…differences in study design explained much of the heterogeneity [in findings], with studies using randomised designs showing weaker results.’

Randomised designs are at the top of the hierarchy of evidence. The theory behind the hierarchy of evidence is that the methods at the top are free from bias. I don’t subscribe to this theory. I think all research methods are subject to bias, and different methods are subject to different biases. For example, take the randomised controlled trial or RCT. This is an experimental design where participants are randomly assigned to the treatment or intervention group (i.e. they receive some kind of service) or to the control group (i.e. they don’t). This design assumes that random allocation alone can iron out all the differences between people. It also assumes that the treatment/intervention/service is the only factor that changes in people’s lives. Clearly, each of those may not in fact be the case.
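To make the design concrete, random allocation can be sketched in a few lines of Python. The participant IDs and the fixed seed here are illustrative only:

```python
import random

def allocate(participants, seed=None):
    """Randomly split participants into treatment and control groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

groups = allocate(range(100), seed=42)
print(len(groups["treatment"]), len(groups["control"]))  # 50 50
```

Note that randomisation balances the groups only in expectation: any single allocation can still differ on unmeasured characteristics, which is exactly the point above about what the design assumes.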

Now don’t get me wrong, I’m not anti-RCTs. After all, every research method is based on assumptions, and in the right context an RCT is a great tool. But I am against bias in favour of any particular method per se. And the sentence in the systematic review stood out for me because I know the current UK Government is heavily biased towards randomised designs. It got me wondering, do randomised designs always show weaker results? If so, is that because the method is more robust – or less? And does the UK Government, which is anti-public spending, prefer randomised designs because they show weaker results, and therefore are less likely to lead to conclusions that investment is needed?

And that got me thinking we really don’t know enough about how research methods influence research results. I went looking for work on this and found none, just the occasional assertion that methods do affect results. Which seems like common sense… but how do they? Does the systematic review I read hold a clue, or is it a red herring? The authors didn’t say any more on the subject.

We can’t always do an RCT, even when the context means it would be useful, because (for example) in some circumstances it would be unethical to withhold provision of a treatment/intervention/service. So what about other methods? Do we understand the implications of asking a survey question that a participant has never thought about and doesn’t care about – or cares about a great deal? I know that taking part in an interview or focus group can lead people to think and feel in ways they would not otherwise have done. What impact does that have on our research? Can we trust participants to tell us the truth, or at least something useful?

This is troubling me and I have more questions than answers. I fear I may be up an epistemological creek without an ontological paddle. But I think that bias in favour of – or against – a particular research method, without good evidence of its benefits and disadvantages, is poor research practice. And it’s not only the positivists who are subject to this. Advocates of participatory research are every bit as biased, albeit in the opposite direction. The way some participatory researchers write, you’d think their research caused bluebirds to sing and rainbows to gleam and all to be well in the world.

It seems to me that we all need to be more discerning about method. And that’s not easy when there are so many available, and a plethora of arguments about what works in which circumstances. So I think we may need to go meta here and do some research on the research. But ‘further research needed’ is a very researcher-y way of thinking, and I’m a researcher, so… does my bias look big in this?

Book Launch! And Other Events

I am delighted to have been invited to launch my forthcoming book, Research Ethics in the Real World: Euro-Western and Indigenous Perspectives, at a seminar at City University in London on Thursday 8 Nov. This is part of a seminar series run by NatCen, City University, and the European Social Survey. I’ll be talking about why it is crucial to view research ethics in the context of its links with individual, social, professional, institutional and political ethics. I will explain why I think the Indigenous research paradigm is as important for our world as the Euro-Western research paradigm. I will outline why applying research ethics at all stages of the research process is equally essential for quantitative, qualitative, and mixed-methods researchers.

This was a much more difficult book to write than my book on creative research methods. Since that book came out, I have been asked to do a lot of speaking and teaching on creative methods. For example, I’m running an open course on creative methods in evaluation research for the UK and Ireland Social Research Association in Sheffield on 16 October, and a more academically-oriented version on using creative methods for the ESRC’s National Centre for Research Methods in Southampton on 21 November. (And one for social work researchers in Birmingham next week, but that’s been fully booked for some time and has a long waiting list.)

If my ethics book has the same effect, I’m not quite sure how I’ll manage the workload. Still, that would be a great problem to have. In the meantime: fancy a free seminar on research ethics? Of course you do! It’s at 5.45 for 6 pm with a wine reception afterwards. I’d love to see some of my blog followers there – if you can make it, please come and introduce yourself.

Aftercare in Social Research

When does a research project end? When a report has been written? When a budget has been spent? When the last discussion of a project has taken place? It’s not clear, is it?

Neither is it clear when a researcher’s responsibility ends. This is rarely spoken of in the context of social research, which is an unfortunate omission. A few Euro-Western researchers recognise the need for aftercare, but they are a tiny minority of individuals. There seems to be no collective or institutional support for aftercare. In the Indigenous paradigm, by contrast, aftercare is part of people’s existing commitment to community-based life and work. Euro-Western researchers could learn much from Indigenous researchers about aftercare: for participants, data, findings, and researchers ourselves.

The standard Euro-Western aftercare for participants is to tell them they can withdraw their data if they wish. However, it is rare for researchers to explain the limits to this, which can cause problems as it did for Roland Bannister from Charles Sturt University in Wagga Wagga, Australia. Bannister did research with an Australian army band, Kapooka, which could not be anonymised as it was unique. Band members consented to take part in Bannister’s research. He offered participants the opportunity to comment on drafts of his academic publications, but they weren’t interested. Yet when one of these was published in the Australian Defence Force Journal, which was read by band members, their peers, and superiors, participants became unhappy with how they were represented. Bannister had to undertake some fairly onerous aftercare in responding to their telephone calls and letters. Of course it was far too late for participants to withdraw their data, as this would have meant retracting several publications, which is in any case limited in its effectiveness. However, particularly in these days of ‘long tail’ online publications, we need to be aware that participants may want to review research outputs years, even decades, after the substantive work on the project is done. We have a responsibility to respond as ethically as we can although, as yet, there are no guidelines to follow.

Data also needs aftercare, particularly now that we’re beginning to understand the value of reusing data. Reuse increases the worth of participants’ contributions, and helps to reduce ‘research fatigue’. However, for data to be reusable, it needs to be adequately stored and easy to find. Data can be uploaded to a website, but it also needs to be carefully preserved to withstand technological changes. Also, it needs a ‘global persistent identifier’ such as a DOI (digital object identifier) or Handle. These can be obtained on application to organisations such as DataCite (DOIs) or The Dataverse Project (DOIs and Handles). As well as enabling reuse, a global persistent identifier also means you can put links to your data in other outputs, such as research reports, so that readers can see your data for themselves if they wish. This too is an ethical approach, being based in openness and transparency.
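One practical consequence of a global persistent identifier: once a dataset has a DOI, anyone can reach it through the doi.org resolver, however often the hosting website changes. A tiny sketch; the DOI below is invented for illustration, not a registered identifier:

```python
# A DOI becomes a stable, clickable link via the doi.org resolver,
# which redirects to wherever the dataset currently lives.
# The identifier below is a made-up example, not a real DOI.
def doi_to_url(doi: str) -> str:
    return f"https://doi.org/{doi}"

print(doi_to_url("10.1234/example-dataset-2018"))
```

This is the form of link you would embed in a research report so that readers can find the underlying data for themselves.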

Then there are the findings we draw from our data. Aftercare here involves doing all we can to ensure that our findings are shared and used. Of course this may be beyond our power at times, such as when working for governments who require complete control of research they commission. In other contexts, too, we are unlikely to have much say in how our findings are used. But we should do all we can to ensure that they are used, whether to support future research or to inform practice or policy.

Researchers too need aftercare. In theory the aftermath of a research project is a warm and fuzzy place containing a pay cheque, favourably reviewed publications, and an enhanced CV. While this is no doubt some people’s experience, at the opposite end of the spectrum there are a number of documented cases of researchers developing post-traumatic stress disorder as a result of their research work. In between these two extremes, researchers may experience a wide range of minor or major difficulties that can leave them needing aftercare beyond the lifetime of the project. For that, at present, there is no provision.

Not much has yet been written on aftercare in research. If it interests you, there is a chapter on aftercare in my book on research ethics. I expect aftercare to be taken increasingly seriously by researchers and funders over the coming years.

An earlier version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

 

Academic taboos #4: what cannot be published

An earlier version of this article first appeared in Funding Insight in summer 2017; this updated version is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

We are all familiar with the structural faultlines of inequality that exist around attributes such as age, ethnicity, and gender. These faultlines act, and sometimes interact, to create barriers to academic publication. For example, Michael Eisen, a US biologist, found in 2016 that, in US-funded health research, less than 30% of senior academic authors are women. He also found that male authors write with fewer female co-writers (35%) than female authors do (45%). Leaving aside the whole ethical problem with treating gender as binary, this demonstrates an interaction between gender and publishing that disadvantages women.

So far, so straightforward. While of course institutionalised sexism needs to be addressed, it is hardly news these days, and there are legislative and policy structures designed to assist. A more unusual take is to look at the structural faultlines of inequality that exist around institutions and managerial practices, which are not currently addressed by equalities legislation or policy. These faultlines, too, act and interact to prevent people from publishing academic work. And by ‘people’ I mean academics, independent scholars, and Indigenous researchers.

Many academics of my acquaintance want their research to change minds and hearts and lives. They long for wide exposure, which often means publishing in open access (OA) journals. However, in many fields, the impact factors of OA journals are not high enough to satisfy audit requirements. So academics have to settle for publication in paywalled journals, read primarily by other academics.

With the growth of OA publishing, some OA journals are now reaching the dizzy heights of audit-worthy impact factors. But then there is another barrier. Access to these journals is open to all readers, but only to those writers with enough money – or an institutional budget – to pay the article processing charges (APCs). This can exclude many junior academics, whose senior colleagues get first dibs on the budget, and most independent scholars (though, to be fair, some OA journals do waive part or all of their APCs for indies).

Being outside an institution can cause barriers to publication in unexpected places. Take the reputable online publication The Conversation, whose strapline is ‘Academic rigour, journalistic flair’. The Conversation covers virtually all disciplines and has a lofty ‘charter’ which claims to ‘support and foster academic freedom to conduct research, teach, write and publish.’ The charter speaks of freedom from bias, and operation for the public good. Yet the author information states that ‘you must be a member of an academic or research institution to write for The Conversation’. So academics who are between jobs, or independent scholars who prefer to work free from institutional biases and constraints, or retired scholars who have plenty more to say, have no voice within this so-called ‘academic freedom’.

Perhaps the biggest exclusion affects Indigenous researchers and those from the global South. In her 2012 book Indigenous Research Methodologies, Professor Bagele Chilisa of Botswana noted that Indigenous researchers find it almost impossible to publish their work through Euro-Western publishing systems (p. 55). Some organisations are working to counteract this, such as the international research development charity INASP, whose Journals Online Project currently covers work from Africa, Latin America, the Philippines, Vietnam, Bangladesh, Mongolia, Nepal and Sri Lanka.

However, it is notable that most of the action to increase authors’ access to scholarly publishing comes from outside academia. The much-vaunted ‘public engagement agenda’ doesn’t seem to consider that some of the public might like to engage, not only as passive consumers of lectures, but also as active authors of scholarly work. Until all of these inequalities are systematically and effectively tackled, academic publishing will continue to represent privileged voices alone.

Academic taboos #3: what cannot be written

An earlier version of this article first appeared in Funding Insight in summer 2017; this updated version is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Academic writing has powerful conventions that lecturers, doctoral supervisors, and published academics work to uphold. Proper academic writing should be correct in every detail of grammar, punctuation, spelling and structure. It should use the third person, for neutrality, and to remove any sign of personal bias. The author should be as specific and precise as possible, and careful not to over-claim.

All this leads to some interesting linguistic contortions. ‘Two categories were studied to assess… the results highlight… the article will show…’ Constructions like these are commonplace in academic writing and rare everywhere else. Nothing is studied in a vacuum, and it is not ‘results’ that highlight or an ‘article’ that will show. Research is carried out by human beings, who decide what will be highlighted or shown in the reports of their research. Whose interests does it serve to conceal these truths?

In some disciplines, it is becoming more acceptable to acknowledge researchers’ and authors’ roles in writing; to use the first person, and to accept the inevitability of bias while looking for ways to reduce it as far as possible. Yet moving away from attempted precision and correct use of English is still taboo. This causes problems, for example when the author needs to represent spoken English, such as in quotes from participants. Academics, research participants, and readers disagree about whether quotes should be rendered exactly, with their ‘incorrect’ grammar, or tidied up. If quotes are collected online, entering them into a search engine can identify participants. Quotes including swear words may alienate some readers. Exact quotes rendered in writing, with all their ‘ums’ and ‘ers’ and half-formed sentences, can make participants seem uneducated or unintelligent. Generally, academia prefers sanitised quotes. However, this can be viewed as an abuse of authorial power, as it removes authenticity from participants’ words.

In fact academic writing conventions are all about power. The apparently laudable aims of precise, unbiased writing conceal the power dynamics at play. Academic writing conventions – themselves allegedly neutral – in fact operate to exclude those who cannot or will not abide by them.

The good news is that there is now a tiny but growing movement to break down these conventions, led by some brave doctoral students, supervisors, and universities. For example:

  • Nick Sousanis, now Assistant Professor at San Francisco State University in the US, presented his doctoral dissertation as a graphic novel at Columbia University in 2014. The following year it was published by Harvard University Press, entitled Unflattening.
  • Patrick Stewart, a First Nation architect in Canada, successfully defended his doctoral dissertation at the University of British Columbia in 2015. Entitled Indigenous Architecture through Indigenous Knowledge, it has almost no capital letters or punctuation, as a form of resistance to the unthinking acceptance of English academic writing conventions.
  • Piper Harron is an assistant professor of mathematics at the University of Hawai’i at Mānoa. She was awarded her PhD from Princeton University in the US in 2016. Her dissertation included in each chapter a section for ‘the layperson’, another for ‘the initiated’, and a third for ‘the mathematician’, as well as a whole lot of jokes.
  • Ashleigh Watson, a doctoral candidate at Griffith University in Queensland, Australia, founded So Fi, a sociological zine publishing creative sociological writing including fiction and poetry, in 2017.

Academia needs to take these kinds of alternative formats seriously. They enable more voices to be heard, more fully, than the conventional style of writing. Some universities have developed helpful alternative format policies to support this movement, such as the policy adopted by the University of Exeter in the UK. Implementing these kinds of policies will enrich academia.

Academic taboos #2: what cannot be paid for

An earlier version of this article first appeared in Funding Insight in summer 2017; this updated version is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

The external examiner for my viva was not the person I wanted, who was seminal in my field, but someone more peripheral to my topic who owed my supervisor a favour. For that reason alone, my supervisor thought he would agree to examine my thesis – and he did. Alongside core work for their own institutions, academics give guest lectures, seminars, and keynote speeches at other universities, act as external examiners for vivas and courses, review journal articles and write testimonials for books. No money changes hands (apart from perhaps travel expenses, or sometimes a small honorarium), nor does it need to, because everyone involved is drawing an academic salary.

Favours are the currency of academia. However, an increasing number of people who do scholarly work are not drawing salaries. Some, like me, are independent researchers or scholars. Others are early or mid-career academics who find themselves without a contract. Others still are ‘stakeholders’ or ‘the public’.

A combination of the increasing casualisation of academia, the increasing accessibility of academic work through open access publishing, and the public engagement agenda is making institutional boundaries more and more permeable. This creates a problem. Salaried academics expect the non-salaried people who contribute to scholarly work to be content with the academic currency of favours. However, non-salaried people tend to prefer the real-world currency of money, which is much more use when you need to eat and pay bills.

This isn’t so much the elephant in the room as the blue whale in the bath. An article was published last year on the LSE Impact Blog, by three academics from the University of Exeter, encouraging the involvement of ‘non-academic partners at all relevant stages of the research process’. They argue for ‘a more collaborative approach to research’ in which ‘partners and publics’ will ‘contribute to the value of academic research’. They assert that ‘genuine partnership relies on respect and will produce mutual benefit’ without saying anything about what that mutual benefit might look like or how they propose to ensure the benefit is truly mutual. And nowhere, in the entire article, do they mention money. The journal article on which the blog post is based, entitled ‘The value of experts, the importance of partners, and the worth of the people in between’, likewise makes no mention of those partners’ financial value or worth.

In the Western world, a university education costs tens of thousands and senior university staff earn hundreds of thousands. Universities are wealthy organisations; most make annual surpluses in the millions. In my view, as someone external to academia who contributes to the value of academic research, genuine partnership relies on adequate sharing of resources. Refusal to pay a sensible market rate to non-salaried collaborators for their skills and input is, quite simply, exploitation.

Academics need to be clear about the employment status of those they wish to work with, and understand who they can and can’t ask for favours. I have been an independent researcher for almost 20 years, an independent scholar for eight years, and continually vocal about my needs as a self-employed person. Yet I still get requests from salaried academics to teach, examine, or speak, for expenses only, or for a derisory sum that equates to less than minimum wage. It is very boring having to keep banging on about money, especially when people’s enthusiasm for your involvement dwindles rapidly as soon as you mention a fee. When a university’s water pipes leak, everyone understands that a plumber will have to be paid. In exactly the same way, academics need to understand that when they want to engage a self-employed researcher or scholar, or involve a member of the public, that person must be paid a market rate for their work.

Academic taboos #1: what cannot be said

An earlier version of this article first appeared in Funding Insight in summer 2017; this updated version is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Academia is a community with conventions, customs, and no-go areas. These vary, to some extent, between disciplines. For example, in most STEM subjects it is taboo for research authors to refer to themselves in writing in the first person. This leads to some astonishing linguistic contortions. Conversely, in arts disciplines, and increasingly in the humanities and social sciences, it is permissible to use more natural language.

It seems, though, that some conventions exist across all disciplines. For example, conference “provocations” are rarely provocative, though they may stretch the discussion’s comfort zone by a millimetre or two. Similarly, conference “questions” are rarely questions that will draw more interesting and useful material from the speaker. Instead, they are taken as opportunities for academic grandstanding. Someone will seize the floor and spend as long as they can get away with, effectively saying: “Look at me, aren’t I clever?” I have found, through personal experiment, that asking an actual question at a conference can cause consternation. I confess it amuses me to do this.

Perhaps the most interesting conventions are those around what cannot be said. Rosalind Gill, Professor of Cultural and Social Analysis at City University of London, UK, has noted the taboo around admitting how difficult, even impossible, it can be to cope with the pressures of life as an academic (2010:229). Consider the airy tone in which a colleague says: “I’m so shattered. The jobs on my to-do list seem to be multiplying. Haha, you know how it is.” Such statements can be a smokescreen for serious mental health problems.

A journal article published in 2017 by the theoretical physicist Oliver Rosten made a heartfelt statement about this in its acknowledgements, dedicating the article to the memory of a late colleague, and referring to “the psychological brutality of the post-doctoral system”. Several journals accepted the article for its scientific quality but refused to publish the acknowledgements in full; it took Rosten years to find a journal that would publish what he wrote. He has left academia and now works as a Senior Software Developer at Future Facilities Ltd in Brighton, UK.

Another thing that cannot be said, identified by Tseen Khoo, a Lecturer in Research Education and Development at La Trobe University, Melbourne, Australia, is that some academic research doesn’t need funding, it just needs time. This is anathema because everyone accepts that external funding makes the academic world go round. But what if it didn’t? What if student fees, other income (e.g. from hiring out university premises in the holidays), and careful stewardship were enough? What if all the time academics spent on funding applications, and making their research fit funders’ priorities, was actually spent on independent scholarship? It seems this is not only unsayable but also unthinkable. One of Khoo’s interlocutors described this as “a failure of the imagination”.

Another unspeakable truth is that the system of research ethics governance is itself unethical. Ethics governance is something to comply with, not to question. This has led us to a situation where most research training contains little or no time spent on research ethics itself. Instead, young researchers learn that working ethically equates to filling in an audit form about participant welfare and data storage. They don’t receive the detailed reflective instruction necessary to equip them to manage the manifold ethical difficulties any researcher will encounter in the field.

I wonder what role the lack of research ethics education plays in the increasing number of journal articles that are retracted each year. I would argue that we need to separate ethical audit from ethical research, because they have different aims. The former exists to protect institutions, the latter to promote the quality of research and ensure the well-being of all concerned.

These areas of silence are particularly interesting given that academia exists to enable and develop conversations. However, I think that as well as acknowledging what academia enables, we also need to take a long hard look at what academia silences.