Challenging The Dominance Of English

In a thought-provoking blog post, Naomi Barnes of Brisbane, Australia, recently asked what other white people were doing to break down the barriers built by whiteness. This is a very good question. One thing we can do is to challenge the dominance of English.

Language is not neutral in research or education. English is the dominant language of both, worldwide, as a direct result of colonialism. English is dominant even though it ranks only third in the world by number of native speakers: more of the world’s people grow up speaking Mandarin Chinese or Spanish. Studying for a PhD (or equivalent), or writing an academic journal article, is demanding enough when you can do it in your native language. Every year, around the world, millions of people have to study and write in English when that is not their native language, which makes already difficult work much more difficult. People like me, who were born in an English-speaking country, are unbelievably lucky and have a massive head-start. A lot of us, I think, don’t realise how lucky we are.

Professor Bagele Chilisa of Botswana, in her excellent 2012 book Indigenous Research Methodologies, calls this the ‘hierarchy of language’. (The English version of the search engine I use, DuckDuckGo, has never heard of her book, which rather proves her point.) The hierarchy of language comes with a range of ethical implications for native English speakers, and I will outline three of the main ones here.

First, we need to understand that there is not just one form of English, there are many: from Bangalore to Boston, from London to Lagos, from Sydney to São Paulo. This means we should not assume that someone’s ideas have less worth because their spoken English is heavily accented, or formulated differently from our own, or their written English is not entirely fluent.

‘Language-ism’ is embedded in structures such as academia and publishing. People who write non-standard English, regardless of the quality of the content, are less likely to have their work formally published in academic journals – or, at least, not the journals usually indexed by Google Scholar or the Directory of Open Access Journals. This is one of the ‘barriers built by whiteness’ referred to by Naomi Barnes. As a result, work in non-standard English is harder to find, so it is less likely to be used, shared, or cited. Yet some of these researchers are doing excellent work which is well worth exploring.

This is the second ethical point: we need to try harder to find, and use, work by non-native English speakers. Those of us who can read other languages have a head-start here. (Many non-English-speaking countries teach languages, including English, to children throughout their schooling. In England in the 1970s, when I was at school, learning other languages was mostly optional – I spent just three years learning elementary French and have only needed to use it, since then, when actually in France. Even there, many people’s English is better than my French. This is another indication of the dominance of English.) But whether or not you can read other languages, you need to know where to look for research from beyond the countries where English is dominant. Here are some ‘starters for 10’, thanks to Andy Nobes of INASP on Twitter, in conversation with Raul Pacheco-Vega, Pat Thomson and Jo VanEvery:

African Journals Online

Bangladesh Journals Online

Central American Journals Online

Journals from Latin America, the Caribbean, Spain and Portugal

Latindex (Latin America, the Caribbean, Spain and Portugal – Spanish only)

Mongolia Journals Online

Nepal Journals Online

Philippine Journals Online

Scientific Electronic Library Online (Latin America, Spain, Portugal and South Africa)

Sri Lanka Journals Online

Many of these are supported by the research, knowledge and development charity INASP through its Journals Online project. Most have an English-language option on their website, and some, if not all, of their articles are available in English. Much of the content is openly accessible.

The third ethical point is to look at this the other way around. If we write in English, we should do all we can to get our work translated into other majority languages. There are 23 languages in the world that are each spoken as a first language by over 50 million people. The top 10 are: Chinese, Spanish, English, Arabic, Hindi, Bengali, Portuguese, Russian, Japanese and Lahnda (a Pakistani language). Translation brings its own ethical problems, as there is not always a straightforwardly equivalent word for an idea or a concept, so translating from one language to another can involve some creativity and interpretation. However, a careful translation between any two majority languages will make your work available to many more scholars. In particular, translations from English help to reduce its dominance.

So there are three ways for white people (and native English speakers of colour) to challenge the dominance of English and so help to break down some of the barriers built by whiteness. If you can think of other ways to do this, please add them in the comments.

The Variety Of Indie Research Work

One of the things I love about being an independent researcher is the sheer variety of projects I work on and tasks I might do in a day. Yesterday, I was only in the office for the afternoon, yet I worked on at least seven different things. Here’s what I did.

First, I checked Twitter, and found a tweet with a link to a blog post I wrote about an event that is part of a project I’m working on with and for the forensic science community. This is a new departure for me, in that I haven’t worked with forensic scientists before, though the work itself is straightforward. I’m supporting a small group of people with research to identify the best way to create a repository for good quality student research data, and it’s surprisingly interesting. So I retweeted the tweet.

Second, I dealt with the morning’s emails. First came the arrival of a purchase order I’d been waiting weeks to receive – hurrah! I drew up the invoice and sent it off to the client. Then some correspondence about the creative research methods summer school I’m facilitating at Keele in early July – just three weeks away now, so the planning is hotting up (and there are still some places left if you’d like to join us – it’ll be informative and fun). The most interesting email was a blog post from Naomi Barnes, an Australian education scholar who is considering what it means to be a white educator in the Australian school system. This chimed with the work I am doing on my next book, so I left a comment and tweeted the link.

While on Twitter, I got side-tracked by a tweet announcing #AuthorsForGrenfell, an initiative set up by authors for authors to donate items for auction to raise funds for the Red Cross London Fire Relief Fund to help survivors of the Grenfell Tower fire. I’d been wanting to help: my father is a Londoner, I have always had family in London, I lived in London myself from 1982 to 1997, and one member of my family is working in the tower right now to recover bodies. So it feels very close to home. But I’m not in a position to give lots of money, so I was delighted to find this option, which I hope will enable me to raise more money than I could give myself. I have offered a copy of each of my books, plus a Skype consultation to go with each one. My items aren’t yet up on the site, but I hope they will be soon because bidding is open already. If you’re one of my wealthy readers, please go over there and make a bid!

Then I spent some time researching aftercare for data. Yes, indeed there is such a thing. So far I’ve come up with two ways to take care of your data after your project is finished: secure storage and open publication. They are of course diametrically opposed, and which you choose depends on the nature of your data. Open publication is the ethical choice in most cases, enabling your data to be reused and cited, increasing your visibility as a researcher, and reducing the overall burden on potential research participants. In some cases, though, personal or commercial sensitivities will require secure storage of data. There may be other ways to take care of data after the end of a project, and I’ll be on the lookout for those as I work on my next book.

By now it was 6 pm so I did a last trawl of the emails, and found one from Sage Publishing with a link to a Dropbox folder containing 20 research methods case studies for me to review. They publish these cases online as part of their Methodspace website. I like this work: it’s flexible enough to fit around other commitments and, like other kinds of review, it tests my knowledge of research methods while also helping me to stay up to date. Best of all, unlike other kinds of review, Sage pay for my expertise. So I downloaded all the documents, checked and signed the contract, and emailed it back with a ‘thank you’. By then it was 6.30 pm and time to go home.

As the old saying goes, variety is the spice of life. I certainly like the flavour it gives to my work. Some days I work on a single project all day; those days are fun too. Yesterday I worked in my own office, today I’m out at meetings locally, tomorrow I’m off to London. It’s always ‘all change’ and I wouldn’t have it any other way.

Let’s Talk About Research Misconduct

Research misconduct is on the rise, certainly within hard science subjects, quite possibly elsewhere. Researchers around the world are inventing data, falsifying findings, and plagiarising the work of others. Part of this is due to the pressure on some researchers to publish their findings in academic journals. There is also career-related pressure on researchers to conduct accurate polls, produce statistically significant results, and get answers to questions, among other things. Some clients, managers, funders and publishers have a low tolerance for findings that chime with common sense or the familiar conclusion of ‘more research is needed’. They may expect researchers to produce interesting or novel findings that will direct action or support change.

Publishers are working to counteract misconduct in a variety of ways. Plagiarism detection software is now routinely used by most big publishers. Also, journal articles can be retracted (i.e. de-published) and this is on the increase, most commonly as a result of fraud. However, the effectiveness of retraction is questionable. The US organisation Retraction Watch has a ‘leaderboard’ of researchers with the most retracted papers, some of whom have had more papers retracted than you or I will ever write, which suggests that retraction of a paper – even for fraud – does not necessarily discredit a researcher or prevent them from working.

Some research misconduct can have devastating effects on people, organisations, and professions. People may lose their jobs, be stripped of prizes or honours, and be prosecuted in criminal courts. Organisations lose money through the cost of wasted research, disciplinary hearings, and recruitment to fill vacancies left by fraudulent researchers. And whole professions can suffer, as misconduct slows progress based on research. For example, in 2012 the Journal of Medical Ethics published a study showing that thousands of patients had been treated on the basis of research published in papers that were subsequently retracted. Retraction Watch shows that some papers receive hundreds of citations even after they have been retracted, which suggests that retraction may not be communicated effectively.

Yet even the potentially devastating consequences of misconduct are clearly not much of a deterrent – and in many cases may not occur at all. Let’s examine a case in more detail. Hwang Woo-Suk is a researcher from South Korea. In the early 2000s he was widely regarded as an eminent scientist. Then in 2006 he was found to have faked much of his research, and he admitted fraud. Hwang’s funding was withdrawn, criminal charges were laid against him, and in 2009 he received a suspended prison sentence. Yet he continued to work as a researcher (albeit in a different specialism) and to contribute to publications as a named author.

Closer to home, a survey of over 2,700 medical researchers published by the British Medical Journal in 2012 found that one in seven had ‘witnessed colleagues intentionally altering or fabricating data during their research or for the purposes of publication’. Given the pressures on researchers, perhaps this is not surprising – though it is deeply shocking.

The examples given in this article are from hard science rather than social research. Evidence of misconduct in social research is hard to find, so it would be tempting to conclude that it happens less and perhaps that social researchers are somehow more ethical and virtuous than other researchers. I feel very wary about making such assumptions. It is also possible that social research is less open about misconduct than other related disciplines, or that it’s easier to get away with misconduct in social research.

So what is the answer? Ethics books, seminars, conferences and the like frequently exhort individual researchers to think and act ethically, but I’m not sure this provides sufficient safeguards. Should we watch each other, as well as ourselves? Maybe we should, at least up to a point. Working collaboratively can be a useful guard against unethical practice – but many researchers work alone or unsupervised. I don’t think formal ethical approval is much help here, either; it is certainly no safeguard against falsifying findings or plagiarism. Perhaps all we can do at present is to maintain awareness of the potential for, and dangers of, misconduct.

A version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.

Creative Methods for Evaluation: A Frustration

Evaluation is a particular type of applied research designed to assess the value of a service, intervention, policy or other such phenomenon. This is relevant to all of us as it forms the basis for many decisions about public service provision. Despite being applied research, evaluation also has a significant academic profile, with dedicated journals, many books, and university departments with professors of evaluation in countries around the world.

There is a range of types of, and approaches to, evaluation research. They all have some things in common: they start with the desired outcomes of the service, intervention etc; formulate indicators that would show whether those outcomes had been met; then collect data in line with those indicators and analyse it to identify the extent to which the outcomes have been met. So, for example, if a community service aims to reduce loneliness, the evaluators might decide that one indicator could be a reduction in reports of loneliness to community-based doctors and nurses, then work with health colleagues to collect information from health records before and after the provision of the service to show whether there was any difference. Evaluators also write recommendations for ways to improve the service, intervention etc. The intention is that these recommendations are implemented, then later reviewed in another cycle of evaluation research.
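
To make the indicator logic concrete, here is a minimal sketch in Python of the kind of before-and-after comparison described above. The monthly counts, the six-month windows and the percentage-change calculation are all hypothetical illustrations, not drawn from any real evaluation.

```python
# Hypothetical monthly counts of loneliness reports made to
# community-based doctors and nurses, before and after a new service.
reports_before = [42, 39, 45, 41, 44, 40]  # six months pre-service
reports_after = [37, 34, 31, 33, 30, 29]   # six months post-service

def mean(counts):
    """Arithmetic mean of a list of monthly counts."""
    return sum(counts) / len(counts)

baseline = mean(reports_before)
follow_up = mean(reports_after)
percent_change = (follow_up - baseline) / baseline * 100

print(f"Baseline average: {baseline:.1f} reports/month")
print(f"Follow-up average: {follow_up:.1f} reports/month")
print(f"Change against the indicator: {percent_change:+.1f}%")
```

Of course, a real evaluation would need to consider confounders such as seasonal variation before attributing any change to the service itself; the sketch shows only the shape of the indicator comparison.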

The basics of general research practice also apply to evaluation: plan thoroughly, collect and analyse data, produce written and other outcomes, and publish your findings. From time to time I teach a course called ‘Creative Research Methods for Evaluation’, usually as part of the UK and Ireland Social Research Association’s open training programme. All sorts of people come on this course: central and local Government researchers, charity researchers, health researchers, researchers in private practice, research funders – a real mix, which makes it great fun.

I know quite a bit about creative methods; after all, I wrote a book on the subject. I tell my students about arts-based methods, research using technology, mixed methods, and transformative research frameworks. We talk about when these methods are appropriate to use, and how they can work side-by-side with more established methods. I give them lots of examples of creative methods in use.

And here is the huge frustration. While I have plenty of examples of creative methods in practice, very few come from evaluation research. I have some examples from my own practice, though only as verbal stories because the written and other outputs are subject to client confidentiality. This is a big problem with evaluation research: because it is applied, i.e. often conducted by and for individual organisations, it is rarely published beyond its immediate audience. When it is published, it is often simply uploaded to a web page and so disappears into the depths of the internet. And if it is both published and findable, it is not likely to include the use of creative methods.

There are many examples of perfectly competent evaluations using well-established methods. However, evaluators today are working on complex projects and benefit from having more methodological options at their fingertips. I know my course helps, because former students have told me so, but during the course someone always asks why I’m not using examples from evaluation research. (Even though I explain this problem at the start!) I wish I could use such examples; I’m sure they’re out there; but even though I have searched, and asked, and searched again, I can’t find them. So this is by way of an appeal: do you know of any good resources that showcase creative methods in evaluation research? By ‘good resources’ I mean well-written outputs or short engaging videos (3-4 minutes at most) that are not too basic, as my students are generally quite experienced. If you have anything to suggest, please let me know in the comments.