Ten Top Tips For Managing Your Own Research

When someone mentions research methods, what do you think of? Questionnaires? Interviews? Focus groups? Ways of doing research online? Do you only think of data gathering, or do you think of methods of planning research, analysing data, presenting and disseminating findings?

Research methods is a huge and growing field with many books and innumerable journal articles offering useful information. But nobody talks about methods for managing your own research. Perhaps you’re doing postgraduate research in academia or workplace research such as an evaluation. Even if you’re a fully funded full-time doctoral student, research is not all you do. Research has to fit in with the rest of your life and all its domestic work, family needs, other paid or voluntary work, hobbies, exercise, and so on.

Nobody talks about the methods for doing this kind of personal research management. Or, at least, not many people. I said quite a lot about it in my book Research and Evaluation for Busy Students and Practitioners. Petra Boynton also addresses it in her book The Research Companion. But I haven’t seen it mentioned anywhere else (if you have, please let us know in the comments). So here are ten top tips:

  1. Plan everything. Lots of books will tell you how to plan your research project. What they don’t say is that you also need to plan for the changes to your life and work which will result from you taking on the research. How will your research affect your other commitments? What do you need to do to minimise the impact of your research on your other commitments and vice versa? Build in contingency time for unforeseen events.
  2. Manage your time carefully. Use your plan to help you. Break down the main tasks into monthly, weekly and daily to-do lists. Review these regularly.
  3. Learn to work productively in short bursts. It may seem counter-intuitive, but most people get more done this way than by setting aside whole days to work on a project.
  4. Use time when your mind is under-occupied, e.g. when you’re waiting in a queue or doing repetitive household tasks, to think about and solve problems related to your research.
  5. Seek support from your family. Make sure they know about your research and understand its importance to you.
  6. Seek support from colleagues, managers, tutors etc, whether your work is paid or unpaid. Make sure they know about your research and understand its importance in your life.
  7. Don’t cut corners in ways that could damage your health. Eat sensibly, take exercise, get enough sleep and rest.
  8. Take breaks. At least three short breaks in each day, one day off in each week, and four weeks off in each year.
  9. Don’t beat yourself up if things go wrong. Be kind to yourself and learn what you can from the experience. Then re-group, re-plan, and set off again.
  10. Reward yourself appropriately for milestones reached and successes achieved.

In my view, these are as much research methods as questionnaires and interviews. Learning to use them involves acquiring tacit knowledge. I’ve been on a mission to convert tacit knowledge to explicit knowledge ever since I started writing for professionals. This blog post is part of that process. If you have other tips, please add them in the comments.

This blog is funded by my beloved patrons. It takes me around one working day per month to post here each week. At the time of writing I’m receiving funding of $12 per month. If you think 4-5 of my blog posts are worth more than $12 – you can help! Ongoing support would be fantastic but you can also support for a single month if that works better for you. Support from Patrons also enables me to keep this blog ad-free. If you are not able to support me financially, please consider reviewing any of my books you have read – even a single-line review on Amazon or Goodreads is a huge help – or sharing a link to my work on social media. Thank you!

What Offends People Makes Great Data

Have you noticed how people seem to be getting offended about the strangest things? For example, there has been controversy this month over two songs that are regularly played in English-speaking countries at this time of year. The first is Baby It’s Cold Outside, a duet between two people (usually a man and a woman, though the lyrics are not gender-specific). It was written by Frank Loesser in 1944 to sing with his wife as a party trick. One character is persuading a slightly reluctant other to stay in the warm rather than go out into the winter weather. It’s flirtatious and funny, especially in my favourite version by Cerys Matthews and Tom Jones with an entertaining video which is worth watching right to the end. But some people say the lyrics promote date rape, and the song was banned by some radio stations earlier this month.

This is interesting as evidence of how our perceptions change. In 1944 no doubt date rape existed but it was not a widespread topic of conversation or media interest. In 2018, after #metoo, the world is a different place. So rather than looking like an innocent piece of comedy, to some people Baby It’s Cold Outside looks like a sinister instruction manual for would-be date rapists.

The other song that has been getting people hot under the collar is Fairytale Of New York, a more recent classic by The Pogues featuring Kirsty MacColl. This is one of my two favourite seasonal songs. I love that it portrays people having an argument – so very common at this time of year – rather than the more usual saccharine sweetness. During the argument, two people hurl insults at each other, one uses the word ‘faggot’ and this is what has caused upset. (The other uses the word ‘slut’ but that doesn’t seem to be a problem… ho hum.) Shane MacGowan, who co-wrote the song, made a dignified statement explaining that the song features two fictional characters who are not nice people, so some of the things they say are not nice. He explains that storytelling requires unpleasant characters – which it does, or there is no drama.

One interesting thing about all this offence being taken is that more people hear about and listen to the songs. So people’s outrage, amplified by the media, has the opposite effect from that which they intend.

Perhaps words are easy to be offended by. It’s harder to be offended by the really offensive things going on around our planet such as famine and capitalism. We long for simplicity, for a world with problems we can solve. Yet banning a song containing the word ‘faggot’ is not going to fix global homophobia.

Research can be offensive, too. Of course unethical research is deeply offensive, but even careful, rigorous, ethical research can cause offence. John Bohannon, in his TED talk, said, “The great pleasure of science is the defeat of intuition.” I think that’s a wonderful sentence. Yet so many people hold fast to their intuition in the face of research evidence, outraged by the facts that challenge their beloved and long-held beliefs.

This, too, is interesting. I suspect this very human trait contributes to the barriers in translating research into policy and into practice. It certainly fuels many debates and slanging matches on social media. That gets exhausting sometimes… so I’ll be taking a break for the holiday season. See you in January!


Engage Your Audience Beyond the Slides: 4 Ways to Add Creativity to Your Presentation Package

I’m delighted to offer you a guest post this week, by Echo Rivera, an expert on research presentation. She has some terrifically creative ideas and resources to share with you (and me!). My post this week is on Echo’s blog, and is about creativity and ethics in presentation. Here’s what Echo has to say:

I’m so excited to be a guest contributor to Helen’s blog. I’ve learned a lot by reading her posts and love that she is helping folks be more creative in their research methods. I thought this would be a perfect place for me to talk about how to engage your audience beyond the use of your slides, so you can maximize your potential presentation impact. Specifically, I’ll be talking about how to add more creative elements to your presentation package.

1. Ask your audience questions

Ask yourself: during your last presentation, for how many minutes in a row did you talk at people? If your answer is longer than 7-10 minutes, then chances are they disengaged. We humans don’t really like to be talked to for too long because it can be overwhelming for the brain.

I’m sure there are gadgets and apps that are designed to get your audience engaged. Personally, however, I prefer no-tech or low-tech engagement approaches so that the tech doesn’t get in the way.

The easiest way to get your audience interacting with your presentation is to ask them questions every few minutes or so. They don’t even need to respond out loud–you could just ask them to think about their answer or write it down in their notes. It doesn’t add a lot of time to your presentations, and it keeps people interested.

You could also take this to the next step by having them respond in some way. A really effective way for your audience to engage is to have them guess an answer before it’s revealed. For example, I’ll (a) pose a question such as “should you use a default slide template?”, (b) ask them to write down their guess on their handout and/or to share their answer (e.g., raising their hands, answering in the chat, shouting out loud), (c) add a dramatic pause, then (d) reveal the answer: “no”. For those who are surprised by the answer, it will now be more memorable. For those who already knew the answer, it will validate and reinforce that knowledge.

2. Use engaging visuals

Okay so this is technically about your slides, so I’m kind of cheating here. But, a lot of the visuals I see in my clients’ or students’ presentations could use an extra boost of creativity.

[Image: example of a clichéd slide using a stock “gears” graphic]

Let me ask you this: when you need a photo for your slides, how often do you go to Unsplash (because you already know not to use Google Images, right?) and then type in a description of what you’re looking for – terms like “STEM” or “Surveys” or “Researcher”?

If you’re like most academics, evaluators, or researchers then chances are that’s exactly what you do. And that’s exactly how you end up with Clip Art or really clichéd images that won’t resonate. I’m talking about those puzzle pieces, shaking hands, word clouds, and over-the-top cheesy smiles of business people. Your audience is not going to engage with those types of images.

So, another easy way to add more creativity is to start moving towards more modern, non-cheesy, photos and away from outdated Clip Art. Build up your visual database. Maybe even consider finding creative ways to make your own visuals, like what Ann K. Emery did with play-doh.

[Image: the same slide redesigned with a more modern, engaging visual]

3. Create interactive or “gamified” handouts

I mentioned earlier that your handouts should not just be a printout of your slides. Instead, you should be creating custom handouts for your presentations. Don’t worry–it takes less time than you think because it’s very easy to copy slides or your speaker notes and paste them into Word.

When creating your handout, don’t hesitate to be creative! Add fill-in-blank sections so your audience needs to engage with your presentation by taking notes. To reduce anxiety and improve real-time cognitive processing, I often tell them I’ll provide the answer key after the presentation.

If you want to take the next step, then you could “gamify” the handout. Turn your presentation material into a crossword puzzle, word matching, or other types of games. This is a great way to formalize what I suggested earlier about asking your audience questions. Imagine if you created a handout where your audience had to guess the answers.

4. Create a data placemat

A data placemat is an interactive handout times ten. The purpose is to engage your audience in interpreting and understanding the data, so it works for qualitative and quantitative projects.

“Data placemats display thematically grouped data designed to encourage stakeholder interaction with collected data and to promote the cocreation of meaning under the facilitative guidance of the evaluator.” (Pankaj & Emery, 2016, p. 81)

I encourage you to read the 2016 article by Veena Pankaj and Ann K. Emery which provides a helpful blueprint for how to create one and host a successful data placemat meeting. Then, be sure to check out this PDF which actually shows you their data placemat (and, as a bonus, beautiful data visualization examples). Finally, there is also a useful blog post about data placemats, with some lessons learned and examples, on the American Evaluation Association 365 blog.

Your Action Plan

These are all great ways to add more audience engagement and creativity into your presentations. Take a moment to review your last (or next) presentation and conduct an “engagement audit.” Start by adding some form of audience engagement at least every 10 minutes and updating your visuals to be more engaging and creative. Then, revise your handouts so they’re more engaging and memorable.

Just remember that it’s all part of a “presentation package,” which is my fancy way of reminding people that their presentation always involves multiple components: what you say (your speaker notes), what people see (your slides), and what people read or interact with (your handout). As a bonus tip, those three things should never be identical: your slides should not just be your speaker notes and your handout should not just be all your slides printed out.


If you’d like some bonus resources to help make your slides better, then check out my free Stellar Slides Starter Kit instant download. It includes my top 10 favorite presentation tips (illustrated by me), a presentation design workflow, and more!

About Echo

Hi! I’m Dr. Echo Rivera, founder and owner of Creative Research Communications, LLC. I’m here to help you communicate your research and educational information more effectively and creatively. I have a PhD in Community Psychology and over a decade of research and evaluation experience. I moved on from my research & evaluation career to focus solely on helping others share their work more effectively. I’d love to connect with you on Twitter, YouTube, Facebook, and Instagram.

Researching Research Ethics

I have written on this blog before about my book launch which is now only four weeks away (or less, if you’re reading this after 11 October). It’s a free event and you’re welcome to come along if you’re in London that day; details here. Copies of the book itself should arrive in the next 2-3 weeks. Exciting times!

I’ve written this week’s blog post on SAGE MethodSpace, talking about the research I did into research ethics around the world as background for writing the book. Head on over and have a read, and please feel free to leave a comment there or here.

How Do Research Methods Affect Results?

Last week, for reasons best known to one of my clients, I was reading a bunch of systematic reviews and meta-analyses. A systematic review is a way of assessing a whole lot of research at once. A researcher picks a topic, say the effectiveness of befriending services in reducing the isolation of housebound people, then searches all the databases they can for relevant research. That usually yields tens of thousands of results, which of course is far more than anyone can read, so the researcher has to devise inclusion and/or exclusion criteria. Some of these may be about the quality of the research. Does it have a good enough sample size? Is the methodology robust? And some may be about the topic. Would the researcher include research into befriending services for people who have learning disabilities but are not housebound? Would they include research into befriending services for people in prison?

These decisions are not always easy to make. Researcher discretion is variable and fallible, and this means that systematic reviews themselves can vary in quality. One thing they almost all have in common, though, is a despairing paragraph about the tremendous variability of the research they have assessed and a plea to other researchers to work more carefully and consistently.

One of the systematic reviews I read last week reported an earlier meta-analysis on the same topic. A meta-analysis is similar to a systematic review but uses statistical techniques to assess the combined numerical results of the studies, and may even re-analyse data if available. The report of the meta-analysis I read, in the systematic review, contained a sentence which jumped out at me: ‘…differences in study design explained much of the heterogeneity [in findings], with studies using randomised designs showing weaker results.’

Randomised designs are at the top of the hierarchy of evidence. The theory behind the hierarchy of evidence is that the methods at the top are free from bias. I don’t subscribe to this theory. I think all research methods are subject to bias, and different methods are subject to different biases. For example, take the randomised controlled trial or RCT. This is an experimental design where participants are randomly assigned to the treatment or intervention group (i.e. they receive some kind of service) or to the control group (i.e. they don’t). This design assumes that random allocation alone can iron out all the differences between people. It also assumes that the treatment/intervention/service is the only factor that changes in people’s lives. Clearly, each of those may not in fact be the case.

Now don’t get me wrong, I’m not anti-RCTs. After all, every research method is based on assumptions, and in the right context an RCT is a great tool. But I am against bias in favour of any particular method per se. And the sentence in the systematic review stood out for me because I know the current UK Government is heavily biased towards randomised designs. It got me wondering, do randomised designs always show weaker results? If so, is that because the method is more robust – or less? And does the UK Government, which is anti-public spending, prefer randomised designs because they show weaker results, and therefore are less likely to lead to conclusions that investment is needed?

And that got me thinking we really don’t know enough about how research methods influence research results. I went looking for work on this and found none, just the occasional assertion that methods do affect results. Which seems like common sense… but how do they? Does the systematic review I read hold a clue, or is it a red herring? The authors didn’t say any more on the subject.

We can’t always do an RCT, even when the context means it would be useful, because (for example) in some circumstances it would be unethical to withhold provision of a treatment/intervention/service. So what about other methods? Do we understand the implications of asking a survey question that a participant has never thought about and doesn’t care about – or cares about a great deal? I know that taking part in an interview or focus group can lead people to think and feel in ways they would not otherwise have done. What impact does that have on our research? Can we trust participants to tell us the truth, or at least something useful?

This is troubling me and I have more questions than answers. I fear I may be up an epistemological creek without an ontological paddle. But I think that bias in favour of – or against – a particular research method, without good evidence of its benefits and disadvantages, is poor research practice. And it’s not only the positivists who are subject to this. Advocates of participatory research are every bit as biased, albeit in the opposite direction. The way some participatory researchers write, you’d think their research caused bluebirds to sing and rainbows to gleam and all to be well in the world.

It seems to me that we all need to be more discerning about method. And that’s not easy when there are so many available, and a plethora of arguments about what works in which circumstances. So I think we may need to go meta here and do some research on the research. But ‘further research needed’ is a very researcher-y way of thinking, and I’m a researcher, so… does my bias look big in this?

Aftercare in Social Research

When does a research project end? When a report has been written? When a budget has been spent? When the last discussion of a project has taken place? It’s not clear, is it?

Neither is it clear when a researcher’s responsibility ends. This is rarely spoken of in the context of social research, which is an unfortunate omission. A few Euro-Western researchers recognise the need for aftercare, but they are a tiny minority of individuals. There seems to be no collective or institutional support for aftercare. In the Indigenous paradigm, by contrast, aftercare is part of people’s existing commitment to community-based life and work. Euro-Western researchers could learn much from Indigenous researchers about aftercare: for participants, data, findings, and researchers ourselves.

The standard Euro-Western aftercare for participants is to tell them they can withdraw their data if they wish. However, it is rare for researchers to explain the limits to this, which can cause problems as it did for Roland Bannister from Charles Sturt University in Wagga Wagga, Australia. Bannister did research with an Australian army band, Kapooka, which could not be anonymised as it was unique. Band members consented to take part in Bannister’s research. He offered participants the opportunity to comment on drafts of his academic publications, but they weren’t interested. Yet when one of these was published in the Australian Defence Force Journal, which was read by band members, their peers, and superiors, participants became unhappy with how they were represented. Bannister had to undertake some fairly onerous aftercare in responding to their telephone calls and letters. Of course it was far too late for participants to withdraw their data, as this would have meant retracting several publications, which is in any case limited in its effectiveness. However, particularly in these days of ‘long tail’ online publications, we need to be aware that participants may want to review research outputs years, even decades, after the substantive work on the project is done. We have a responsibility to respond as ethically as we can although, as yet, there are no guidelines to follow.

Data also needs aftercare, particularly now that we’re beginning to understand the value of reusing data. Reuse increases the worth of participants’ contributions, and helps to reduce ‘research fatigue’. However, for data to be reusable, it needs to be adequately stored and easy to find. Data can be uploaded to a website, but it also needs to be carefully preserved to withstand technological changes. Also, it needs a ‘global persistent identifier’ such as a DOI (digital object identifier) or Handle. These can be obtained on application to organisations such as DataCite (DOIs) or The Dataverse Project (DOIs and Handles). As well as enabling reuse, a global persistent identifier also means you can put links to your data in other outputs, such as research reports, so that readers can see your data for themselves if they wish. This too is an ethical approach, being based in openness and transparency.

Then there are the findings we draw from our data. Aftercare here involves doing all we can to ensure that our findings are shared and used. Of course this may be beyond our power at times, such as when working for governments who require complete control of research they commission. In other contexts, it is unlikely that researchers can have much say in how our findings are used. But we should do all we can to ensure that they are used, whether to support future research or to inform practice or policy.

Researchers too need aftercare. In theory the aftermath of a research project is a warm and fuzzy place containing a pay cheque, favourably reviewed publications, and an enhanced CV. While this is no doubt some people’s experience, at the opposite end of the spectrum there are a number of documented cases of researchers developing post-traumatic stress disorder as a result of their research work. In between these two extremes, researchers may experience a wide range of minor or major difficulties that can leave them needing aftercare beyond the lifetime of the project. For that, at present, there is no provision.

Not much has yet been written on aftercare in research. If it interests you, there is a chapter on aftercare in my book on research ethics. I expect aftercare to be taken increasingly seriously by researchers and funders over the coming years.

An earlier version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.


Academic taboos #1: what cannot be said

An earlier version of this article first appeared in Funding Insight in summer 2017; this updated version is reproduced with kind permission of Research Professional. For more articles like this, visit www.researchprofessional.com.

Academia is a community with conventions, customs, and no-go areas. These vary, to some extent, between disciplines. For example, in most STEM subjects it is taboo for research authors to refer to themselves in writing in the first person. This leads to some astonishing linguistic contortions. Conversely, in arts disciplines, and increasingly in the humanities and social sciences, it is permissible to use more natural language.

It seems, though, that some conventions exist across all disciplines. For example, conference “provocations” are rarely provocative, though they may stretch the discussion’s comfort zone by a millimetre or two. Then conference “questions” are rarely questions that will draw more interesting and useful material from the speaker. Instead, they are taken as opportunities for academic grandstanding. Someone will seize the floor, and spend as long as they can get away with, effectively saying: “Look at me, aren’t I clever?” I have found, through personal experiment, that asking an actual question at a conference can cause consternation. I confess it amuses me to do this.

Perhaps the most interesting conventions are those around what cannot be said. Rosalind Gill, Professor of Cultural and Social Analysis at City University of London, UK, has noted the taboo around admitting how difficult, even impossible, it can be to cope with the pressures of life as an academic (2010:229). The airy tone when a colleague is heard to say: “I’m so shattered. The jobs on my to-do list seem to be multiplying. Haha, you know how it is.” Such statements can be a smokescreen for serious mental health problems.

A journal article published in 2017 by the theoretical physicist Oliver Rosten made a heartfelt statement about this in its acknowledgements, dedicating the article to the memory of a late colleague, and referring to “the psychological brutality of the post-doctoral system”. Several journals accepted the article for its scientific quality but refused to publish the acknowledgements in full; it took Rosten years to find a journal that would publish what he wrote. He has left academia and now works as a Senior Software Developer at Future Facilities Ltd in Brighton, UK.

Another thing that cannot be said, identified by Tseen Khoo, a Lecturer in Research Education and Development at La Trobe University, Melbourne, Australia, is that some academic research doesn’t need funding, it just needs time. This is anathema because everyone accepts that external funding makes the academic world go round. But what if it didn’t? What if student fees, other income (e.g. from hiring out university premises in the holidays), and careful stewardship were enough? What if all the time academics spent on funding applications, and making their research fit funders’ priorities, was actually spent on independent scholarship? It seems this is not only unsayable but also unthinkable. One of Khoo’s interlocutors described this as “a failure of the imagination”.

Another unspeakable truth I’m aware of is for someone to say that the system of research ethics governance is itself unethical. Ethics governance is something to comply with, not to question. That has led us to the situation where most research training contains little or no time spent on research ethics itself. Instead, young researchers learn that working ethically equates to filling in an audit form about participant welfare and data storage. They don’t receive the detailed reflective instruction necessary to equip them to manage the manifold ethical difficulties any researcher will encounter in the field.

I wonder what role the lack of research ethics education plays in the increasing number of journal articles that are retracted each year? I would argue that we need to separate ethical audit from ethical research, because they have different aims. The former exists to protect institutions, the latter to promote the quality of research and ensure the well-being of all concerned.

These areas of silence are particularly interesting given that academia exists to enable and develop conversations. However, I think that as well as acknowledging what academia enables, we also need to take a long hard look at what academia silences.

The Undisciplined Interdisciplinary Researcher

Last week there was an interesting conference called Undisciplining that I enjoyed following on social media. The conference subtitle was ‘Conversations From The Edges’ and its stated aim included ‘to foster collaborations and dialogues across disciplines and beyond academia’. There was live blogging, a workshop on making sociological board games, a feminist walk, and all manner of other creative ways to promote reflection and discussion at the conference and elsewhere. But although it talked about working across disciplines and beyond academia, the stated purpose of that was ‘to shape the nature and scope of the sociological’.

From what I read about this conference, its participants were keen to consider how sociology might be changed, extended, morphed into anything at all that could be useful in some way – but would still, in the end, be sociology. As the conference was sponsored by The Sociological Review this is perhaps unsurprising. Yet, despite its aspirations to interdisciplinarity – ‘undisciplining’ – it seemed like a disciplinary conference.

While I was pondering on this, my attention was drawn to a blog post by Ayona Datta about why she supports early career researchers. This sentence resonated with me: “Despite the rhetoric of interdisciplinarity, there are very few institutional and intellectual spaces that actually support interdisciplinary work.”

My first degree was a BSc in social psychology at the London School of Economics. In the early 1980s, few psychologists were experimenting with qualitative research, so my degree was entirely quantitative. What I learned in my first degree influences me today, yet I’m neither a psychologist nor a quantitative researcher. I studied social research methods for my master’s degree, which was mostly taught by sociologists and anthropologists. My PhD was cross-disciplinary, with one supervisor from social policy and the other from the business school. Today, I think I am a researcher without a discipline. Perhaps I am an undisciplined researcher.

But research is a topic, not a discipline. So does this mean my work is interdisciplinary? I think it does, for two main reasons. First, my main topics of interest, i.e. research methods and ethics, are interdisciplinary. A geographer might invent a new method, which is then adapted by an anthropologist, reshaped by a poet and used by a lawyer. Research ethics don’t vary much across disciplines either. Second, I read across disciplines, like a magpie, searching by topic, picking out the texts that look shiny and passing over the dull ones. I don’t have a disciplinary imperative to keep up with this journal or that blog. I began to read like this as an undergraduate, pre-internet, finding that tracking trails of interest through bibliographies in the library was far more interesting than trudging through the prescribed reading list (though sadly it was less use when it came to writing assignments).

I’m not anti-disciplines, though, as such. I think perhaps there is merit in learning and thinking within particular fields for some purposes. But I am anti-disciplines when they constrain thought and action. To help avoid this, I think discipline-based researchers and scholars should make regular visits to other disciplines, such as through reading, collaborating, or attending conferences. During my undergraduate degree, every student was expected to take a module outside their core subject. I learned a lot from studying anthropology, sociology, and literature, which enhanced my learning of psychology. (I was amused to find that this approach has been introduced as an ‘innovation’ by another London HE setting recently. My cackling splutter of “LSE did that in the early 80s” received a frosty reception.)

Academics often tell me they can’t work in this kind of way because of constraints which, to be fair, often seem more institutional than disciplinary. So is the problem here that disciplines serve the needs of the institution? Was the Sociological Review able to sponsor a conference more radical than some because it is a publication, not an institution? Is it, as many have suggested and I myself suspect, because I work outside an institution that I can do truly interdisciplinary work?

Being a researcher, I generally have more questions than answers. I wonder, though, whether interdisciplinary work holds dangers for those in power. I wonder whether this is why independent researchers are not able to write for The Conversation or apply for funding from research councils. I suspect my forthcoming book, Research Ethics in the Real World, which certainly is interdisciplinary, is going to annoy some people. More than one academic has told me they wouldn’t have been able to write it from within academia.

I would have loved to go to the Undisciplining conference, but I couldn’t afford the cost plus the unpaid time to attend, so I’m glad they did so much on social media. I will try to do my part on that front at the Research Methods Festival in Bath next week. That’s a truly interdisciplinary conference, with geographers, philosophers, sociologists, criminologists, health researchers, artists, economists, and many others too. I’m running a workshop on writing creatively in academia, which means I get a sizeable discount plus my travel paid, so I can attend the rest of the conference. I can’t wait!

Methodology, Method, and Theory

Like last week’s post, this one was inspired by @leenie48 on Twitter. My post of the week before was on how to choose a research question, and @leenie48’s view was that I should not tackle that topic without considering theory. Last week’s post dealt with why I didn’t include theory in the previous post (I hope you’re all keeping up at the back). This week’s post, as promised, explains why I think theory sits with methodology rather than with method.

Some people think ‘methodology’ is just a posh word for ‘method’. This is a bit like how some people think ‘statistical significance’ is a more important version of ordinary everyday ‘significance’. As in, it’s completely wrong.

Methods are the tools we researchers use to practise our craft: to gather and analyse information, and to write and present findings. We have methods for searching literature and sources, gathering and analysing data, reporting, presenting, and disseminating findings. Methodologies are the frameworks within which we do all of this work, and are built from opinions, beliefs, and values. These frameworks guide us in selecting the tools we use, though they are not entirely prescriptive. Therefore one method, such as interviewing, may be used for research within different methodologies, such as realist evaluation or feminist research.

Here, as almost everywhere in the field of research methods, terminology is contested. But most people agree that there are several overarching categories of methodologies, such as post-positivist, constructivist, and interpretivist, and that within those overarching categories there are more specific methodologies, such as post-modernism and phenomenology. There are debates about what each category and methodology is, and how different methodologies should be used. These debates are mostly based on theory.

As I explained last week, theory also comes in many forms and is widely debated. These kinds of debates keep some academics in full-time work and are much too complex to summarise in a blog post. What I can say here is that @leenie48 and I disagree on a fundamental point. She thinks it is not an option to ‘jump from rq [research question] to method choice with no consideration of theory’. I know it is an option because I have seen it done many times, and have done it myself as an independent researcher working on commission for clients who are not interested in considering theory or in paying me to consider theory. The kind of briefs I often work to say, for example, ‘We want to know what our service users think about the service we provide, please do a set of interviews to find out.’ The commissioners don’t want a literature review or any explicit theoretical underpinnings, they simply want me to use my independent research skills to find out something they don’t know which will help them take their service forward. In a different context, I have taught and externally examined Masters’ level students, in subjects such as business studies and advice work, who are learning to do research. Their projects focus on method, not theory. It is as much as they can do, in their small word allocation, to contextualise their work, give a rationale for the method they have chosen, and describe and discuss their findings.

Masters’ level students in some other subjects would need to engage with theory, as I did in my own studies for MSc Social Research Methods, and I cannot imagine anyone doing research at doctoral level without using a theoretical perspective. I agree with @leenie48 that theory is important and has a lot to offer to research. In an ideal world, theory would form an equal part of a triad with research and practice.

In a comment on last week’s blog post, Sherrie Lee suggested that theory may be always present in some form, even if it is not explicitly considered. I think she makes a good point. I would like to use theory explicitly in all the research I do, rather than just some of it, but I am not sure that day will ever come. Much commissioned research isn’t explicit about methodology either. There is a lot of practice-based research, and practice research, going on in the world where people simply move straight from research question to method. While this is not ideal, it is pragmatic. I think @leenie48 and I will have to agree to disagree on this one.

Why Not Include Theory?

Last week I wrote a post about how to choose a research method. It received a fair amount of approval on social media, and a very interesting response from @leenie48 in Brisbane, Australia, with a couple of contributions from @DrNomyn. I’ve tidied up our exchange a little; it actually ended up in two threads over several hours, so wasn’t as neat as it seems here. I was travelling and in and out of meetings so undoubtedly didn’t give it the attention it deserved. I couldn’t embed the tweets without tedious repetition, so have typed out most of the discussion; our timelines are accessible if anyone feels the need to verify. Here goes:

EH: Your post suggests one can jump from rq to method choice with no consideration of theory. I disagree.

HK: I teach, and write for, students at different levels. Here in the UK masters’ students in many subjects have to do research with no consideration or knowledge of theory.

EH: Perhaps it might be useful to point out advice is for specific readers. Bit sick of having to explain to new phd students that this kind of advice is not for them!

HK: You’re right, and I am sorry for causing you so much inconvenience. I’ll re-tag all my blog posts, though that will take a while as there’s a sizeable archive.

HK: That seems unnecessarily pejorative. I don’t regard practice-based masters’ research as ‘pretend’, but as a learning opportunity for students. Commissioned research and practice-based research is professional rather than academic. Not wrong, simply different.

EH: Then why not include theory?

HK: I’ve explained why I didn’t include it in my blog post, so I’m not sure what you’re asking here?

And that’s where the discussion ended, with me confused as @leenie48’s question was on the other thread. Having put this into a single conversation, though, for the purposes of this post, it makes more sense. I think @leenie48 was asking why not include theory in masters’ level or practice-based research.

My conversation with @leenie48 might lead the uninitiated reader to believe that theory is a homogeneous ‘thing’. Not so. Theory is multiple and multifaceted. There are formal and informal theories; social and scientific theories; grand and engaged theories; Euro-Western and Southern theories. These are oppositional theory labels; there are also aligned options such as post-colonial and Indigenous theories.

I studied a module on social theory for my MSc in Social Research Methods, and used hermeneutic theory (a grand-ish formal Euro-Western social theory) for my PhD. Yet I don’t think I understood what theory is for, i.e. how it can be used as a lens to help us look at our subjects of study, until well after I’d finished my doctoral work.

If you’re doing academic research, theory can be very useful. Some, like @leenie48, may argue that it is essential. It is certainly a powerful counter when you’re playing the academic game. Yet theory is, like everything, value-laden. At present, in the UK, the French social theorist Bourdieu is so fashionable that the British Sociological Association is often spoken of, tongue in cheek, as the Bourdieu Sociological Association. At the other extreme, social theories from the Southern hemisphere are often ignored or unknown. So I would argue that if we are to include theory, we need to engage with the attributes of the theory or theories on which we wish to draw, and give a rationale for our choice. I find it frustrating that so much of academia seems to regard any use of theory as acceptable as long as there is use of theory, rather than questioning why a particular theory is being used.

This kind of engagement and rationale-building takes time and a certain amount of academic expertise. If you’re doing research for more practical reasons, such as to obtain a masters’ degree, evaluate a service, or assess the training needs of an organisation’s staff, theory is a luxury. These kinds of research are done with minimal resources to achieve specific ends. I don’t think this is, as @leenie48 would have it, ‘pretend research’. For sure it’s not aiming to contribute to the global body of knowledge, but I can see the point in working to discover particular information that will enable certain people to move forward in useful ways.

I have still to tackle two other points raised by @leenie48: the ‘methodology vs method’ question, and the issue of writing for masters’ students vs doctoral students on this blog and elsewhere. So that’s my next two blog posts sorted out then!