Little Quick Fixes for Research

Back in May, I was surprised and delighted to be contacted by a research methods editor from SAGE Publishing, Mila Steele, who asked me to write books for their new Little Quick Fix series on research methods. I had met Mila several times at conferences and other events, and we’d had some good chats, but her email came quite out of the blue.

The series is a new departure for SAGE. It’s also a new departure for me, as the books are intended for undergraduates and I’ve only written for postgraduates before (though some enterprising third-year undergraduates have used, and kindly given me good feedback on, Research and Evaluation for Busy Students and Practitioners: A Time-Saving Guide). There are two other authors currently writing for the series: Zina O’Leary, who is covering the project management side of things, and John MacInnes, who is writing on statistics. Mila wanted me to focus on data, and we agreed that I would start with two books: Do Your Interviews and Write A Questionnaire.

The books are short, pocket-sized, colourful, and interactive. They have a template for consistency, but there is also scope for varying that template as needed. There is no peer review; instead, authors work closely with their editor. In one way this is a joy, though in another way it has caused me problems because I don’t work with undergraduates myself. Luckily I have a colleague/friend who teaches interviewing to undergraduates and was willing to let me pick her brains over lunch. Twitter helped me find another contact who teaches questionnaires to undergraduates and, as she was in Australia, Skype allowed us to speak. I was grateful to both people for alerting me to important points I might otherwise have missed.

Before these, the last book I wrote was Research Ethics in the Real World: Euro-Western and Indigenous Perspectives, which took three-and-a-quarter years to complete. So it was a joy to find that I could write a Little Quick Fix book in just a few weeks. They’re not easy, though, because – as anyone who has written for an academic journal knows – ‘easy’ and ‘short’ are not the same thing. Each of these little books is like a puzzle. The text has to be both distilled and accessible; there are strict word counts for different sections; you need to cover the same ground three ways – in under 25, 130 and 600 words – without being repetitive. And then you have to devise interactive exercises to reinforce and embed the points you’ve made. Plus, with the first two, the timescales were tight. SAGE approached me in May; I signed a contract in June, delivered Do Your Interviews in July and Write A Questionnaire in August; the books went into production in September and will be published in December. That is a blisteringly fast schedule by traditional publishing standards.

The really good news, from my point of view, is that SAGE has a design team who are doing a proper professional job on the books’ covers and contents. Look at my covers! Aren’t they lovely?

[Cover images: Do Your Interviews and Write A Questionnaire]

I can’t wait to see the contents.

While I was writing, I made some design suggestions, and it will be interesting to see which the team take up and which they ignore or change. Design is not my strong point, to say the least. I can’t bear to show you the flow chart I cobbled together in Word which I could only be proud of if I was five years old. But I have seen these designers’ outputs and I know they are going to make my work look good.

I am also pleased that the books will be very accessibly priced at £6.99, US$9.50, and equivalent prices around the world. Perhaps the best news of all is that I have now contracted to write two more books in the series: Use Your Interview Data and Use Your Questionnaire Data. Plus these have much more relaxed timescales; the first is due by 1 December and the second by 25 February, for publication next July. I love my life!

Researching Research Ethics

I have written on this blog before about my book launch which is now only four weeks away (or less, if you’re reading this after 11 October). It’s a free event and you’re welcome to come along if you’re in London that day; details here. Copies of the book itself should arrive in the next 2-3 weeks. Exciting times!

I’ve written this week’s blog post on SAGE MethodSpace, talking about the research I did into research ethics around the world as background for writing the book. Head on over and have a read, and please feel free to leave a comment there or here.

Ethical Principles for Independent Researchers – Part Two

Last week I posted the first five principles of independent research work. This post contains principles 6-10.

  6. Behave professionally at all times

Be polite, turn up on time, maintain confidentiality. Don’t drink alcohol on clients’ time or have affairs with clients. This should really go without saying, but clients can sometimes treat you quite informally, arrange meetings in cafes or pubs, and then boundaries can easily become blurred. If you behave professionally at all times, you can’t go wrong.

  7. Do what you say you’re going to do, when you say you’re going to do it

This is the core of ethical practice as an independent researcher. Don’t make promises you can’t keep; always keep your promises unless the circumstances are exceptional.

  8. Communicate effectively

Find out how your client prefers to communicate and communicate that way. Most people like to use email but there are a lot of other options. If they want to use some kind of software platform you’ve never heard of, be upfront about that (as per principle 1 in last week’s post) and give it a try if you can. If it’s going to cost you money and you wouldn’t be using it otherwise, it is legitimate to ask the client to cover the cost. If they prefer to work by phone or VoIP (Skype, Google Hangouts, etc.), then work that way with them even if you hate it. You can always follow up with the key points in an email, to avoid misunderstanding and provide a record – in fact, I would suggest you do.

The most important times to communicate effectively are when you can’t manage (7) above due to unforeseen circumstances such as illness or bereavement. A couple of years ago I experienced the sudden death of a family member in their 40s. The news came in the early evening, and my only appointment for the next day was a mid-morning phone call with a client. I texted him to explain what had happened and said I was very sorry but I wasn’t sure if I would be able to make the call as I didn’t know quite how the next day would pan out, but I would be available if I could. He texted back straight away with such a kind message, saying firmly that we would not speak the next day, I should let him know in a few days if I had time to talk, and in the meantime he would handle everything with our project and I should not worry about it at all. In retrospect his message was rather more professional than mine, but then I was in deep shock. Yet it’s evident that even at such times, managing my client work was a top priority for me.

  9. Know your place

Your role is a support role. Yes, you are the expert in some areas; yes, you may be asked to lead a project. But you are and will always be peripheral to the organisations and the people you work with. You are dispensable. Commissioners of research are fickle for some very good reasons: their roles, circumstances, budgets etc can and do change frequently, and so, accordingly, do their priorities. A client may truly love you for a while, but don’t expect that to last. When necessary, bow out gracefully, with appreciation for the benefits you have received from the relationship rather than resentment of something you feel you should have received. Your ego does not belong in this work.

  10. Remember that everyone’s an expert

Your expertise is valuable, but it is no more valuable than the expertise of others, including research participants. For example, if your participants are homeless people, they are experts in lived homelessness, and probably in other things too – they may have professional backgrounds themselves. And the professionals you deal with may have useful personal expertise to bring to the research. I recommend treating people as whole human beings, rather than solely in the role they initially present to you. You will learn more that way and the people you encounter will have a better time too.

Now you know all ten ethical principles of independent research work. At least, the ones I’ve come up with. There is probably something I’ve missed. If you know what it is, please contribute in the comments below.

The Ethics of Independent Research Work #1

I guess we all know by now that I bang on a fair bit about research ethics, but I haven’t written about the ethical aspects of working as an independent researcher. I have come up with ten ethical principles for indie researchers. Many of these no doubt apply to other forms of self-employment too, but they definitely all apply to independent research work. This post contains the first five principles; I will post the other five next week.

  1. Be honest about what you don’t know

If a client says, ‘You know the legislation that…’ and you don’t, it’s best to say so. It can be tempting to nod while making a mental note to look it up online later, but that can lead to disaster. People often fear that saying they don’t know something will make them look stupid, but paradoxically the reverse is true. If you are clear about what you do know and honest about what you don’t, you will build trust with your clients much more quickly and effectively.

  2. Be clear about your capacity

Allied to this: don’t take on work you haven’t got time to do, because that won’t do anyone any favours. You won’t produce your best work for your clients, and you’ll end up burned out. OK there are times where you may choose to work at maximum capacity for a short time, e.g. as one contract ends while another begins, or to fit in a quick piece of work for a valued client. But keep these brief and infrequent, and make sure you build in recovery time. Independent research is a great career (at least, in my view), but no career is worth damage to your health and relationships.

  3. Charge a fair rate for the job

If possible, find out what the going rate is, and charge that. The going rate will vary across sectors and between countries. I have written before about how I charge for work: in brief, I charge less for charities and longer projects, more for universities, governments, and work I don’t really want to do.

Also, don’t take on jobs with inadequate budgets, unless you’re desperate for the money and prepared to accept a very low day rate. I’ve been offered a three-year national evaluation with a total budget of £5,000. Perhaps someone ended up doing that work for that money, but they would either have done a very poor job or effectively accepted an extremely low day rate.

  4. Don’t accept work on an unethical basis

One potential client rang me towards the end of the financial year to ask if I could invoice her for several thousand pounds that she had left in her budget. She said she was a bit busy, so could we sort out what I would do for the money at a later date? I didn’t know her so I asked why she had rung me. She told me she had wanted person A, but they were too busy so they suggested person B, who couldn’t take it on either and suggested me. Nowadays I would probably say a simple ‘no’, but it was early in my career, and person B was quite influential. I agreed to invoice, but only after meeting with my potential client to decide whether we could work together and what I would do for her.

Another time a commissioner rang me to ask me to evaluate a service because he wanted to close it down. I said I would evaluate the service if he wished, but I would not pre-determine the findings; they would be based on my analysis of the data I gathered. He agreed to this. I did the evaluation, and found – unequivocally – that the service was highly valued and doing necessary work. The commissioner paid my invoice, then found someone else to do another evaluation which concluded that the service should be closed down, whereupon he closed it down. Again, with the benefit of hindsight I probably should have said ‘no’ to the assignment, but I naïvely thought that if I did the research the commissioner would abide by the findings.

  5. Don’t take work outside your areas of expertise

You may have more than one area of expertise. I have a few: children/young people/families, housing/homelessness, substance misuse, volunteering, service user involvement, third sector, training. Each of these areas formed part of my professional work before I became an independent researcher.

Earlier this decade I got an email asking me to do some work around learning disability. I replied, explaining that it was not one of my areas of expertise, and saying I didn’t think I was the best person for the job. The potential client came back saying they thought I was right and apologising for having bothered me. (I didn’t mind. I never mind answering queries about possible paid work.)

Oddly enough, a few weeks later I got another email, from someone completely different, asking me to do some work around learning disability. After rolling my eyes and thinking about buses, I sent a similar reply. This time the potential client came back saying that I sounded perfect for the piece of work they wanted to commission. They thought someone with a good knowledge of research methods but little knowledge of learning disability would bring a usefully fresh perspective to the problems they were trying to solve. Which is further evidence for (1) above.

So there you have the first five principles of ethical research work, according to me. Come back next week for the other five.

How To Get Paid On Time

As an independent researcher I feel lucky because bad debt is a problem I rarely have to face. My clients are charities, local authorities, government departments, universities – all organisations with money in the bank and not much chance of going bankrupt. Of course that’s always a possibility, but people who work for private sector organisations or private clients are much more likely to find themselves owed money they will never receive.

Late payment, though, is a perennial problem that can play havoc with my cashflow. I yearn to name and shame, though I think that would be counter-productive in the long run, so I won’t. But I will say that, of the groups I’ve mentioned, charities are most likely to pay promptly and universities are by far the worst offenders.

In the UK we have the Late Payment of Commercial Debts (Interest) Act 1998, in recognition of the difficulties that late payment can cause to small businesses. If you are a salaried person, imagine your employer told you, towards the end of one month, that they hadn’t got their admin organised so you’d be getting paid a month late. Not good, right? I cite the Act on all my invoices, though I don’t think it makes much difference. What it does mean is that if I issue a big invoice and/or payment is really late, I can claim interest – though the rate is tied to the Bank of England base rate, which is currently very low. Sigh… But even though claiming interest doesn’t do much for my income, it does focus clients’ minds, so I think it’s worth doing from time to time, particularly with serial offenders. I have had clients’ finance departments try to refuse to pay the interest, but when I point out that it’s a statutory entitlement, they back down.
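
If you are wondering what such a claim might actually amount to, here is a minimal sketch in Python. The rates and the simple-interest calculation are my assumptions for illustration (as I understand it, the statutory rate is the Bank of England base rate plus a fixed statutory percentage); this is not legal advice, so check the current rules before relying on anything like it.

```python
from datetime import date

# Minimal sketch of a late-payment interest calculation. The rates below are
# assumptions for illustration only; check the current statutory rules.
BASE_RATE = 0.0075        # assumed Bank of England base rate
STATUTORY_UPLIFT = 0.08   # assumed statutory addition to the base rate

def late_payment_interest(amount, due, paid):
    """Simple daily interest on an overdue invoice."""
    days_late = (paid - due).days
    if days_late <= 0:
        return 0.0
    annual_rate = BASE_RATE + STATUTORY_UPLIFT
    return round(amount * annual_rate * days_late / 365, 2)

# Example: a £3,000 invoice due on 1 September and paid 45 days late
print(late_payment_interest(3000, date(2018, 9, 1), date(2018, 10, 16)))
```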

However, that is a last resort. There are more constructive things you can do to ensure you get paid on time, or at least as near to on time as possible. First, invoice as soon as you’ve done the work, or as near to that as you can manage. If you take your time about invoicing, you have less moral high ground to occupy if you need to chide your client for taking their time about payment. That’s illogical, of course, but nevertheless true. Second, keep track of your invoice dates and amounts – I use a spreadsheet. Third, chase every late payment as soon as it’s late, or as near to that as you can manage. Chase politely: I use phrases like ‘My records show…’ and ‘your organisation agreed…’ to depersonalise the message, as the late payment is very rarely the fault of the person who answers your emails. Ask when you can expect to receive payment, and don’t be afraid to chase again if you don’t receive payment or further information by that date (or a couple of days later, if you want to appear more forgiving than naggy).
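
I use a spreadsheet for this, but the same record-keeping could be done in a few lines of code. Here is an illustrative sketch; the clients, amounts and the 30-day payment term are all invented.

```python
from datetime import date, timedelta

# Illustrative invoice log: the dates and amounts needed to spot and chase
# late payments. The 30-day payment term and all entries are invented.
PAYMENT_TERMS = timedelta(days=30)

invoices = [
    {"client": "Example Charity", "amount": 1200.00,
     "issued": date(2018, 8, 1), "paid": date(2018, 8, 20)},
    {"client": "Example University", "amount": 4500.00,
     "issued": date(2018, 8, 15), "paid": None},
]

def overdue(invoices, today):
    """Return unpaid invoices that are past their payment terms."""
    return [inv for inv in invoices
            if inv["paid"] is None and today - inv["issued"] > PAYMENT_TERMS]

today = date(2018, 10, 1)
for inv in overdue(invoices, today):
    days_late = (today - inv["issued"] - PAYMENT_TERMS).days
    print(f"Chase {inv['client']}: £{inv['amount']:.2f}, {days_late} days overdue")
```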

International payments may take much longer than UK payments, there is no legislation to help, and it doesn’t matter what you say on your invoice. Payment periods of 90-120 days are not unusual. There is no good reason for this, and it’s annoying, but if you want to do international work you have to suck it up. Of course not all overseas clients will be late payers, but be prepared.

In fact, ‘be prepared’ is the cornerstone of financial survival as an independent researcher. You need to keep enough money in your bank account for six months’ running costs as a minimum, 12 months to be comfortable. ‘Running costs’ include all your business overheads, the amount you feel able to pay yourself, and your tax bill. That way, if you get a lengthy contract with long intervals between payments, you can keep yourself afloat until you get paid. That approach was helpful to me this year when I landed two good-sized contracts, both starting in late May. One is a five-month UK evaluation contract with two payment instalments; I have just received payment of the first, and the second is likely to arrive in late November or early December. The other, a three-year international research ethics contract, is supposed to accept invoices quarterly but I have not yet been able to issue my first invoice. If I hadn’t had a financial cushion I’d have gone under by now. So take heed, would-be or newbie independent researchers, and be prudent.
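
As a rough worked example of that arithmetic (all the figures are invented, not my own costs):

```python
# Rough illustration of the running-costs cushion. All figures are invented.
monthly_overheads = 400       # office, software, insurance, travel, and so on
monthly_pay = 2000            # what you feel able to pay yourself
monthly_tax_set_aside = 600   # provision for the tax bill

monthly_running_costs = monthly_overheads + monthly_pay + monthly_tax_set_aside

print("Minimum cushion (6 months):", 6 * monthly_running_costs)        # 18000
print("Comfortable cushion (12 months):", 12 * monthly_running_costs)  # 36000
```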

Fear Of Success

I have seen several pieces written online about impostor syndrome (one of them by me) and there is a body of scholarly work about fear of failure. Fear of success can be as big a barrier, in my view, though much less is written about that. For example, on Google Scholar, “fear of success” gets around 8,500 hits, while “fear of failure” gets around 59,000. So here’s a post to help redress the balance.

I have been grappling with a potential project over the last couple of months which requires a brief application of 1000 words. I’m good at writing and I’ve had some top quality help and support, yet this has been a real struggle. I have emailed three separate versions to my main support person for feedback; I haven’t done that since my PhD days over 12 years ago. And I have come to the conclusion that fear of success is part of the problem.

I found myself doing various small acts of self-sabotage, such as putting a relevant electronic document in the wrong folder, and procrastinating about research I needed to do for the application because it felt too difficult to tackle. Those unusual (for me) activities alerted me to something unfamiliar going on in my psyche.

I don’t feel like a fraud, so it’s not impostor syndrome. It’s not fear of failure, either, as if I fail, I lose nothing but the time I have invested. I will be no worse off apart from a temporary feeling of disappointment. So I think it must be fear of success.

Reflecting on this, I realised that fear of success is based on fear of identity change. If I get to do this project, it will change who I am. I will become ‘the person who [does things I don’t do now]’. And change like that is scary, even though the project is something I think I want and something others are encouraging me to attempt. If I become ‘the person who’, will I still fit in my primary relationship with my significant other? Will I still more or less fit into my professional communities? Will I still fit in my skin?

I don’t know the answers to those questions. That means if the people who have the power to offer this project to me do so, and I decide to accept, I will be taking a leap into the unknown. That feels so scary.

I know impostor syndrome well; it was with me for the publication of Research and Evaluation for Busy Students and Practitioners in 2012, and again for the publication of Creative Research Methods in 2015. Fear of failure goes back much further, to my school exams in the 1970s. But fear of success is new to me. I’m not familiar with all its little schemes and wiles, but I expect I’ll counteract them the way I have with fear of failure and impostor syndrome: I will get to know how fear of success works on me, and then I’ll carry on regardless.

How Do Research Methods Affect Results?

Last week, for reasons best known to one of my clients, I was reading a bunch of systematic reviews and meta-analyses. A systematic review is a way of assessing a whole lot of research at once. A researcher picks a topic, say the effectiveness of befriending services in reducing the isolation of housebound people, then searches all the databases they can for relevant research. That usually yields tens of thousands of results, which of course is far more than anyone can read, so the researcher has to devise inclusion and/or exclusion criteria. Some of these may be about the quality of the research. Does it have a good enough sample size? Is the methodology robust? And some may be about the topic. Would the researcher include research into befriending services for people who have learning disabilities but are not housebound? Would they include research into befriending services for people in prison?
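
To make the screening step concrete, here is a toy sketch of applying inclusion and exclusion criteria to a pile of search results. The records, fields and thresholds are all invented; real screening uses specialist tools and much richer criteria.

```python
# Toy illustration of screening search results against inclusion and
# exclusion criteria. All records and thresholds are invented.
records = [
    {"title": "Befriending and isolation in housebound adults", "n": 220},
    {"title": "Peer befriending in prisons", "n": 45},
    {"title": "Befriending pilot study, single site", "n": 12},
]

MIN_SAMPLE_SIZE = 30  # an invented quality criterion

def include(record):
    on_topic = ("befriending" in record["title"].lower()
                and "prison" not in record["title"].lower())
    good_enough = record["n"] >= MIN_SAMPLE_SIZE
    return on_topic and good_enough

shortlist = [r["title"] for r in records if include(r)]
print(shortlist)  # ['Befriending and isolation in housebound adults']
```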

These decisions are not always easy to make. Researcher discretion is variable and fallible, and this means that systematic reviews themselves can vary in quality. One thing they almost all have in common, though, is a despairing paragraph about the tremendous variability of the research they have assessed and a plea to other researchers to work more carefully and consistently.

One of the systematic reviews I read last week reported an earlier meta-analysis on the same topic. A meta-analysis is similar to a systematic review but uses statistical techniques to assess the combined numerical results of the studies, and may even re-analyse data if available. The report of the meta-analysis I read, in the systematic review, contained a sentence which jumped out at me: ‘…differences in study design explained much of the heterogeneity [in findings], with studies using randomised designs showing weaker results.’
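
For readers who have not met a meta-analysis before, here is a minimal sketch of the most common pooling approach: an inverse-variance weighted average (a fixed-effect model). The effect sizes and standard errors are invented; a real meta-analysis would also assess heterogeneity and would probably use a random-effects model.

```python
# Minimal fixed-effect meta-analysis: an inverse-variance weighted average of
# study effect sizes. The effects and standard errors below are invented.
studies = [
    {"effect": 0.30, "se": 0.10},
    {"effect": 0.10, "se": 0.05},
    {"effect": 0.45, "se": 0.20},
]

weights = [1 / s["se"] ** 2 for s in studies]  # inverse-variance weights
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.3f} (standard error {pooled_se:.3f})")
```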

Randomised designs are at the top of the hierarchy of evidence. The theory behind the hierarchy of evidence is that the methods at the top are free from bias. I don’t subscribe to this theory. I think all research methods are subject to bias, and different methods are subject to different biases. For example, take the randomised controlled trial or RCT. This is an experimental design where participants are randomly assigned to the treatment or intervention group (i.e. they receive some kind of service) or to the control group (i.e. they don’t). This design assumes that random allocation alone can iron out all the differences between people. It also assumes that the treatment/intervention/service is the only factor that changes in people’s lives. Clearly, neither of those assumptions necessarily holds in practice.
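
The random allocation itself is simple to do; here is a minimal sketch (participant IDs invented, and real trials add allocation concealment, stratification and so on). The point about bias is that everything this randomisation achieves rests on the assumptions described above.

```python
import random

# Minimal sketch of random allocation for a two-arm trial. Participant IDs
# are invented; real trials use concealed allocation, stratification, etc.
participants = [f"P{i:03d}" for i in range(1, 21)]

random.seed(42)  # fixed seed so the illustration is reproducible
shuffled = random.sample(participants, k=len(participants))
half = len(shuffled) // 2

treatment_group = sorted(shuffled[:half])   # receive the intervention
control_group = sorted(shuffled[half:])     # do not

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```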

Now don’t get me wrong, I’m not anti-RCTs. After all, every research method is based on assumptions, and in the right context an RCT is a great tool. But I am against bias in favour of any particular method per se. And the sentence in the systematic review stood out for me because I know the current UK Government is heavily biased towards randomised designs. It got me wondering, do randomised designs always show weaker results? If so, is that because the method is more robust – or less? And does the UK Government, which is anti-public spending, prefer randomised designs because they show weaker results, and therefore are less likely to lead to conclusions that investment is needed?

And that got me thinking we really don’t know enough about how research methods influence research results. I went looking for work on this and found none, just the occasional assertion that methods do affect results. Which seems like common sense… but how do they? Does the systematic review I read hold a clue, or is it a red herring? The authors didn’t say any more on the subject.

We can’t always do an RCT, even when the context means it would be useful, because (for example) in some circumstances it would be unethical to withhold provision of a treatment/intervention/service. So what about other methods? Do we understand the implications of asking a survey question that a participant has never thought about and doesn’t care about – or cares about a great deal? I know that taking part in an interview or focus group can lead people to think and feel in ways they would not otherwise have done. What impact does that have on our research? Can we trust participants to tell us the truth, or at least something useful?

This is troubling me and I have more questions than answers. I fear I may be up an epistemological creek without an ontological paddle. But I think that bias in favour of – or against – a particular research method, without good evidence of its benefits and disadvantages, is poor research practice. And it’s not only the positivists who are subject to this. Advocates of participatory research are every bit as biased, albeit in the opposite direction. The way some participatory researchers write, you’d think their research caused bluebirds to sing and rainbows to gleam and all to be well in the world.

It seems to me that we all need to be more discerning about method. And that’s not easy when there are so many available, and a plethora of arguments about what works in which circumstances. So I think we may need to go meta here and do some research on the research. But ‘further research needed’ is a very researcher-y way of thinking, and I’m a researcher, so… does my bias look big in this?

Book Launch! And Other Events

I am delighted to have been invited to launch my forthcoming book, Research Ethics in the Real World: Euro-Western and Indigenous Perspectives, at a seminar at City University in London on Thursday 8 Nov. This is part of a seminar series run by NatCen, City University, and the European Social Survey. I’ll be talking about why it is crucial to view research ethics in the context of its links with individual, social, professional, institutional and political ethics. I will explain why I think the Indigenous research paradigm is as important for our world as the Euro-Western research paradigm. I will outline why applying research ethics at all stages of the research process is equally essential for quantitative, qualitative, and mixed-methods researchers.

This was a much more difficult book to write than my book on creative research methods. Since that book came out, I have been asked to do a lot of speaking and teaching on creative methods. For example, I’m running an open course on creative methods in evaluation research for the UK and Ireland Social Research Association in Sheffield on 16 October, and a more academically-oriented version on using creative methods for the ESRC‘s National Centre for Research Methods in Southampton on 21 November. (And one for social work researchers in Birmingham next week, but that’s been fully booked for some time and has a long waiting list.)

If my ethics book has the same effect, I’m not quite sure how I’ll manage the workload. Still, that would be a great problem to have. In the meantime: fancy a free seminar on research ethics? Of course you do! It’s at 5.45 for 6 pm with a wine reception afterwards. I’d love to see some of my blog followers there – if you can make it, please come and introduce yourself.

Aftercare in Social Research

When does a research project end? When a report has been written? When a budget has been spent? When the last discussion of a project has taken place? It’s not clear, is it?

Neither is it clear when a researcher’s responsibility ends. This is rarely spoken of in the context of social research, which is an unfortunate omission. A few Euro-Western researchers recognise the need for aftercare, but they are a tiny minority, acting as individuals; there seems to be no collective or institutional support for aftercare. In the Indigenous paradigm, by contrast, aftercare is part of people’s existing commitment to community-based life and work. Euro-Western researchers could learn much from Indigenous researchers about aftercare: for participants, data, findings, and researchers ourselves.

The standard Euro-Western aftercare for participants is to tell them they can withdraw their data if they wish. However, it is rare for researchers to explain the limits to this, which can cause problems, as it did for Roland Bannister from Charles Sturt University in Wagga Wagga, Australia. Bannister did research with an Australian army band, Kapooka, which could not be anonymised as it was unique. Band members consented to take part in Bannister’s research. He offered participants the opportunity to comment on drafts of his academic publications, but they weren’t interested. Yet when one of these was published in the Australian Defence Force Journal, which was read by band members, their peers, and superiors, participants became unhappy with how they were represented. Bannister had to undertake some fairly onerous aftercare in responding to their telephone calls and letters. Of course it was far too late for participants to withdraw their data, as this would have meant retracting several publications – and retraction is in any case limited in its effectiveness. However, particularly in these days of ‘long tail’ online publications, we need to be aware that participants may want to review research outputs years, even decades, after the substantive work on the project is done. We have a responsibility to respond as ethically as we can although, as yet, there are no guidelines to follow.

Data also needs aftercare, particularly now that we’re beginning to understand the value of reusing data. Reuse increases the worth of participants’ contributions, and helps to reduce ‘research fatigue’. However, for data to be reusable, it needs to be adequately stored and easy to find. Data can be uploaded to a website, but it also needs to be carefully preserved to withstand technological changes. Also, it needs a ‘global persistent identifier’ such as a DOI (digital object identifier) or Handle. These can be obtained on application to organisations such as DataCite (DOIs) or The Dataverse Project (DOIs and Handles). As well as enabling reuse, a global persistent identifier also means you can put links to your data in other outputs, such as research reports, so that readers can see your data for themselves if they wish. This too is an ethical approach, being based in openness and transparency.
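
To show what ‘easy to find’ looks like in practice, here is a sketch of the kind of minimal metadata record a DOI registration asks for. The field names loosely follow the mandatory DataCite properties as I understand them, and the values are invented; check the current DataCite schema and your repository’s requirements rather than relying on this.

```python
import json

# Sketch of a minimal dataset metadata record, loosely based on the mandatory
# DataCite properties as I understand them (identifier, creator, title,
# publisher, publication year, resource type). All values are invented.
dataset_record = {
    "identifier": {"identifier": "10.1234/example.5678", "identifierType": "DOI"},
    "creators": [{"name": "Example, Researcher"}],
    "titles": [{"title": "Interview transcripts: befriending services study"}],
    "publisher": "Example Data Repository",
    "publicationYear": "2018",
    "resourceType": {"resourceTypeGeneral": "Dataset"},
}

print(json.dumps(dataset_record, indent=2))
```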

Then there are the findings we draw from our data. Aftercare here involves doing all we can to ensure that our findings are shared and used. Of course this may be beyond our power at times, such as when working for governments who require complete control of the research they commission. Even in other contexts, researchers are unlikely to have much say in how findings are used. But we should do all we can to ensure that they are used, whether to support future research or to inform practice or policy.

Researchers too need aftercare. In theory the aftermath of a research project is a warm and fuzzy place containing a pay cheque, favourably reviewed publications, and an enhanced CV. While this is no doubt some people’s experience, at the opposite end of the spectrum there are a number of documented cases of researchers developing post-traumatic stress disorder as a result of their research work. In between these two extremes, researchers may experience a wide range of minor or major difficulties that can leave them needing aftercare beyond the lifetime of the project. For that, at present, there is no provision.

Not much has yet been written on aftercare in research. If it interests you, there is a chapter on aftercare in my book on research ethics. I expect aftercare to be taken increasingly seriously by researchers and funders over the coming years.

An earlier version of this article was originally published in ‘Research Matters’, the quarterly newsletter for members of the UK and Ireland Social Research Association.


The Gear Changes of Independent Research

I have been an independent researcher for almost 20 years, yet I still find the gear changes difficult.

All of last week I was in top gear. On Sunday night I arrived in a reasonable chain hotel in a city centre. The hotel was as these hotels are: clean, reasonably spacious, comfortable, soulless. I asked for a quiet room and got one, plus the bathroom had an actual bath of a decent size. These things can take on enormous importance when you’re working away from home.

I spent the week zooming around the city in taxis and holding meetings with people during which I typed copious notes. I was facilitating the meetings too so I had to pay a lot of attention to what was going on. There was no scope for doing other things on my phone under the table or staring out of the window. Luckily most of the people I met with were lovely and what they had to say was interesting – that isn’t always the case.

I managed one brief meeting for a different project, one bus journey, and one long walk (it wasn’t supposed to be that long but I got lost, ahem). Other than that it was wall-to-wall meetings then back to my hotel room to work on the report for my client while dining al desko on snacks grabbed from a local supermarket. I could have gone out to eat but didn’t want to take the time. Also I didn’t want a restaurant meal after a full hotel breakfast and a working lunch which was generally quite copious too.

I got home on Thursday night. On Friday I had another meeting for a different project and spent the rest of the day writing the report. And all of Saturday and all of Sunday too, holed up in my office, nose to keyboard, while the rest of the world lazed around in the sunshine. I finished the report around 6 pm on Sunday and emailed it off to the client with a deep sigh of deadline-met relief.

In top gear I run smoothly and at high speed. I’m comfortable there, but of course it’s not sustainable long-term. Sometimes I have to drop to a lower gear for a while. And that’s where this whole analogy breaks down because, in a vehicle, changing gear is generally quick and fairly uncomplicated. For this independent researcher, it takes at least 24 bumpy hours, sometimes several days.

Monday was a weird day. I didn’t quite know what to do with myself; how to prioritise the jobs on my to-do list; whether to take some time off. I felt unsettled, out of sorts. I know this feeling so well and yet it always takes me by surprise. It can be almost as discombobulating when I’ve been in low gear and need to rev up, though at least then I can deploy my personal carrot-and-stick self-discipline techniques. But I haven’t found equivalent methods for managing the change-down days. Stopping altogether, that’s easy. It’s when I still need to work, but don’t have to go at full steam, that I find the adjustment hard.

I’ve heard similar tales from people with jobs as diverse as journalists and heating engineers, so evidently this isn’t unique to independent research. But it’s an odd phenomenon and, in my experience, rarely discussed. Perhaps this comes in that well-known category of ‘more research needed’.