After an amazing day of meeting incredible teachers, researchers, commentators and many (many) tweeters, most people left #researchED 2013 committed to improving the quality, availability and use of research in education. Tom Bennett rightly finished the day by urging us not to think the day would change everything, nor that it would change nothing, but at least that it would change something. So what do we need to think about if we’re going to make sure it does change something?
At researchED we talked a lot about system-level, transformational changes, but less about the constraints and challenges that stop us making incremental changes in the current environment. Two issues I think we would be wise to think about more are:
- Research on a budget.
- Commissioned research.
If we can be open and honest about the challenges these present then we can already start making some basic improvements to the quality of evidence.
As someone who (amongst other things) does research for a living, I (and LKMco as an organisation) face these challenges quite often. The research we do is not huge in scale, and we don’t work within a university or with access to the kind of ‘information architecture’ Ben Goldacre was arguing for. For the most part we work on small studies of how charities and youth organisations impact on young people. In some cases these are year-long national projects, but many are much smaller. As someone who began my career in small, community youth-work projects and charities, I welcome the opportunity to work with small organisations. However, it means we frequently find ourselves at the coal face of some of the practical challenges of research in education.
In terms of budget, lots of little charities are run by people who struggle to pay themselves a wage – let alone commission large pieces of research. If they can only afford to spend five days on research it’s no use telling them they should carry out a national RCT. So what should they do? Nothing? Plough on without information? No – that’s never going to improve what they’re doing or help them scale and develop. We therefore need to find solutions that raise the quality of their evidence step by step. Think of it like this:
| Type of ‘evidence’ | Example/explanation |
| --- | --- |
| ‘Generic tripe’ (as I like to call it) | “Was it useful?” |
| More specific perception data | “I am able to do x” |
| Perceived change with attribution | “As a result of y I am more able to do x” |
| Specific perception with a baseline | “How able are you to do x?” (asked at start and finish) |
| Baseline and endpoint with comparator group | Did the group involved improve more than those who weren’t involved? |
| Baseline and endpoint with comparator group from similar start-points | Did pupils on x grade who were involved improve more than others on x grade who weren’t involved? |
| Comparator group randomly selected (school level) | Did pupils randomly selected for the intervention in one context improve more than those not chosen? |
| Comparator group randomly selected (across contexts) | Did pupils randomly selected for the intervention across different contexts improve more than those not selected? |
(Clearly it’d be wrong to think of this as a straightforward hierarchy; different things are right in different contexts.)
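To make the ‘baseline and endpoint with comparator group’ step concrete, here is a minimal sketch of the arithmetic involved. The scores, group names and function names are all hypothetical, invented for illustration; a real evaluation would of course need matched measures, sensible sample sizes and tests of significance:

```python
# Illustrative sketch: did the intervention group improve more than a
# comparator group, measuring both at baseline and endpoint?
# All scores below are made up for the example.

def mean(scores):
    return sum(scores) / len(scores)

def improvement(baseline, endpoint):
    """Average change from baseline to endpoint for one group."""
    return mean(endpoint) - mean(baseline)

# Hypothetical self-rated ability scores (1-10) at start and finish.
intervention_baseline = [4, 5, 3, 6, 4]
intervention_endpoint = [7, 8, 6, 8, 7]
comparator_baseline = [4, 5, 4, 6, 5]
comparator_endpoint = [5, 6, 5, 6, 6]

intervention_gain = improvement(intervention_baseline, intervention_endpoint)
comparator_gain = improvement(comparator_baseline, comparator_endpoint)

# A positive difference is suggestive (not proof) that the intervention helped.
relative_gain = intervention_gain - comparator_gain
print(relative_gain)
```

Even a rough comparison like this is a step up from asking “Was it useful?”, because it separates the change in the group you worked with from the change everyone else experienced anyway.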
Criticising everyone who’s not at the ‘top’ of this hierarchy won’t help people get better, particularly given that so many are stuck right at the ‘bottom’. What we need to do is look at what organisations are trying to achieve, what they want to find out and what resources they have available. We can then find nifty ways of getting them to the highest (or most appropriate) level possible, and of supplementing their evidence with existing literature and qualitative analysis too. The onus is then both on researchers to evaluate their findings honestly and on the consumers of research (schools, teachers and academics) to be discerning enough to see those findings for what they are: a starting point. That needn’t mean writing off the organisation or the findings. Our immediate concern, and the one we can affect most quickly, is making the evidence as rigorous as possible within the constraints, since simple changes to research design could often massively improve quality, even if they don’t make it ‘gold standard’.
The second challenge is that there are thousands of charities and social enterprises working in the education and youth sector. Many of them are far too small for academics or research bodies to notice and study of their own accord (let alone to secure grant funding to do so). However, if the organisation itself commissions research, there is a risk of bias. My approach to this challenge is:
1. I remind funders that for the research to be useful to them it has to be unbiased; otherwise they will shoot themselves in the foot by being led in the wrong direction.
2. I’m now pushing for research contracts to include reference to the ethical guidelines of BERA (or similar bodies). These state that ‘attempts by sponsors or funding agencies to use questionable influence should be reported to the Association’. Whilst I’m not naive enough to think that BERA will wave a stick and put a dodgy funder on the naughty step, I hope that being able to defer to these guidelines will make it easier to push back against a funder seeking undue influence (should they try to do so).
3. So far I’ve been lucky enough not to face any attempts to skew our research, but I’m enough of a realist to know it will happen at some point. As an optimist, though, my position is that for every client we alienated by walking away in such a case, we would earn another who welcomed the credibility it displayed (if it happens, you’ll hear about it here!). As far as I’m concerned there are plenty of organisations out there, and I’m happy to limit the ones we work with to the honest ones.
Whilst bias is always a risk and should always be taken into account, everyone involved in conducting research should reduce it through the best possible research design and by taking responsibility for the three points above.
Ultimately, if we turn a blind eye to everything but the ‘gold standard’ in research we will deplete the evidence base as much as we enrich it. It’s right for us to think about how we can build a solid information infrastructure, and it’s right for us to criticise a lot of research for not being good enough, but let’s also think about the small steps we can take to increase quality and reduce bias. In seeking to place research at the heart of education, we should combine system-level change with making everything as good as it can be.
All of this is one reason why we’re currently recruiting for a super-dooper new researcher – find out more here