I would like to see more rigorous evaluations of social policy interventions. This doesn’t just mean more randomised controlled trials (RCTs), but they are an important step.
If we want more RCTs, how do we achieve this? I think it helps to learn about the situations where RCTs have happened in the past. Perhaps we can replicate these situations in the future.
I can think of four scenarios in which RCTs seem to happen.
Charity programme evaluation
The majority of RCTs happen because a charity receives funding to evaluate one of its programmes. This programme could be an intervention for children with reading difficulties, or mentoring children, or providing breakfasts in schools. The charity finds a funder who is willing to pay for an evaluation and appoints a research team to conduct it. The What Works Centres (particularly EEF, YEF and WWCSC) have led to a large increase in this type of research.
It’s relatively easy (compared to other activities in this post) to evaluate these sorts of programmes using an RCT. They often aren’t ‘core’ activities that services are expected to deliver, which means it’s acceptable to randomise participants. Schools are generally happy to get a 50% chance of having a breakfast club for the cost of allowing researchers in to collect data.
‘Pracademics’
Some RCTs have happened because a practitioner (e.g. a police officer or a teacher) has decided to conduct one. This often happens as part of a master’s or PhD. You sometimes hear these people described using the portmanteau ‘pracademic’ – they combine research and practice roles.
This approach can have great advantages. The practitioner can potentially convince their colleagues to randomise activities that external researchers could not. This can include the types of core activities, delivered by statutory services, that are harder to test under the charity programme evaluation scenario described above. Pracademics can ask important research questions that are out of reach for research teams lacking these relationships. There have been some great examples of this sort of RCT in policing, including trials of hotspot policing and deferred prosecution.
But it also has limitations. This type of RCT is often poorly funded. The pracademic often has a professional interest in seeing the intervention succeed – it is not an independent evaluation. If successful, it can be hard to scale the activity: unlike many charities, police officers don’t have strong incentives to scale up activity to new locations.
Government policy evaluation
Very occasionally, the Government will decide to commission an RCT to evaluate a policy before it is rolled out. For example, the evaluation of Knife Crime Prevention Orders was designed as an RCT. The Cabinet Office recently made a large pot of funding available for trials. This led to some great projects.
It’s obviously great when this happens as it provides information about national policy, possibly before it gets rolled out. This is pretty rare though – it’s not often that political timescales and incentives allow for rigorous evaluation.
Academics get funding to do an RCT
Occasionally academics get to do an RCT that doesn’t fit neatly into one of the buckets above. They aren’t evaluating another organisation’s programme or a government policy. Nor are they pracademics working on a PhD. This is often because the research team developed the intervention themselves, as in the trial of Learning Together, an anti-bullying intervention. The Nuffield Early Language Intervention is another really successful example.
It’s interesting how rare it is to find evaluations that aren’t prompted by funding for a programme or policy evaluation. It seems really rare for the funding councils to fund intervention research, for example. I’m struggling to think of many examples in my fields of education and youth violence prevention! Historically, the health research council seems more likely to have funded RCTs than the social science research council (e.g. the health research council funded the Learning Together trial). However, this is potentially changing: UKRI have recently launched some exciting grant rounds.
So what?
What does this mean for efforts to produce more RCTs?
There are clear efforts to support more trials in three of the four scenarios. The What Works Centres continue to fund programme evaluation, the Cabinet Office launched a fund to support government evaluation, and UKRI launched a relevant grant round for directly funding academics. This leaves the pracademics without any systematic support. It would be interesting to see how we could bring more of these trials about – I think this is a neglected area.