Two features of the report particularly stand out:
1. The decision to focus on parliamentary constituencies.
2. The focus on comparing outcomes in similar types of area.
1) Focus on constituencies: a call to (some of) those in power
With the data available, it would be possible to map reading outcomes by a range of different administrative geographies, from postcode sectors to Census Output Areas and local authorities. Instead of using an administrative geography, however, Reading England’s Future is based on an analysis of reading outcomes by parliamentary constituencies. As well as offering a relatively fine-grained analysis of how reading achievement varies geographically, parliamentary constituencies also offer built-in clout: they identify the back yards of the people at the heart of the political process, and the report is explicit in its intent here:
“In the run up to the election, we want MPs to understand what is happening in their areas with regards to children’s early language and reading, in order to encourage them to take action with us.”
It’s an important reminder of how seemingly dry decisions about data can in fact be strategic and action-oriented. However, it’s also a reminder of how perspectives on the proper site of intervention can vary: by conducting its analysis by parliamentary constituency rather than local authority, for instance, the report implicitly suggests that policy interventions to raise reading achievement are best initiated by central, rather than local government. This may to some extent be at odds with the report’s call for localised action to establish talking and reading towns and cities.
2) Beyond location: talking about types of area
A real strength of the report is that it goes beyond a simple descriptive account of those parts of the country where reading achievement is strongest and those where it is weakest, to highlight areas with similar characteristics which nonetheless produce divergent reading outcomes. The report ‘matches’ areas based on their degree of urbanisation, the proportion of pupils on Free School Meals, and the proportion of non-white pupils, and then goes on to identify matched areas with widely contrasting reading outcomes. For example, Daventry and Calder Valley are identified as similar ‘countryside constituencies’ – however, while in Daventry only 58% of 11-year-olds read well, in Calder Valley the figure is 83%.
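The matching logic described above can be sketched in a few lines. This is a minimal illustration, not the report’s actual method: the tolerance thresholds and all demographic figures below are invented for the purpose of the example (only the 58% and 83% reading figures for Daventry and Calder Valley come from the report).

```python
from itertools import combinations

# Hypothetical data: constituency -> (urbanisation score 0-1, % pupils on
# Free School Meals, % non-white pupils, % of 11-year-olds reading well).
# Demographic values are invented; only the reading figures for Daventry
# and Calder Valley are taken from the report.
constituencies = {
    "Daventry":      (0.2, 10.0, 5.0, 58),
    "Calder Valley": (0.2, 11.0, 6.0, 83),
    "Inner City A":  (0.9, 35.0, 55.0, 70),
    "Inner City B":  (0.9, 33.0, 52.0, 71),
}

def similar(a, b, tol=(0.1, 5.0, 10.0)):
    """Treat two constituencies as 'matched' if each demographic
    dimension differs by no more than the corresponding tolerance."""
    return all(abs(x - y) <= t for x, y, t in zip(a[:3], b[:3], tol))

def divergent_pairs(data, gap=10):
    """Yield matched pairs whose reading outcomes differ by >= `gap` points."""
    for (name_a, a), (name_b, b) in combinations(data.items(), 2):
        if similar(a, b) and abs(a[3] - b[3]) >= gap:
            yield name_a, name_b, abs(a[3] - b[3])

for pair in divergent_pairs(constituencies):
    print(pair)  # flags Daventry / Calder Valley with a 25-point gap
```

The two inner-city constituencies are matched but not flagged, because their outcomes are similar; only demographically similar pairs with a large outcome gap surface for the “what are you doing that we’re not doing?” question.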
This sort of analysis is powerful in two ways. Firstly, it allows areas with similar demographics, but with widely different reading outcomes, to ask themselves the question: “what are you doing that we’re not doing?” without being able to cursorily write off the difference as being due to an uneven playing field. Such an approach, applied to ‘families of schools’ with similar intakes, was a key component of the success of the City Challenges in London and Manchester.
Secondly, by identifying areas based on their characteristics, rather than by arbitrary administrative boundaries, we can begin the process of making causal claims about how particular types of area might impact on the outcomes of the people who live there. Rather than talking about outcomes being poor in specific geographical locations, we can instead begin to identify the types of area where outcomes are poor, as is the case in Reading England’s Future:
“Children from low-income families in smaller towns and rural areas are particularly likely to fall behind in reading.”
With area ‘types’ based on a rich range of data, such as the Output Area Classification, we can then begin to unpick the common Census characteristics of those areas with poor outcomes, wherever they may lie in the country, and that’s not a bad place to start when designing future area-based interventions.
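To make the “unpicking” step concrete, here is a minimal sketch of how one might rank area types by average reading outcome and inspect the characteristics of the weakest type. The OAC-style type labels and every figure below are invented for illustration; a real analysis would draw on the Output Area Classification and the underlying Census variables.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (area type, % reading well, % FSM, % rural).
# All values are invented for illustration.
areas = [
    ("Rural residents", 61, 12, 85),
    ("Rural residents", 59, 14, 90),
    ("Cosmopolitans",   78,  9,  2),
    ("Cosmopolitans",   81,  8,  1),
]

# Group records by area type, wherever in the country they lie
by_type = defaultdict(list)
for area_type, reading, fsm, rural in areas:
    by_type[area_type].append((reading, fsm, rural))

# Rank types by mean reading outcome, worst first
ranked = sorted((mean(r for r, _, _ in rows), t) for t, rows in by_type.items())
worst_score, worst_type = ranked[0]

# Average Census-style characteristics of the weakest type
rows = by_type[worst_type]
profile = {
    "fsm": mean(f for _, f, _ in rows),
    "rural": mean(r for _, _, r in rows),
}
print(worst_type, worst_score, profile)
```

The point of grouping by type rather than by location is visible in the structure: the ranking is over characteristics-based classes, so the resulting profile describes the kind of place where outcomes are poor, which is exactly the framing an area-based intervention needs.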