Latest evaluation results from Sierra Leone

[Chart: Oxford study results, 2016-17]

Rising students in Sierra Leone continue to progress in reading and maths at 2 to 3 times the rate of their peers in other schools, but given their starting point they need to be progressing even faster.

That's the headline finding from the latest annual report from our independent evaluators at Oxford University.

For a summary of what the report says and my reflections on it, read on. To skip the commentary and dive straight into the report itself, click here.

Background

A team at Oxford University led by Dr David Johnson has been conducting a three-year impact evaluation of Rising's schools in Sierra Leone. The baseline report completed in early 2016 is here; the first annual report (completed in September 2016) is here. This is the team's second annual report, covering the 2016-17 academic year.

The study's full set of outcome measures includes computer-adaptive tests of reading and maths, a measure of writing, and a measure of non-cognitive traits or 'learning dispositions'. Some of these measures will only be followed up at endline, however; the annual reports focus on the reading and maths measures.

The progress of Rising students on these measures is benchmarked against random samples of students from two types of schools: government schools and other private schools.

The first annual report found encouraging evidence that Rising students were making rapid learning gains compared to their peers in other schools. We were curious to see whether that trend would continue.

Key findings

Broadly speaking, the report finds that it has. To quote the conclusion:

“RAN schools seem to make better cumulative gains, learn at a faster rate, and weaker students make stronger transitions from poor performance to good performance bands when compared to matched student samples in private and government schools. It is also the case that learning outcomes are more equitable in RAN schools.”

Let's unpack each of those claims a bit.

First, Rising students are making more rapid progress than their peers in other schools. In reading, their cumulative gains represent about twice as much progress as that made by students in comparison schools; in maths, about 2.4 times as much as students in other private schools and more than 3 times as much as students in government schools.

Second, these gains are shared fairly evenly between boys and girls. Girls at Rising schools are progressing much faster than even the boys in comparison schools. In reading, Rising's boys do continue to progress slightly faster than Rising's girls, but in maths girls are closing the gap.

Third, the distribution of learning gains between stronger and weaker learners is also more equitable in Rising schools. At baseline, about 75% of students in all three school types were in the lowest performance band for reading. At Rising, that proportion had nearly halved, to 40%, by the latest follow-up. In the other private and government schools, far fewer students had moved out of the lowest band: 56% of private school students and 59% of government school students were still in it.

All of these findings suggest that, regardless of which subject or which subsample of students you look at, the learning trajectories of Rising students are now significantly steeper than those of their peers in other schools. The World Bank has been promoting the idea of "equivalent years of schooling" as an intuitive way to understand programme effectiveness. On this basis, each year in a Rising school is, according to these data, worth an extra 1 to 2 years in another private or government school.
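To make the "equivalent years of schooling" arithmetic concrete, here is a minimal sketch in Python. It is not the evaluators' method: the function name and the simplifying assumption that one comparison-school year equals one "year of schooling" of progress are mine. It simply converts the relative rates quoted above into extra years of learning per Rising year.

def extra_years_of_schooling(relative_rate):
    """Extra comparison-school years of learning per year at Rising.

    Illustrative simplification: one year in a comparison school is treated
    as exactly 1.0 'year of schooling' of progress, so a relative rate of
    2.0 implies 1.0 extra year, and 3.0 implies 2.0 extra years.
    """
    return relative_rate - 1.0

# Relative progress rates quoted in this post (Rising vs comparison schools).
rates = {
    "reading vs comparison schools": 2.0,
    "maths vs other private schools": 2.4,
    "maths vs government schools": 3.0,
}

for label, rate in rates.items():
    print(f"{label}: about {extra_years_of_schooling(rate):.1f} extra years per Rising year")

On that simplification, a relative rate of 2 works out to one extra year and a rate of 3 to two extra years, which is where the "extra 1 to 2 years" figure comes from.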

The challenge the study still points to, however, is whether we are bending these learning trajectories sharply enough, particularly in maths. The baseline showed that students are enrolling in secondary school with reading and maths skills 5 grades below their expected level - closer to those of a second grader. This report shows they are making significant progress, but they are still behind where they should be.

What the report doesn't or can't say (yet)

Some of the report's limitations are by design. It doesn't revisit the non-cognitive measures, so we'll have to wait until the final report to see what our impact has been there. Those measures are important because there's a common critique that prioritising the achievement of excellence in foundational skills must somehow come at the expense of a more holistic approach to students' development, including their socio-emotional learning. I think that reasoning is flawed - my hypothesis is that the schools that are good at socio-emotional learning are probably the schools that are good at other types of learning too - but we'll have to wait to see what the data say before we settle that argument.

Other limitations were not by design. In particular, the study team experienced a significant amount of sample attrition. This seems to be about student absence on testing days - the perils of trying to do follow-ups during Sierra Leone's rainy season - rather than student drop-out. Both would be challenges, but the latter would be more worrying and harder to fix in the final year of the study.

The concern is whether differential attrition rates change the composition of the sample of students being assessed. In particular, in government schools attrition seems to have been concentrated among the lowest-achieving students. This makes it problematic to measure progress by comparing the average scores of all students who took a particular test: if the sample who showed up for the latest follow-up test differs in meaningful ways from the sample who showed up for the baseline test, there is a risk of attributing a change in average scores to a change in what students know when in fact it is just a change in which students are tested. The report is therefore cautious about interpreting these data. (In fact, these changes in the composition of the sample don't seem to make much difference to the estimated effect for Rising or the other private schools, but they make a big difference for the government schools.)

The way round this, and to guarantee that the comparison over time is like-for-like, is to focus only on those students who were tested at both baseline and the latest follow-up (see Tables 6 and 7 in the report).
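To see why the matched-sample approach matters, here is a small, purely hypothetical sketch in Python. The scores are invented for illustration and are not the report's data; it simply shows how losing the weakest students from the follow-up sample can inflate a naive comparison of cross-sectional averages, while the like-for-like calculation recovers the true gain.

# Hypothetical illustration of differential attrition (all numbers invented).
baseline = {"A": 20, "B": 25, "C": 30, "D": 55, "E": 60}   # baseline scores
followup = {"A": 25, "B": 30, "C": 35, "D": 60, "E": 65}   # everyone gains 5 points

# Suppose the two weakest students (A and B) are absent on the follow-up
# testing day, so their follow-up scores are never observed.
observed_followup = {s: x for s, x in followup.items() if s not in {"A", "B"}}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Naive comparison of cross-sectional averages: mixes up a change in what
# students know with a change in which students were tested.
naive_gain = mean(observed_followup.values()) - mean(baseline.values())

# Like-for-like comparison: only students tested at both baseline and follow-up.
matched = set(baseline) & set(observed_followup)
matched_gain = mean(observed_followup[s] - baseline[s] for s in matched)

print(f"naive gain (all baseline vs whoever showed up): {naive_gain:.1f}")
print(f"matched-sample gain (students tested both times): {matched_gain:.1f}")

In this toy example every student actually gains 5 points, yet the naive comparison suggests a gain roughly three times larger, purely because of who happened to show up for the second test.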

What we'd like to see

We've enjoyed a good dialogue with the evaluation team about their research. Here are three areas we'd like to see the next (and final) stage of the research look at.

The first is the use of criterion-referencing (that is, using an absolute benchmark of performance against a fixed standard or level of competency, rather than a relative benchmark of performance as compared to other students) for setting the performance bands and student growth targets. There is a perfectly reasonable theoretical justification for defining the thresholds of the different bands and the target learning trajectories in the way that the team have. As a way of measuring progress towards mastery, it makes sense. But by definition it isn't contextualised - geometry is geometry, whether you are doing it in Freetown or Finland. As a result, you end up with the situation that the vast majority (above 75%) of all students in the sample were in the lowest performance band at baseline, making a granular analysis of exactly which students are progressing quite difficult. Without losing the focus on what mastery requires in absolute terms, it would be helpful to see more analysis using relative not just absolute benchmarks.
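For readers less familiar with the distinction, here is a small, hypothetical sketch of the two approaches in Python. The scores, cut-offs and band labels are invented and are not the study's instrument; the point is only that fixed (criterion-referenced) cut-offs can put most of a low-performing sample into a single band, whereas a relative (quartile-based) view always spreads the same students out.

import statistics

# Hypothetical scaled scores for a small sample of students (invented numbers).
scores = [310, 325, 340, 355, 360, 372, 385, 410, 455, 520]

# Criterion-referenced banding: fixed, absolute thresholds.
CRITERION_CUTOFFS = [(400, "Band 1 (lowest)"), (500, "Band 2"), (float("inf"), "Band 3")]

def criterion_band(score):
    # Absolute benchmark: compare the score against fixed cut-offs.
    for cutoff, label in CRITERION_CUTOFFS:
        if score < cutoff:
            return label

# Relative (norm-referenced) banding: quartiles within this sample.
q1, q2, q3 = statistics.quantiles(scores, n=4)

def relative_band(score):
    # Relative benchmark: place the score within this sample's quartiles.
    if score <= q1:
        return "bottom quartile"
    if score <= q2:
        return "second quartile"
    if score <= q3:
        return "third quartile"
    return "top quartile"

for s in scores:
    print(s, "|", criterion_band(s), "|", relative_band(s))

With these invented scores, 7 of the 10 students land in the same criterion-referenced band, while the quartile view still distinguishes between them - which is why some relative benchmarking alongside the absolute bands would make it easier to see exactly which students are progressing.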

The second is to look at what might be driving these results. A study on this scale is unlikely to be able to identify precisely which features of our model are having the biggest impact; just detecting whether there's been an impact at all is methodologically challenging enough. But it would be interesting if the study were also able to document differences in inputs or intermediate outputs, such as teacher quality, instructional time or other factors we might expect to be associated with higher quality learning. Finding differences in these areas would lend support to the idea that observed differences in outcomes are true effects, and might help to focus future evaluation efforts.

The third area is around socio-economic status. One of the big debates around private schools is the extent to which any performance advantage they enjoy over public schools is purely the result of differences in the students they serve. 'Consensus' is probably too strong a word, but I would say the balance of evidence in the literature is that SES accounts for some but not all of the private school performance advantage. In any case, it's a legitimate concern in interpreting these sorts of results.

There are a number of features of the study that help address this concern. The first is obviously that the comparison group includes other private schools, rather than just making a crude private vs public comparison. Second, the team purposively sampled schools in the same geographic areas as Rising schools. Fee levels and average primary school leaving exam grades were also compared to ensure that student populations were drawn from broadly similar demographics. Third, the study benefits from being focused on secondary school students. We know that SES impacts are felt very early: in work by Pauline Rose and her colleagues in the Young Lives study, literacy gaps between rich and poor students are already significant by age 8 in many countries. Students in this study were ~12-13 years old at baseline and had all completed six years of primary school. That there were no significant differences in baseline reading and maths abilities across students in the three types of schools does not rule out the possibility of an SES effect, but it does at least raise the question of why an SES effect should suddenly be kicking in now when it had not materialised in students' academic trajectories through primary school.

But for all that, it would still be interesting to see the team include a larger set of SES controls.

What this report adds

These quibbles notwithstanding, we're grateful to the Oxford team for another illuminating annual report. Alongside other sources of evidence - encouraging recent public exam results in Sierra Leone, the midline results from the randomised controlled trial of Partnership Schools for Liberia - it adds to an increasingly compelling picture of the potential our model seems to have to transform the quality of education available to families in Sierra Leone. The teachers, school leaders and HQ staff that have made this possible should be very proud.

But "however well we do, we always strive to do better", as we like to say round here, so I know my team, like me, will be asking how we can do more and do better.

Two areas stand out. First, what's behind the slower absolute progress in maths, and what can we do about it? Relative progress compared to other schools was actually greater in maths than in reading, so is it just that the content is more challenging? Are there specific topics where students are struggling? This requires further investigation. Second, we are still fairly new and fairly small. In the academic year studied here, we had a total of 13 schools and 2,400 students across Sierra Leone and Liberia. This year we have 39 schools and more than 8,500 students. We need to demonstrate not just that our model can produce big impacts, but that it can continue to do so consistently over time and as we grow.

So, those are my takeaways: what are yours? Read the report for yourself and then leave a comment below, or Tweet us @pjskids or @risingacademies.
