By Lydia Namubiru
On 22 July 2015, the Guardian published an article titled, “New study debunks merits of global deworming programmes.” According to the story, researchers at the London School of Hygiene & Tropical Medicine had debunked a famous and very influential study by re-analysing its own data. The original study was a randomised controlled trial conducted in Kenya between 1997 and 2001. It found that mass deworming in schools reduced worm infections, improved the nutritional status of children, improved their school attendance, and even had the spillover effect of improving school attendance among children who lived within six kilometres of those who were dewormed. The Guardian reported that the re-analysis found evidence of decreased worm infections and small improvements in nutritional status, but none for better school attendance, whether for the treated children or their neighbours.
Reports of the ‘debunking’ re-analysis caused more than a ripple in the development aid sector. Experts from as high as the World Bank wrote blog posts for or against either study. On Twitter, a storm of counter-opinions raged. The hashtag #wormwars emerged and, a month later, was still being referred to in media reports.
The fiasco illustrated the fact that data often has an agenda, often the agenda of the people who create it. The original study was very influential: development organisations bankrolled national mass school deworming programmes on the basis of this one study. Their vested interest was brought to the fore in the fervour with which they attacked its reported ‘debunking’. On the other hand, development impact assessment is big business, with statisticians, economists and subject experts jostling for influence, fame and fortune in the practice. Debunking each other’s work is one way of doing this.
However, the Guardian’s follow-up coverage failed to rise above the fray of these vested interests. Instead of discussing why the two studies produced different results, follow-up articles simply repeated what the various parties in the ‘worm wars’ were saying. It might have been more helpful to the reader if the Guardian had discussed the major differences between the two studies. While the re-analysis was supposed to be a replication of the first, it made significant changes to the evaluation approach. For instance, while the original experiment was conducted over several years with all participants continually followed up over time, the replication divided the participants by the year they joined the experiment. This treated each cohort as if it were an independent experiment and possibly changed the results. Additionally, the re-analysis changed the experimental unit: while the original study measured effects on individual students (whether one’s own outcomes changed), the re-analysis averaged outcomes at the school level, that is, whether outcomes changed for students attending a particular school. A few other differences in approach make the re-analysis less of a replication of the original study, and together they make it hard to justify the ‘debunking’ claimed by the Guardian’s headline and article.
Nonetheless, it was an important story for the Guardian to pick up, and the subsequent conversation was very illuminating. A lot of research isn’t scrutinised, much less research about policy in the developing world. In the ‘worm wars’ that followed the Guardian’s article, both studies finally received scrutiny. Alas, it wasn’t done elegantly.