Decision-focused evaluations: Generating evidence to inform the scale-up of the Zambian social cash transfer programme
3 October 2018
Authors: Alice Redfern and Alison Connor
While the development sector is moving toward making more evidence-informed decisions, there is often a disconnect between the information that decision-makers need and the information that researchers produce. Photo: Neil Palmer/CIAT (Flickr)

Billions of dollars are spent on development programmes each year, yet programme managers and policymakers must make daily decisions about how to spend these funds with incomplete information. Although more than 2,500 impact evaluations of development programmes have been published since 2000, there are relatively few examples of the evidence they generate leading to at-scale action by implementers. While the development sector is moving toward making more evidence-informed decisions, there is often a disconnect between the information that decision-makers need and the information that researchers produce.

Through our work, we have identified several priorities for the design of evaluations that help us to close this gap by making evidence more useful for decision-makers.

1. Evaluations should be demand-driven. To maximise their usefulness for decision-makers, the key questions an evaluation seeks to answer should be tailored to the implementer's needs.

2. Evaluations should be crafted to the constraints of real-life implementation. They should be designed with decision-making time frames and implementation cost constraints in mind.

3. Evaluations should be specific to the context. There should be a clear path for the evidence generated to be used by local decision-makers, and the evidence gathered should be specific to the context in which the programme will be scaled up.

As a real-world case study of these priorities: in 2014–2015, IDinsight partnered with Zambia's Ministry of Community Development and Social Services (MCDSS) to improve the scale-up of its unconditional social cash transfer (SCT) programme. At the start of the engagement, the government was ready to roll out an electronic enumeration system, on the assumption that it would cost less, produce fewer errors, and shorten the time between household enumeration and payment to beneficiaries compared with the previous paper-based data collection system. We followed the principles above to help MCDSS avoid rolling out a low-performing enumeration system and to generate actionable recommendations to better achieve the government's overall goals.

Choosing the right question

Misalignment between the incentives of evaluators and those of decision-makers is one of the biggest challenges we have encountered in promoting evidence-informed decision-making. For instance, many evaluations of SCT programmes have focused on advancing development theory by examining which households to target and what impacts to expect from the programme. However, when we began conversations with the Zambian government about their SCT scheme, we realised that this was not the priority question. The evaluation we designed was demand-driven, in that it focused on the immediate needs of the implementers: understanding how to effectively scale up and monitor the programme with the resources available. By conducting an impact evaluation specifically comparing 'MTech', the electronic enumeration system, to paper enumeration, we were able to fully explore the barriers to an efficient data collection system. Our first finding was that the electronic system did not directly deliver the expected gains in accuracy and efficiency. We used our results to inform the development of a comprehensive implementation guide to improve data collection. In-depth analysis of enumeration errors allowed us to recommend a supervision structure that improved accuracy for both data collection methods and provided ongoing assurance of data quality. The evidence generated over the course of the evaluation was used immediately to inform decisions, because it had been designed specifically with those decisions in mind.

Working within the constraints of the implementer

An important reason that evidence may fail to inform a decision is that it does not fit the implementation timeline and context. In this case, the Zambian government had set a clear timeline to scale up their SCT programme within three years, and an implementation plan had already been partially developed before our engagement began. We therefore quickly designed a flexible evaluation that balanced the need for rigorous evidence against the constraints of reality. The evidence we generated is less likely to be useful in a different context than evidence from a traditional evaluation would be; it is specific to the implementation of a cash transfer programme by the Zambian government. However, by tying the evidence to implementation on the ground, we were able to immediately produce useful data that directly influenced the scale-up.

The study also found that processing electronic data took much longer than anticipated, owing to problems integrating the new system with the underlying data system. Fortunately, because we had built the evaluation into the predetermined time frame for developing and piloting the MTech software, the developer was able to use this information to make direct improvements to the application, leading to immediate efficiency gains.

Fitting the evaluation to the context

In academic research, the goal is often to conduct an evaluation whose results generalise to multiple contexts. However, predictions of how an intervention might work in a new location, based on previous research, often do not pan out. A substantial part of the evaluation we conducted in Zambia was therefore aimed at fully understanding the context, so that we could confidently say that our recommendations were likely to improve impact. By mapping out and interviewing all relevant stakeholders in the SCT programme, we identified that different groups had very different visions for electronic data collection. As a result, we recommended a comprehensive list of roles and responsibilities for each stakeholder to ensure that these differing points of view created no gaps in the upkeep of the programme.

In this case study, by focusing on the implementers' needs, respecting the constraints of reality, and fitting the evaluation to the context of the decision, we were able to encourage the use of evidence to inform multiple implementation decisions. The Government of Zambia decided to delay the roll-out of the MTech system until it had addressed the critical challenges our evaluation identified. As a result, we improved the efficiency and social impact of Zambia's cash transfer programme and encouraged the government to think differently about how to incorporate evidence generation into future programmes.

Alice Redfern is a Senior Associate, IDinsight. alice.redfern@idinsight.org

Alison Connor is the Health Director, IDinsight. alison.connor@idinsight.org

This blog is published in our latest issue of African Development Perspectives. Read more stories here.
