7 things everyone at BRAC should know about impact evaluation

  1. Impact evaluation provides a framework for understanding whether clients are truly benefiting from a program, and not from other factors. It does this by creating comparison groups for program participants, either deliberately (as in an experiment) or by exploiting naturally occurring ones.
  2. Broadly, the question of causality is what sets impact evaluation apart from ‘monitoring and evaluation’ – which is mostly what BRAC does. In the absence of data on counterfactual outcomes (that is, the outcomes participants would have had if they had not been exposed to the program), impact evaluation can rigorously identify program effects by applying statistical models to survey data to construct comparison groups. A minimal sketch of this logic follows the list.
  3. However, impact evaluation cannot replace high-quality monitoring and evaluation. Although monitoring and evaluation does not spell out whether outcomes are the result of a program intervention, it is necessary for understanding the goals of a project, the ways in which an intervention can take place, and the potential metrics for measuring effects on the target clients.
  4. Often, programs allow the evaluator to dictate what the evaluation entails, either because the evaluator is perceived to be “the expert” or out of concern that program staff not be seen as influencing the study. But unless program management articulates up front what decisions it hopes to make on the basis of the evaluation, the evaluation will probably not be very useful.
  5. Impact evaluation spans both quantitative and qualitative methods. Quantitative results can be generalizable, which makes them very popular, to a large extent at the expense of qualitative methods, whose results may not always generalize. Nonetheless, qualitative methods are indispensable for gauging the potential impacts a program may generate and for understanding the mechanisms through which it helps clients. Good quantitative evaluations always begin with qualitative investigation.
  6. Impact evaluations are often designed to answer one question: do clients achieve better outcomes than similar individuals not receiving services? Far too few studies are adequately designed to answer the critical follow-on question: why, or why not? This is one of the deadly sins that evaluators and program management commit, and qualitative methods can be very effective in addressing it.
  7. Impact evaluations are often incorrectly equated with randomized controlled trials (RCTs). This leads to the wrong assumption that if a comparison group does not naturally exist for a program, then an impact evaluation cannot be conducted. In fact, comparison groups do not naturally exist in RCTs either; they are deliberately created to approximate the counterfactual. It is natural experiments that leverage naturally occurring comparison groups, often through an instrumental variable (IV); the second sketch below illustrates this. Learn more about some of the different methods of impact evaluation here.
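
To make the counterfactual logic in points 1 and 2 concrete, here is a minimal sketch in Python on simulated data. All numbers and variable names are illustrative assumptions, not BRAC data or methodology: each simulated client has two potential outcomes, only one of which is ever observed, and random assignment makes the comparison group a valid stand-in for the missing counterfactual.

```python
# Minimal, hedged sketch: potential outcomes and a comparison group.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Potential outcomes: y0 = outcome without the program, y1 = with it.
# For any real client only one of the two is ever observed; the other is
# the counterfactual that impact evaluation tries to approximate.
y0 = rng.normal(100, 15, n)        # assumed outcome without the program
true_effect = 8.0                  # assumed program effect (illustrative)
y1 = y0 + true_effect

# Random assignment makes the comparison group a valid stand-in for the
# counterfactual: treated and comparison clients are alike on average.
treated = rng.random(n) < 0.5
observed = np.where(treated, y1, y0)

# Difference in group means then estimates the average treatment effect.
ate_hat = observed[treated].mean() - observed[~treated].mean()
print(f"Estimated effect: {ate_hat:.2f} (true effect: {true_effect})")
```

Without random assignment (or a credible model constructing the comparison group), the same difference in means would also pick up pre-existing differences between participants and non-participants.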
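Point 7 distinguishes RCTs from natural experiments analysed with an instrumental variable. The sketch below, again on simulated data with illustrative names (a hypothetical instrument "offered", say an eligibility rule outside clients' control, and an unobserved confounder "ability"), shows the simplest IV estimator, the Wald estimator, recovering a program effect that a naive treated-versus-untreated comparison overstates.

```python
# Minimal, hedged sketch: a natural experiment analysed with an IV.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

ability = rng.normal(0, 1, n)      # unobserved confounder (illustrative)
offered = rng.random(n) < 0.5      # instrument: naturally occurring offer

# Participation depends on the offer AND on the confounder, so a naive
# treated-vs-untreated comparison is biased.
participates = (0.8 * offered + 0.5 * ability + rng.normal(0, 1, n)) > 0.5

true_effect = 8.0
outcome = 100 + true_effect * participates + 10 * ability + rng.normal(0, 5, n)

# Naive comparison picks up the confounder along with the program effect.
naive = outcome[participates].mean() - outcome[~participates].mean()

# Wald/IV estimator: scale the outcome difference across instrument groups
# by the participation difference those groups induce.
wald = ((outcome[offered].mean() - outcome[~offered].mean())
        / (participates[offered].mean() - participates[~offered].mean()))

print(f"Naive: {naive:.2f}  IV: {wald:.2f}  (true effect: {true_effect})")
```

The instrument is valid here because it shifts participation but affects outcomes only through participation; that assumption, not the arithmetic, is what makes a natural experiment credible.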

“BRAC comes as near to a pure example of a learning organization as one is likely to find.” This is how David Korten, an expert on voluntary organizations working with the Ford Foundation, praised our organization back in 1979. Since then, both the nature and the complexity of problems in the world of development have changed a lot. Throughout this period, BRAC has tested new models in new sites and contexts, and all along it has relied on constant measurement and evaluation to learn what works and what doesn’t, in order to come up with innovations that enhance effectiveness. The challenge now is to make impact evaluation part of every program design, so that we can assess the effects of our interventions more rigorously and enable our management to make more evidence-based decisions.
