Guest post by Toby Norman (PhD candidate in Management Studies at Judge Business School, University of Cambridge). Toby spent last week in Dhaka exploring opportunities to do his dissertation research with BRAC.
Experimentation in the private sector is simple. Not because the technologies or processes are simple, quite the contrary: a feat like manufacturing an iPad, convincing 14.8 million consumers to buy it, and delivering it to them all over the world reflects an incredible interplay of technologies and behaviors that could keep business academics gainfully employed for a lifetime. No, experimentation in the private sector is simple because the outcomes are clear. At the end of the day, trying new things either makes the company money or it doesn't. All the complexities of shareholder value, Tobin's Q, and five-year strategic forecasts boil down to this: successful experimentation makes money; failed experimentation loses it. Fail often enough and you're out of the game.
With a multitude of competitors nipping at your market share, you have to figure out what works. Unfortunately, things are not so straightforward in the social sector. We work in a messy environment where successful outcomes aren't clear, and often aren't agreed upon in the first place. Worse, we are held accountable for these outcomes not by those we serve, but by a myriad of donors with conflicting priorities, and by the best intentions of our own consciences.

BRAC has long prided itself on its culture of constant experimentation, and on its ability to resist the worst excesses of donors by sharing its thinking candidly. These traits have served its programs well. But as it confronts an increasingly different Bangladesh and expands its operations around the world, how can BRAC ensure that its experimentation is really making things better, not just different? The answer lies in crossing the uncertain space between actions and their outcomes.

For example, how do we know whether BRAC's CFPR program is really increasing the incomes of the ultra-poor? Simply measuring participants' incomes before and after the program is not enough, because we don't know which changes in income are due to BRAC and which are due to external factors. If incomes are rising everywhere in Bangladesh, claiming the program works when it has done nothing but ride this trend is taking false credit. Similarly, if incomes go down we may get blamed, when in fact without the program people might have been much worse off. This is why evaluating programs alongside a control group allows us to draw much deeper insights into which effects are directly attributable to the program. Launching these evaluations as randomized controlled trials (RCTs) is rapidly becoming the gold standard of impact evaluation (to learn more about RCTs from one of the leaders in this field, watch this TED talk).
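To make that intuition concrete, here is a minimal sketch with made-up income figures (these are illustrative numbers, not BRAC data) showing why a before/after comparison misleads when incomes are rising everywhere, and how a control group recovers the program's actual contribution:

```python
# Hypothetical monthly incomes; all figures are illustrative.
# Suppose incomes rise everywhere by 200 due to economy-wide growth,
# while the program adds a further 300 for its participants.
treated_before, treated_after = 1000, 1500
control_before, control_after = 1000, 1200

# A naive before/after comparison credits the program with the whole change:
naive_estimate = treated_after - treated_before              # 500

# Subtracting the change the control group experienced anyway
# isolates the part attributable to the program itself:
attributable = (treated_after - treated_before) \
             - (control_after - control_before)              # 300
```

The naive estimate overstates the program's effect by exactly the amount everyone's income rose anyway; the control group is what lets us separate the two.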
And in fact, the CFPR program recently did exactly that, demonstrating major gains from participation in its services through a large-scale field experiment. We need to do more experiments like this. However, many people assume that every RCT must be massive, logistically complex, and enormously expensive. They quickly relegate RCTs to the portfolio of cumbersome ex post evaluation techniques that get pulled out long after the experimentation phase is over. But every time a program changes, there is a natural opportunity for controlled evaluation. When a program changes its services or increases its pay, adding a control group lets us measure the impact of that change far more rigorously. If you're expanding training to 100 new workers, then by randomly choosing them from a pool of 200 potential participants, you have an RCT at your fingertips that costs almost nothing more than doing the expansion haphazardly. A little thought up front can give you rigorous impact attribution down the line. Almost every change you can think of, especially those that are happening anyway, can be evaluated right from the get-go. That's the potential of RCTs, and the huge missed opportunity when we fail to integrate evaluation with experimentation.

Experimentation has been at the heart of BRAC's innovative programs from the beginning. But if we want to lead in the global fight against poverty, and increase our impact in an increasingly complex field, we need to link our experimentation with outcomes. We need to know the difference that what we are doing makes, in order to do it even better. Rigorous evaluations through RCTs and other techniques can make us accountable to our outcomes, rather than just our intentions. But doing so takes some foresight and commitment up front. Evaluate early, evaluate often. Because it's that evaluation, coupled with constant experimentation, that will ultimately help us figure out what really works.
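As a closing illustration of how little the training-expansion example above demands, random assignment of 100 workers from a pool of 200 is a few lines of code (the worker IDs and seed here are hypothetical, purely for illustration):

```python
import random

# Hypothetical pool of 200 eligible workers (IDs are illustrative).
eligible = [f"worker_{i:03d}" for i in range(200)]

# Randomize with a fixed seed so the assignment is reproducible and auditable.
rng = random.Random(42)
shuffled = eligible[:]
rng.shuffle(shuffled)

# First 100 receive the expanded training now; the rest serve as the
# comparison group (and can receive the training later).
treatment = sorted(shuffled[:100])
control = sorted(shuffled[100:])

assert len(treatment) == 100 and len(control) == 100
assert set(treatment).isdisjoint(control)  # no one is in both groups
```

Because assignment is random, the two groups are comparable by construction, and any later difference in outcomes can be credited to the training rather than to how participants were selected.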