Guest post by Toby Norman (PhD candidate in Management Studies at Judge Business School, University of Cambridge).
Managers love data. Before computers, the sheer effort of crunching numbers may have kept a manager's focus on a few key indicators, but now, in the age of Excel, absolutely anything is fair game for a spreadsheet or database. NGOs are no exception, and as the world's largest NGO, BRAC collects more data than most. In a recent information mapping exercise with 16 programs we identified over 50 management information systems (MIS) that extend from the branch offices up the chain to headquarters, collating millions of data points into thousands of totals. This is a huge amount of information. Across the board, NGOs create similar compiled counts, sharing them with employees, donors, and clients. With all of this information, surely we must know exactly how well we're doing in the fight against poverty?
The problem is that numbers by themselves are meaningless. By design, the numeric system only creates value through comparison. Six is useful to us because we can say it's two more than four and three less than nine. However, a single metric in isolation tells you nothing. Say, for example, that in June 2012 the BRAC MIS reports a total of 36 children enrolled in primary school at the Dorshona branch office division. Is that good or bad? Well, we don't actually know. Is that number more than last month's? Is it less than the neighboring branch office's? Is it more than what a different NGO with the same resources could have achieved? Ultimately, is it a reflection of top performance, or does it signal a massive failure in execution compared to where we should be?
In business, every number is typically compared to two things: the same number over time, e.g. "are sales up from last month?" (trends), or the same number at competitors, e.g. "did company A sell more than company B?" (benchmarks). These comparisons tell managers, employees, and shareholders who is winning the game. They're also a powerful driving force of change and innovation as organizations compete to stay ahead; it's easier to experiment when you know how well you're doing.
However, development organizations can rarely make such easy comparisons with their numbers. First, NGOs aren't supposed to have competitors, so blatant "we're winning" comparisons are generally frowned upon and unlikely to make the annual brochure. Furthermore, although most NGOs do indeed have rivals (typically referred to as "partners") in their territory, different NGO programs vary so widely in terms of service delivery that direct benchmarking of numbers (if the same metric exists at all) is often meaningless. CARE claims it reached 65 million people in 2007 and Save the Children says it helped 10.6 million in 2008, but what do these numbers actually tell us about who is doing a better job?
Second, even within the same organization, comparisons between numbers usually tell us far less than you would think. For example, the month-to-month take-up of microfinance loans isn't just a function of an organization's efficiency; it swings with the agricultural season, labor demand, and changes in local lending availability. Health, education, and legal metrics are no different. Furthermore, in development, too much pressure on internal numbers often has the nasty side effect of creating misreporting rather than real efficiency gains. If the head office demands 10% month-on-month growth in a certain metric, it is very likely to see it, neatly drafted up in an MIS report that has about as much in common with reality as the latest Jeffrey Archer novel. Finally, NGOs at the peak of their game should not always be seeing large numbers or positive trends. Ironically, an excellent NGO hunting down tuberculosis through city slums would see its monthly count of TB patients decrease, while its less able counterparts happily report constant patient growth to donors. Data can sometimes not only be meaningless, but actively hide the real narrative of what's working.
So what's the alternative? Qualitative investigation surely has a place here. The criticism that too many managers hide behind their spreadsheets, far removed from the realities of the field, is a fair one. In business and in nonprofits, getting closer to the action teaches you things numbers can never replace. Anecdotes are also a powerful way of sharing success or failure stories to explain what's working. However, as anyone who regularly reads NGO annual reports will tell you, anecdotes can also be cherry-picked to highlight what we'd like the world to see. Nor is it feasible to manage massive programs through stories and intuition alone. The war on poverty cannot be won with anecdotes; at some point we fundamentally need quantitative data to track where, when, and how much we're doing relative to the scale of the challenges.
The solution, then, is not to abandon data, but to use data better. Given the challenges of numeric comparisons in the nonprofit sector, we need to be even smarter in using data to drive meaningful insights. Tools like segmentation, performance matrices, and counterfactuals can provide valuable knowledge to improve performance, often using data that already exists but with a few key tweaks. We'll explore these tools and their applications in Part II.