Assessing real impact beyond Western models

As critiques of the colonial roots of the development aid sector gain traction, attention is also turning to monitoring, evaluation and learning (MEL), which is very much part of the architecture that perpetuates power imbalances.

Origins of evaluation and rationale for change

The notion of “planful social evaluation” can be dated back as early as 2200 B.C., with personnel selection in China, but its adoption grew markedly after the Second World War in the US and UK. As aid expanded in the 1980s and 1990s, monitoring and evaluation became increasingly common as part of donors’ requirements, to demonstrate value for money and accountability.

Those at the receiving end of such evaluations, commissioned by Western aid agencies, have often found that scientific and academic rigour and objectivity are used as “excuses” for not including local communities in the design of the evaluation. “The impact narrative has come from centres of power,” said Charles Kojo Vandyck from the West Africa Civil Society Institute (WACSI), “and evaluation has been based on assumptions made by a certain (small) group of people.”

The people of colour researchers and evaluators group has also discussed the need to diversify the leadership of the evaluation sector, and to decolonise the methodologies being used. When we talk about capacity building, we often think of the Global North transferring knowledge to the Global South. But when it comes to evaluation, a lot of knowledge has actually been transferred in the other direction: from the Global South – the so-called beneficiaries – to the Global North – where the evaluators are often located.

Shifting methods and mindsets 

Going beyond Western models requires a change in both methods and mindsets. The methods don’t actually require anything “new”; rather, they call for going back to basics. The case of the Buen Vivir Fund shows the value of indigenous knowledge and of adopting an approach of learning together. Charles shared his experience of implementing evaluation centred on the desires of local communities: “Simply asking local communities ‘What does success look like for you?’ is a good start. It is not just about the outcomes but also about the process. For communities, evaluation should be a live conversation – not pre and post – they live in those realities.” Here at TSIC, we also have the USERS methodology to help embed users’ voices into evaluation.

Methods, however, are secondary to mindsets. More than learning, we actually need to unlearn – because just as societies are under the influence of colonialism, so are our minds, said Dr Arjun Trivedi from Karunar Kheti Trust. We also need to ask who the learning is for: the local communities, or donors and powerful actors in the West? Finally, we need to rethink what skills we want from evaluators. We shouldn’t look only at analytical skills, but also at interpersonal skills such as patience, empathy and listening.

On the mindsets point, there was a sense that donors in particular are not interested in these discussions – and given the economics of evaluation, donors ultimately need to be the ones initiating the change.

This blog post is a summary of the original conference session at Bond, the network for UK international NGOs. Thanks to Charles Kojo Vandyck from the West Africa Civil Society Institute (WACSI) and Dr Arjun Trivedi from Karunar Kheti Trust.