Finding the Nexus: M&E, Research and Learning

Monitoring and evaluation (M&E) has always been the nerdy cousin of the development world; we know that we have to invite M&E to Sunday lunch, but we hope it sits at the end of the table. And I can say this – I’ve been that M&E nerd on a number of projects! Traditionally, M&E has been about tracking the number of participants trained and the number of waterholes built, and frankly, I can understand why the rest of the development world wanted me to sit at the end of the table while I waxed lyrical about sign-in sheets. In recent years, however, M&E has been evolving; we got cooler glasses … and we started asking ourselves why trainings mattered, and how to measure that ‘why.’

Wasafiri has also been evolving; we have spent the past few years focusing on how to better understand and tackle complex problems, and a big push for our organization this year is to further understand how to measure change within complex systems. Unfortunately, we haven’t quite found that silver bullet, but we have distilled some lessons learnt about measuring real, lasting change. So allow me to quickly put on my nerdy glasses again and wax lyrical about two key factors to consider when measuring complex change:

The nexus between M&E and research should be clear:

Too often, these complementary activities are seen as distinct, and so they are not built to speak to and build off one another. But in Wasafiri’s experience, when measuring change within complex problems, M&E cannot stand alone. M&E is built to be internally focused on the progress of the solution we are testing: we track the progress of a development programme supporting food security in Turkana; we track the progress of the Government of Kenya’s National Countering Violent Extremism (CVE) Strategy. Inherently, the questions we ask to monitor and evaluate the progress of these initiatives will focus inwards on whether our work is having the results we intended for the programme or the strategy. While necessary and meaningful, M&E thus tends to look at the initiative’s impact on its key stakeholders and direct beneficiaries. However, we should also acknowledge that these programmes and strategies are players within a wider system, and therefore we also need to understand their external-facing impact. How does the initiative interact with the wider complex system? How do we account and adjust for external political, economic and social dynamics? How do non-direct beneficiaries experience change? How do the interests and power dynamics of the wider system impact the initiative?
These are all difficult but important questions, and ones which go beyond the scope of more initiative-specific M&E. Research must therefore be understood as intrinsically connected to M&E when designing a framework for measuring change within complex systems, and built in a way that helps to answer questions ranging from the wide-lens – how do changing political dynamics within East Africa impact our food security programme in Turkana? – to the granular – do marginalized populations within Majengo understand and relate to Kenya’s National CVE Strategy?

Learning should be the central driving force of change measurement: 

As no simple solutions exist for solving complex problems, we need to

  • place greater emphasis on learning through iterative piloting and adapting; and
  • value failure (when it happens quickly!).

Numbers are important to measuring change in complex systems, but so is tracking trends in that change. Doing so within the development space requires a robust learning strategy that serves as an umbrella framework for measuring change both internal to the programme and external, across the wider complex system. The learning strategy should outline a series of key, yet flexible, learning objectives that the project intends to feed into. From Wasafiri’s growing body of experience in spearheading such learning strategies, we believe one critical question should always anchor a learning strategy: what is working and what is not, where, and why?
  • What’s working and what’s not: it may seem like an obvious question, but too often we forget to be self-reflective and ask ourselves what is going well and what is simply not gaining the traction we expected. Of particular importance is the frequency with which we ask this question, which must balance the realistic need to allow sufficient time for meaningful change to manifest with the pressure felt by development programmes to adapt or scale quickly to produce results. This point brings us back to the importance of the nexus between M&E and research, and in particular, community-led research. Project M&E data could tell us that an activity is meeting its milestones – Ugandan police are trained and are demonstrating increased understanding of human rights, for example. But community-led research might tell us that non-direct beneficiaries at the local level – for example, marginalized youth who are often targeted by the police – report little, no, or even negative change in their lived experience: they continue to feel fear and harassment. The divergence between these data sources should trigger a critically reflective process to better understand why the activity is not having the intended results at the community level – perhaps we have not allowed sufficient time to track change, perhaps the activity was designed without appropriate community engagement, etc. These discussions can then help us decide how to adapt, pilot a new approach, and continue the learning cycle.
  • Where: in acknowledging that some approaches may work within certain contexts and not within others, it is critical to position the question ‘what is working and what is not’ against a backdrop of ‘where.’ Particularly when tackling complex problems, designing context-driven solutions requires a systems-lens perspective to track and triangulate local, national and regional dynamics. Digging into the contextual factors that may impact our initiative enables us to design conflict-sensitive programming and to anticipate disruptions and spoilers. Again, we come back to the importance of triangulating M&E with focused, purposeful research designed to help us understand why an activity is having tremendous impact in one context but not in another, or to help envision the types of adaptations needed to scale a successful activity into a neighbouring location.
  • Why: relatedly, an activity might be considered successful from the perspective of one individual, while another may have a wholly different understanding of that activity’s impact. Past experiences and relationships shape an individual’s perspective, and these varying perspectives should be well understood when designing activities, because not all participants will react to an activity in the same way. Asking beneficiaries simple questions – ‘why did this activity resonate with you?’ or ‘why did you not want to participate?’ – can unearth a myriad of factors that help to test assumptions and adapt activities to be more culturally, conflict- and gender-sensitive.

When the problems you are trying to tackle are combating violent extremism in East Africa or ending extreme poverty in the arid and semi-arid regions of Kenya, a confluence of factors can blow the winds of change in nearly any direction, making a single initiative feel like a mere drop in the bucket. But if that drop can better account for the evolving external factors that shape the direction of travel, not only can we design interventions that leverage and build towards more exponential change, but we can also better understand why change is happening and how to keep blowing it in the right direction. Doing so requires a dedication to M&E, to research and, most importantly, to learning – from our successes and our failures alike – and to sharing that learning with the wider system. So please keep inviting us to Sunday lunch – we promise not to talk about sign-in sheets anymore!