This section presents actionable insights for practitioners, drawn from a collaboration of MEL experts.
Monitoring, Evaluation and Learning — abbreviated as M&E, MEL, MLE, or MEaL — is the key tool of a movement to bring transparency and accountability into projects in the development sector. In this era of evidence-based-everything, the usefulness of data to inform, justify and audit programs is indisputable. Every organization does it. But working with data, projects and people involves endless complexities.
A group of MEL practitioners from USAID/India’s partner organizations came together to discuss just this at the #Numbers2Narratives workshop in Delhi on the 21st of June, 2018. Though they work across a range of public health issues, approaches and organizational structures, they found they share many of the same struggles: Working with ‘program people’ who have very different priorities and approaches to data, and who may see data as a distraction, as inaccurate, or even as a threat. Meeting multiple, extensive reporting requirements with limited resources. Capturing the project’s dynamism while consistently tracking results over time. Collecting data, analyzing it, turning it into usable evidence, and ensuring the uptake of that evidence. By any account, all of this together is a mammoth task, and often a struggle, and it can be hard to keep sight of the benefits.
Photo credit: Rachana Raj/L4i
So why do we keep ‘reporting’?
The most obvious reason, of course, is that it’s a funding requirement. But more importantly, organizations now believe that putting a good monitoring system in place brings accountability, which in turn sets off a chain of learning, more effective programming, and greater impact. What’s assumed here, though, is that these organizational motivations for MEL will be evident and inspiring to everyone involved in the project, and will take effect the moment the systems are put in place. Instead, according to the practitioners: program people may be indifferent to, or defensive about, efforts to collect data on what they do; many of those involved in data collection fail to see the point of it; and MEL teams themselves get lost in tables, charts and quarterly reports. While everyone senses — or has been told — that ‘proving and improving’ is at the heart of MEL, the bigger picture gets lost among the everyday details of project implementation.
So for the sake of returning to the idealistic origins of the MEL project, here are three great reasons why the public health sector needs to keep on striving to monitor, evaluate and learn from its projects:
Reason 1: Knowledge Management
MEL teams spend most of their time on data collection: capacity-building of those collecting the data, cleaning and analyzing it, compiling it into reports every month, quarter or year. Yet there’s a sense that all this work is disappearing down a black hole, not contributing to the project objectives. This is where a shift in perspective around what they do might be required: Rather than accountability-enforcers, MEL people need to be seen (and see themselves!) as knowledge managers. Their role in the project is not just to meet reporting requirements, but to collect, consolidate, distill and share the knowledge that’s relevant to the project, helping it increase its effectiveness, scale and impact. This automatically means a more collaborative interaction with ‘program people’, with a two-way flow of information and an appreciation of the different priorities each group brings to the table. Win-win.
Program teams justifiably feel that time spent documenting or reporting activities takes away from time spent actually doing them. Moreover, the MEL team’s accountability mandate — measuring achievement against targets — can seem to put program teams on the spot. And when the task of communicating successes is left to a separate Communications team, the MEL division can appear to be vultures looking only for bad news.
In this context, what might motivate program teams to collaborate is seeing proof of what they have managed to achieve over the course of a reporting period. Once the MEL team starts supplementing the data in reports with qualitative methods and success stories that document and appreciate the program team’s efforts, challenges and strategies, creating a pool of insights for the project to reflect on, the magic of learning can start to happen.
‘Numbers2Narratives’ emphasizes the need to understand ‘how results were achieved’ and ‘why desired results could not be achieved’, beyond numbers. This contributes to the larger picture, for sustaining and scaling efforts. — Neeta Rao, USAID
Reason 3: Strengthening the investment case for public health
This is the big one. For all public health professionals, the Big Picture is ensuring high-quality, cost-effective access to health and wellness for all, especially those currently excluded from it. Everything else (projects, silos, health themes, and so on) contributes to this larger objective. In light of this, and of the issues facing the public health sector in India right now, one overarching motivation for MEL in public health has to be strengthening the investment case for health.
Everyone knows the issues around public health funding: The Indian government currently spends a meager 1.3% of GDP on healthcare, and over 60% of healthcare spending in the country is out-of-pocket, impoverishing close to 50 million people every year. This shortage of funds is set to affect many key ‘targets’ in health, including the SDG 3 goals, the goal to End Preventable Maternal and Child Deaths, the End TB Strategy, and the 90–90–90 goals for HIV/AIDS. Investment from other actors, whether aid funding, impact investing, or corporate social responsibility, will depend on the health sector’s ability to make a business case that synthesizes the learning so far to maximize the social impact of further funds invested. Projects can do this by highlighting the challenges and resource gaps they face in each area, alongside credible evidence for the effectiveness of the solutions they generate.
We work every day to create effective and sustainable solutions to major current public health issues, and to keep improving upon them. The need of the hour is to develop innovative, cost-effective ways of measuring their outcomes and demonstrating their impact. At this point, MEL can’t be simply a matter of ‘reporting’ — it needs to credibly highlight what works, but more importantly what doesn’t work, what to do differently, and what needs more attention and resources. And it needs to proactively share these insights in usable forms with those taking decisions — whether they are policymakers or program designers at the national level, or patients and their caregivers considering how and when to seek treatment.
So this is why we have to keep doing MEL and doing it better: It’s the only way to ensure the work we do creates the impact it should.
‘Doing BEST today is a relative term, plan for BETTER every day, you will achieve BEST every day’ — Subrato Mondal, USAID.
First published on LinkedIn Pulse on July 9, 2018.
Rhea John is Learning Catalyst at Swasti.