What are the best evaluation metrics for humanitarian ICT programs? In the past few weeks I’ve spent days in the field with communities that have been integrating ICT into their existing drought early warning surveillance system. So much of this particular evaluation has been about shared learning. Listening to people’s challenges and successes. Thinking together about ways to make actionable changes going forward. And, perhaps most importantly, finding ways to sustain the successes to date.
But at the same time there are many guidelines, metrics, and criteria in our humanitarian system that aim to “define” evaluation. Much of this is tied to accountability. The well-known OECD criteria come to mind; they have been applied to complex humanitarian emergencies in the Guidance for Evaluating Humanitarian Assistance in Complex Emergencies document and are supported by evaluation bodies like ALNAP (link).
The deep and burning question for me right now is which parts of the OECD criteria are applicable to humanitarian ICT program evaluations, especially programs driven by community-based approaches. While it may be easy to apply terms like “efficiency,” “effectiveness,” “coverage,” and “scale,” taking a step back for just a moment and asking…
“is this the right approach?”
can open up many more questions.
Are we leveraging and complementing existing criteria like the OECD’s with nuanced approaches that appropriately fit this new and emerging environment we call humanitarian technologies, or humanitarian ICT? The ways of working may be quite a different animal from what existing evaluation frameworks were designed for. For example, some innovative ICT programs support and even encourage iteration and change during the program or project cycle. What does this mean for frameworks that require pre-determined logframe indicators and leave very little room for iterative change?
How to understand the OECD criteria in these settings, reapply them, and potentially reshape them for humanitarian ICT programs: these are the burning questions for me right now. While other approaches from the innovation and technology disciplines can certainly inform us (ref), I believe we may just not be there yet.
At the moment I’m struggling with how to assess the outcomes and “effectiveness” of a humanitarian ICT project during the pilot phase. The Humanitarian Innovation Fund (HIF) has provided guidance on this for its funded pilot projects, and it too uses the OECD criteria (ref). This has offered some valuable approaches, but I’m still left with questions.
What should we expect when evaluating for outcomes, and especially scale, in a community-based program that began integrating digital data collection only 9–12 months ago? How do you measure something that is expected, in many ways, to evolve, iterate, and change during an early stage of development? And should you force order and pre-existing metrics onto holistic, dynamic, evolving systems just so they can be evaluated using currently accepted approaches?
Is it possible that this approach will lead us to erroneous conclusions because the method was not fully fit for purpose?
I would say let’s not throw the baby out with the bathwater, but let’s accept the possibility that we need a new approach, or at least question whether we need to integrate new ways of working into existing evaluative methods for humanitarian ICT projects.