Standardised or generic evaluation tools


I’ve been doing work for two different clients who would like to implement a standardised or generic evaluation method or instrument across portfolios of projects implemented by unrelated NGOs.  Both clients are eager to have a consistent way to conduct comparative analysis of individual projects and of sets of projects.  They want to know whether individual projects have achieved what was anticipated, and whether the portfolio of projects as a whole has fostered desirable outcomes.

One client is predominantly focussed on gathering (to the extent possible) quantitative performance data; the other is predominantly focussed on qualitative perceptions gathered from a range of stakeholders.  Both, however, assume that it is possible to obtain ‘scalable’ performance information: information that will be meaningful at any scale, from individuals and communities through to projects and programs.  A further complication is that this information must be obtained with a minimum of cost and effort.

It seems that, in order to be both meaningful and efficient, a standardised evaluation method must navigate a range of dilemmas:

  • There is a need for project-specific performance information, and yet there is a need for generic/scalable information to enable interpretation and aggregation from the program-wide perspective.
  • There is a need for brevity and simplification of performance issues, and yet program performance issues are inevitably complex and require elaboration.
  • There is a need for quantitative information that can be aggregated and disaggregated to meet specific requirements, and yet this form of data frequently lacks meaning.
  • There is a need for qualitative information to shed light on changes experienced by beneficiaries, and yet this form of data cannot be readily aggregated or manipulated.
  • There are precise and anticipated information needs, and yet ad hoc or emergent information needs are likely to arise.

I wonder whether the competing demands for efficiency and meaningfulness are in such conflict that it is impossible to achieve both.  In other words, given the complexity of the human changes anticipated by most international aid projects, can an evaluation method or instrument garner sufficiently meaningful information at reasonable cost and effort while being generic enough to be applied at any scale?
