I occasionally come across debate about how to distinguish 'monitoring' (M) from 'evaluation' (E). I have to admit that I have also engaged in this debate in the past, but in the end I've found most definitions unhelpful for practical purposes. As a result I tend to side-step the issue by avoiding a distinction and simply using the combined label 'M&E'. Whichever way someone chooses to define 'M' versus 'E', for practical purposes both processes end up involving the identification → capture → analysis → dissemination → utilisation of information for accountability and for learning.
Most people attempt to differentiate ‘M’ from ‘E’ in one of four ways:
- The nature/focus of the data collected?
- How often it happens?
- Who collects the data?
- Who uses the information?
Perhaps the most common basis for differentiation is the first point above: the nature or focus of the data. But even within this basis for definition I'm aware of three different constructs. Each of these is described below with reference to the logframe matrix:
- Above/below the line: evaluation is concerned with outcome- and goal-level inquiry, while monitoring is the domain of outputs and below.
- Left/right of the line: evaluation involves assessment of performance at each level of the design logic (i.e. left-hand column of the logframe), and monitoring involves assessment of the prevalence and consequence of risks/assumptions within the design logic (i.e. the right-hand column of the logframe).
- Within/between the lines: monitoring is the ongoing comparison of achievements against targets within each horizontal level of the logframe (i.e. between the first and second columns); evaluation is the analysis of the contribution of project achievements at one level of the logframe to the level above it (i.e. assessment of causality between adjacent levels of the vertical logic).
It's intriguing how much diversity of definition and language we have within this space!